What about outside the EU?
GDPR, the EU’s data protection regulation, is the bloc’s most famous tech export and has been copied everywhere from California to India.
The EU’s approach of targeting only the riskiest AI is one that most developed countries agree on. If Europeans can create a coherent way to regulate the technology, it could work as a template for other countries hoping to do the same.
“US companies, in their compliance with the EU Artificial Intelligence Act, will raise the standards for American consumers on transparency and accountability,” says Marc Rotenberg, president of the Center for AI and Digital Policy, a nonprofit that tracks AI policy.
The bill is also being watched closely by the Biden administration. The US is home to some of the world’s biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple global rankings in AI research, so the White House wants to know how any regulation might apply to those companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who is leading the White House’s AI efforts, have welcomed Europe’s push to regulate AI.
“This contrasts sharply with how the US viewed the development of GDPR, which people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it,” says Rotenberg.
Despite some inevitable caution, the US has good reasons to welcome the bill. It is extremely concerned about China’s growing influence in tech. For America, the official stance is that maintaining Western dominance of technology is a matter of preserving “democratic values.” It wants to keep the EU, a “like-minded ally,” close.
What are the biggest challenges?
Some of the bill’s requirements are technically impossible to comply with today. The first draft demands that datasets be free of errors and that humans be able to “fully understand” how AI systems work. But the datasets used to train AI systems are vast, and checking them for errors would require thousands of hours of human labor, assuming such a thing could even be verified. And today’s neural networks are so complex that even their creators don’t fully understand how they reach their conclusions.
Tech companies are also deeply uncomfortable with the requirement to give external auditors or regulators access to their source code and algorithms in order to enforce the law.