The European Union has reached a significant milestone by agreeing on one of the world’s first comprehensive artificial intelligence laws. The AI Act, a landmark piece of legislation, aims to balance the promotion of AI development with the mitigation of its potential risks. The law bans AI practices that pose a “clear threat to people’s safety, livelihoods and rights.”
This development comes as global concern grows over the disruptive potential of artificial intelligence. In a news conference, European Parliament President Roberta Metsola described the law as “a balanced and human-centered approach” and predicted that it would “no doubt be setting the global standard for years to come.”
The AI Act, first proposed in 2021, categorizes AI applications by risk level and imposes stricter obligations on higher-risk uses. The law prohibits the riskiest applications outright, including systems that exploit the vulnerabilities of specific groups, real-time biometric identification in public spaces by law enforcement (subject to narrow exceptions), and AI that employs deceptive “subliminal techniques.”
AI systems deemed limited risk, such as chatbots like OpenAI’s ChatGPT and tools that generate digital content, will be subject to new transparency requirements.
Thierry Breton, the EU Commissioner for Internal Market, expressed his enthusiasm on social media, stating, “The #AIAct is much more than a rulebook – it’s a launchpad for EU startups and researchers to lead the global AI race. The best is yet to come.”
The widespread adoption of AI, accelerated by the launch of OpenAI’s ChatGPT in November 2022, has triggered a surge in generative AI technology. This rapid growth has touched many sectors, from education, where schools are grappling with AI’s ability to complete assignments, to the arts and media industries, which face challenges from AI-generated content.
The companies pioneering these technologies have faced challenges of their own. OpenAI’s CEO, Sam Altman, was briefly ousted and then reinstated in November, and the reasons for the leadership upheaval remain unclear.