The EU’s landmark deal aims to regulate artificial intelligence (AI) technologies, including ChatGPT and governments’ use of AI in biometric surveillance.
After two and a half years of debates, negotiations, and political wrangling, the European Union (EU) has finally reached a historic agreement on the AI Act. This monumental legislation is set to become the world’s first comprehensive AI law, promising to bring significant changes to the tech industry landscape. Let’s dive into the key takeaways from this groundbreaking development.
- Binding Rules on Transparency and Ethics
Tech companies often talk about their commitment to AI ethics, but the AI Act is designed to hold them legally accountable. It introduces binding rules requiring tech firms to notify people when they are interacting with an AI system such as a chatbot, or when biometric categorization or emotion recognition tools are being used on them. The Act also mandates the labeling of deepfakes and other AI-generated content, ensuring transparency and accountability in AI-generated media. Moreover, organizations offering essential services will need to conduct impact assessments on how their AI systems affect fundamental rights.
- Room for Innovation
While the AI Act imposes regulations on powerful foundation models and the AI systems built on them, it still leaves room for innovation. The strictest rules will apply only to the most powerful AI models, as determined by the computing power required to train them. This gives companies some flexibility in assessing whether their models fall under the stricter regime. The EU acknowledges that the definition of a powerful AI model may evolve as technology advances, so some ambiguity remains.
- EU as the Premier AI Authority
The AI Act establishes a European AI Office to oversee compliance, implementation, and enforcement. This office will be the first of its kind globally, responsible for enforcing binding AI rules, and with it the EU aims to position itself as the world’s leading tech regulator. Additionally, a scientific panel of independent experts will offer guidance on AI’s systemic risks and on the classification and testing of models. Noncompliance with the Act can result in hefty fines, ranging from 1.5% to 7% of a company’s global sales turnover.
- National Security Concerns
Certain AI uses are banned outright in the EU, including biometric categorization systems that use sensitive characteristics, untargeted scraping of facial images, emotion recognition in workplaces and schools, and more. Predictive policing is restricted unless it involves human assessment and objective facts. Notably, the AI Act does not apply to AI systems developed exclusively for military and defense purposes, reflecting a prioritization of national security.
- What’s Next?
While the AI Act represents a significant step forward, there is still work to be done. The final wording of the bill requires technical refinement and approval from EU member states and the European Parliament before it becomes law. Once it is in force, tech companies will have two years to implement the rules, with the bans on certain AI uses taking effect after six months and compliance requirements for foundation models applying within a year.
The AI Act is set to reshape the tech landscape, placing the EU at the forefront of AI regulation. With binding rules on transparency and ethics, room for innovation, strict enforcement mechanisms, and a focus on national security, it paves the way for a new era in AI governance. As the EU leads the way, the rest of the world will be watching closely, and the AI Act may well become a global standard for AI ethics and governance.
Jan Iverson is Head of Studio at FS Studio and an award-winning product leader with over 20 years of experience in digital media and marketing, specializing in the design and development of AR, VR, and 3D activations: mobile apps, games, LBE, sales tools, and digital twins. She has deep experience in cross-platform XR content development and a track record of success leading award-winning digital creative teams. Virtually Human is her bi-weekly series.