The European Union has published the final text of the EU AI Act in the Official Journal, a significant step in regulating the development and use of artificial intelligence. The Act becomes fully applicable two years after it enters into force.
What’s Happening & Why This Matters
The law enters into force on August 1, and its provisions become fully applicable two years later. However, some bans and codes of practice for AI developers kick in much earlier, shaping how AI is used and developed well before that deadline.
- 6 months from now: Bans on prohibited AI applications take effect, including social scoring systems, the untargeted scraping of facial images to build facial-recognition databases, and emotion recognition systems in schools and workplaces.
- 9 months from now: The EU AI Office will begin working with consultancy firms to draft codes of practice for AI developers. The office also plans to collaborate with companies providing general-purpose AI models deemed to carry systemic risks.
- 1 year from now: Makers of general-purpose AI models must comply with transparency requirements and show that their systems are safe and explainable to users. The Act also sets rules for generative AI and manipulated media, requiring that deepfakes and other AI-generated content be clearly labeled.
Beyond these deadlines, the EU AI Act also requires companies to respect EU copyright law when training their AI models. Overall, the Act aims to govern AI development and usage, setting out guidelines for transparency, safety, and accountability.
TF Summary: What’s Next
The implementation of the EU AI Act is an important milestone in artificial intelligence regulation. As the deadlines for key provisions approach, AI developers, companies deploying AI models, and the industry as a whole will need to bring their practices into compliance. The Act will likely pave the way for similar regulations in other regions and prompt further discussion of the responsible use of AI technology.