The European Union’s AI Act, which aims to regulate artificial intelligence and protect society from its potential risks, officially took effect this August. Despite the new rules, many major AI models from companies such as OpenAI, Meta, and Alibaba are not yet fully compliant. That gap has spurred the development of a compliance tool, the “LLM Checker,” designed to assess these models and help companies understand what the EU expects of them. Alongside these developments, the European Parliament has appointed two co-chairs to lead a new group responsible for monitoring the Act’s implementation.
What’s Happening & Why This Matters
Europe’s new AI Act sets comprehensive standards for AI systems, but companies such as OpenAI, Meta, and Alibaba face difficulties aligning their models with its requirements. Research from ETH Zurich, INSAIT (the Institute for Computer Science, Artificial Intelligence and Technology in Bulgaria), and LatticeFlow AI identified several compliance gaps, particularly around discrimination and cybersecurity. The “LLM Checker” tool assesses AI models against criteria such as cybersecurity and privacy, assigning each a score between 0 and 1. Many models perform well at minimizing harmful content but score lower on discrimination; OpenAI’s GPT-4 Turbo received a low score of 0.46 in that category.
To support compliance efforts, the LLM Checker evaluates large language models (LLMs) against EU standards, giving tech companies measurable feedback on how closely they align with the Act. Alibaba’s model, for example, scored just 0.37 in the discrimination category. Petar Tsankov, CEO of LatticeFlow, said many companies hesitate to deploy their models in Europe because they are unsure whether they meet the Act’s technical requirements. The tool, Tsankov notes, “is a crucial step toward transparency,” giving companies a structured way to assess themselves.
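The article does not describe the LLM Checker’s internals, but the reporting model it describes, per-category scores between 0 and 1 with weak categories flagged as compliance gaps, can be illustrated with a small sketch. The category names, threshold, and most scores below are hypothetical placeholders, not LatticeFlow’s actual API or published results.

```python
# Hypothetical sketch of aggregating per-category compliance scores.
# NOT the LLM Checker's actual implementation; names and values are illustrative.
from statistics import mean

# Per-category scores in [0, 1], as the article describes the tool reporting.
scores = {
    "cybersecurity": 0.78,    # placeholder value
    "privacy": 0.81,          # placeholder value
    "harmful_content": 0.95,  # placeholder value
    "discrimination": 0.46,   # figure the article reports for one model
}

THRESHOLD = 0.75  # assumed pass/fail cut-off, for illustration only

def summarize(scores: dict[str, float], threshold: float) -> dict:
    """Return an aggregate score and the categories falling below the threshold."""
    gaps = {name: value for name, value in scores.items() if value < threshold}
    return {"aggregate": round(mean(scores.values()), 2), "gaps": gaps}

if __name__ == "__main__":
    report = summarize(scores, THRESHOLD)
    print(f"Aggregate score: {report['aggregate']}")
    for category, value in report["gaps"].items():
        print(f"Potential compliance gap: {category} = {value}")
```

In this toy example, a strong harmful-content score does not offset the weak discrimination score, which mirrors the pattern the researchers reported: models can look compliant on average while still failing individual categories.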
European Commission’s Stance and Consultation
The European Commission welcomed the LLM Checker’s findings and acknowledged the challenges companies face in complying with the Act. In response, the Commission has launched a consultation to develop a Code of Practice that clarifies companies’ responsibilities. The Code will provide guidelines on transparency, copyright, and risk assessment for general-purpose AI models. The initiative reflects the Commission’s intent to help companies implement AI systems responsibly.
The European Parliament has established a working group to monitor AI Act implementation, led by co-chairs Michael McNamara of Ireland and Brando Benifei of Italy. Their responsibilities include reviewing company reports and coordinating with the European Commission to address compliance challenges. Benifei, who previously helped steer the AI Act through Parliament, brings legislative experience, while McNamara, a former member of the Irish parliament, offers a fresh perspective on managing AI risks. Their team will work closely with experts from the EU, the US, and Canada to further develop the regulatory framework.
Additionally, an open-source tool is available for evaluating LLMs against the EU’s requirements. Martin Vechev, a professor at ETH Zurich and founder of INSAIT, urges AI researchers and developers to engage with the project, describing it as an evolving collaboration between regulators and tech innovators.
TF Summary: What’s Next
As the EU’s AI Act implementation progresses, the compliance gap remains a central concern. Companies will need to rigorously adapt their models to meet the Act’s requirements, especially with fines of up to 35 million euros or 7% of global annual turnover for non-compliance. The Parliament’s new monitoring group, co-chaired by McNamara and Benifei, will play a critical role in shaping AI’s regulatory landscape in Europe. Future developments include the establishment of a comprehensive Code of Practice and increased collaboration between European, American, and Canadian experts to address the complexities of AI governance.