Over twenty tech companies, including Amazon, Google, and Microsoft, commit to the European Commission’s voluntary AI Code of Practice on General-Purpose AI (GPAI). The Code addresses transparency, copyright, safety, and security to help companies comply with the EU AI Act. This collective move reflects industry efforts to balance innovation with responsible AI deployment amid rising regulation.
What’s Happening & Why This Matters
The European Commission launched the AI Code of Practice earlier in 2025 to provide voluntary guidelines for AI providers. It complements the legally binding AI Act, which enforces transparency and copyright rules starting 2 August 2025.

Among the first signatories are industry leaders such as Amazon, Google, Microsoft, IBM, and AI specialists like OpenAI, France’s Mistral AI, and Germany’s Aleph Alpha. The organisations commit to upholding principles covering the safe and ethical development of general-purpose AI models, including frameworks for user transparency and respect for intellectual property.
Notably, Meta chooses not to sign the Code. Meta argues that the Code’s restrictions hinder innovation and warns that Europe’s regulatory approach “is heading down the wrong path on AI.” Despite this, Meta must still meet AI Act requirements when they come into force.
xAI, developer of X’s Grok AI chatbot, only signs the Code’s Safety and Security chapter. It commits to other AI Act obligations, such as transparency and copyright, through alternative compliance routes.
Google voices support for the Code but expresses concerns. Alphabet’s global affairs president, Kent Walker, states that while the Code better supports European innovation goals than earlier drafts, it still risks slowing AI development and deployment across Europe.
The voluntary Code targets providers with GPAI models already on the market, who need to sign by 1 August 2025. Others can join later, reinforcing ongoing engagement between regulators and industry.
From 2 August, each of the 27 EU member states must appoint national oversight authorities. These authorities will ensure companies comply with the AI Act. Failure to comply risks fines up to €15 million or 3% of annual turnover, whichever is higher.

Today marks a critical milestone for Europe’s AI governance. The Code provides a flexible framework that allows companies to self-regulate while governments prepare enforcement mechanisms. However, the mixed responses exemplify the tensions between innovation, control, and regulatory certainty.
TF Summary: What’s Next
With over 20 innovators signing the European Commission’s AI Code, voluntary AI governance gains momentum in Europe. The Code sets standards for transparency, copyright, and safety ahead of the AI Act’s full enforcement.

As member state authorities begin oversight, companies face pressure to balance innovation with compliance. Future developments focus on harmonising industry collaboration with regulatory enforcement to support responsible AI growth.