A third draft of the Code of Practice was published on Tuesday, ahead of a May deadline for finalizing guidance to help providers of general purpose AI (GPAI) models comply with the provisions of the EU AI Act. The code has been in development since last year, and this draft is expected to be the final round before the guidance is completed.
A website has also been launched to make the code more accessible. Written feedback on the latest draft must be submitted by March 30, 2025.
The bloc's risk-based rulebook for AI includes a subset of obligations that apply only to the makers of the most powerful AI models, covering areas such as transparency, copyright, and risk mitigation. The code is intended to help GPAI model makers understand how to meet their legal obligations and avoid the risk of sanctions for non-compliance. Penalties under the AI Act for violations of the GPAI requirements can reach up to 3% of global annual turnover.
Streamlining
The latest revision of the code is billed as having “a more streamlined structure with refined commitments and measures” compared to earlier iterations, based on feedback on the second draft published in December.
Further feedback, working group discussions, and workshops will feed into the process of turning the third draft into final guidance. The experts say they hope to achieve greater “clarity and coherence” in the final adopted version of the code.
The draft is broken down into a handful of sections covering commitments for GPAIs, along with detailed guidance on transparency and copyright measures. There is also a section on safety and security obligations, which apply only to the most powerful models (those designated as posing so-called systemic risk, or GPAISR).
On transparency, the guidance includes documentation that GPAI model makers may be expected to fill out to ensure that downstream deployers of their technology have access to key information to support their own compliance.
Elsewhere, the copyright section remains the most immediately contentious area for Big AI.
The current draft is replete with terms such as “best efforts,” “reasonable measures,” and “appropriate measures” when it comes to complying with commitments such as respecting rights reservations when crawling the web to acquire data for model training.
The use of such mediated language suggests that data-mining AI giants may feel they have plenty of wiggle room to carry on grabbing protected information for model training and ask forgiveness later. It remains to be seen whether the final draft of the code will tighten the language.
Language used in a previous iteration of the code, which said GPAIs should provide a single point of contact and complaint handling so that rightsholders could convey grievances “directly and rapidly,” appears to have gone. Now there is merely a line stating: “Signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.”
The current text also suggests that GPAIs may be able to refuse to act on copyright complaints from rightsholders if the complaints are “unfounded” or excessive, in particular because of their repetitive character. That implies creatives' attempts to flip the scales by using AI tools to detect copyright issues and automate complaints against Big AI could simply be disregarded.
In terms of safety and security, the EU AI Act's requirements to assess and mitigate systemic risks already apply only to a subset of the most powerful models (those trained using total computing power above 10^25 FLOPs), but this latest draft narrows some previously recommended measures even further in response to feedback.
US pressure
The latest draft arrives against a backdrop of fierce attacks on European lawmaking in general, and the bloc's rules on AI in particular, coming from the US administration led by President Donald Trump.
At the Paris AI Action Summit last month, US Vice President JD Vance dismissed the need to regulate AI for safety, and warned Europe that over-regulation could kill the golden goose.
Since then, the bloc has moved to kill off one AI safety initiative, putting the AI Liability Directive on the chopping block. EU lawmakers have also trailed an incoming “omnibus” package of simplifying reforms to existing rules, aimed at cutting red tape and bureaucracy for business, with a focus on areas such as sustainability reporting. But with the AI Act still in the process of being implemented, there is clearly pressure to dilute its requirements, too.
At the Mobile World Congress trade show in Barcelona earlier this month, Arthur Mensch of French GPAI model maker Mistral, a particularly loud opponent of the EU AI Act during negotiations to conclude the legislation in 2023, argued that finding technology solutions to comply with some of its rules would be difficult. He added that the company is “working with regulators to make sure this is resolved.”
Although this GPAI code is being drawn up by independent experts, the Commission, via its AI Office, which oversees enforcement and other activity related to the law, is in parallel producing “clarifying” guidance that will also shape how the law applies. That includes definitions for GPAIs and their responsibilities.
So look out for further guidance from the AI Office “soon,” which the Commission says will “clarify … the scope of the rules.”