The AI Act establishes harmonized rules governing the development, marketing, and use of AI in the EU. Centered on the risks AI systems pose to society, the Act aims to ensure that AI in the EU is safe and respects fundamental rights and values. Specific provisions apply to general-purpose AI (GPAI) models, placing detailed obligations on providers to ensure transparency, copyright compliance, and the prevention of systemic risks. Among these obligations is the requirement for GPAI model providers to implement internal policies ensuring compliance with EU copyright rules. This includes identifying and respecting any reservation of rights expressed by rightsholders regarding the use of their content for AI training purposes.
To guide GPAI model providers in meeting these obligations, the AI Office has commissioned a group of external and impartial experts to draft a Code of Practice – in consultation with multiple stakeholders, including rightsholders, civil society organizations, human rights groups, academics, and, of course, the tech industry.
However, despite repeated concerns raised by FIA and other creators’ organizations throughout this process, conducted entirely online over three successive phases, the consultation has been heavily skewed in favor of the tech industry. The drafting committee has maintained a preferential and exclusive line of consultation with industry representatives, granting them disproportionate influence. As a result, the third and final version of the Code of Practice is deeply disappointing and has clearly lost sight of its core mission. Rather than upholding the rule of law, the draft appears driven by a reluctance to challenge powerful industry players, ultimately granting them preferential treatment and setting the bar for copyright compliance far lower than what is expected of other users of copyright-protected content.
Instead of imposing clear obligations, the Code is riddled with vague language about making “best efforts” or “reasonable efforts,” reflecting the minimal level of responsibility that the tech industry is willing to accept—practically none.
One of the most revealing and troubling changes in the latest version of the Code is the removal of a critical provision. Previous drafts explicitly stated that any reservation of rights must be identified and respected “regardless of where training takes place.” This language has now mysteriously disappeared. This omission is especially alarming given that many of the companies in question are non-EU entities with overwhelming market dominance, exploiting EU content without authorization or compensation to train models intended for the European market.
The draft significantly weakens GPAI providers’ responsibility to conduct due diligence on third-party datasets to ensure they do not infringe copyright. As a result, it fails to provide the legal certainty that both rightsholders and GPAI model providers need for the confident deployment of these models in the EU.
In response, FIA and the broader creative sector have issued a joint statement categorically rejecting the Code unless a fundamental shift in approach occurs. The Code must be significantly revised to require tech companies to adhere to the highest standards if they wish to do business in the EU and, by signing the Code, benefit from a presumption of compliance with copyright rules. FIA also raised the concerns of the performer community, together with other creators’ organizations, at a meeting held on April 4, 2025, with the cabinet of Commissioner Virkkunen, the Executive Vice-President responsible for Tech Sovereignty, Security, and Democracy.
The final version of the Code of Practice is expected to be released by the end of April. It will then be assessed by the AI Board, composed of representatives from all EU Member States, as well as the European Commission. If adopted, it will be formalized through an implementing act and made available for GPAI providers to sign.