FIA and several creators’ organizations, representing performers, writers, translators, composers, songwriters, screenwriters, screen directors, visual artists, and journalists, have collectively called upon EU policymakers to embed robust transparency protections into the core of the EU AI Act. This legislation, currently in the final “trilogue” phase of negotiation, involves the EU Parliament, the Council, and the Commission aligning their positions to reach a consensus. It marks a groundbreaking effort by a significant legislative body to regulate the development and implementation of AI, encompassing general-purpose AI systems and models, while safeguarding fundamental human and societal rights.
Comprehensive and robust transparency obligations are vital to empower creators to safeguard their image, likeness, personal features, and artistic work against unauthorized, uncompensated exploitation for AI training purposes. This includes the creation of synthetic digital replicas and clones capable of speaking words never spoken and performing actions never taken by a real person. While "deep fakes" pose a serious threat to the personal and professional reputation of the creators we represent, the ability to generate and spread misinformation also engenders serious societal and political challenges, prompting FIA and its coalition partners to advocate for mandatory labeling requirements for all AI-generated or manipulated content.
Currently, the draft AI Act falls short on both the transparency and labeling fronts, with provisions that do not yet meet the standards necessary for an AI rollout that is both respectful of creators and responsible toward society at large. While some compromise proposals suggest enhancing transparency around generative AI, they should be reinforced to make technology deployers fully accountable for how their models are trained. This entails proving compliance with the EU acquis, including copyright and personal data protection rules, and implementing comprehensive record-keeping obligations concerning the data used to train their systems.
Regarding the "output," FIA and its coalition partners firmly oppose any attempt to introduce exceptions in the name of "freedom of expression" and "freedom of the arts and science." Such exceptions would render labeling obligations largely ineffective, leaving citizens significantly confused about what is genuine and what is not.
Read the full content of the declaration here.