Artificial Intelligence (AI) is ubiquitous, and its use, powered by ever more sophisticated algorithms, is spreading rapidly. In simple terms, this technology is designed to train a machine to deliver the same results that a human would produce in carrying out certain tasks, using human intelligence as a paradigm. Whereas it was initially developed to automate time-consuming and uncreative tasks, deep learning technologies have enabled it to pervade the creative process, making it possible to generate synthetic audio, audiovisual or written content quickly and cheaply. Today, AI permeates the entire content value chain, from the creation to the production, distribution and consumption of text, images, music and audiovisual content. Search and recommender services, content indexing, classification and curation, speech recognition, post-production editing and enhancement, lip-synchronization, automatic subtitling and machine translation, and music and video generation through text-to-audio or text-to-video technologies are but some examples.
Generative AI raises many legal, ethical, employment and societal questions: it is testing the boundaries of copyright, which is intended to incentivize and reward human creativity, skills and labour rather than output generated solely by machines. AI feeds on vast amounts of data sourced from a variety of places, from publicly available datasets to user-generated content, e-commerce data and private datasets. Boosted by text and data mining exemptions where they exist, and by other limitations or exceptions in national copyright systems, AI processes not only data in the public domain but also copyright-protected content and other material that is unevenly protected by other legal regimes, e.g. privacy, personality or publicity laws. Deepfakes, i.e. artificially generated content in which a person’s voice and/or likeness is replaced by someone else’s, are one common example of how AI can exploit these loopholes, damaging people’s reputations, undermining fundamental democratic principles or even threatening national security.
The use of AI raises important questions when datasets are tainted by sampling, labelling, measurement or pre-existing biases, which can perpetuate or even amplify stereotypes and discriminatory behaviours or decisions. AI systems can make decisions that have significant impacts on individuals and on society as a whole, yet it can be difficult to understand how these decisions are made and to hold the responsible parties accountable. This can lead to mistrust and a lack of transparency in decision-making processes. AI systems can also automate tasks that were previously carried out by humans, causing unemployment in some sectors, with a significant economic and societal impact where workers cannot reskill or find it difficult to do so.
FIA strongly believes that AI cannot be left to thrive in a legal vacuum and that a strong framework of accountability, transparency and regulation is essential to minimize the risks that the use of this technology can pose to society and to enable it to develop in a manner that is ethical, fair and unbiased.
It is for this reason that FIA welcomes the EU regulation on AI as a step in the right direction. However, we stress that all use of AI should be fully transparent and accountable. We therefore object to exempting users of an AI system that generates or manipulates image, audio or video content from minimal reporting obligations where the content “is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties”, as recently suggested in the Council’s general approach (art. 52, §3). This exemption is both unjustified and ambiguous. It would unduly threaten the legitimate interests of the performers we represent and could lead to consumer deception.
We also strongly believe that the AI Act should firmly anchor consent and compensation at the heart of how content, including our members’ voices and likenesses, is used or re-used to train AI systems. This is reflected in a statement jointly released by FIA, the European Composer and Songwriter Alliance, the European Writers’ Council, the Federation of European Screen Directors, the International Federation of Musicians, and the Federation of Screenwriters in Europe.
Our fundamental claims with respect to the safe, ethical and fair development and use of AI systems are also echoed by a recent initiative bringing together more than 40 stakeholder organizations in the US and beyond, which aims to put AI at the service of human artistry rather than have machines replace human talent, labour and creativity.
Download the joint FIA/ECSA/EWC/FERA/FIM/FSE statement (EN) here.