Throughout history, performers have learned to live with technology, skillfully adapting to change and seizing fresh opportunities to express their art, refine their talent, and connect with new audiences. While doing so, they have also advocated for new contractual and regulatory safeguards that strengthen their safety, foster inclusive and respectful work environments, and enable them to retain control over the use of their performances and to secure a sustainable income from their work.
Until recently, artificial intelligence had not brought about any noteworthy change in this regard. The technology has long been present in our industry, consistently producing ever more realistic content by drawing on the performances of human actors. Examples include age regression software and performance capture techniques.
Only with the advent of the latest generation of AI, and in particular generative AI, have these technologies advanced to the point where they no longer merely enhance human creativity but may threaten to replace it entirely. Present-day AI algorithms, especially deep learning models, ingest vast datasets to imitate human behavior, producing synthetic or cloned performances that are remarkably realistic and deceptively convincing. Performers' personal attributes, such as their voice and likeness, together with their past work, constitute a substantial portion of these datasets.
For the first time, performers are confronted with the prospect of having to compete for work against highly realistic synthetic replicas of themselves or others. Likewise, their digital clones may be used in multiple ways without their consent, without restriction, and without proper compensation. Unauthorized deepfakes exploiting their likeness are frequently deployed in sexually explicit or politically sensitive contexts, damaging both their personal and professional reputations. The potential for job displacement caused by modern AI technologies, including generative AI, is already evident in voice acting, where audiobooks, animation, and dubbing are beginning to be produced synthetically or by cloning performers' voices.
Artificial intelligence cannot be left to develop without oversight. It is imperative to establish regulations and contractual safeguards that ensure maximum transparency of training datasets. These measures should recognize and protect the rights of performers, ensuring that they give informed consent to the cloning or synthesis of their performances and to any subsequent use. Revenue models that fairly compensate performers must also be devised. While legislative and regulatory efforts play a role, collective bargaining can be equally instrumental in implementing effective protections, embedding these safeguards within contractual relationships.
This guide is intended to help FIA affiliates gain a better understanding of these technological developments and their potential implications for the performers they represent. It outlines FIA’s core principles for the responsible deployment and use of AI and offers guidance on how trade unions can structure their bargaining strategy to enhance protections and compensation for their members. It also suggests how FIA affiliates can actively advocate for an appropriate policy and regulatory environment.
See FIA Guide to AI in English
See FIA Guide to AI in French
See FIA Guide to AI in Spanish