Deepfakes, Rights of Publicity and Proposed Legislation

By Carolyn Wimbly Martin and Isabel Jonson

Deepfake artificial intelligence (“AI”) is increasingly being used to swap faces in images and videos, manipulate facial expressions and synthesize faces and speech. These replicas of an individual’s voice, image or likeness raise growing concerns about the patchwork of state right-of-publicity laws and the need for a national framework to regulate the technology.

Deepfake technology has been applied in various ways — some more nefarious than others. For instance, AI has been used to fabricate celebrity endorsements for dental services and weight loss products as well as to create deepfake pornographic videos and images of celebrities. These fabrications often, but not always, violate state right-of-publicity laws, which regulate an individual’s right to control the commercial use of their persona.

As a state-based property right, the right of publicity may be governed by statute or common law, or not addressed at all. Some states extend protections to all individuals; others safeguard only those with commercially valuable identities, such as celebrities or public figures. States also differ significantly in how they protect the rights of deceased individuals. For example, New York’s right-of-publicity protections historically ended at death (a 2021 statute now provides a 40-year post-mortem right for deceased personalities), whereas California extends these protections for 70 years after an individual has passed away.

Though many celebrities have relied solely on their social media platforms to disavow unauthorized uses of their name, voice, signature, photograph or likeness, others have taken legal action. For example, former “Big Brother” contestant Kyland Young filed suit against NeoCortext, Inc., the Ukrainian company that created the “Reface” app, which “allows users to swap their faces with actors, musicians, athletes, celebrities, and other well-known individuals in scenes from popular shows, movies, and other short-form Internet media.” Young v. Neocortext, Inc., No. 2:23-cv-02496-WLH(PVCx), 2023 U.S. Dist. LEXIS 171050 (C.D. Cal. Sept. 5, 2023), quoting Complaint. Young, who alleges NeoCortext “used his identity to solicit the purchase of paid subscriptions to the Reface application,” initiated the class action seeking to represent “California residents whose name, voice, signature, photograph, or likeness was displayed on a Reface application…” Id. The case is pending after the court denied NeoCortext’s motion to dismiss. California has one of the most robust right-of-publicity laws in the U.S., so while a victory for Young would not necessarily predict outcomes in other states, a loss could chill similar lawsuits in states with less stringent right-of-publicity laws.

Federal agencies and several states have begun to take action specifically addressing the use of AI technology. A recent example of AI misuse occurred when a fake audio message purporting to be from President Joe Biden urged individuals not to vote in the New Hampshire primary. In response, the Federal Communications Commission (“FCC”) issued a Declaratory Ruling recognizing that calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (“TCPA”). According to the FCC’s News Release, the ruling, which took effect immediately on February 8, 2024, “makes voice cloning technology used in common robocall scams targeting consumers illegal” and gives State Attorneys General new tools to pursue the bad actors behind these robocalls. Several states, including Minnesota and Texas, have also passed legislation criminalizing the use of deepfakes to influence elections, and Washington has enacted a similar law targeting the use of AI-generated audio or visual recordings in political advertisements. Additionally, Georgia, Hawaii, New York, Virginia, Florida, Minnesota and Texas have passed or amended statutes criminalizing deepfake revenge porn.

At the national level, several initiatives have been proposed. The No AI FRAUD Act, introduced in January 2024 by U.S. Reps. María Elvira Salazar, R-Fla., and Madeleine Dean, D-Pa., aims to establish a federal cause of action for victims of deepfake technology. Concurrently, the Senate Judiciary Subcommittee on Intellectual Property has held hearings on the No Fakes Act, supported by U.S. Sens. Chris Coons, D-Del., Amy Klobuchar, D-Minn., Marsha Blackburn, R-Tenn., and Thom Tillis, R-N.C. The No Fakes Act would protect the image, voice and likeness of all individuals against unauthorized AI-generated replicas.

The Software Alliance (“BSA”), a leading advocate for the global software industry, has urged Congress to pass legislation addressing the proliferation of deepfakes and counteracting technologies designed primarily to create or disseminate unauthorized digital replicas. BSA’s policy statement emphasizes the harm unauthorized deepfakes cause to artists’ reputations and public recognition, and it highlights the need for protections against AI-generated forgeries that could misrepresent the work of writers, photographers and graphic artists.

Despite these efforts to protect the public, the patchwork of state laws is expected to expand in advance of comprehensive federal legislation. Lutzker & Lutzker will continue to provide updates on the much-needed judicial and legislative responses to deepfakes.