Deepfakes and the 2024 Election
In March 2023, a photograph of Pope Francis showed him uncharacteristically fashionable in a long white cinched puffer coat. That same month, photographs of former President Trump being tackled by New York City police officers circulated on Twitter, attracting five million views. If these events seem implausible, or you’re surprised that they didn’t make major headlines, it’s because neither actually occurred. Deepfake technology uses artificial intelligence (“AI”), specifically “deep” learning, to create realistic videos, images and audio of events that never occurred. While deepfakes can be used for harmless amusement, they also carry significant risks, given the speed with which large volumes of misinformation can be shared online.
Deepfakes: The Role of Deep Learning in Spreading Misinformation
The term “deepfake” was coined in 2017 by a Reddit user who posted AI-edited videos inserting the faces of celebrities onto pornographic material. Deep learning is a subset of machine learning, a branch of AI that uses data and algorithms to imitate the ways humans learn, draw conclusions and make classifications and predictions. The technology draws data from images and videos of people, as well as audio and text, and makes predictions based on that information. For example, on-demand music streaming services like Spotify use the music you’ve previously listened to in order to suggest and compile playlists of songs you’re likely to enjoy, refreshing those recommendations based on your feedback. Driverless vehicles like Teslas and virtual assistants like Alexa or Siri are other examples of deep learning models. The distinguishing feature is that while machine learning still requires human intervention to make corrections, the algorithm in a deep learning model can make its own adjustments without human intervention. In deepfakes known as “deep video portraits,” the facial expression, head rotation, 3D head position, eye gaze and audio are transferred from one person, typically a celebrity or public figure, to another to generate a realistic video. The technology tracks the contours of the face, the eyebrows, the corners of the mouth, the background and how the person normally moves.
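For readers curious about the mechanics, the sketch below illustrates the shared-encoder, dual-decoder autoencoder design that underlies many consumer face-swap tools: a single encoder learns a compressed representation of faces common to two people, each person gets a dedicated decoder, and decoding person A’s expression with person B’s decoder produces the swap. This is a simplified illustration with arbitrary layer sizes and stand-in data, not the actual code of any particular app.

```python
# Simplified sketch of the shared-encoder / two-decoder autoencoder
# design behind many face-swap deepfake tools. Layer sizes and the
# stand-in data are arbitrary illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()                           # one encoder shared by both identities
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

# Training: each decoder learns to reconstruct its own person's faces
# from the shared latent space (faces_a is a stand-in batch of crops).
faces_a = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The swap: encode person A's expression, decode with person B's decoder,
# yielding person B's face performing person A's expression.
fake_b = decoder_b(encoder(faces_a))
```

Because the encoder never learns which identity it is compressing, it captures pose and expression in a way both decoders can read, which is what makes the swap possible once each decoder has been trained on enough footage of its person.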
Deepfake technology is extremely accessible and easy to use. For example, ElevenLabs began offering free voice cloning technology that required minimal editing and could clone a target’s voice in seconds, and users quickly generated audio samples on topics ranging from threats of violence to racist comments. Earlier this year, ElevenLabs claimed it was addressing the ethical concerns engendered by malicious content by offering instant voice cloning only to paying subscribers, at a price of $5 a month, hardly a meaningful barrier to entry. Apps like China’s Zao, DeepFaceLab, FakeApp and Face Swap, along with deepfake software found on GitHub, an open-source development community, are equally user-friendly.
Deepfakes in the 2024 Election
Deepfakes can spread misinformation to manipulate public opinion, whether via an official campaign or an unaffiliated third party, and potentially alter the outcome of elections. There is already widespread use of this technology in anticipation of the 2024 election. After President Biden announced his reelection bid in April, the Republican National Committee released a video generated entirely with AI to simulate a dystopian version of the future if Biden were reelected. The 30-second video ended with a disclaimer in small white font that the video was “[b]uilt entirely with AI imagery.” After Florida Governor Ron DeSantis launched his presidential campaign, Trump responded with a deepfake video of DeSantis’s Twitter Spaces event co-hosted by Elon Musk. Democratic donor George Soros, Adolf Hitler and the devil were also artificially generated to appear in the video. While the latter two “guests” on the panel make it obvious that the video is fake, it still demonstrates the capability of AI to create content that plays to certain voters. In June, the “DeSantis War Room” campaign video sharply criticized Trump for “not firing” Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases. Experts and fact-checkers found that some of the images in the advertisement, including one of Trump hugging and kissing Fauci on the cheek, were created using deepfake technology.
While deepfakes can spread misinformation, they can also provide plausible deniability for egregious statements and conduct that actually did happen. For example, if this technology had existed in 2016 when a video recording resurfaced of Trump having a vulgar conversation about women, Trump and his supporters might have claimed that the video was a deepfake.
PolitiFact, owned by the nonprofit Poynter Institute for Media Studies and a member of the International Fact-Checking Network, is one of the many organizations working to debunk misinformation and deepfakes. Its fact-checkers examine speeches, stories, press releases, television and social media, and the organization has formed partnerships with social media platforms including Facebook and TikTok. They select “facts” that may be misleading and are likely to be repeated. When Facebook and TikTok flag posts they believe are materially false or misleading, PolitiFact fact-checkers examine the posts and report back to the platforms on the accuracy of the content. PolitiFact displays its findings on a scale, called the “Truth-O-Meter,” reflecting the relative accuracy of the content. For example, a 20-second video that surfaced on TikTok showed DeSantis announcing his candidacy for the presidency behind a podium that read “Stealing Your Future” and ending his speech with “Hail Hydra!” PolitiFact rated the deepfake video as completely false, known as “Pants on Fire!” on its scale. Other content that has been verified as deepfakes includes a video of President Joe Biden telling a transgender woman, “You will never be a real woman,” a video of Nancy Pelosi on January 6, 2021 saying, “I’ve given a shoot-to-kill order for any breach of the speaker’s lobby” and a video of President Biden “recorded” talking about an imminent bank collapse.
While some of these examples may seem too absurd to be believable, the rapidly advancing technology and the blurred lines between real and fake content can have dangerous consequences. Even increasing awareness of deepfakes and improving detection tools may not be enough. If damaging deepfakes spread across the Internet on the eve of an election or just before a presidential debate, there may not be enough time to debunk the content, and the damage may be irreversible.
United States Government Efforts to Combat Deepfakes
The U.S. government is taking steps to prevent deepfake technology from being weaponized for political purposes or other malicious intent. The Pentagon, through the Defense Advanced Research Projects Agency (“DARPA”), is working with several research institutions, including the University of Colorado Denver and SRI International, to identify and combat deepfakes. Because detecting deepfakes requires understanding how they are made, researchers at the University of Colorado Denver are creating convincing deepfake content that researchers developing detection technology can use to learn to distinguish real from fake. Researchers at SRI International are feeding computers examples of real video and audio, as well as deepfake video and audio, to train them to detect deepfakes. Researchers at Carnegie Mellon, the University of Washington, Stanford University and the Max Planck Institute for Informatics are also studying deepfake technology to mitigate the harm.
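In broad strokes, that training approach, showing a model labeled examples of authentic and fabricated media so it learns to tell them apart, amounts to supervised binary classification. The sketch below illustrates the idea in Python with PyTorch; the tiny model, stand-in data and hyperparameters are assumptions chosen for brevity, not any research group’s actual system.

```python
# Illustrative sketch of deepfake detection as supervised binary
# classification: show a model labeled real and fake frames and let
# it learn the distinguishing artifacts. The model size, data and
# hyperparameters are placeholders, not any lab's real system.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),          # one logit: how likely the frame is fake
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: frames labeled 0.0 (real) or 1.0 (deepfake). In
# practice these come from curated corpora of authentic and generated
# media, which is why labs need convincing fakes to train against.
frames = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, 2, (16, 1)).float()

for _ in range(5):                        # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()

# At inference, a sigmoid over the logit gives a "likely fake" score.
score = torch.sigmoid(detector(frames[:1])).item()
```

The hard part in practice is not the training loop but the data: detectors tend to latch onto artifacts of the specific generators they were trained against, which is one reason the research programs above pair fake-generation and fake-detection teams.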
How to Spot a Deepfake
Some indicators of a deepfake include blurry or unfocused details on the target person(s), unnatural or inconsistent lighting, words or sounds that don’t match the visual content, and a dubious source. When you spot one, it is prudent to trace the source of the image and look for other versions or perspectives of the scene. Using Google or another search engine, you can describe the event or scene to see whether any reliable news outlets have reported on it. Reverse image search tools, like Google Lens or TinEye, can also be used to locate the origin of an image. Most legitimate public AI image generators automatically add watermarks to any content created with their technology.
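As a practical starting point, the short Python sketch below (using the Pillow imaging library) inspects an image’s embedded metadata for traces of an AI generator; some generator front ends write their parameters into PNG text chunks or name themselves in the EXIF “Software” tag. This is only a heuristic, since metadata is easily stripped, and robust provenance watermarks require dedicated verification tools; the file name and keyword list are illustrative assumptions.

```python
# Heuristic sketch: inspect an image's embedded metadata for traces of
# an AI generator. Metadata is trivially stripped, so a clean result
# proves nothing; a hit is merely a strong hint. The file name and the
# keyword list below are illustrative assumptions.
from PIL import Image

GENERATOR_HINTS = ("stable diffusion", "midjourney", "dall-e",
                   "diffusion", "generated", "ai")

def metadata_hints(path):
    """Return metadata entries whose text mentions a known generator."""
    img = Image.open(path)
    hits = []
    # PNG text chunks (some generator front ends write parameters here)
    for key, value in img.info.items():
        text = f"{key}={value}".lower()
        if any(h in text for h in GENERATOR_HINTS):
            hits.append((key, str(value)[:80]))
    # EXIF "Software" tag (0x0131) sometimes names the creating tool
    software = img.getexif().get(0x0131)
    if software and any(h in str(software).lower() for h in GENERATOR_HINTS):
        hits.append(("Software", str(software)))
    return hits

print(metadata_hints("suspect_image.png"))  # hypothetical file
```

An empty result should never be read as proof of authenticity; screenshots and re-uploads discard metadata, which is why reverse image search and reporting from reliable outlets remain the stronger checks.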
Legislative, private and public efforts to combat and identify deepfakes may prove too late and insufficient ahead of the 2024 elections. On June 14th, the progressive group Arena held a Zoom meeting to discuss ways to educate voters on identifying misinformation and disinformation. Earlier this year, the bipartisan Board of Directors of the American Association of Political Consultants unanimously condemned the use of AI deepfake technology in political campaigns. While these checks and balances are important, the still-unsolved danger of deepfakes lies in how quickly and pervasively information spreads on the Internet and other media, making abuses inevitable. The entertainment value and convenience of social media lie in the fact that sharing information is just a repost or retweet away. Even the savviest media consumers are unlikely to expend the time or energy to verify the accuracy of that information, particularly when the content validates their points of view and preconceived biases. And when supposedly credible sources, like experts or friends or family, spread this information, the problem is exacerbated. Such manufactured content is especially targeted at (and more effective on) those with preconceived notions about the candidates.
Deepfakes and the First Amendment
Despite the efforts to develop technical solutions to misinformation from deepfakes, the First Amendment may limit legislative and judicial remedies, as such efforts are likely to be challenged as restrictions on free speech. The Supreme Court has held that, outside of categories such as obscenity, defamation, fraud, incitement and true threats, where content-based regulation of speech is permitted, false statements are protected expression under the First Amendment and are “inevitable if there is to be an open and vigorous expression of views in public and private conversation.” United States v. Alvarez, 567 U.S. 709, 718 (2012). Deepfakes on social media can be considered both false statements and artistic expression. Yet, even under Alvarez, there may be recourse against the creators of deepfakes. In his plurality opinion, Justice Kennedy concluded that false speech may be punished if the falsity is accompanied by a legally cognizable harm or if there is a compelling government interest in restricting the false statements. Congress will need to strike a difficult balance between regulating deceptive and harmful deepfakes and allowing deepfakes that parody, satirize or challenge political speech. The issue will be fact-specific, and legal challenges are inevitable, even though honest and fair elections should be among the most compelling of government interests.
Lutzker & Lutzker is providing up-to-date guidance on how news organizations, platforms, legislators and, eventually, the courts will address the proliferation of deepfakes in future elections, and on how the average citizen can become a more informed consumer.