Deepfakes in Litigation
Deepfake technology uses artificial intelligence (“AI”), specifically “deep” learning, to create realistic videos, images and audio of fabricated events. Deepfakes have already found their way into social media, the entertainment industry and the political landscape, and they are now beginning to impact legal disputes. AI poses two risks: parties could intentionally fabricate evidence, or they could unknowingly believe that a deepfake is credible evidence. As a result, prosecutors may secure wrongful convictions, and litigators may purposely instill doubt in the minds of the jury about the validity of evidence. The latter tactic, known as the “deepfake defense,” is a litigation strategy in which audiovisual material introduced as evidence is claimed to be fake. The existence of deepfakes is creating opportunities for lawyers to raise objections and plant doubt about ALL evidence, whether justifiably so or not.
Journalists, bystanders, members of Congress and law enforcement captured thousands of images and recordings documenting the events at the United States Capitol on January 6, 2021. Two of the defendants on trial for their participation tried, albeit unsuccessfully, to claim that the video evidence placing them at the scene had been created or manipulated by AI. Guy Reffitt, one of the insurrectionists who led the attack on the Capitol, was implicated through video, audio, text messages and the testimony of his son and of other insurrectionists. Reffitt’s lawyer nonetheless attempted to undermine the evidence by telling the jury that the evidence against Reffitt included deepfakes.
Elon Musk, speaking at the Code Conference in 2016, stated that Tesla’s Model S and Model X “can drive autonomously with greater safety than a person. Right now.” The video of the interview was uploaded to YouTube and has remained on the platform for more than seven years. In a lawsuit filed by the family of a Tesla driver who was killed because of an alleged failure of Tesla’s automated driving software, the family’s attorneys introduced Musk’s remarks from the conference. Tesla responded that Musk could not recall details about the statements and questioned the authenticity of the recording of the interview. Tesla argued that Musk, “like many public figures, is the subject of many ‘deepfake’ videos and audio recordings that purport to show him saying and doing things he never actually said or did.” The Santa Clara County Superior Court judge presiding over the wrongful death lawsuit ordered a deposition of Musk under oath to address which of the recorded statements were authentic and which, if any, were manufactured by deepfake technology. In the order, the judge wrote that Tesla’s arguments were “deeply troubling” to the court, and that, if the court were to set a precedent by accepting Tesla’s approach, Musk and others would be able to “hide behind the potential for their recorded statements being a deep fake to avoid taking ownership for what they actually say and do.”
As the technology becomes increasingly sophisticated, accessible and easy to use, bad actors and lawyers alike can turn this skepticism to their advantage. Juries may demand more proof that evidence is real, making it more expensive and time-consuming to get evidence admitted. While it is not new for e-discovery professionals to encounter forged evidence, edited pictures or fabricated messages and emails, advancements in AI will increase litigation costs, multiply conflicting expert opinions and delay e-discovery. The pool of qualified forensic experts able to assess potential deepfakes with acceptable accuracy is limited, and their services are therefore cost-prohibitive for less-resourced litigants, putting them at a further disadvantage at trial.
Although there are new challenges, photo manipulation is far from a new issue. In 1869, William Mumler, the world’s first known “spirit photographer,” was charged with fraud, and other photographers and an expert in “humbuggery” identified nine different methods by which the effects in Mumler’s photographs could be imitated. After Adobe Photoshop was introduced in the 1990s, juries gradually learned to recognize doctored photographs. However, AI tools require far less source material to generate even more convincing fabrications. Research has demonstrated that humans are unable to reliably distinguish between AI-generated faces and real faces, and that jurors presented with both oral and video testimony are 650% more likely to retain that information than jurors presented with oral testimony alone. Even if people are aware that audiovisual evidence might be a deepfake, it can still have an impact.
The Federal Rules of Evidence (“FRE”) will continue to govern the admissibility of evidence, including (i) relevance pursuant to FRE 401; (ii) authenticity pursuant to FRE 901 and 902; (iii) the judge’s role as evidentiary gatekeeper and decider of contested facts pursuant to FRE 104(a) and (b); and (iv) the exclusion of unfairly prejudicial evidence pursuant to FRE 403. However, attorneys and courts will also need to consider the role of juries in assessing the authenticity of alleged deepfakes. Questionnaires and strategies for jury selection will need to be updated to determine potential jurors’ knowledge, education and prejudices around AI.
Courts are just beginning to tackle the question of potential deepfake evidence. In Soderberg v. Carrión, No. RDB-19-1559, 2022 U.S. Dist. LEXIS 222645 (D. Md. Dec. 9, 2022), the United States District Court for the District of Maryland addressed whether Section 1-201 of the Criminal Procedure Article of the Maryland Code, known as the “Broadcast Ban,” was consistent with the First Amendment and could survive strict scrutiny. The Broadcast Ban prohibits the recording or broadcasting of “any criminal matter, including a trial, hearing, motion, or argument, that is held in trial court or before a grand jury.” Md. Code Ann., Crim. Proc. § 1-201. The State of Maryland argued that the Broadcast Ban was necessary to protect witnesses, preserve the integrity of criminal trials and deter witness intimidation and the broadcasting of altered recordings and deepfakes. The State claimed it was already challenging enough to convince victims of and witnesses to violent crimes to testify in court, and that they would be even more reluctant to do so if there were a danger that their testimony could be doctored. Moreover, if witnesses’ images and voiceprints could be posted and shared online, their recorded voices could be used to create other deepfakes. While the court acknowledged that the State had a strong argument with regard to witness protection and cooperation, it held that the State could not sanction the press for broadcasting “lawfully obtained, truthful information” that the State has already disclosed to the public. In Footnote 18, the court stated that Maryland’s concern about deepfakes and other manipulations of trial recordings is based on “mere speculation about serious harm” and that the State failed to show “even a single example of trial recordings being manipulated in this manner in Maryland or in any other jurisdiction.”
Whether this deepfake footnote will age well remains to be seen. Lutzker & Lutzker will be watching and adapting our litigation practices and advisories accordingly.