Business of Law

Is the legal system ready for AI-generated deepfake videos?


Photo illustration by Sara Wadford/Shutterstock.

The problem of deepfake videos entering courtrooms as evidence is an old challenge in a new form.

“This is all what’s old is new again,” says Damien Riehl, vice president with vLex, a global legal intelligence platform. “We’ve had the authenticity question since Stalin edited people out of his photos back in the 1900s.” Today, people use Photoshop to do the same thing, he adds.

Or they can use generative artificial intelligence. In February, OpenAI unveiled Sora, an AI model that can create realistic videos from text instructions.

Naturally, Sora makes it easier to create deepfake videos that can realistically depict someone’s likeness and voice to show them doing and saying something they didn’t do or say, says Cat Casey, chief growth officer at Reveal, an AI-powered e-discovery review and investigations platform. And that could wreak havoc in the legal system.

“Ways deepfakes could infect a court proceeding run the gamut and include parties fabricating evidence to win a civil action, government actors wrongfully securing criminal convictions and lawyers purposely exploiting a lay jury’s suspicions about evidence,” Rebecca A. Delfino wrote in a 2023 Hastings Law Journal article titled “Deepfakes on Trial: A Call to Expand the Trial Judge’s Gatekeeping Role to Protect Legal Proceedings from Technological Fakery.”

“Although some deepfakes are harmless and amusing artistic expressions, more than 90% of deepfakes on the internet are pornographic depictions of women,” Delfino wrote. “Female celebrity faces have been digitally added to pornographic content, creating deepfake porn videos. Scarlett Johansson, Meghan Markle and Taylor Swift have all been victims of deepfake pornography.”

But celebrities are not the only targets, Casey says. In junior high schools and high schools nationwide, students are creating child pornography by using “nudify” mobile phone apps to graft the heads of classmates onto nude bodies.

Overall, Casey notes that the rise of deepfake technology is eroding public trust in digital media and creating a need for more experts to authenticate video evidence. The digitally manipulated photo of Kate Middleton, the Princess of Wales, and her children led the public to quickly question whether other videos and pictures of her were fake too.

“As the video content gets more consumer-grade, less expensive to use and more compelling, it’s going to make it harder for people to even trust their eyes,” she says.

Old as time

Photographic fraud dates back to the 1800s and the invention of the camera. But fraudsters have been trying to deceive others for eons. Riehl says current laws against fraud and forgery can be applied to deepfakes.

People also create fake emails and other documents, Riehl says. In a breach of contract case he worked on, “the other side produced an email that they said, ‘This is where you agreed to the thing,’ and my client said, ‘Oh, I’ve never seen that email before in my life,’ and we used a forensic investigator to say that that person had made up the email and that he had saved it as an Outlook file,” Riehl adds.

Riehl expects the same to happen as deepfake videos are introduced as evidence in court. As a result, he says, a new class of forensic video verification experts may emerge.

Casey points to two court cases from last year in which lawyers claimed real video evidence was deepfaked. In one, Elon Musk’s lawyers suggested that a video of the Tesla CEO entered as evidence might be a deepfake, but it turned out to be authentic, Casey says. In another, a defendant in a Jan. 6 U.S. Capitol riot trial claimed video evidence had been manipulated; that claim, too, was disproved, she adds. Delfino reported in her article that these gambits are called “deepfake defenses” or the “liar’s dividend,” and they sow seeds of doubt in jurors’ minds about which digital images and videos are real and which are fake.

According to Delfino, deepfake allegations have arisen in other cases. In a 2020 Pennsylvania case, a cheerleader’s mother was accused of creating deepfake videos of her daughter’s cheerleading rivals; subsequent reporting indicated that at least one video was authentic and could not have been faked with the technology available when it was recorded. And in a 2019 child custody case in the United Kingdom, a woman produced an audio recording purportedly proving that her husband was abusive—only for the metadata to show that it had been tampered with.

Battling the bots

Right now, Casey says, there are tells that can expose a video or image as a deepfake, such as malformed hands, misplaced jewelry and strange lighting. But as deepfake technology progresses, the naked eye may no longer be able to detect fake images or videos, she says.

One answer to that conundrum would be to “battle the bots with more bots”—that is, employ AI to identify AI in images and videos, Casey says.
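To make the “battle the bots with more bots” idea concrete, here is a minimal, purely illustrative Python sketch of the kind of binary classifier a detection tool might train on labeled real and synthetic video frames. This is not Reveal’s product or any real detector; the architecture, input sizes and labels are assumptions, and a model like this produces noise until it is trained on labeled data.

```python
# Toy sketch of "AI detecting AI": a small binary image classifier
# that scores video frames as real vs. AI-generated. Everything here
# (architecture, tensor sizes, threshold semantics) is illustrative.
import torch
import torch.nn as nn

class DeepfakeScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse each frame to 32 features
        )
        self.head = nn.Linear(32, 1)  # single logit: "how synthetic?"

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x))  # score in [0, 1] per frame

model = DeepfakeScorer()
batch = torch.rand(4, 3, 224, 224)  # four RGB frames of placeholder data
scores = model(batch)               # scores near 1.0 would flag "likely fake"
print(scores.squeeze().tolist())
```

In practice, detectors like this are trained on large corpora of known-real and known-synthetic footage, and they sit in a perpetual arms race with the generators they try to catch, which is one reason detection alone is not considered a complete answer.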

In 2006, the Federal Rules of Civil Procedure were amended to add new rules covering electronically stored information. That change gave rise to the multibillion-dollar e-discovery market, she says, and the same thing could happen with AI.

Google is already working on watermarks to identify deepfake videos and has placed information in the metadata of photos and documents that reveals artificial intelligence created them, says David Graff, Google’s vice president of global policy and standards.

Google, OpenAI, Adobe, BBC, Intel, Microsoft, Sony and others are part of the Coalition for Content Provenance and Authenticity, a project of the nonprofit Joint Development Foundation that aims to increase transparency around digital media as AI-generated content becomes more prevalent.

OpenAI has said it plans to attach content credentials to videos generated by Sora, its text-to-video model.
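The provenance idea can be illustrated with a short Python sketch. Real C2PA content credentials are cryptographically signed manifests verified with dedicated tooling, not plain image metadata; this toy merely reads ordinary EXIF fields with the Pillow library, and the field names it checks are a hypothetical convention, not any vendor’s actual scheme.

```python
# Illustrative sketch only: peek at an image's EXIF metadata for hints
# about how it was made. Real content credentials (C2PA) are signed
# manifests checked with dedicated verification tools; the fields
# inspected below are a hypothetical convention for demonstration.
from PIL import Image
from PIL.ExifTags import TAGS

def provenance_hints(path: str) -> dict:
    """Return metadata fields that might indicate an image's origin."""
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    fields_of_interest = ("Software", "ImageDescription", "Artist")
    return {k: v for k, v in named.items() if k in fields_of_interest}

hints = provenance_hints("photo.jpg")  # "photo.jpg" is a placeholder path
print(hints or "No metadata hints found; absence proves nothing either way.")
```

Note the caveat in the last line: metadata is trivially stripped or forged, which is why the coalition’s approach relies on signed, verifiable credentials rather than bare tags like these.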

Greg Lambert, chief knowledge services officer with Jackson Walker in Houston, says federal and most state laws have not kept pace with deepfake technology. There are exceptions: In March, Tennessee passed the nation’s most comprehensive bill to date aimed at combating the illegal use of AI to impersonate someone. The Ensuring Likeness Voice and Image Security Act, known as the ELVIS Act, protects the voices of songwriters, performers and music industry professionals from the misuse of AI.

On March 21, Rep. Anna G. Eshoo, D-Calif., co-chair of the Congressional Artificial Intelligence Caucus, and Rep. Neal Dunn, R-Fla., both members of the bipartisan Task Force on Artificial Intelligence, introduced the Protecting Consumers from Deceptive AI Act. The bill directs the National Institute of Standards and Technology to create criteria for identifying and labeling AI-generated content through methods such as metadata, watermarking and digital fingerprinting. It also requires AI developers to embed machine-readable disclosures in the audio or visual content their applications produce and obligates online platforms to display those disclosures so AI-generated content is labeled.

“Deception from AI-generated content threatens our elections and national security, affects consumer trust and challenges the credibility of our institutions,” Eshoo said in a news release.

This story was originally published in the August-September 2024 issue of the ABA Journal under the headline: “Real Problem: With generative AI, making deepfake videos has never been easier. Is the legal system ready?”
