LAW SCRIBBLER

As deepfakes make it harder to discern truth, lawyers can be gatekeepers

Jason Tashea


"We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things," former President Barack Obama announced in a video PSA for BuzzFeed News last year.

Calling attention to deepfakes, a technology that uses AI to manipulate images and video, Obama went on to reference Black Panther and say that U.S. Secretary of Housing and Urban Development Ben Carson “is in the Sunken Place,” an allusion to the movie Get Out.

After Obama insulted President Donald Trump, a viewer could be excused for thinking that the ex-president was relaxing in retirement and watching a lot of Netflix.

“Now, I would never say these things—at least not in a public address—but someone else would, someone like Jordan Peele,” he continued, noting the actor-director known for his Obama impersonation.

The video then splits the screen to show Obama and Peele side by side, and it becomes apparent that Peele has been the one talking all along, putting words in Obama’s mouth thanks to deepfake technology.

“Moving forward we need to be more vigilant about what we trust from the internet,” said both men in unison. “It may sound basic, but how we move forward in the Age of Information is going to be the difference between whether we survive or whether we become some f—-ed up dystopia.”

Levity aside, the warning from Obama-Peele is serious. Deepfakes, also called “AI-synthesized fakes,” are rapidly evolving and proliferating. Many websites have banned the technology, and new forensic tools are being developed to root out fakes. Meanwhile, lawmakers are pushing for new regulations, even as many lawyers argue that existing law can already manage illegal uses of the emerging technology.

At its core, a deepfake is what happens when neural networks, a type of AI, are merged with image, video or audio manipulation. Think of it as Photoshop on steroids.
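At a high level, many early face-swap tools paired one shared encoder, trained on footage of both people, with a separate decoder per identity; feeding person A’s encoding into person B’s decoder produces the swap. The sketch below illustrates that structure in PyTorch. The layer sizes, the 64x64 input and every name in it are illustrative assumptions rather than details of any particular deepfake tool, and a real system would use convolutional networks trained on thousands of frames.

```python
# Minimal sketch (assumed architecture, not any specific tool): a shared
# encoder plus one decoder per identity, the structure behind many face swaps.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # a flattened 64x64 RGB face crop (illustrative size)
LATENT = 256       # size of the shared face representation

def decoder():
    return nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(),
                         nn.Linear(1024, IMG), nn.Sigmoid())

encoder = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(),
                        nn.Linear(1024, LATENT))
decoder_a, decoder_b = decoder(), decoder()  # trained on faces A and B

# The swap: encode a frame of person A, then reconstruct it as person B.
face_a = torch.rand(1, IMG)          # stand-in for a real face crop
fake_b = decoder_b(encoder(face_a))  # untrained here; illustration only
print(fake_b.shape)                  # torch.Size([1, 12288])
```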

In its most common usage, the technology allows one person’s face to be superimposed onto another’s. This has received the most attention in the form of adult videos in which celebrities’ or average people’s faces are swapped onto the performers in the scene.

And it’s not just used on faces. Emma Gonzalez, a survivor of the 2018 Parkland shooting in Florida, was featured in a video for Teen Vogue in which she ripped up an image of a gun range target. In short order, a doctored version that replaced the bull’s-eye with a copy of the Constitution made its rounds on social media in an attempt to discredit the gun control advocate.

Receiving less attention, the same technology can be used to generate synthetic fingerprints capable of defeating biometric security features, according to recent research from New York University and Michigan State University.

Collectively, this technology could continue to erode people’s trust in information.

“I think the deepfakes issue goes to what is truth and what is fact in a post-truth and post-fact world,” says Damien Riehl, vice president at Stroz Friedberg in Minneapolis.

While the technology is novel and the specter of fake information is ever-present, this problem is not entirely new, explains Sam Gregory, program director at Witness, a nonprofit that teaches human rights advocates to use video.

Using the term “shallow fake,” Gregory says he has seen people relabel the same video of a public lynching, for example, to be shared in locations as diverse as the Republic of Georgia, Myanmar and South Sudan, all with an intent to incite local violence.

To root out fakes, his organization relies on traditional vetting, like comparing the video to location, sun and weather records, and modern approaches, like analyzing the file’s metadata, to verify a photo or video.
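As a small illustration of the metadata step, the EXIF tags embedded in a photo record when and how it was captured, fields an investigator can cross-check against those location, sun and weather records. The Python sketch below assumes the Pillow imaging library and a hypothetical filename.

```python
# Print a few EXIF fields worth cross-checking against external records.
# Assumes the Pillow library; "suspect_photo.jpg" is a hypothetical file.
from PIL import ExifTags, Image

def exif_summary(path):
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        if name in ("DateTime", "Make", "Model", "Software"):
            print(f"{name}: {value}")  # capture time and device details

exif_summary("suspect_photo.jpg")
```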

As deepfake tools evolve, however, there is a concern that fakes will be more numerous and harder to spot.

“The quality of many deepfake-generated videos makes it relatively easy to detect a manipulation without requiring an extensive forensic investigation,” explains Matt Turek, a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA), over email. As the technology progresses, however, “detecting the manipulations will require more sophisticated technologies and forensic techniques.”

As part of a four-year project called the Media Forensics program, DARPA is developing these very tools to automate the assessment of media en masse. The end goal is to point out inconsistencies at the pixel level, flag incongruities with the laws of physics and compare videos against external information.
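DARPA has not published those tools, but one classic pixel-level check, error level analysis, hints at how they can work: resaving a JPEG at a known quality and diffing the result against the original makes regions edited after the last save stand out, because they recompress differently. The sketch below is a textbook illustration of that idea, not anything from the Media Forensics program; it assumes the Pillow library and hypothetical filenames.

```python
# Error level analysis: recompress a JPEG and amplify the difference.
# Edited regions often show a distinct residual. Filenames are hypothetical.
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # resave in memory
    diff = ImageChops.difference(original, Image.open(buf))
    return diff.point(lambda px: min(255, px * 20))  # brighten residual

error_level_analysis("suspect_photo.jpg").save("ela_map.png")
```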

Although emerging technologies often create gray areas that confound judges and lawyers, U.S. law may actually be ready for legal conflicts arising from deepfakes.

“I think, generally, that certain uses of deepfakes are going to be actionable under existing law,” says David Greene, the civil liberties director at the Electronic Frontier Foundation. In a blog post on EFF’s website and in conversation with the Journal, he added that misappropriation of someone’s likeness through a deepfake could be actionable under civil extortion and defamation law, much as with traditional photo manipulation. Criminal fraud and harassment statutes are also potential tools.

Even so, state and federal lawmakers want to create new sanctions.

During the previous session of Congress, a bill was introduced in the Senate to criminalize the malicious creation and dissemination of deepfakes, but the chamber did not act on it before the session ended. According to Axios, Sen. Ben Sasse of Nebraska, the bill’s sponsor, plans to reintroduce the legislation. Meanwhile, in New York, a state lawmaker has put forward a bill to regulate a “digital replica” of a person through a new right to privacy, which has received pushback from First Amendment and media advocates.

At the practice level, Tara Vassefi, legal officer at TruePic, a digital forensics company, told the Journal that deepfakes will likely increase the cost of litigation because new forensic techniques and expert witnesses aren’t cheap.

However, she believes that recent changes to the Federal Rules of Evidence (FRE) might inadvertently help lawyers manage this moment.

In a blog post, Vassefi argued that amended FRE 902, which covers self-authenticating evidence, “speeds up the process and lowers the cost of using digital evidence”—including video.

While she says amended FRE 902(13) and 902(14) were not intended for deepfakes, she reasons that these rules, which apply broadly to electronic information, can control costs while keeping manipulated photos out of legal proceedings through the use of certified forensic technologies.

“However, to date, lawyers are either unaware or not taking advantage of these amendments and only a handful of cases have drawn on the new rules,” she writes.
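Rule 902(14)’s certification turns on “digital identification,” most commonly a hash comparison: if the hash of the copy offered in evidence matches the hash recorded when the file was collected, the data has not changed. A minimal sketch of that comparison follows; the filenames are hypothetical.

```python
# Compare a collection-time hash with the exhibit copy, the kind of check a
# qualified person certifies under FRE 902(14). Filenames are hypothetical.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

original = sha256_of("bodycam_master.mp4")  # hash taken at collection
exhibit = sha256_of("bodycam_exhibit.mp4")  # hash of the courtroom copy
print("match" if original == exhibit else "MISMATCH: file was altered")
```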

While lawyers and policymakers still need educating as technologists continue to grapple with the technology, attorneys may see a familiar role for themselves in battling deepfakes, according to Riehl at Stroz Friedberg.

“[Lawyers] have a particularly important role to help our fellow citizens—to help everybody—distinguish truth from fakery,” he says. “So, this is just another thing we will have to be diligent on as a duty to our profession.”


Jason Tashea is the author of the Law Scribbler column and a legal affairs writer for the ABA Journal. Follow him on Twitter @LawScribbler.

Correction: The spelling of Damien Riehl’s last name was corrected in two places on Feb. 26.
