ABA Techshow

How accurate is AI in legal research?



Thomas Hamilton and co-panelist Damien Riehl offer insight during “Open the Pod Bay Doors: Problems with AI.” Photo by Saverio Truglia.

Humans can make mistakes, but so can machines. If we use artificial intelligence for legal work, what sort of quality is needed?

A Friday ABA Techshow panel at the Hyatt Regency Chicago titled “Open the Pod Bay Doors: Problems with AI” discussed this question in detail.

“What’s the goal? Is the goal to do something quickly? If the goal is to do something better than a human, which human? Does the technology have to be better than a first-year associate, or a 20-year partner? Where is the threshold where we say technology is finally good enough that we could use it?” asked panelist Damien Riehl, managing director of the Fastcase Legal Research Platform.

Thomas Hamilton, vice president of strategy and operations at ROSS Intelligence, also spoke on the panel. Like Riehl, Hamilton is an attorney.

“Lawyers have an exceptionalism fallacy, and they’re trained to do things completely perfect,” he said, mentioning legal research. “Lawyers are far better at research than most humans, but that doesn’t mean we’re good at it. It means we’re less horrible at it than other humans.”

That being said, Hamilton argued there may come a time when courts demand lawyers use artificial intelligence to research arguments. He noted how quickly lawyers have gone from books to computer programs to online services for legal research.


“I think you would be pretty hard-pressed to go before a judge, have a bunch of holes in your arguments, and say to the judge, ‘I’ve got two research books, and those were the only ones I used,’” Hamilton said. “The day will come when it becomes harder and harder to get away with not using AI.”

But at present, building a bias-free artificial intelligence product is complicated, no matter the form it takes, according to Riehl and Hamilton. They discussed scenarios involving machine learning and natural language processing, using gender stereotypes in advertising as an example.

“If you put advertisements from the 1950s backward through a machine-learning algorithm, you would say that is the way the world has been seen, which is different than the way we want our world to be,” Riehl said.

If you were working on that sort of algorithm, he added, the next questions would be how to fix the bias issue and who decides how to fix it.
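The panelists’ point about historical data is easy to see in miniature. The toy sketch below (hypothetical data and function names, not anything demonstrated at the panel) builds a trivial word-association “model” from a few invented 1950s-style ad sentences; whatever stereotypes sit in that text are exactly what the model learns and repeats.

```python
# Illustrative sketch only: a toy frequency "model" trained on invented
# period-style ad copy. The data is hypothetical; the point is that the
# model can only reflect the associations present in its training text.
from collections import Counter, defaultdict

training_ads = [  # hypothetical 1950s-style copy, not real ads
    "she keeps the kitchen spotless with our new cleaner",
    "the busy housewife loves how she saves time every day",
    "he provides for the family with a good office job",
    "the doctor he recommends this brand to his patients",
]

# Count how often each word appears alongside a gendered pronoun.
association = defaultdict(Counter)
for ad in training_ads:
    words = ad.split()
    pronouns = {"she", "he"} & set(words)
    for word in words:
        if word in {"she", "he"}:
            continue
        for pronoun in pronouns:
            association[word][pronoun] += 1

def learned_association(word):
    """Return the pronoun most associated with a word in the training data."""
    counts = association.get(word)
    return counts.most_common(1)[0][0] if counts else "unknown"

# The "model" simply echoes the historical bias in its inputs.
print(learned_association("kitchen"))  # -> 'she'
print(learned_association("doctor"))   # -> 'he'
```

A model like this has no notion of how the world ought to look; fixing the output means someone deciding to change the data or the rule, which is the governance question Riehl raised.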

A related issue arises in designing algorithms for automated sentencing, according to Hamilton. The technology would likely be built from prior cases, he added, and courts historically have been prejudiced against defendants of color.

“I would say ‘don’t use them.’ If there’s something available that’s extremely easy to use, and you can run it as some sort of red flag, there’s some value on that. But decision-making by a judge—I personally don’t think that sort of thing should ever be automated,” Hamilton said.
