Technology

Lawyers learn too late that chatbots aren't built to be accurate; how are judges and bars responding?



When a chatbot is asked to write briefs, it knows that it should include legal citations, but it hasn’t read the relevant caselaw to be accurate, according to Suresh Venkatasubramanian, a computer scientist. Photo illustration by Sara Wadford/ABA Journal.

At least two lawyers who apparently used artificial intelligence chatbots to write legal documents ended up losing their jobs after the output included phony case citations.

The Washington Post has a story on the lawyers and efforts to stave off more problems.

One lawyer, 29-year-old Zachariah Crabill, used ChatGPT to help him write a motion. He realized after filing the document with a Colorado court, however, that several of the case citations were fake.

Crabill apologized to the judge, but he was referred to attorney disciplinary authorities and fired from his Colorado Springs, Colorado, law firm. He has since started his own company and said he would still use AI in writing and research, but only with tools developed specifically for lawyers.

In a second case, a newly hired lawyer at the Dennis Block firm resigned after she apparently used AI to write a legal brief in an eviction case. An opposing attorney noticed that the brief cited nonexistent caselaw, and the judge in the case imposed a $999 penalty.

Suresh Venkatasubramanian, a computer scientist and director of the Center for Technology Responsibility at Brown University, told the Washington Post that he’s not surprised that AI chatbots are making up case citations.

“What’s surprising is that they ever produce anything remotely accurate,” he said.

The chatbots are designed to make conversation and to produce plausible-sounding answers to questions, Venkatasubramanian said. When a chatbot is asked to write a brief, it knows that it should include legal citations, but it hasn't read the relevant caselaw, so it has no way to ensure that those citations are real.
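To make that failure mode concrete, here is a minimal, hypothetical sketch (not from the article, and not any vendor's product) of the kind of safeguard the episode suggests: a script that extracts citation-shaped strings from an AI-generated draft so a lawyer can look each one up in a real legal database before filing. The regex and function names are illustrative assumptions, and the pattern covers only one common citation format.

```python
import re

# Hypothetical sketch: pull citation-shaped strings out of an AI-drafted
# filing so a human can verify each one against a real reporter or legal
# database before filing. The pattern covers only one common format,
# e.g. "Smith v. Jones, 123 F.3d 456 (10th Cir. 1997)", and is a
# simplification: it matches capitalized party names only.
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?:\s[A-Z][\w.'-]*)*"  # first party name
    r"\s+v\.\s+"                          # "v." separator
    r"[A-Z][\w.'-]*(?:\s[A-Z][\w.'-]*)*"  # second party name
    r",\s+\d+\s+[A-Za-z0-9.]+\s+\d+"      # volume, reporter, page
    r"\s+\([^)]*\d{4}\)"                  # court and year
)

def flag_citations_for_review(draft: str) -> list[str]:
    """Return every citation-shaped string found in the draft.

    A chatbot may have invented any of these, so each must be looked up
    by a person before the document is filed with a court.
    """
    return CITATION_RE.findall(draft)

if __name__ == "__main__":
    draft = ("As this court held in Smith v. Jones, 123 F.3d 456 "
             "(10th Cir. 1997), the motion should be granted.")
    for cite in flag_citations_for_review(draft):
        print("VERIFY BEFORE FILING:", cite)
```

A script like this can only flag what to check; it cannot tell a real case from a fabricated one, which is exactly why the human lookup step matters.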

Judges are responding in a variety of ways, the article reports. Some have banned lawyers from using AI, while others require disclosure when it is used.

Bar associations are also responding. The ABA announced the creation of the ABA Task Force on Law and Artificial Intelligence in August. Among the issues that it will explore are the risks of using AI, the use of AI to increase access to justice, and the role of AI in legal education.

The task force chair is lawyer and cybersecurity engineer Lucy Thomson. She told the Washington Post that the ABA hasn't taken a formal position on whether AI should be banned from courtrooms, but task force members are discussing the issue.

“Many of them think it’s not necessary or appropriate for judges to ban the use of AI,” Thomson told the Washington Post, “because it’s just a tool, just like other legal research tools.”

The chair of a State Bar of Texas task force on AI, former Judge John G. Browning, said his group is considering options, such as requiring technology classes for lawyers or creating rules governing when AI-generated material can be used.

See also:

“How ChatGPT and other AI platforms could dramatically reshape the legal industry”
