Judge finds out why brief cited nonexistent cases—ChatGPT did research
A federal judge in New York City has ordered two lawyers and their law firm to show cause why they shouldn’t be sanctioned for submitting a brief with citations to fake cases, thanks to research by ChatGPT.
Senior U.S. District Judge P. Kevin Castel of the Southern District of New York said in a May 4 order the firm’s legal filing was “replete with citations to nonexistent cases.”
When Castel ordered one of the lawyers to submit an affidavit with the cited opinions, he complied—but six of the decisions “appear to be bogus” with “bogus quotes and bogus internal citations,” Castel said.
The fake cases were provided by ChatGPT, according to a May 25 affidavit by lawyer Steven A. Schwartz of Levidow, Levidow & Oberman. He has been practicing law in New York for more than 30 years.
“Affiant has never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false,” Schwartz wrote.
ChatGPT had assured Schwartz that the cases that it cited were real “and can be found in reputable legal databases, such as LexisNexis and Westlaw,” according to queries and answers Schwartz submitted to the court.
Another lawyer who signed Schwartz’s brief, Peter LoDuca, was not aware of Schwartz’s research method, Schwartz said. LoDuca became attorney of record after the case was removed to the Southern District of New York, where Schwartz has not obtained admission.
The show cause hearing is scheduled for June 8, according to a May 26 order by Castel.
Publications covering the case include the New York Times and the Volokh Conspiracy (here and here), which links to a case page from CourtListener.
Schwartz did not immediately reply to the ABA Journal’s request for comment, made by email and voicemail. LoDuca told the ABA Journal that he doesn’t have any comment at this time.
Schwartz and LoDuca represent the plaintiff Roberto Mata in a lawsuit against airline Avianca Inc. Mata said he was injured when he was struck by a metal serving cart.
“The real-life case of Roberto Mata v. Avianca Inc. shows that white-collar professions may have at least a little time left before the robots take over,” according to the New York Times.
The Volokh Conspiracy pointed out that some litigants representing themselves are also using ChatGPT.