Lawyers who 'doubled down' and defended ChatGPT's fake cases must pay $5K, judge says
A federal judge in New York City has ordered two lawyers and their law firm to pay $5,000 for submitting a brief with fake cases made up by ChatGPT and then standing by the research.
In a June 22 decision, U.S. District Judge P. Kevin Castel of the Southern District of New York imposed the penalty on Peter LoDuca, Steven A. Schwartz and the firm Levidow, Levidow & Oberman.
The lawyers will also have to send letters to each judge falsely identified as the author of six nonexistent opinions. The letter must include a copy of Castel’s opinion imposing the sanctions, the fake opinion attributed to the judge and a copy of an April 25 affirmation that continued to cite the cases.
The lawyers had cited the fake cases in a March 1 court submission. According to Castel, the “record now would look quite different” if the matter had ended with the lawyers “coming clean about their actions” shortly after their opponent’s brief questioned the existence of the cases or after court orders required production of the fake cases.
Instead, Castel wrote, the lawyers “doubled down and did not begin to dribble out the truth until May 25”—after Castel issued an order to show cause why one of the lawyers shouldn’t be sanctioned.
At the sanctions hearing, Schwartz testified that he was “operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own.”
Levidow, Levidow & Oberman uses Fastcase but does not have Westlaw or LexisNexis accounts. At the time, the firm’s Fastcase account had limited access to federal cases, Schwartz told the court. Schwartz said he had heard of “this new site which I assumed—I falsely assumed was like a super search engine called ChatGPT, and that’s what I used.”
“My reaction was, ChatGPT is finding that case somewhere,” Schwartz testified. “Maybe it’s unpublished. Maybe it was appealed. Maybe access is difficult to get. I just never thought it could be made up.”
LoDuca and Schwartz were representing Roberto Mata, who alleged that he was injured when a metal serving cart struck his left knee during a flight from El Salvador to John F. Kennedy International Airport. The defendant, Avianca Inc., removed the case to federal court. LoDuca filed a notice of appearance in the case, even though Schwartz performed all the substantive legal work, because Schwartz is not admitted to practice in the federal district court.
Avianca filed a motion to dismiss on the ground that the lawsuit was filed too late under the Montreal Convention.
Schwartz’s first prompt to ChatGPT was, “Argue that the statute of limitations is tolled by bankruptcy of defendant pursuant to Montreal Convention.” ChatGPT responded with broad descriptions of the Montreal Convention, statutes of limitations and the federal bankruptcy stay. It then asserted that the statute of limitations under the Montreal Convention is tolled by a bankruptcy filing. It did not cite caselaw backing up that statement.
ChatGPT supplied fake cases in response to prompts such as, “Show me specific holdings in federal cases where the statute of limitations was tolled due to bankruptcy of the airline.” The chatbot provided only excerpts from the fake cases.
Castel found bad faith by both lawyers “based upon acts of conscious avoidance and false and misleading statements to the court.”
LoDuca cited the fake cases in a March 1 affirmation that opposed dismissal. The affirmation was actually written by Schwartz. LoDuca did not review the cases cited, relying on the belief that his colleague of more than 25 years would produce reliable work.
Avianca’s reply memo said it was unable to find most of the cases cited in the affirmation, and the few cases that it did find “do not stand for the propositions for which they are cited.”
As it turned out, Schwartz “had used ChatGPT, which fabricated the cited cases,” Castel said.
In orders issued April 11 and 12, the judge directed LoDuca to file an affidavit that included copies of nine cited opinions. LoDuca sought a deadline extension, telling the court that he was on vacation. In fact, Schwartz was the lawyer who was out of the office.
LoDuca filed an April 25 affidavit with copies of purported excerpts from all but one of the decisions. Schwartz actually prepared the court filing. The affidavit said the excerpts were “what is made available by online database.” The database wasn’t identified.
Schwartz did not have prior experience regarding the Montreal Convention and bankruptcy stays.
“Indeed, at the sanctions hearing,” Castel wrote, “Mr. Schwartz testified that he thought a citation in the form ‘F.3d’ meant ‘federal district, third department.’”
In one fake opinion referred to as “Varghese,” the legal analysis “is gibberish,” Castel wrote. It contained internal citations and quotes from other nonexistent cases. Other cases cited in “Varghese” are real, but they do not stand for the propositions for which they are cited.
Schwartz stated in a May 25 affidavit that he used ChatGPT to “supplement” his research, but in reality, he turned to ChatGPT after Fastcase was not helpful, Castel said.
Castel said that, in determining the sanctions, he “weighed the significant publicity” surrounding the case and the sincerity of the lawyers “when they described their embarrassment and remorse.” The lawyers have no history of disciplinary action, “and there is a low likelihood that they will repeat the actions” described in the case.
Levidow, Levidow & Oberman gave a statement to the New York Times and Law.com that said it fully intends to comply with Castel’s order, but it respectfully disagrees with the finding that lawyers at the firm had acted in bad faith.
“In the face of what even the court acknowledged was an unprecedented situation, we made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” the firm said.
Other publications covering the sanctions include the Volokh Conspiracy, Reuters and Law360.
Several publications note that Castel dismissed Mata’s suit on statute of limitations grounds.