Artificial Intelligence & Robotics

Law firms moving quickly on AI weigh benefits against risks and unknowns



“If you think about its ability to gather, analyze and summarize lots of data, it’s a huge head start to any legal project,” says DLA Piper partner and data scientist Bennett B. Borden. Image from Shutterstock.

Updated: In the fall of 2022, David Wakeling, head of law firm Allen & Overy’s Markets Innovation Group in London, got a glimpse of the future. Months before the release of ChatGPT, he demoed Harvey, a platform built on OpenAI’s GPT technology and tailored for major law firms.

“As I unpeel the onion, I could see this is pretty serious. I’ve been playing with tech for a long time. It’s the first time the hair stood up on the back of my neck,” Wakeling says.

Soon Allen & Overy became one of Harvey’s earliest adopters, announcing in February that 3,500 lawyers were using it across 43 offices. Then in March, accounting firm PricewaterhouseCoopers announced a “strategic alliance” with the San Francisco-based startup, which recently secured $21 million in funding.

Other major law firms have adopted generative AI products at a breathtaking pace or are developing platforms in-house. DLA Piper partner and data scientist Bennett B. Borden calls the tech “the most transformative technology” since the computer. And it is well suited to lawyers because it can speed up mundane legal tasks, helping them focus on more meaningful work.

“If you think about its ability to gather, analyze and summarize lots of data, it’s a huge head start to any legal project,” says Borden, whose firm is using Casetext’s generative AI legal assistant, CoCounsel, for legal research, document review and contract analysis. (In June, Thomson Reuters announced it had agreed to purchase Casetext for $650 million.)

Yet generative AI is forcing firms to wrestle with the risks of using the new technology, which is largely unregulated. In May, Gary Marcus, a leading expert on artificial intelligence, warned the Senate Judiciary Committee's subcommittee on privacy, technology and the law that even the makers of generative AI platforms "don't entirely understand how they work."

Firms and legal technology companies are confronting the unique security and privacy challenges that come with using the software and its tendency to produce inaccurate and biased answers.

Those worries became obvious after it emerged that a lawyer had relied on ChatGPT for citations in a brief filed in March in New York federal court. The problem? The cases cited did not exist. The chatbot had made them up.

Cautious, proactive

Harvey representatives did not respond to multiple requests for an interview. But Allen & Overy's New York partner Karen Buzard says that to guard against inaccuracies and bias, the firm has a robust training and verification program, and lawyers are greeted with "rules of use" before using the platform.

“Whatever level you are—the most junior to the most senior—if you’re using it, you have to validate the output or you could embarrass yourself,” Wakeling says. “It’s really disruptive, but hasn’t every big technological change been disruptive?”

But other law firms are more wary. In April, Thomson Reuters surveyed midsize and large law firms on their attitudes toward generative AI and suggested a majority are "taking a cautious, yet hands-on approach." It found 60% of respondents had no "current plans" to use the technology. Only 3% said they are using it, and just 2% are "actively planning for its use."

David Cunningham, chief innovation officer at Reed Smith, says his firm is being proactive as it looks at generative AI. The firm is currently piloting Lexis+ AI and CoCounsel and will try Harvey in the summer and BloombergGPT when it comes out.

“I wouldn’t say we’re being more conservative,” Cunningham says. “I would say we’re being more serious about making sure we’re doing it with guidance and policy and training and really focused on the quality of the outputs.”

He says the law firm’s pilot program is focused on commercial systems where the firm knows “the guardrails, we know the security, we know the retention policies” and “we know the governance issues.”

“The reason we’re moving cautiously is because the products are immature. The products are not yet yielding the quality, reliability, transparency and consistency that we would expect a lawyer to depend on,” he says.

Pablo Arredondo, co-founder and chief innovation officer at Casetext, says there is a stark difference between "generic chatbots" like ChatGPT and CoCounsel, which is built on OpenAI's large language model GPT-4 but trained on legal-focused datasets, and whose data is secured, monitored, encrypted and audited.

He understands why some are taking a more cautious approach but predicts the benefits will soon be “so palpable and undeniable I think you’ll see an increase in the rate of adoption.”

New rules

Meanwhile, regulators are playing catch-up. In May, OpenAI CEO and co-founder Sam Altman urged lawmakers in Congress to regulate the technology. He initially said OpenAI could pull out of the European Union because of the proposed Artificial Intelligence Act, which included requirements to prevent illegal content and to disclose the copyrighted works used to train generative AI platforms.

In October, the White House released a Blueprint for an AI Bill of Rights, which includes protections against "unsafe or ineffective" AI systems; algorithms that discriminate; practices violating data privacy; a system of notification so people know how AI is being used and its impacts; and the ability to opt out of AI systems altogether.

In January, the National Institute of Standards and Technology released an AI Risk Management Framework to promote innovation while helping organizations create trustworthy AI systems by governing, mapping, measuring and managing the risks.

But the public had to wait until June for Senate Majority Leader Chuck Schumer to outline a much-awaited strategy for regulating the technology. He introduced a framework for regulation and said the Senate would hold a series of forums with AI experts before formulating policy proposals. Then the Washington Post reported in July that the Federal Trade Commission was investigating OpenAI’s data security practices and whether it had harmed consumers.

All the same, DLA Piper partner Danny Tobey argues there is a danger of overregulation because of scaremongering and misconceptions about how advanced the technology is.

“I worry about regulations that become obsolete before they’re even enacted or stifle innovation and creativity,” he says.

However, speaking to lawmakers in May, Marcus said AI systems must be bias-free and transparent, must safeguard privacy and, "above all else be safe."

“Current systems are not transparent, they do not adequately protect our privacy, and they continue to perpetuate bias,” Marcus said. “Most of all, we cannot remotely guarantee they are safe.”

Others are calling for a halt to the development of large language models until the risks are better understood. In March, the technology ethics group the Center for AI and Digital Policy filed a complaint with the FTC asking it to stop further commercial releases of GPT-4. The complaint followed an open letter signed by thousands of tech experts, including SpaceX, Tesla and Twitter CEO Elon Musk, calling for a six-month pause in the development of generative AI language models more powerful than GPT-4.

Ernest Davis, a professor of computer science at New York University, was among those who signed the letter and believes a moratorium is a “very good idea.”

“They’re releasing software before it’s ready for general use just because the competitive pressures are so enormous,” he says.

But Borden says there is "no global authority" or worldwide governance of AI, so even if a freeze were a good idea, "it's not possible."

“Hitting pause on AI is like hitting pause on the weather,” Tobey adds. “We have an imperative to innovate because countries like China are doing it at the same time. That said, companies and industries have a role to play in shaping their own internal governance to make sure that these tools are adopted safely, just like any other tool.”

Updated July 20 at 11:20 a.m. to include additional reporting and information on the Federal Trade Commission’s investigation into OpenAI and Senate Majority Leader Chuck Schumer’s announcement on a framework for regulation. Updated Aug. 9 at 11:23 a.m. to reflect that Allen & Overy announced in February that 3,500 lawyers were using Harvey across 43 offices.
