Artificial Intelligence & Robotics

What cybersecurity threats do generative AI chatbots like ChatGPT pose to lawyers?



“We use ChatGPT differently than the way we use other types of searches, and therefore any vulnerabilities in ChatGPT become exacerbated and are much more likely to lead to the exposure of privileged information,” says Mark D. Rasch, a lawyer and cybersecurity and data privacy expert. Image from Shutterstock.

Lawyers are among the millions who have rushed to ChatGPT, the generative artificial intelligence platform that has taken the world by storm. But after a bug revealed some users’ chat histories and payment data, some attorneys might be thinking, “not so fast.”

After the March leak, OpenAI, the artificial intelligence research company behind the software, told users it could not delete “specific prompts” from users’ histories and warned them not to share sensitive information on the platform. Experts say these chatbots could supercharge a host of other security threats, including phishing, social engineering and malware.

The incident served as a cautionary tale, especially for lawyers and other legal professionals who are already using generative AI chatbots for their work. Mark D. Rasch, a lawyer and cybersecurity and data privacy expert, says there are risks, especially for lawyers who turn to chatbots for research, case strategy or to produce first drafts of motions and other sensitive documents.

Unlike the queries legal professionals run through legacy legal research tools or Google search, the prompts lawyers feed to generative AI chatbots to draft memos, correspondence and court documents can cite the details of a case or the parties to litigation.

“We use ChatGPT differently than the way we use other types of searches, and therefore any vulnerabilities in ChatGPT become exacerbated and are much more likely to lead to the exposure of privileged information,” Rasch says.

Andrew Burt, a lawyer specializing in AI, says the new tech blurs the line between security and privacy. The conversational South Korean chatbot Lee-Luda, introduced in 2020 after training on a massive dataset of messenger conversations, is a prime example of what can go wrong, he says. Lee-Luda ended up sharing people’s names and aliases. The chatbot’s maker, ScatterLab, announced a relaunch in October 2022 using generative AI and said it had put safeguards in place to protect privacy.

The data “ended up going from user to user in ways that fundamentally violated privacy and, in the process, the confidentiality of that data, which is one of the core tenets of information security,” Burt notes.

Scaling attacks

But it’s not just vulnerabilities in the software that security experts are worried about. Cybercriminals and other bad actors now have a tool at their fingertips to make scam-ready content with a sheen of legitimacy. Even the most tech-savvy lawyer could fall victim to phishing emails ChatGPT produces.

The usual telltale signs such as poor grammar and spelling are missing, and the software can create natural-sounding and convincing text, according to Sharon Nelson, the president of digital forensics and cybersecurity company Sensei Enterprises.

“Phishing emails are one of the primary ways you get into a law firm. Now you’re going to see phishing emails that are so believable you don’t know you’re talking with a machine,” Nelson says, adding that when it comes to social engineering, “humans are always the weakest link.”

Frank J. Gillman, a principal of the professional services firm Vertex Advisors Group, agrees social engineering could be the primary threat the tech poses to firms because “the biggest problem at any law firm is humans’ ability to be tricked.”

In late March, the Europol Innovation Lab published a report stating that even before ChatGPT, phishing emails were “already worryingly sophisticated” and that generative AI will make them more “authentic, complex and difficult to discern from human-produced output.” It noted ongoing efforts to develop technology capable of detecting AI-created text.

“At the time of writing of this report, however, the accuracy of known detection tools was still very low,” according to the March 27 report, titled ChatGPT: The Impact of Large Language Models on Law Enforcement.

Nelson says bad actors could use GPT-based software to generate attack code for malware. And because large language models rely on large amounts of training data, they are a “prime target” for hackers.

The Europol Innovation Lab’s report says the technology could allow cybercriminals with little knowledge of coding to create malware and that the newest version of OpenAI’s language model, GPT-4, is already more advanced.

“The newer model is better at understanding the context of the code, as well as at correcting error messages and fixing programming mistakes. For a potential criminal with little technical knowledge, this is an invaluable resource,” the report states.

According to a 2019 white paper, Warning Signs: The Future of Privacy and Security in an Age of Machine Learning, the “poisoning” of machine learning models can occur if a bad actor is “able to insert malicious data into training data in order to alter the behavior of the model at a later point in time.”
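The white paper does not include code, but the idea can be sketched in a few lines. The toy Python example below is purely illustrative and is not drawn from the report or any real attack: it uses a synthetic dataset and a simple classifier, flips a fraction of the training labels before training (a basic form of poisoning) and shows how the tampered model behaves worse on data it later sees.

```python
# Minimal, hypothetical sketch of training-data "poisoning" via label flipping.
# The dataset, model and numbers are invented for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "clean" data standing in for a model's training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Model trained on untampered data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker quietly flips the labels on a slice of the training data.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.25 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

# Model trained on the poisoned data behaves differently at prediction time.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean model accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned model accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

The point of the sketch is only that the attack happens upstream, in the data, long before the model is deployed, which is why the white paper warns that the altered behavior may not surface until “a later point in time.”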

Cybercriminals can also use generative AI tools for social engineering and to dupe victims with deepfake audio and video. Like phishing content, these tools have been around for a long time. In 2019, scammers used AI to clone a German CEO’s voice and persuade a U.K.-based executive to make an urgent transfer of 220,000 euros. Now the technology is even more sophisticated.

Earlier this year, someone posing as a lawyer called a Canadian couple and told them their 39-year-old son was in trouble for killing a U.S. diplomat in a car accident and needed money to pay his legal fees. According to the Washington Post, the scammer used an AI-generated voice to make the parents believe they were talking to their son and persuaded them to send $15,449.

AI research company ElevenLabs offers voice cloning services for free; paid plans start at $5 a month. And John W. Simek, vice president of Sensei Enterprises, says such services only require a sliver of audio to clone someone’s voice.

“The AI is so fast that it can react to whatever validation [a victim is] asking for,” Simek says.

Weak links

Nelson, who provides cybersecurity training to law firm employees, doesn’t expect to tear up the rulebook and start over. But she does believe security experts need to be nimble. Even AI experts are struggling to keep up with the threats.

“If they can’t keep up with it, we all have a problem,” she says.

In the meantime, what can law firms do to protect themselves and their clients? Burt says firms should bolster the traditional security measures they already have in place, such as multifactor authentication.

“AI systems keep challenging old conceptions of things like security and privacy and fairness. But at another level, they just reinforce existing best practices,” Burt says.

Gillman suggests the popularity of the new tech should concern lawyers and law firms that handle their clients’ confidential information.

“For firms that are falling behind, the gap between where they are and where they need to be is widening by the month,” he says.

Gillman says firms must have up-to-date cybersecurity awareness programs and create a “culture of security and privacy” through actionable policies and practices. But firms must also focus their attention on human error and behavior.

“Humans are … involved in more than 80% of data breaches, whether they’ve clicked on a phishing email or they’ve just done something stupid,” Nelson says.
