Technology

AI certification initiatives could prove very useful to the legal industry, experts say

Image from Shutterstock.com.

Artificial intelligence is supposed to be a countervailing force against human errors and biases. But AI-enhanced tools are only as good as the data they rely on.

Take, for example, products that assist companies with recruiting, screening and hiring candidates. According to Reuters, Amazon’s AI-driven recruiting program, which rated candidates for software developer and other technical jobs, was supposed to be an efficient, automated engine for pinpointing the best resumés.

One problem: It reportedly disfavored female applicants because it looked at patterns in resumés submitted to Amazon over a 10-year period in which the applicants were predominantly male. The company ultimately scrapped its program, Reuters reported.
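The dynamic Reuters described is straightforward to reproduce in miniature. The sketch below is purely illustrative, not Amazon’s actual system: it trains an ordinary logistic regression classifier on invented “historical hiring” data in which men were favored, and a resumé feature that merely proxies for gender ends up with a negative weight.

```python
# A minimal, hypothetical sketch of how a model trained on skewed
# historical data absorbs that skew. All data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

is_female = rng.random(n) < 0.1        # past applicants: ~10% women
skill = rng.normal(0, 1, n)            # true qualification, gender-neutral

# Assume past hiring decisions favored men regardless of skill.
hired = (skill + 1.5 * (1 - is_female) + rng.normal(0, 1, n)) > 1.5

# A resumé feature that merely proxies for gender,
# e.g. membership in a women's professional group.
womens_group = is_female & (rng.random(n) < 0.8)

X = np.column_stack([skill, womens_group])
model = LogisticRegression().fit(X, hired)

# The proxy feature gets a negative weight: the model has
# "learned" the historical bias, not candidate quality.
print("coefficients [skill, women's-group proxy]:", model.coef_[0])
```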

When AI systems produce problematic results, the companies that implemented them are not the only entities that could face liability. So, too, could lawyers who recommended deploying specific AI tools without properly advising clients of the risks of doing so.

Gillian Hadfield.

University of Toronto law professor Gillian Hadfield says she suspects many attorneys currently “would have a hard time giving much of an answer about what the risks are” for clients wading into the AI waters amid a lack of extensive government regulation.

However, Hadfield hopes an initiative she is helping to lead, aimed at creating a global certification mark for trustworthy AI systems, could help attorneys offer clients more substantive advice about using AI tools. She says the project could also help law firms determine which AI technologies to use internally.

The AI certification project is a partnership between the University of Toronto’s Schwartz Reisman Institute for Technology and Society, which Hadfield directs, and AI Global, a nonprofit focused on advancing responsible and ethical AI.

The overarching goal of the effort, launched late last year, is to build an international framework for designating AI systems across a wide range of industries as responsible, ethical and fair. Proponents argue a third-party certification regime is needed to limit the harms that AI technologies can produce, including bias and privacy breaches.

“We know there are a lot of potential risks and issues to address with AI that are new on the horizon,” Hadfield says. “One of the things we are hoping for with a global certification mark is both to develop some of the techniques for evaluating AI systems and also provide an independent third-party verification process.”

Members of the legal industry say the University of Toronto/AI Global certification project is one of several similar efforts underway that could benefit law firms implementing AI tools or advising their clients on how to do so.

The IEEE Standards Association is among the other organizations active in the realm of certifying AI systems. The association announced in February 2020 it had completed the first phase of work for its Ethics Certification Program for Autonomous and Intelligent Systems.

Miriam Vogel, president and CEO of EqualAI, says her organization also hopes to eventually launch a certification program for responsible AI governance.

In the interim, EqualAI is seeking to have companies and organizations across industries publicly pledge to take steps to reduce bias in AI. In January, Sheppard, Mullin, Richter & Hampton announced it was the first law firm to take the pledge.

EqualAI also offers a Continuing Legal Education program for law firms and lawyers wanting to learn more about the role they can play in ensuring AI is used responsibly. Vogel, a former U.S. associate deputy attorney general and White House advisor, says attorneys “can be a key ingredient in solving for bias in AI” but have not yet stepped up to the degree she would like to see.

Jenn Betts, co-chair of the technology practice group at Ogletree, Deakins, Nash, Smoak & Stewart, is among the lawyers who would like to see AI certification initiatives move from concept to reality.

She says a certification system would be particularly helpful amid the growing number of AI-reliant technologies in law and other industries.

“I think there is a lot of value in having a gold standard that any company, law firm or other organization can rely on,” Betts says. “That doesn’t really exist right now.”

Ali Shahidi.

Ali Shahidi, chief innovation & client solutions officer at Sheppard Mullin, says he also supports efforts to independently test and certify AI systems.

His firm and Ogletree Deakins have both deployed Casetext’s automated legal-writing product called Compose, which features the company’s CARA artificial intelligence technology.

Shahidi says Sheppard Mullin already reviews whether the AI products it is considering deploying are free of bias, accurate and fair. However, he says a well-regarded AI certification program would assist in making such determinations.

“We would still do some diligence, but having the certification will give us a higher degree of confidence that the AI solution is sound and safe to use,” Shahidi says.

Anthony E. Davis, of counsel at Clyde & Co in New York City, says a third-party AI certification system could also help lawyers comply with their ethical obligation of technological competence. This duty requires lawyers to possess knowledge of how to use technology, as well as how to select the appropriate tools to deploy.

Davis says an AI certification program could potentially serve as “a basis for establishing the reasonableness of the choice” a lawyer made in recommending a particular AI solution to clients.

Legal technology companies are also closely monitoring the ongoing discussions about AI certification, according to Dave Lewis, Reveal-Brainspace’s executive vice president for AI research, development, and ethics.

Reveal-Brainspace offers an AI-powered e-discovery platform, and Lewis says it would ultimately be a business decision as to which AI certifications the company may seek.

“There is a whole set of competition going on out there over who is going to be the one that puts ‘Safe AI’ stamps on things,” Lewis says. “I think we are in very early days of understanding how that’s going to shake out.”

The certification mark project Hadfield is helping to lead set out a 12-to-18-month timeline when it formally launched in December.

The initiative’s leaders are coordinating with the World Economic Forum to bring together members of industry, policymakers, civil society representatives and academics to build internationally agreed-upon standards for AI, according to Hadfield. She hopes attorneys will be among the participants in those discussions.

“We will definitely need lawyers involved in the business of thinking through what certifications would be valuable because lawyers play such a key role in risk management in organizations,” Hadfield says.
