Law Scribbler

Be competent in AI before adopting and integrating it into your practice

Jason Tashea and Nicolas Economou

When the ABA added the duty of technology competence to the Model Rules of Professional Conduct in 2012, in Comment 8 to Rule 1.1, lawyers were propelled headlong into a complex world of fast-changing technology.

Perhaps the most challenging area to understand is artificial intelligence. Not only must lawyers advise their clients on the legal risks associated with AI; they increasingly need to evaluate whether and when to include AI technologies in their own practices.

Today, 36 states have adopted a duty of technology competence, yet for many lawyers, understanding AI remains an ongoing challenge. Helping lawyers meet these standards was part of our work with the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a multiyear, international, multidisciplinary effort focused on the ethics of AI. That work produced a first-of-its-kind set of comprehensive principles for the ethical adoption of AI in legal systems: accountability, effectiveness, transparency and, yes, competence.

Competence is a key principle for the trustworthy adoption of AI in the law. It does not, however, require attorneys to be experts. Rather, competence in this subject matter is as much about personal education as it is about knowing one's limits, asking for help and demanding more transparency from software developers and vendors.

Why competence?

The chapter on AI in legal systems in the initiative's multidisciplinary report differs from the others in its emphasis on competence. That is because the degree to which a lawyer maintains technology competence can have a direct impact on other rules of professional responsibility.

For example, Model Rule 1.4(b) (Communications) or Rule 1.5 (Fees) is triggered when a lawyer explains how AI-enabled software may affect client representation or fees. Informing the court and opposing counsel of the completeness of e-discovery implicates Rules 3.3 (Candor to the Tribunal) and 3.4 (Fairness to Opposing Party & Counsel). A firm considering a new hiring algorithm to vet candidates runs the risk of discrimination and may come up against Rule 8.4(g) (Misconduct). The outcome of these potential ethical conflicts depends not just on understanding what a technology can and cannot do; it also rests on the extent to which the operator of the AI (for example, an attorney using machine learning in discovery to find responsive documents) has actually accomplished the intended objective.

For lawyers, AI competency can be confounding because understanding what an AI tool can do, operating it effectively and determining whether the desired objective was met all require specialized knowledge in, at the least, computer science and statistics.

Take, for example, technology-assisted review (TAR), which relies on various types of AI to identify relevant data in litigation. Between 2006 and 2011, the National Institute of Standards and Technology (NIST) conducted a series of studies to assess its effectiveness.

While results varied widely among participants, two processes that demonstrated “better than human” performance were designed and executed by teams that combined lawyers with scientifically trained experts in fields such as computer science, linguistics and statistics. (Disclosure: One of us took part in these trials.) These results suggest that multidisciplinary expertise is important to the effective execution of a TAR process.
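
For readers unfamiliar with how such studies score effectiveness, the two standard measures are recall and precision, stated here as general background rather than as the studies' exact protocol:

\[
\text{recall} = \frac{\text{responsive documents retrieved}}{\text{all responsive documents in the collection}},
\qquad
\text{precision} = \frac{\text{responsive documents retrieved}}{\text{all documents retrieved}}
\]

High recall means a process found most of what is responsive; high precision means that little of what it flagged is irrelevant. A “better than human” process outperforms manual review on measures such as these.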

For legal applications of AI to be fruitful, lawyers must therefore understand which tasks AI-enabled technologies and their associated scientific processes can perform better, and under what circumstances. But the need for technical expertise was underscored even more powerfully by how poorly the participating teams generally performed at measuring the accuracy of their TAR: Their actual accuracy tended to be substantially different from the accuracy they estimated they had achieved.

Reliably measuring accuracy requires sound statistical methods, which are rarely part of the law school curriculum. A discrepancy between estimated and actual accuracy therefore points to the subpar application of statistics, and likely to a failure to recognize the need for statistical expertise in the first place.
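
To make the statistical point concrete, here is a minimal sketch in Python, with hypothetical numbers, of one common validation step: estimating from a random sample how many responsive documents a TAR process missed, together with the confidence interval that a bare point estimate hides. The wilson_interval helper and all figures are illustrative assumptions, not any participating team's actual protocol.

```python
# Hypothetical validation step: counsel reviews a random sample of the
# documents a TAR process marked non-responsive, counts how many are in
# fact responsive, and derives a 95% Wilson confidence interval for the
# miss rate. Illustrative only; not the NIST/TREC methodology.
import math

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion estimated from a sample of n."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

sample_size = 400       # documents drawn at random from the "non-responsive" pile
responsive_found = 12   # of those, how many turned out to be responsive
low, high = wilson_interval(responsive_found, sample_size)
print(f"Point estimate of the miss rate: {responsive_found / sample_size:.1%}")
print(f"95% confidence interval: {low:.1%} to {high:.1%}")
# Prints a 3.0% point estimate with an interval of roughly 1.7% to 5.2%:
# the true miss rate could plausibly be almost double the estimate.
```

The point estimate looks precise on its own; the interval shows how uncertain it really is. Knowing that, and knowing how to tighten the interval with a larger or better-designed sample, is precisely the statistical expertise the trials found wanting.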

The takeaway, not surprisingly, is that legal prowess and good intent do not by themselves add up to the competence needed to effectively evaluate, operate or measure the efficacy of algorithmic tools.

Even though TAR is increasingly common, lawyers still make inadvertent misrepresentations about what AI tools have accomplished, both because they rarely understand how the tools work and because they often do not know what competencies are needed to operate the tools effectively and assess their efficacy.

An understanding of AI training methods, of the factors that introduce bias and of the statistical likelihood that a tool's results do not apply would all be useful, but none of this is in the purview of the average lawyer or judge.

Understanding the concepts

The science underpinning effective and measurable AI results is not for the faint of heart. It is governed by computer science and statistics, complex academic disciplines in which lawyers are generally untrained and cannot become experts on the fly. Yet without those disciplines, lawyers cannot know whether what they are unleashing enhances or hinders the legal system. The growing complexity of AI, the scientific nature of the domain and its increasing pervasiveness across the legal system only make the challenge more formidable.

This is why, as the competence principle indicates, it is incumbent on the creators of these tools to provide transparency and to specify the skills and knowledge required for their effective operation. Where creators do not, lawyers must ask. Opening this door can improve interdisciplinary communication between the legal and technical fields while bettering the science behind AI and its application in legal systems.

This does not mean that lawyers should abdicate responsibility or authority. But it does require them, as the IEEE principles suggest, to recognize that being competent means, first and foremost, knowing the limits of their own knowledge and skills, and thus when to enlist the aid of those skilled in the relevant field.

Lawyers should become conversant with the conceptual risks and benefits of AI, on issues ranging from data privacy to effectiveness, and with which experts to retain to review the AI application in question. They should learn how to acquire or hire such expertise. And they should call for and rely on standards and accreditations in procuring the services they need, so that their use of AI complies not just with the competence principle but with all four IEEE principles and with legal ethics generally.

By embracing this approach to AI competence, lawyers can ethically improve both the representation of their clients and the efficiency of their practices.

Jason Tashea is the author of the Law Scribbler column and a legal affairs writer for the ABA Journal. Follow him on Twitter @LawScribbler.

Nicolas Economou is the CEO of H5. He chairs the law committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and leads the Law and Society Initiative at the Future Society.

See also:

ABAJournal.com: “Courts need help when it comes to science and tech”
