ABA Journal

New Normal

Lawyers, the algorithms are better than you (at some things)


By D. Casey Flaherty


One of the joys of watching amazing lawyers at work is how quickly they get to the heart of the matter. Five minutes with an experienced partner can be worth 500 hours of associate time not only because of the accumulated knowledge but because of the attendant know-how. They are able to zero in on the gravamen of an issue and provide the sage counsel the client actually needs, rather than just answer the question the client initially asked. It’s incredible. And it’s incredibly valuable.

All humans are constrained by bounded rationality. We constantly make snap judgments based on limited information within finite periods of time. We compensate by exploiting the structural regularity of our environment. That is, we learn to recognize patterns and develop heuristics—strategies derived from experience—to take mental shortcuts and ease cognitive load. Experts, including experienced lawyers, not only know more than others; they have seen more than others and have developed the attendant ability to recognize the salient features within the repeated patterns.

Yet our ability to create mental shortcuts for extracting the signal from the noise does not always serve us well. We develop blind spots, prejudices, biases, and an aversion to information that does not fit our preconceptions. As with our literal blind spot, our minds fill in the blanks by filtering the available information through ingrained patterns. Most of the time, this works just fine. Sometimes not. We see things that are not there or miss things that are. Machines, too, have blind spots. But they are the result of different constraints.

“Algorithm” is another one of those terms that gets thrown around quite often but inspires either fear or boredom in those who aren’t sure what it means. An algorithm is just a step-by-step set of instructions. If X then Y is a simple algorithm. You can think of setting an alarm as an algorithmic exercise. When (X) the clock hits the time, then (Y) the alarm goes off. Algorithms, of course, can be much more complex.
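
To make the if-X-then-Y idea concrete, here is a minimal sketch of the alarm-clock example in Python. The function name and the way the time is represented are invented for illustration; any programmable alarm does essentially this.

    from datetime import datetime

    def check_alarm(alarm_time):
        """If (X) the clock has reached the set time, then (Y) the alarm goes off."""
        now = datetime.now().strftime("%H:%M")   # current time as "HH:MM"
        if now == alarm_time:                    # X: the clock hits the set time
            print("Beep beep beep")              # Y: the alarm sounds
            return True
        return False

    check_alarm("06:30")  # rings only when the clock reads 6:30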

The original web-search algorithms looked at how often words appeared on a page. The more hits on a word, the likelier it was the page you were looking for. These simple algorithms began generating bad results as the web expanded and website developers learned how to game the system. Google’s formative innovation was introducing a hierarchy of authority. PageRank cataloged who linked to whom and how frequently. Google could thereby direct users to websites with the highest level of perceived authority. Google now also filters by freshness of content and the region from which a search originates, among more than 200 additional signals. That is, Google’s algorithms reach conclusions based on similar, recent precedents from the highest authority in your jurisdiction. Sound familiar?
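
The progression described above can be sketched in a few lines. The toy functions below are not Google’s algorithms; they simply contrast ranking by raw word counts with ranking by how many other pages link to a page, which is the basic intuition behind PageRank. The page names and link data are made up for illustration.

    def keyword_rank(pages, query):
        """Early-style ranking: the more often the word appears, the higher the page."""
        return sorted(pages, key=lambda p: p["text"].lower().count(query.lower()), reverse=True)

    def authority_rank(pages, links):
        """Authority-style ranking, greatly simplified: each inbound link is a vote."""
        inbound = {p["url"]: 0 for p in pages}
        for source, target in links:
            if target in inbound:
                inbound[target] += 1
        return sorted(pages, key=lambda p: inbound[p["url"]], reverse=True)

    pages = [
        {"url": "a.com", "text": "lawyer lawyer lawyer lawyer"},    # keyword-stuffed page
        {"url": "b.com", "text": "a short article about lawyers"},  # page others actually cite
    ]
    links = [("c.com", "b.com"), ("d.com", "b.com")]                # other sites link to b.com

    print(keyword_rank(pages, "lawyer")[0]["url"])   # a.com wins on raw word counts
    print(authority_rank(pages, links)[0]["url"])    # b.com wins once authority counts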

In many areas, algorithms operate much like our heuristics. They are formalized versions of our informal rules. They therefore tend to be constrained by their programming. Great strides have been made in machine learning—i.e., an inductive, rather than deductive, approach where machines derive rules from observed behavior instead of being specifically programmed. But the algorithms we interact with today generally lack the fluidity and flexibility to deal with nuance, subtlety, and subtext. They excel at scale but are limited when it comes to scope. They are deep but narrow. For now.
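
The difference between a programmed rule and a learned one can also be shown in miniature. In the sketch below, the first function encodes a rule someone wrote down, while the second derives a comparable cutoff from past examples; the billing numbers and the threshold are invented for illustration.

    # Deductive: a rule a person wrote into the program.
    def flag_entry_by_rule(hours_billed):
        return hours_billed > 12              # someone decreed that 12 is the limit

    # Inductive: a comparable rule derived from observed behavior.
    def learn_threshold(examples):
        """examples: list of (hours_billed, was_flagged) pairs from prior reviews."""
        flagged = [h for h, was_flagged in examples if was_flagged]
        approved = [h for h, was_flagged in examples if not was_flagged]
        # Place the cutoff halfway between the highest approved and lowest flagged values.
        return (max(approved) + min(flagged)) / 2

    history = [(8, False), (10, False), (14, True), (20, True)]
    print(learn_threshold(history))           # 12.0, derived from data rather than decreed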

These are real constraints. But they are different constraints than those that necessitate bounded rationality. Time and limited cognitive capacity, two of our greatest pitfalls, are areas where the machines already exceed us. Computing power keeps getting faster and cheaper. Humans, not so much.

Last post, I introduced a simple-to-understand application of analytics using billing data. Next post, I will introduce a related application of algorithms. I am using billing quite deliberately. Just about everyone hates invoice generation and review. Streamlining or improving the billing process is perceived as an opportunity to be rid of oppressive labor, not as a threat.

One of the best ways I’ve found to talk to lawyers about our increasingly symbiotic relationship with machines is to focus on the areas where the machines seem most like the appliances they are. ATMs, sprinklers, and dishwashers are, after all, robots. Yet none of them stoke the Promethean fear that we are taking our relationship to nature too lightly, introducing power we cannot control, and sowing the seeds of our own destruction.

As an inveterate tinkerer, I spend much time in and around legal tech. I am aware of no company that is actually working on a technology to replace the seasoned, skilled practitioner from my opening paragraph. I am only familiar with technologies that would augment, leverage, support, scale, enhance, or streamline what they do. Some of these technologies are promising. Some are not.

I would strongly suggest that the next time you come across a headline about an artificially intelligent robot lawyer, you actually read the text and discern whether the technology described therein comes anywhere close to the popular conception of intelligence, a robot, or a lawyer. The answer is likely to be “no” on all three counts. But that the technology is neither a robot, nor intelligent, nor a lawyer does not mean that the technology is not a useful tool for a lawyer.


D. Casey Flaherty is a consultant at Procertas. Casey is an attorney who worked as both outside and inside counsel. He also serves on the advisory board of Nextlaw Labs. He is the primary author of Unless You Ask: A Guide for Law Departments to Get More from External Relationships, written and published in partnership with the ACC Legal Operations Section. Find more of his writing here. Connect with Casey on Twitter and LinkedIn. Or email [email protected].
