What is AI technology? Experts don't always agree
Jason Tashea
Last year, I had my eyes opened.
As a participant at two forums on artificial intelligence and public policy—one held by the Government Accountability Office and the other by the National Academies of Sciences, Engineering, and Medicine—I was surrounded by AI and policy experts representing topics as varied as autonomous vehicles, criminal justice, cybersecurity, education, finance and social services.
Up until that point, my post-law school career, including my tech work, had been cloistered in the criminal justice system. So it was fascinating to see the diversity and complexity that AI represented across industries and subject-matter areas I didn't often think about.
While there are numerous common interests and priorities across industries and policy areas, one takeaway from these events was a lack of uniformity about what speakers meant by "AI." Hearing experts honestly disagree on what constitutes AI was eye-opening and, ironically, clarifying for me.
From these experiences and writing more about AI’s role in the legal system for ABA Journal, it is evident that both law and journalism can struggle to express concepts around artificial intelligence. In such a broad field, language can be imperfect and actual technical capabilities can be obscured by hucksterism and intellectual property protections.
Acknowledging these limitations, I sought to better understand how experts talk about AI, with the hope that I can provide better coverage of artificial intelligence software.
Originally, I thought this would entail building a taxonomy of the words commonly used to discuss AI for a legal audience. That proved to be the wrong approach and not as illuminating as I had hoped. Through my conversations with experts in the field, however, I realized there were tenets I could adopt in my reporting to improve how I write about AI.
In the most generic and basic sense, AI is a field of study that broadly asks the question: “Can machines process information in a way similar to a human?”
The field is a dynamic and technical subject-matter area that encapsulates a seemingly endless list of technologies, techniques and competing points of view. The popular press is often—and correctly—derided for coverage that relies on hyperbolic and platitudinous language that obfuscates what the technology is actually capable of.
The way I see it, writing about AI is confounded by four different factors.
First, like many reporters covering AI, I am not a scientist. I have a law degree and a B.A. in history. This means that no matter how much I read up, a level of translation has to take place between the AI expert and me, and then again between my knowledge and what I ultimately write.
This filtration process hopefully distills the issue and adds clarity, but some precision is undeniably lost in making the article approachable to the widest set of readers.
Second, the definitions that do exist for AI are intentionally broad and squishy, making their utility limited.
The GAO report defined AI “as computers or machines that seek to act rationally, think rationally, act like a human, or think like a human.”
Ryan Calo, a law professor at the University of Washington, wrote that “AI is best understood as a set of techniques aimed at approximating some aspect of human or animal cognition using machines.”
Brian Kuhn, co-founder and global co-leader of IBM Watson Legal, uses the euphemism “cognitive technology” to describe AI. For him this means that software takes on a reasoning or qualitative judgment role. But like human cognition, this software equivalent is on a spectrum, he says.
In all three cases, these definitions are broad and fairly vague, which makes them a good starting place. But AI is a “suitcase” term, as many have called it, which allows it to apply to a multitude of instances.
These definitions also reinforce the idea that AI has some human nature to it. This is a hotly debated topic, however. I don’t think that using humanizing terms when reporting about AI helps people understand that the article is ultimately about software.
Third, even if we could agree to a definition, what constitutes “AI” is under constant revision.
As certain applications of AI become rote or commonplace, like optical character recognition (the process of a machine reading text) or Clippy, the popularly derided Microsoft Word assistant, we yawn at their existence and recoil when someone calls a now banal mechanical task "intelligent."
This phenomenon is called the AI effect. Kevin Kelly, founding executive editor of Wired, summarized it in 2014, writing: "Every success in AI redefines it."
So, while natural language processing and deep learning are firmly considered AI today, as problems are solved and new issues are tackled, these, too, will be thrown to the ash heap of “obviously, a machine can do that.” We’ll then find ourselves talking about a new cutting edge when referring to AI.
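To make the point concrete, consider a minimal, hypothetical sketch in Python (it assumes the Pillow and pytesseract libraries and an image file named "scanned_page.png"; none of these come from my reporting). Optical character recognition, once a flagship AI research problem, is now a single library call:

from PIL import Image  # Pillow, for loading the scanned image
import pytesseract     # wrapper around the open-source Tesseract OCR engine

# "scanned_page.png" is a placeholder file name for this illustration.
text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)  # prints the recognized text, a task once considered cutting-edge AI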
Last, even if an agreed-upon definition of AI didn't shift around, it still wouldn't tell us much about a specific tool.
AI-enabled technologies have a variety of applications and approaches, and the field can't be treated as a monolith. However, many companies trafficking in an AI product don't want to describe their tool in any particular detail.
Often, companies merely want journalists to focus on the narrative of their tool or business: “AI can rewrite news free of bias,” “AI can outthink lawyers” or “AI can find and neutralize hate speech online.” But they may decline to talk about the dataset their tool is trained on or what factors the algorithm considers—ostensibly to protect the company’s intellectual property.
This forces the press to write reductively and lean on literary devices that favor narrative over nuance.
Claims that companies make about a tool's accuracy, bias and transparency need to be questioned, says Amanda Levendowski, a fellow at the NYU Technology Law and Policy Clinic. The same light needs to be shone on the data the tool uses. A company's unwillingness to share this information is part of the story and should not be shrugged off, she argues.
Acknowledging that there are no perfect words to describe AI, I've concluded that writing more about a tool's functionality and features can improve transparency and understanding. To that end, I created nine basic tenets to improve my reporting on AI.
When writing about artificial intelligence:
I will confirm that a tool is in fact using a form of AI.
I will qualify AI when speaking about a specific tool or technology.
I will never talk about AI, its applications or research goals as a monolith.
I will stop using anthropomorphizing verbs to describe a computer function.
I will stop using pictures of robot arms, except when talking about robots.
I will ask questions about training data, including how it is sourced and cleaned and whether it is audited.
I will note when a company will not answer questions about its tool and data, and the reason for its secrecy.
I will continue to expand my knowledge regarding AI.
I will not settle on a fixed approach to writing about AI but will continue to adapt based on feedback from experts, my editors and our readership.
As noted in the last tenet, this list is a beginning. I welcome feedback and input in the comments section below or over email to refine and improve this approach.
That is, of course, until I’m replaced by a robot.
Thank you to Dr. Alex Hudek at Kira Systems, Brian Kuhn at IBM Watson, Amanda Levendowski at NYU and Jonathan Unikowski at LexLoci for speaking with me and offering their expertise.