Coming this fall to NBC, the newest addition to the Law & Order universe: Law & Order: CPU.
In the criminal justice system, crimes are stopped before they happen by the Crime Prediction Unit. Psychically enhanced humans see the future to determine when and where crimes will take place; and mass surveillance and artificial intelligence determine the who, why and how. Self-aware robots take the perpetrators into custody so that humans no longer have to risk death or injury bringing dangerous criminals to justice. These are their stories. [Dun dun!]
Science fiction is full of tropes and themes relating to technology, artificial intelligence and mass surveillance—especially when it comes to the legal system. Whether it’s people with augmented abilities, such as the “precogs” in Minority Report who could determine whether someone would commit a crime before they actually did so; advanced cyborgs and robots to help keep order in I, Robot; or complicated algorithms and surveillance networks such as those used in Person of Interest to single out potential threats, science fiction films, TV shows and books often focus on the delicate balance between maintaining order and protecting civil liberties.
When it comes to real life, science fiction has morphed into science fact. Several of the above-mentioned concepts have made their way into our actual legal system, changing the way law enforcement officers investigate crimes and perform their jobs. These films and shows imagined a future in which technology both helps and hurts society; in some parts of the legal system, it’s clear that future is now.
According to Atlas of Surveillance, more than 100 police departments throughout the country—including those in Los Angeles and New York City—use some form of predictive policing, in which algorithms analyze historical data to predict where certain types of crimes are likely to be committed. Gunshot-detection services also rely on historical data to help cities determine where to set up their microphones, and on algorithms to figure out whether a noise is a gunshot.
Meanwhile, artificial intelligence-enabled chatbots can mimic human speech and converse with people so convincingly that they are already being used within the legal industry.
“Artificial intelligence makes it easier to access expertise and skills that might otherwise be unavailable,” says Nicholson Price, a professor at the University of Michigan Law School who has written about the possibilities of AI in the medical field and taught a seminar about science fiction and the law. “On the other side, this technology is hugely expanding the reach of the state to the extent that surveillance algorithms have become more a part of everyday life and have changed the way our behavior is observed, measured and influenced.”
The 2002 film Minority Report centers on a prototype policing program in which three humans (dubbed “precogs”) use their clairvoyant abilities to see someone commit a homicide in the immediate future. Police analyze the visions to try to connect the dots—notably when and where a crime will take place. Of course, the system is abused for nefarious purposes—to say nothing of how its very existence flips the criminal justice system on its head by punishing people for crimes before they commit them—forcing protagonist John Anderton (played by Tom Cruise) to save the day.
According to novelist Jon Cohen, Minority Report producer Jan de Bont (who directed Speed, Twister and Speed 2: Cruise Control) liked one of Cohen’s spec scripts and approached him to write the screenplay, adapting Philip K. Dick’s original 1956 novella. Cohen hadn’t done much screenwriting or science fiction writing but was intrigued by Dick’s novella and the concept of the precogs. Cohen, who is credited as a co-writer of the screenplay with Scott Frank, says he tried to humanize the precogs and added a major subplot involving Anderton going on the run with one of them.
He also used vision, seeing and eyes as overarching and recurring themes within the film—motifs he’s used throughout his career, inspired by what he calls his “extreme myopia” as well as a condition known as cerebral polyopia, which causes individuals to see two or more duplicated images instead of one.
For instance, in Minority Report, there are the precogs’ visions, the use of retinal scanners to monitor and surveil individuals, and Anderton getting eye surgery before going on the run so he could move around undetected.
The movie was a box office smash, earning nearly $360 million worldwide. It also has become part of the cultural zeitgeist, its title emerging as shorthand for any type of predictive policing or attempt to stop a crime before it actually happens.
“It’s constantly being referenced or taught, or I’ll be watching cable news and some talking head will mention it,” Cohen says. “It’s always associated with future dread—as in, if you’re a fan of civil liberties and privacy, then boy, is your future going to suck.”
The form of predictive policing seen in Minority Report looks to the future, but what is currently being used by police departments and cities is focused more on the past. Brian MacDonald, CEO at Geolitica (formerly known as PredPol), says his company focuses on trying to predict where a limited class of crimes—namely, burglaries, thefts, assaults and auto-related crimes—will take place by analyzing historical data of instances in which an officer responded to a call and took a report.
“We got started with the idea that policing could be done better and more fairly by focusing on these areas where these crimes concentrate,” MacDonald says. “For instance, if you have a parking lot where many car crimes happen, you put an officer there and deter it.”
According to MacDonald, focusing on data from police reports is far more reliable than calls for service or arrest data. “We look at confirmed crimes, not just, ‘I think I saw this’ or ‘I heard this,’” he says. “This is the most carefully vetted data we can get. We look at what was the crime type, the location of the crime and the time. What, where and when. No names, no demographic information or socioeconomic makeup of the neighborhood.”
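As a rough illustration of that “what, where and when” approach, the sketch below snaps confirmed reports to coarse grid cells and flags the busiest cells for patrol. It is a minimal sketch only; the record fields, cell size and simple counting are assumptions made for illustration, not Geolitica’s actual model, which among other things weights how recently incidents occurred.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

# Hypothetical report record: crime type, location and time only --
# mirroring the "what, where and when" data described above.
@dataclass
class CrimeReport:
    crime_type: str
    lat: float
    lon: float
    occurred_at: datetime

def grid_cell(lat: float, lon: float, cell_size: float = 0.005) -> tuple:
    """Snap a coordinate to a coarse grid cell (roughly 500-meter squares)."""
    return (round(lat / cell_size), round(lon / cell_size))

def rank_hotspots(reports: list[CrimeReport], top_n: int = 5) -> list[tuple]:
    """Count confirmed reports per grid cell and return the busiest cells.

    Real vendors weight recent incidents more heavily and model decay over
    time; this simple count only shows the shape of the idea.
    """
    counts = Counter(grid_cell(r.lat, r.lon) for r in reports)
    return counts.most_common(top_n)

# Usage: feed in past reports, get back the cells an officer might patrol.
reports = [
    CrimeReport("auto theft", 34.0522, -118.2437, datetime(2023, 5, 1, 22, 15)),
    CrimeReport("burglary", 34.0525, -118.2440, datetime(2023, 5, 3, 2, 40)),
]
print(rank_hotspots(reports))
```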
Likewise, SoundThinking (formerly known as ShotSpotter) relies on historical data to help cities determine where to set up their gunshot-detection microphones. The microphones pick up loud explosive sounds, and a proprietary algorithm analyzes each sound to make sure it’s gunfire and not a comparable noise like fireworks, a backfiring car or a helicopter.
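The classification step can be pictured as a series of checks on a loud, impulsive sound. The toy function below is purely illustrative; SoundThinking’s algorithm is proprietary, and the thresholds and feature names here are assumptions.

```python
# Illustrative only: a toy classifier for impulsive sounds. The real system
# also compares waveforms from multiple microphones and triangulates the
# source; this sketch shows only the classification idea.

def classify_impulse(duration_ms: float, peak_db: float,
                     repeats_within_2s: int) -> str:
    """Guess whether a loud impulsive sound is gunfire or something benign."""
    if peak_db < 120:
        return "not gunfire (too quiet -- e.g., car door, helicopter)"
    if duration_ms > 500:
        return "not gunfire (too long -- e.g., thunder, backfiring engine)"
    if repeats_within_2s >= 8:
        return "possible fireworks (dense burst of pops)"
    return "possible gunfire -- flag for human review"

print(classify_impulse(duration_ms=3, peak_db=140, repeats_within_2s=2))
```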
“About 80% of gunfire goes unreported to the police, so that means they can’t even investigate,” says Tom Chittum, senior vice president of analytics & forensic services at SoundThinking and former associate deputy director of the Bureau of Alcohol, Tobacco, Firearms and Explosives. “I think most people assume police know about these incidents, so when people who have suffered from gun violence don’t see the police respond, they assume they don’t care.”
Chittum adds that the technology allows police to rely on actual data to do their job better.
“Patrolling has long been gut-based. No one wants police to just blindly and randomly patrol around,” he says. “It allows human biases to creep into their decision-making. We want them to apply their resources properly, and this technology allows us to do that on a greater scale and with greater accountability too.”
According to SoundThinking, its technology is currently deployed in more than 150 cities. It has been criticized by civil liberties advocates who worry the microphones and detection devices might pick up private conversations. With that in mind, the company submitted to an independent audit in 2019 by the Policing Project at the New York University School of Law, which found it posed a “low privacy risk.” (See “Open Book,” page 20, April-May 2020.)
Others argue that the technology doesn’t work as well as it should. In 2021, the City of Chicago Office of Inspector General’s Public Safety section found that ShotSpotter rarely led to evidence of gun-related crimes, and in July 2022, the MacArthur Justice Center at Northwestern University’s law school brought a lawsuit against the city of Chicago over its continued use of the technology, alleging it led to false arrests and promoted discrimination.
A spokesperson for SoundThinking says much of that data has been misrepresented and claims then-Chicago Police Superintendent David Brown praised ShotSpotter for its help saving lives and getting guns off the streets.
“The data from a ShotSpotter alert allows police to investigate a gunfire incident in a more precise area, compared to a 911 call that often requires officers to patrol entire neighborhoods for victims and evidence, limiting the risk of unnecessary stops and searches,” the spokesperson said.
Price argues that the approach of focusing on historical data only reinforces whatever people already believe about crimes and the people who commit them.
“If you look at algorithms to predict crime or policing, they’re typically trained on arrest or complaint data, and that reflects where [police] have been in the past; it’s not representative of where crime actually happens,” he says. “If AI predictors are based on that skewed data set, you’ll bake that bias into the new tools. We think machines can’t be racist or can’t have bias, but if [they are] trained on slanted data, then the outcomes will be racist and biased.”
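A small worked example makes Price’s point concrete. In the toy numbers below, two neighborhoods have identical underlying crime rates, but because arrests are recorded mostly where officers already patrol, a predictor trained on those arrests sends patrols right back to the same places. All figures are invented for illustration.

```python
# Toy illustration: identical true crime rates, skewed patrol presence.
true_crime_rate = {"neighborhood_a": 0.10, "neighborhood_b": 0.10}  # identical
patrol_share = {"neighborhood_a": 0.80, "neighborhood_b": 0.20}     # skewed

# Recorded arrests roughly track patrol presence, not underlying crime.
recorded_arrests = {
    hood: true_crime_rate[hood] * patrol_share[hood] * 1000
    for hood in true_crime_rate
}

# A naive "predictor" that allocates future patrols proportionally to past
# arrest counts simply reproduces the original skew.
total = sum(recorded_arrests.values())
predicted_patrols = {hood: n / total for hood, n in recorded_arrests.items()}

print(recorded_arrests)   # {'neighborhood_a': 80.0, 'neighborhood_b': 20.0}
print(predicted_patrols)  # {'neighborhood_a': 0.8, 'neighborhood_b': 0.2}
```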
In fact, predictive algorithms aren’t just used for figuring out where certain crimes might take place. Courts use them to make bail determinations, calculating the likelihood that someone will return to court and recommending whether or how much bail should be set.
“In most places, bail is there to make sure defendants come back; it’s not about whether they’re dangerous,” says David Colarusso, director of Suffolk University Law School’s Legal Innovation and Technology Lab. “Someone on pretrial release has yet to be adjudicated as guilty. In many ways, it’s like the hypothetical behind Minority Report—how do we treat people who haven’t yet committed crimes?”
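For a sense of how such pretrial tools are typically structured, here is a hypothetical points-based score. The factors, weights and thresholds are invented for illustration and do not reflect any actual court’s or vendor’s formula.

```python
# Hypothetical points-based pretrial score. Everything below is made up to
# show the general shape of these tools, not any real jurisdiction's method.

def failure_to_appear_score(prior_fta_count: int, pending_charges: bool,
                            years_at_current_address: float) -> int:
    """Return a rough 0-6 score estimating risk of not returning to court."""
    score = min(prior_fta_count, 3)                      # prior failures to appear
    score += 2 if pending_charges else 0                 # other open cases
    score += 1 if years_at_current_address < 1 else 0    # residential stability
    return score

def recommendation(score: int) -> str:
    if score <= 1:
        return "release on recognizance"
    if score <= 3:
        return "release with reminders/check-ins"
    return "refer to judge for bail determination"

print(recommendation(failure_to_appear_score(1, False, 5.0)))  # release on recognizance
```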
In the futuristic world depicted in the 2004 film I, Robot, highly intelligent robots serve humanity and perform many of the tasks and jobs once the sole province of humans. These robots are kept in line by the Three Laws of Robotics (adapted from Isaac Asimov’s stories, including his 1950 collection, I, Robot, upon which the movie is loosely based), which prevent them from harming or killing humans.
In the film, lead character Detective Del Spooner (played by Will Smith) is deeply suspicious of robots and lives a low-tech existence in his personal life. Unfortunately for him, police officers have to work alongside robots, and he’s no exception.
“When the film opens up, you could miss it if you’re not looking closely,” says Jeff Vintar, one of the screenwriters of I, Robot. “He wakes up in an apartment with a regular shower, CD player, etc. But then he walks outside, and you realize he’s in a robotic world.”
Vintar says that in his original script, this dichotomy was emphasized more, and humans in the outside world were so technologically dependent that it seemed like they were actually the robots.
“When he stepped out into the world, everyone was fully integrated with computers—they were even wearing them, like computers were part of their bodies,” he says. “The people were barely human. Meanwhile, the robots are taking out the garbage, walking the dog, doing all of the things people didn’t have to do anymore. So they’re the ones who look human.”
Vintar notes that AI has only become more advanced in the nearly two decades since the I, Robot film came out and has raised many issues and questions within the entertainment industry.
“The robots are coming, and we have to deal with this,” Vintar says. “AI is everywhere. It’s in art. It’s even being used to generate story ideas in Hollywood. So it’s an issue that is more real to me than ever before.”
The Writers Guild of America has been calling for protections for humans in the face of AI-generated content. The guild went on strike May 2, 2023, after failing to come to terms with studios over this and other issues; the strike was ongoing at press time.
"The argument always went: 'The robots will come, and they'll do the menial labor we don't want to do, letting us be artists.' But now, AI is threatening to take away the stuff that artists do—look at ChatGPT." - Jeff Vintar
And echoing a similar debate in the legal industry about how technology can take on the menial tasks lawyers don’t want to do so they can focus on more important things, Vintar recalls a Q&A he once did about I, Robot in which some audience members expressed concern about AI taking away people’s jobs.
“The argument always went: ‘The robots will come, and they’ll do the menial labor we don’t want to do, letting us be artists,’” he says. “But now, AI is threatening to take away the stuff that artists do—look at ChatGPT.”
ChatGPT is one of several large language models that have hit the market in recent months. It is able to mimic human speech and naturally converse and interact with people in ways that previous chatbots and programs were unable to. ChatGPT and other large language models already are having a disruptive effect on the legal industry. Lawyers can use it to do research, draft and analyze contracts, create documents or templates, and market their practices faster and more efficiently. Access-to-justice advocates also see potential for large language models to help millions of people who can’t afford legal representation. (See “Words with Bots,” June-July, page 34.)
Miguel Willis, innovator in residence at the University of Pennsylvania Carey Law School Future of the Profession Initiative, says tools like ChatGPT are still largely unknown and could have both positive and negative uses.
“ChatGPT is open source, so companies will jump to market in different areas,” he says. For instance, Willis points out that he could use OpenAI’s publicly available API (application programming interface, a way for two computer programs to communicate with each other) to create a tool for landlords to evict tenants faster. “I’m sure I would get much more funding from a venture capitalist than if I came up with a tool to help people fight eviction.”
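In practice, “building on the API” can be as simple as a short script that sends a prompt to OpenAI’s chat-completions endpoint and prints the reply, as in the sketch below. The model name and prompt are placeholders, and error handling is omitted; the same plumbing could just as easily power Willis’ hypothetical eviction tool or one helping tenants respond.

```python
# Minimal sketch of calling OpenAI's public chat-completions endpoint.
# Model name and prompt are placeholders; authentication setup and error
# handling are omitted for brevity.
import os
import requests

def draft_letter(prompt: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# The same call could serve either side of Willis' example: drafting an
# eviction notice for a landlord, or drafting a tenant's response to one.
print(draft_letter("Explain in plain language how to respond to an eviction notice."))
```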
Jonathan Nolan has co-written several films with his older brother, Christopher, including Memento (2000), The Dark Knight (2008) and The Dark Knight Rises (2012). In The Dark Knight, Batman creates a Big Brother-esque surveillance network worthy of 1984 by hacking into and tracking cellphone signals so he can locate his archenemy, the Joker.
Jonathan Nolan took it up several notches when he created the television series Person of Interest, which aired on CBS from 2011 to 2016. In this show, a massive video and electronic communications surveillance network simply named “the Machine” was like Batman’s on steroids. In addition to keeping a watchful eye on everyone, the Machine also could predict future terrorist activities. Like the precogs in Minority Report, the Machine would provide information to its two main handlers, billionaire genius and Machine inventor Harold Finch (Michael Emerson) and his partner, former soldier and CIA operative John Reese (Jim Caviezel). The two would then investigate and stop any potential terrorist acts.
Sean Hennen, a writer on the show for its first three seasons, recalls that Nolan was going for a combination of 1970s-style paranoia thrillers like Three Days of the Condor and The Conversation and a futuristic, technology-driven show.
“There was an article he read at the time that said lower Manhattan was the most surveilled piece of real estate in the world, even including London and Paris,” Hennen says. “He found that fascinating, so the show was set in New York because of that.”
Hennen recalls that the ethos on the show was to look five minutes into the future to make things look as realistic and plausible as possible.
“The technology was always within our view,” he says. “We actually did the research to see what was actually being discussed right now or coming up. For instance, Bluetooth was relatively new at the time, and there’s always a new hack for new technology, so we coined a term, maybe incorrectly: ‘bluejacking.’ So one of the recurring themes we used was that if a cellphone was open, hackers could use Bluetooth to listen in. That was something we had read about and used it all the time.”
Hennen, who served as a producer on a Minority Report TV series that aired on Fox in 2015 and the NBC show The Blacklist, most recently joined Alert: Missing Persons Unit on Fox. He notes that on Person of Interest, they tried to avoid ethical debates but made sure to portray the characters as understanding how dangerous their technology could be.
“What we wanted to examine was how the people that invented the tech [and] used it to the best of their abilities were a little afraid of it,” he says. “That’s what I would believe personally. This thing exists, we’re going to start using it, but with precaution and wariness about what it’s capable of. That was the ultimate punchline of the show.”
"I think a healthy dose of caution and anxiety about what it could mean isn't necessarily a bad thing." - Sean Hennen
He adds: “We find ourselves talking about AI and sentient machines right now. I think a healthy dose of caution and anxiety about what it could mean isn’t necessarily a bad thing.”
In real life, lower Manhattan is, indeed, highly surveilled. There are 18,000 closed-circuit cameras posted at various points throughout New York City. The Domain Awareness System, a joint partnership between the New York Police Department and Microsoft, also has access to data from license plate readers, court documents and various police records, including arrest reports, warrants, complaints and 911 calls.
The NYPD did not respond to a request for comment; Microsoft declined comment.
Critics have pointed out that the Domain Awareness System potentially violates people’s right to be protected from warrantless surveillance. They also note that it incorporates gunshot detection and predictive policing, raising the same concerns that surround those platforms. And they argue the system could exacerbate existing tensions between the NYPD and racial and religious minorities by putting them under even more surveillance and scrutiny than they already face. In 2018, for instance, the NYPD settled the last of three major lawsuits accusing it of unlawfully spying on the Muslim-American community. Meanwhile, a New York Civil Liberties Union analysis of the city’s “stop and frisk” data found that the controversial policy disproportionately affected racial minorities, particularly Black people.
The Domain Awareness System “is an unparalleled invasion of New Yorkers’ privacy rights,” Surveillance Technology Oversight Project Inc. said in a 2019 report. “It operates like Big Brother—tracking New Yorkers’ every movement throughout the five boroughs.”
Adrian Hon, writer and CEO of Six to Start, a smartphone fitness game developer based in London, questions whether mass surveillance really is achievable or reliable.
“It’s one of these things that’s so seductive: If we put up enough drones and security cameras, can we know everything?” asks Hon, who wrote You’ve Been Played: How Corporations, Governments and Schools Use Games to Control Us All. “Can we have some sort of mission control where we can see everything? A lot of companies are willing to sell that dream, but it’s not really achievable. People will get around them or learn to hide from cameras.”
Hon also points out that technology is never so reliable that it is immune to false positives or malicious actors. And he wonders whether people might simply get fed up with technology that tracks and monitors them in ways they don’t agree with. “In 20-30 years, a lot of people will be wearing cameras or augmented reality glasses,” he says. “And the cameras are already around; what happens when you multiply that by 1,000? If I’m walking through a city and I have video of someone running a red light, then what do I do? Do I send it to the police? Do I keep it quiet? What happens if that evidence gets altered?”
With many sci-fi films, TV shows or books, the common theme is fear. What happens if machines become self-aware and take away our jobs, freedoms or lives? Whether it’s HAL 9000 in 2001: A Space Odyssey calmly refusing to open the pod bay doors and save his crewmate’s life; Terminators traveling back in time to kill one person and ensure humanity’s enslavement to the machines; or humans losing a war and living as mindless drones content inside the Matrix, there’s plenty of precedent on the large and small screens that advanced technology could be bad for humanity.
For some, these fears are well-grounded. “There’s a deep distrust of too much power in any one hand, and one thing that’s anxiety-inducing is that as technology becomes more powerful, it allows a small number of people to have a lot of power,” Colarusso says. “I think it’s naive to think that when you make these powerful tools that they’ll always be used for good.”
MacDonald and Chittum see their products as far less intrusive than other forms and applications of predictive or surveillance technology.
“There is a danger of using too much data or too many technological sources to violate people’s personal liberties,” MacDonald says, pointing out that Geolitica refuses to use facial recognition or cull cellphone data. “I don’t think effective law enforcement means we have to give up rights,” Chittum says. “I think we can have both.”
Nevertheless, the common denominator with all these forms of technology is that they aren’t much good on their own and need people to train and fine-tune them. As such, humans should have no fear of being replaced or enslaved any time soon.
“There’s certainly fear about technology that arises from movies and shows, not only when you watch them but when they make their way into contemporary debate,” Price says. “There’s real concern about them now.”
However, Price also points to an oft-cited quote from Pedro Domingos, a University of Washington professor emeritus of computer science and engineering and author of The Master Algorithm, about the capability of machines compared with humans: “The problem isn’t that they will become superintelligent and take over the world; the problem is they’re stupid and they already have.”
This story was originally published in the August-September 2023 issue of the ABA Journal under the headline: “Sci-Fact: Elements of futuristic films and TV shows about the law are here, raising legal questions about tech and freedom.”