As machines play a greater role in criminal justice, third-party auditing and oversight are essential
Jason Tashea
In August, the Justice Ministry of Denmark announced it would review more than 10,000 criminal convictions because of a software error.
At issue was a technology that converts cellphone carriers’ geo-location data into evidence used for prosecution. During that automated conversion, accuracy was lost without anyone noticing, yet the degraded data was still used in criminal investigations and prosecutions. Compounding the problem, some of the location data linked cellphones to the wrong cell towers, putting innocent people at crime scenes.
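The article does not describe the Danish system’s internals, but a minimal, hypothetical sketch in Python (with invented tower names and coordinates) shows how a single lossy step in an automated conversion can silently place a phone at the wrong tower:

```python
# Hypothetical illustration only -- not the Danish system's actual code.
# Invented tower names and coordinates (a simple km grid) stand in for
# real geo-location records.
from math import hypot

towers = {"Tower A": (10.4, 20.7), "Tower B": (10.0, 21.0)}

def nearest_tower(x, y):
    """Return the tower closest to the point (x, y)."""
    return min(towers, key=lambda t: hypot(towers[t][0] - x, towers[t][1] - y))

precise = (10.3, 20.8)                              # phone's true position
converted = (round(precise[0]), round(precise[1]))  # lossy conversion step

print(nearest_tower(*precise))    # "Tower A" -- the correct tower
print(nearest_tower(*converted))  # "Tower B" -- wrong tower, no error raised
```

The conversion runs without warnings or errors, which is exactly why a defect like this can sit in an evidence pipeline for years before anyone notices.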
While the conversion issue was fixed in March, authorities will now review 10,700 convictions dating back to 2012, according to the New York Times. At the same time, Danish prosecutors have put a two-month moratorium on the use of any cell geo-location data in criminal cases.
“On the basis of the new serious information, the Attorney General has decided to pull the handbrake in order that telecommunications data temporarily may not be used in court as evidence that the defendant is guilty or as the basis for pre-trial detention,” said Danish Justice Minister Nick Haekkerup in a statement, as translated by Martin Clausen, a former general counsel, legal tech thought leader and CEO of Syngrato in Copenhagen.
The minister added that the experience “shakes our confidence in the justice system.”
As scalable technology plays a bigger role in the investigation, arrest and prosecution of people, mass conviction reviews will be more common. This creates a hidden but substantial human and monetary cost to hardware and software adoption in the criminal justice system.
Knowing that this is increasingly the new normal, police, prosecutors and courts must assume errors will occur and regularly have their technology and data systems audited by third parties. Doing so will improve faith in the criminal justice system, support the adoption of trustworthy technology that assists public safety and open trials, and avoid the human and monetary cost of mass post-conviction review.
In part, audits are needed because the criminal justice system’s adversarial nature is insufficient to ferret out systemically bad evidence.
While the justice system is replete with examples, perhaps none is more egregious than the scandal that rocked a Massachusetts crime lab earlier this decade.
From 2003 to 2012, a lab analyst falsified thousands of results and tampered with evidence. An investigation concluded that the analyst’s corner-cutting and falsifications worried co-workers at the time, but those concerns went unheeded by supervisors.
In most U.S. jurisdictions, defense counsel (often public defenders) do not have access to a county’s crime lab, and this case was no different. Compounding matters, the analyst’s work was used by prosecutors in seven counties. Without independent access, the state crime lab’s results were often taken on faith, and most defendants pleaded out before the analyst’s illegal actions came to light, according to a 2017 NBC News report.
Even after the analyst’s crimes became known in 2012, prosecutors wanted to preserve the convictions and put the onus on individual defendants to re-open their cases. After a five-year legal battle that went to the state’s supreme court, the tainted convictions were tossed.
In total, 21,587 cases ending in conviction were overturned, while about 320 were left standing or were retried, according to ProPublica.
These problems are not confined to forensics; they can arise with routine technology that courts have trusted for decades.
Just last year, the New Jersey Supreme Court threw into question nearly 21,000 DUI convictions because an officer in charge of calibrating the machines used in five counties failed to do so properly. With the breathalyzer results now inadmissible, a panel of four judges is reviewing the cases to determine the fate of the standing convictions.
While it may be easy to write off both the Massachusetts and New Jersey examples as the product of human error or criminal negligence, that misses the point. Tens of thousands of people were deprived of due process and liberty because the actions of humans went unchecked for years. Meanwhile, the adversarial process, which focuses on only a single case or defendant at a time, was insufficient to discover and rectify a systemic error.
In both instances, the single point of failure could have been caught if outside auditing of breathalyzers and drug-analysis facilities had been required and followed, rather than relying on the state to monitor its own technology and processes.
Currently, larger prosecutors’ offices are creating conviction integrity units to investigate factual (as opposed to legal) claims of wrongful conviction. These units are laudable and should be replicated; by their nature, however, they can only help after a person has been convicted.
Yet even though proactive, preventative solutions are clearly needed, little is changing as we move from the era of the rogue actor to the era of rogue software. It may be worth considering the possibility of a malicious actor, such as a programmer or hacker, corrupting these tools, but that scenario is unnecessary: Software has more than enough unintentional errors to keep the justice system busy.
Some errors will be harmless, but others will ripple through the criminal justice system as police, prosecutors and courts become more reliant on third-party software and hardware, much of which is obscured from defendants and the public.
Already, investigators receive leads from facial recognition software, bail decisions are informed by algorithmic risk assessments, and convictions are secured through geo-location data. All of these approaches are hotly debated. But whether one favors these technologies or not, a justice system without third-party audits increases the chance that a systemic error, intentional or not, costs thousands of Americans their liberty or saddles them with a criminal conviction and all its attendant consequences.
To avoid this future, there are some examples worth heeding. When adopting facial recognition, the Michigan State Police and the Seattle Police Department opened up their tools and processes for outside researchers to audit and test for accuracy. Before adopting a risk assessment tool recently, Pennsylvania went through a decade-long, rigorous public process to develop the algorithm, as I learned last week at the Predicting Justice symposium at the Santa Clara University School of Law. This process tested and refined the tool before adoption.
Taking steps like these provides an added layer of verification: It confirms that justice system technologies do what they claim to do, and it helps us understand their accuracy rates and whether they produce unintentionally biased outcomes.
Unfortunately, that is not yet the standard, and untested technologies continue to proliferate throughout the justice system. Other agencies would do well to follow the examples set by their counterparts in Michigan, Pennsylvania and Washington.
Acknowledging that the future of criminal investigation and prosecution is increasingly digital, third-party auditing is table stakes for any technology being used to convict people of a crime or deprive them of their liberty.
As we learned in Denmark, anything else will be too little, too late.
Thank you to Andrew Ferguson, a visiting professor of law at American University Washington College of Law, for helping me think through this issue.
Jason Tashea is the author of the Law Scribbler column and a legal affairs writer for the ABA Journal. Follow him on Twitter @LawScribbler.
See also:
ABAJournal.com: “Trade secret privilege is bad for criminal justice”