Courts need help when it comes to science and tech
Jason Tashea
Social science research is “sociological gobbledygook,” at least according to Chief Justice John Roberts. The Chief Justice made this comment during the Oct. 3 oral argument for the political redistricting case Gill v. Whitford.
The gobbledygook in question is a measure called “the efficiency gap,” a standard for quantifying partisan gerrymandering that is at the core of the plaintiffs’ case.
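The arithmetic behind the measure is simpler than the label suggests. As a rough illustration only (the district results and party labels below are hypothetical, and this is a simplified two-party sketch of the calculation, not the parties’ expert analysis), the efficiency gap compares each side’s “wasted” votes: every vote cast for a losing candidate, plus every vote for a winner beyond the bare majority needed to carry the district.

```python
# Simplified sketch of the efficiency-gap calculation, using invented data.
# Wasted votes: all votes for a district's loser, plus the winner's votes
# beyond the bare majority needed to carry the district.

def efficiency_gap(districts):
    """districts: list of (party_a_votes, party_b_votes) pairs."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        district_total = a + b
        total += district_total
        majority = district_total / 2  # votes needed to win the district
        if a > b:
            wasted_a += a - majority   # winner's surplus votes
            wasted_b += b              # all of the loser's votes
        else:
            wasted_b += b - majority
            wasted_a += a
    # A positive result means Party B's votes were wasted at the higher rate.
    return (wasted_b - wasted_a) / total

# Hypothetical map: Party A wins three districts narrowly, while Party B's
# voters are packed into a single district it wins overwhelmingly.
results = [(55, 45), (55, 45), (55, 45), (20, 80)]
print(f"Efficiency gap: {efficiency_gap(results):.1%}")  # prints 32.5%
```

In this toy map, Party B wins more votes statewide yet carries only one of four seats; the large gap is what flags that asymmetry.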
In response to Roberts’ comment, Eduardo Bonilla-Silva, president of the American Sociological Association, a national organization of sociologists, wrote an open letter to the Chief Justice offering examples of how social science data has informed the law. Social science was at the heart of the court’s landmark decision in Brown v. Board of Education, and it has been used to find terror suspects and detect credit card fraud, among other applications. Bonilla-Silva went so far as to offer to send a panel of experts to the U.S. Supreme Court to provide background information on a variety of social science issues. As of publication, the Chief Justice had not responded to the offer.
Indeed, the Chief Justice’s dismissal of social science as gobbledygook is not a new lament in the legal community. Quantifiable data, math and science have a mixed record of being admitted as evidence and of informing the law. While the Chief Justice is misguided in his criticism of social science, he does reveal a legitimate issue: Lawyers and judges often lack the background to correctly assess data, scientific research or technology.
As the zeitgeist of “fake news” and “alternative facts” haunts all aspects of society, the legal profession’s willful rejection of math and science, evident in how courts admit evidence, in practitioners’ misuse of scientific terms and in how legal research is conducted, diminishes the rule of law. The profession’s often-hostile relationship with scientific issues imperils its ability to provide zealous representation, and that dearth of knowledge is compounded by technology’s rapid advance.
To provide better representation and bolster the rule of law, the legal profession needs to acknowledge its limitations on these issues and create structures that provide help from scientific experts, like Bonilla-Silva.
DAUBERT’S SHORTCOMINGS
The law’s inability to keep junk science out of court is a foundational problem. Nearly 25 years ago, the Supreme Court decided Daubert v. Merrell Dow Pharmaceuticals, which has fallen short of its intended goal of improving the standards used to admit scientific evidence in federal court.
The case created a four-factor test for determining the reliability of scientific evidence: whether a scientific technique “can be (and has been) tested”; whether it has been subjected to peer review and publication; whether there is a “known or potential rate of error” and “the existence and maintenance of standards controlling the technique’s operation”; and whether the technique has “general acceptance.” This approach was later extended to all expert testimony in Kumho Tire Co. v. Carmichael.
The Supreme Court in 2000 called the Daubert standard “exacting.” Ultimately, the holdings of Daubert, Kumho and General Electric Co. v. Joiner were codified in Federal Rule of Evidence 702. Since then, nearly 40 states have adopted a version of the Daubert standard.
While its shortcomings affect both civil and criminal proceedings, Daubert’s failures are felt most acutely in the admission of forensic science. Professor Paul C. Giannelli writes in a recent Case Western Reserve Law Review article that Daubert has failed to strengthen the standards of scientific validity for evidence used in criminal proceedings.
Over the past decade, Giannelli writes, the National Academy of Sciences, the National Commission on Forensic Science, the President’s Council of Advisors on Science and Technology and the Texas Forensic Science Commission have found that numerous well-known and admitted forensic science techniques—including bite-mark analysis, microscopic hair comparison, and arson evidence—are discredited and lack the scientific foundation that Daubert purports to require.
The 2009 NAS report said: “The bottom line is simple: In a number of forensic science disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions, and the courts have been utterly ineffective in addressing this problem.”
Just one example of this ineffectiveness concerns the interpretation of “peer review,” one of Daubert’s factors. The NAS committee found that the forensic science literature, judges and attorneys alike were unclear about the term. As Giannelli notes, some courts have interpreted the “peer review” standard to mean merely that someone double-checked a lab analyst’s work, not the “rigorous peer review with independent external reviewers to validate the accuracy … [and] overall consistency with scientific norms of practice” that Daubert intended.
Beyond misapplied terms, courts mistakenly treat a forensic technique’s long use at trial as a stand-in for the scientific testing that the federal rules require. In U.S. v. Havvard, latent fingerprint matching was challenged under Daubert. The court called fingerprint testimony the “archetype” of reliable expert testimony, buttressing that conclusion by noting that fingerprinting techniques “have been used in ‘adversarial testing for roughly 100 years,’ which offered a greater sense of the reliability of fingerprint comparisons than could the mere publication of an article.”
This conclusion ran contrary to the Federal Rules of Evidence and Daubert. The 2009 NAS report added that a common fingerprinting method used by the FBI lacked “particular measurements or a standard test protocol,” meaning “examiners must make subjective assessments throughout.” Yet fingerprinting methods have not faced a sustained challenge in federal court in nearly 100 years.
This failure of courts to act as arbiters of science comes at the cost of human life. In a 2015 review of testimony given between 1980 and 2000 by the FBI laboratory’s microscopic hair comparison unit, the FBI, the National Association of Criminal Defense Lawyers and the Innocence Project found errors in “at least 90 percent of trial transcripts” among the 268 trials reviewed. At least 35 of those trials ended in death sentences, and FBI testimony errors were found in 94 percent of those cases. As of April 2015, 14 of those defendants had been executed or had died in prison.
The New York Times editorial board also wrote in 2015 that of the 329 exonerations due to DNA testing since 1989, “more than one-quarter involved convictions based on ‘pattern’ evidence—like hair samples, ballistics, tire tracks, and bite marks—testified to by so-called experts.”
This indicates that the law’s ability to interpret and judge science is systemically imperfect at best, and fatal at worst. Daubert’s failure to apply scientific rigor is one reason why junk science has been allowed to propagate in the legal system.
SCOTUS AND STATISTICS
This is not just a Daubert issue. Recent reporting shows that the Supreme Court has wrongly used statistics when writing opinions. An investigative journalist from ProPublica found seven errors when taking a “modest sampling” of Supreme Court decisions from 2011 to 2015.
The errors were not limited to criminal justice. They included a labor rights decision that relied on a wholly made-up statistic about the national use of background check services to make a central point. A decision striking down part of the Voting Rights Act “used erroneous data to make claims about comparable rates of voter registration among blacks and whites in six southern states.” And to allay fears in a Fourth Amendment case, the court cited material from a trade group’s amicus brief claiming that testing for false positives is part of the certification process for drug-sniffing dogs; in fact, none of the certification groups test for false positives.
The errors, some of which were minor, were primarily pulled from briefs supplied to the court by interested groups. The article quotes former clerks saying that these errors could have been made without malice or intent. When asked for comment, the Chief Justice and the four other justices whose written opinions included factual errors all declined. A spokeswoman for the court said the opinions “speak for themselves.”
In 2000, Justice Stephen Breyer wrote in the science and technology magazine Issues: “The search [for law that reflects an understanding of the relevant underlying science] is not a search for scientific precision. We cannot hope to investigate all the subtleties that characterize good scientific work. A judge is not a scientist, and a courtroom is not a scientific laboratory. But the law must seek decisions that fall within the boundaries of scientifically sound knowledge.”
To that end, the Supreme Court and state and federal judges have shown their ability to correctly apply science to the law. Perhaps there is no better recent example than the Supreme Court’s trilogy of decisions (Roper v. Simmons, Graham v. Florida and Miller v. Alabama) that banned the juvenile death penalty and life without parole.
In Roper, Justice Anthony Kennedy relied in part on sociological and scientific research showing that a youth’s brain is still developing and differs from an adult’s in three distinct ways: Youths are prone to reckless decisions, they are more susceptible to peer pressure, and their character traits are less fixed. That makes it impossible to call a youth irretrievably depraved and deserving of the criminal justice system’s harshest punishments.
In Graham and Miller, the court expanded the Roper opinion by recalling these studies. In Miller, Justice Elena Kagan wrote for the court: “Our decisions rested not only on common sense—on what ‘any parent knows’—but on science and social science as well.”
Even with this example of the court correctly applying science to the law, judges, as Justice Breyer wrote in 2000, are not scientists, and courtrooms are not laboratories. In some cases courts get the science right, and in others they do not. In part, this reflects the challenge of evaluating scientific research, a task that requires a nonlegal skill set.
WHAT SHOULD BE DONE
Instead of relying on chance or a persuasive amicus brief, the legal system should create structures that improve its understanding of science and technology, which will enhance the rule of law.
Currently, Federal Rule of Evidence 706 allows judges to appoint independent experts to fill gaps in information without relying on the experts put forward by the parties.
However, judges are wary of using this option for a number of reasons. In a law review article, D.C. Circuit Judge Douglas H. Ginsburg called the practice “unworkable” in many cases. This reluctance is reflected in a 1993 survey of sitting federal judges conducted by the Federal Judicial Center, which found that only 20 percent of respondents had appointed at least one independent expert during their time on the bench. Of that 20 percent, over half had done so on only one occasion.
The judges who responded to that survey said they declined to appoint experts for a number of reasons, including the difficulty of finding suitable experts, the financial cost and the failure to recognize early enough that an appointment would be needed.
With these challenges in mind, academics and federal commissions have recommended independent research institutions to bolster the judiciary’s ability to understand and rule on legal issues as they relate to science and technology. It is time to reconsider those recommendations.
Independent research services are necessary, and not just because new technologies like machine learning and blockchain can confound the uninitiated. While tech literacy varies among judges, Supreme Court justices have in recent memory shown confusion about how text messages work and whether HBO is a free channel broadcast over public airwaves. A research service could provide a needed bridge between the world of science, with its quantitative reasoning, and the world of law, with its reliance on logic and analogy. This would diminish misunderstanding and improve the use of scientific information.
Since at least the Reagan administration, Professor Kenneth Culp Davis has written about the need for a Supreme Court research entity akin to the Congressional Research Service: an independent body providing nonpartisan research on questions the justices have. Such an organization would not upend the adversarial system; rather, it would build the knowledge base the justices need to partake actively and knowledgeably in oral arguments and opinion writing. At the state level, one could envision a consortium of courts banding together to create a similar entity.
Regarding the forensic sciences specifically, both the National Academy of Sciences and the President’s Council of Advisors on Science and Technology recommended that rigorous research on the forensic sciences be conducted by a federal agency independent of the Justice Department, the National Institute of Standards and Technology. The NAS report was clear that “advancing science in the forensic science enterprise is not likely to be achieved within the confines of DOJ” because the department has not “recognized” the need for change in the field.
In both cases, the emphasis on independent, rigorous research is key to improving legal understanding and decision-making. Law and science are two exacting fields with different standards and languages. Without assistance and interpreters standing between them, the law, the public, and science and technology will all suffer.
By embracing these solutions, the legal system can de-gobbledygook scientific research and become better able to admit what it does not know.