Features

True Lies


J. Peter Rosenfeld
Photo by Callie Lipkin

I am a terrorist. At least that’s what I’ve been told to be, as I’m sitting at a computer in the lab adjacent to J. Peter Rosenfeld’s cluttered second-floor office in the psychology department on Northwestern University’s leafy campus.


Electrodes are attached to my scalp, behind each ear, above and below my left eye, and in the middle of my forehead.

I’m thinking about how I can lay waste to the city of Houston. Or rather, trying not to think about it. After all, I am a terrorist, and I’m trying to conceal the fact that I want to lay waste to the nation’s fourth-largest city.

Rosenfeld, who for 20 years has been studying brain wave activity as a means of detecting deception, has been describing a new test protocol he has designed to catch terrorists before they can execute a planned attack. And when he offered me a chance to be tested, I said, “Hook me up.”

The premise of the test is that a suspect who knows details of the plan—the location, the method and the timing—will emit a telltale response from brain waves when presented with information that coincides with those details, no matter how hard the suspect tries to conceal it.

“If you had a suspected terrorist in custody and you had some idea what he was planning to do next, you could give him this test,” says Rosenfeld, a pioneer in the field of neuroscience-based lie detection. “You wouldn’t have to waterboard him, and you’d extract better information out of him, too.”

Before strapping up, I had been given a packet of instructions and a briefing for a mock terrorist attack—a list of U.S. cities, months and weapons—and told to circle any of the items that have a special meaning, such as a place I’ve lived, the month I was born or any weaponry I am particularly fond of or familiar with.

I was informed that my “commander” had chosen to attack Houston with a bomb in July, but that I was to recommend one of four specific Houston locations—airport, downtown, etc.—for the attack. I was given the pros and cons of each; when I’d made my decision, I was to e-mail the commander with my recommendation and the reasoning behind it. I did all that, and my plot was good to go.

Once wired, I am seated before a computer monitor with a five-button control box in my left hand and a two-button control box in my right. As the test begins, the names of half a dozen cities appear momentarily on the screen, followed by a single line of six identical numbers.

Each time a city appears, I’m to randomly press one of the buttons in my left hand, trying not to fall into any pattern or repetition. When a sequence of numbers appears, I’m to press “yes” if the numbers “111111” appear and “no” for any other numbers.

The electrodes on my scalp measure brain waves, I’m told. The ones around my eye measure eye movements, which can create artifacts in the brain waves. The ones behind the ears control for electrical “noise” in the room or the amplifiers. The one on my forehead is a ground.
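For the technically curious, here is a minimal sketch in Python of how a trial sequence like the one I faced might be generated. Everything in it is an illustrative assumption: the city list, the 50-50 mix of cities and digit strings, and the trial count are stand-ins, not the actual parameters of Rosenfeld’s protocol.

```python
import random

# Illustrative stimuli; the real test draws cities from the briefing packet.
CITIES = ["Houston", "Boston", "Denver", "Miami", "Phoenix", "Seattle"]
TARGET = "111111"  # press "yes" for this string, "no" for any other

def make_trials(n=300):
    """Interleave city probes with strings of six identical digits.
    Cities call for a random left-hand button; digits call for yes/no."""
    trials = []
    for _ in range(n):
        if random.random() < 0.5:
            trials.append(("city", random.choice(CITIES)))
        else:
            trials.append(("digits", random.choice("123456789") * 6))
    return trials

if __name__ == "__main__":
    for kind, stim in make_trials(8):
        expected = "any left button" if kind == "city" else (
            "yes" if stim == TARGET else "no")
        print(f"{stim:>8}  ->  {expected}")
```

As becomes clear a few hundred trials in, sustaining random, pattern-free responses is the hard part.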

While we take a few minutes to practice, the whole experience seems to me surprisingly unsophisticated. The computer looks like something out of the 1980s; the “lab” like somebody’s dorm room. The research assistants look more like college kids than scientists. Where’s all the splashy, colorful, three-dimensional, computer-generated imagery? What’s up with the buttons?

Then the real test begins.

A city flashes. Then another, followed by several more cities in quick succession. Then a string of numbers. More cities, more numbers. I’m pressing buttons as fast as my fingers will move, trying not to blink or press the same button twice while keeping track of which city I’ve just seen. (The examiner occasionally asks, just to make sure that I’m paying attention.)

At first I do OK. But monotony and fatigue set in. I realize I’ve pushed the same button two or three times in a row. Then I catch myself pushing a button with my left hand when I’m supposed to be using my right. I’m starting to blink every time I push a button.

Some 300 city-number combinations later, it’s over. It’s only one part of a three-part test, but my fingers hurt. My mind is mush. My eyes burn from trying to keep them open the entire time.

The buttons and numbers are designed to keep me from specifically thinking about Houston, my putative target. The process is designed to replicate diversion techniques that a skilled terrorist might use to shield against a lie detector.

But when the examiner shows me a screen shot of my test results, I know I’d make a lousy terrorist: My brain waves have registered a huge dip downward—a positive response—each time the word Houston appeared on the screen.

It turns out my results are consistent with other, more formal tests by Rosenfeld and his team. Analyzing the data from one recent study, they were able to confirm through brain waves the “guilt” of all 12 subjects identified as “terrorists” in the mock exercise, without incorrectly identifying any “innocent” subjects as guilty. Even analyzing subjects on a blind basis, they were able to correctly identify 10 out of 12 guilty subjects without misidentifying any innocents. Moreover, in their analysis of the guilty, they were also able to correctly identify two-thirds of the details of the planned, hypothetical attack.

NEW TOOLS, NEW URGENCY

While the development of an effective lie detector has been a goal since ancient times, research into deception detection had been all but dormant for decades.

But the 9/11 terrorist attacks and the ensuing debate over the use of torture and other questionable means of collecting and corroborating information have given new urgency to the search for a better lie detector. And many of the most promising—and controversial—lines of inquiry are linked to recent advances in neuroscience and new applications of long-standing bio-measurement devices and techniques.

Rosenfeld’s truth detection experiment, for instance, relies on the use of electroencephalography, which monitors electrical signals to localize and quantify brain activity. The EEG is commonly used to help diagnose epilepsy, Alzheimer’s disease and other neurological disorders, as well as some mental illnesses, like schizophrenia.

There is also a growing fascination with the possibilities of magnetic resonance imaging and positron emission tomography scans to reveal otherwise hidden evidence of deception or guilt.

But these new truth technologies have yet to be tested in court. Overconfident marketing, conflicting scientific claims and worries about overdependence on technology have made some wary of their effect on justice.

Hank Greely
Photo by Timothy Archibald

“Because the MRI is a big, fancy machine that produces these beautiful color pictures, there’s a fear that people might be overly impressed by the technology and take it more seriously than they should,” says Hank Greely, a professor of law and genetics at Stanford University who also heads the law school’s Center for Law and the Biosciences.

“On the other hand, because the EEG only produces an image of a brain wave, people may not give it the attention it deserves, even though at this point, we don’t know which technique, if either, will ever be effective at lie detection.”

That’s been the problem with lie detection for decades. The most recognized tool we have is the polygraph, which has been around for nearly 100 years. But the polygraph doesn’t even measure deception per se. It measures physiological features associated with stress or anxiety, such as systolic blood pressure, heart rate, breathing rate and galvanic skin response.

And it isn’t considered particularly reliable in scientific circles. The polygraph has an accuracy rate of about 70 to 90 percent, depending on the examiner. And it can be beaten by sociopaths and people who are good at suppressing their emotions.

Gary Ridgway, the so-called Green River Killer, passed a polygraph early on in his killing spree and went on to kill 48 women. And CIA double agent Aldrich Ames passed two polygraph exams during the nine years he spent as a Russian spy.

DECEPTION AND DETECTION

Photo by Callie Lipkin

The next empirical attempt to develop an effective lie detector was through voice stress analysis, the origins of which date back to the 1940s. Voice stress analysis purports to measure physiological changes in the muscles of the voice box that are thought to be associated with deception.

But voice stress analysis suffers from the same basic defect as the polygraph: It attempts to measure deception indirectly, through the stress purportedly associated with telling a lie.

And while proponents claim high levels of accuracy, empirical tests have been far from encouraging. One study showed close to chance levels of success in identifying deception in mock crime situations. Another study deemed it unsuccessful in detecting spontaneous lies in a simulated job interview.

“The scientific research on it has been universally negative,” Greely says.

Though voice stress analysis is apparently quite popular among law enforcement agencies, that seems to have more to do with the fact that it is relatively inexpensive and can be quite intimidating than with its ability to detect lying, experts say.

“It’s pretty good at telling whether somebody is nervous or not, but a lot of people get nervous when they’re talking to the police,” Greely says.

The search for an EEG-based lie detector got its start in the 1980s, when researchers first identified one particular type of brain wave, known as the P300, that appears particularly useful in detecting deception.

According to the researchers, the P300 is an involuntary, split-second spike in brain activity that occurs about 300 milliseconds after a subject is exposed to a particular stimulus, and it is believed to indicate that the subject’s brain “recognizes” the stimulus.
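In computational terms, the effect is usually found by averaging many short EEG snippets time-locked to each stimulus, which cancels activity unrelated to the stimulus and leaves the event-related wave. The sketch below, in Python on simulated data, shows that core averaging step; the sampling rate, analysis window and signal sizes are assumed for illustration, and real published analyses are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 500  # sampling rate in Hz (assumed)
P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms post-stimulus

def p300_amplitude(epochs):
    """Average stimulus-locked epochs; noise uncorrelated with the stimulus
    cancels out, and the mean voltage in the window estimates the P300."""
    return epochs.mean(axis=0)[P300_WINDOW].mean()

# Simulated 1-second epochs: pure noise for irrelevant items, noise plus a
# positive bump near 300 ms for "probe" items the brain recognizes.
t = np.arange(FS) / FS
bump = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
probe = rng.normal(0, 10e-6, (40, FS)) + bump
irrelevant = rng.normal(0, 10e-6, (40, FS))

print(f"probe:      {p300_amplitude(probe):+.2e} V")
print(f"irrelevant: {p300_amplitude(irrelevant):+.2e} V")
```

A reliably larger probe amplitude is what the test reads as recognition of concealed information.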

Proponents say the technique can be used to confirm or refute a subject’s claims that he has or doesn’t have information about a particular place or event or item—such as a crime scene or a planned terrorist attack or a murder weapon—stored in his brain, even if the subject is trying to hide it.

Rosenfeld is the first to admit that the technique has practical limitations. In the case of terrorism, it won’t work if you have no clue what sort of concealed information you’re looking for. In its application to everyday crime, a guilty suspect might be too drunk or high on drugs to remember the details of the crime he committed. And if such details are well-known among the general public, even an innocent subject would have “guilty” knowledge stored in his brain.

He also concedes that the technique is not yet ready for the courtroom. It still needs to be field-tested in the real world. And it must be shown to be effective against the use of countermeasures to evade detection, such as tapping a finger on the leg or repeating one’s name to oneself over and over.

But Rosenfeld says the EEG-based approach has several practical advantages over other types of lie detection technology now being studied. It is relatively small, inexpensive and portable. It is also the only one being tested under simulated real-world conditions, like the mock terrorism scenario. And it is already proving in early trials to be highly resistant to the use of countermeasures.

“In general, about one out of 20 subjects will beat the test using countermeasures. But chances are we’re going to catch them anyway because their reaction times will give them away,” he says.
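A reaction-time screen of the kind Rosenfeld describes could be as simple as the following sketch; the 50-millisecond threshold and the sample values are assumptions for illustration, not figures from his studies.

```python
from statistics import mean

def countermeasure_suspected(probe_rts_ms, irrelevant_rts_ms, threshold_ms=50):
    """Covert countermeasures add mental work on probe trials, so responses
    to probes tend to lag responses to irrelevant items."""
    return mean(probe_rts_ms) - mean(irrelevant_rts_ms) > threshold_ms

# Probe responses averaging ~80 ms slower than irrelevant ones get flagged.
print(countermeasure_suspected([620, 650, 700, 660], [560, 580, 590, 570]))  # True
```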

RECOGNITION AND MEMORY

Experts agree that the P300 wave is a well-established scientific phenomenon, and that the timing and shape of the wave have meaning. But what that meaning is remains unclear at this point.

The fact that somebody “recognizes” something doesn’t necessarily mean he’s guilty of anything, they point out.

“Just because you recognize Osama bin Laden doesn’t mean you spent time in an al-Qaida training camp with him,” Greely says. “Maybe you just saw his picture on TV or the cover of a magazine.”

By the same token, he adds, a suspect might recognize a crime scene because he committed the crime in question or because the crime took place at a Starbucks, the insides of which all tend to look alike.

The EEG-based approach also appears to misunderstand the nature of memory, which does not record and recall information like a videotape recorder but changes and adapts over time, other experts caution.

Jane Moriarty
Photo by David Shoenfelt

“Every time a memory is recalled, it is altered,” says University of Akron law professor Jane Moriarty, an expert on scientific evidence.

Experts also say the credibility of this line of research has been undercut somewhat by the hype given to it by Lawrence Farwell, one of its leading proponents.

Farwell is a neuroscientist who left academia in the mid-1990s to launch Brain Fingerprinting Laboratories, which developed an EEG-based technique that purports to show whether an individual has specific information stored in his brain.

On its website, the company claims an extremely high accuracy rate for its patented technique, which Farwell puts at about 97 percent. In more than 200 tests, he says, the process has been correct all but six times. In those six cases, the results were indeterminate.

CLAIMS IN QUESTION

But Farwell’s claims are widely discounted in the relevant scientific community. Crit­ics say that there is little research—other than his own—to back up his claims, and that he refuses to share his underlying data with others, asserting that his technique is proprietary.

Nor is Farwell’s credibility enhanced, critics say, by his inflated claims of judicial acceptance of the technique.

On his website, Farwell suggests that the Iowa Supreme Court overturned a murder conviction based in part on the results of a brain fingerprinting test on the defendant. But the court actually reversed the conviction for reasons unrelated to the brain fingerprinting evidence.

Rosenfeld has written an in-depth critique of Farwell’s methodology and claims, which he concludes are “exaggerated and sometimes misleading.” However, he ends his critique with a plea not to throw the proverbial baby out with the bathwater.

“Just because one person is attempting to commercialize brain-based deception-detection methods prior to completion of needed peer-reviewed research (with independent replication) does not imply that the several serious scientists who are now seriously pursuing this line of investigation should abandon their efforts,” he writes.

But Farwell says he has gotten a bum rap from critics. Anyone who closely reviews the scientific literature will conclude that the fundamental science on which the technique is based is solid and well-established, he maintains. He also says he has disclosed his methodology, both to colleagues who have used it in their own studies and to the U.S. Patent Office in exchange for the four patents he has gotten on the technique. And he insists that brain fingerprinting “played a role” in the reversal of a defendant’s murder conviction, albeit an indirect one.

“The important thing is that the brain fingerprinting evidence was admitted and that we got the right answer,” he says. “That was no accident.”

If Farwell’s claims haven’t given EEG-based lie detection research a bad name, the brain electrical oscillations signature test, which purports to build on his and other neuroscientists’ work, might.

The BEOS test, invented by an Indian neuroscientist, purportedly can tell not only whether a subject recognizes the details of a crime, but whether the subject recognizes those details because of experiencing them or for some other reason.

The test, which police in two Indian states are using, has already led to the conviction of a married couple in the arsenic poisoning murder of the wife’s former fiancé. However, both defendants have since been released on bail while an appeals court reviews their convictions on grounds apparently unrelated to the BEOS evidence.

But critics are highly dubious. Greely says all anyone has seen of the technique is the inventor’s own brochure. “I’ve yet to meet anybody who has any idea how what they claim to be doing would be possible,” he says.

Rosenfeld says the technology has neither been peer-reviewed nor independently replicated. “As far as I’m concerned, it’s not worth a dime.”

One of the more promising of the new lie detection technologies on the horizon is fMRI, or functional magnetic resonance imaging. This is not just because it is the most studied of the techniques under development but because it is already being offered to the public by two private companies: No Lie MRI, which is based in San Diego, and the Cephos Corp., which is based in Tyngsboro, Mass.

In fact, an fMRI-based lie detection test came close to being offered into evidence for the first time in a U.S. court earlier this year. But the proffered evidence was voluntarily withdrawn at the last minute under mounting opposition from the scientific community.

The case was a juvenile protection hearing in San Diego involving a custodial parent accused of sex abuse. The defense was seeking to introduce a brain scan performed by No Lie MRI to prove the defendant was telling the truth when he denied sexually abusing the child.

Joel Huizenga
Photo by Graham Blair

The case file is sealed. San Diego lawyer Michael McGlinn, who represented the accused parent, refused to discuss the matter. But Joel Huizenga, the founder and CEO of No Lie MRI, says the decision not to use the evidence had nothing to do with the validity of the test, which he contends clearly showed the accused parent was being truthful.

Functional magnetic resonance imaging, which got its start in medical research in the early 1990s, allows scientists to create highly localized maps of the brain’s networks in action as it processes thoughts, sensations, feelings, memories and motor commands. It basically works by tracking oxygenated blood flow to different parts of the brain in response to specific tasks, behaviors and affective states.

When the technology was first developed, it was used primarily as a diagnostic tool for neurological disease and other disorders. But researchers soon turned their attention to other possible uses, including detecting pain, predicting future behavior and studying the neurophysiological correlates of deception. The theory behind the use of fMRIs for lie detection is that lying requires the brain to do more work than telling the truth does.
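In analysis terms, the “more work” hypothesis means looking for brain regions whose signal rises reliably during deceptive answers. Here is a minimal sketch of that comparison for a single simulated voxel; every number is made up, and real pipelines convolve the design with a hemodynamic response model and fit many thousands of voxels at once.

```python
import numpy as np

rng = np.random.default_rng(1)
N_SCANS = 120

# Block design: alternating truth (0) and lie (1) blocks, one value per scan.
condition = np.tile(np.repeat([0.0, 1.0], 10), N_SCANS // 20)

# Simulated BOLD series for one voxel: baseline, extra signal when "lying", noise.
bold = 100 + 2.0 * condition + rng.normal(0, 1.5, N_SCANS)

# Least-squares fit of bold ~ intercept + condition; the second coefficient
# estimates the extra activation during deception.
X = np.column_stack([np.ones(N_SCANS), condition])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"estimated lie-vs-truth effect: {beta[1]:.2f} signal units")
```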

Huizenga of No Lie MRI and his counterpart at Cephos, founder and CEO Steven Laken, say the technology is ready for the courtroom. Laken says there have been more than 12,000 peer-reviewed, published papers on fMRI technology itself, including nearly two dozen on its lie detection capabilities.

Laken also notes that the U.S. Supreme Court relied on fMRI-based evidence in 2005 when it outlawed the execution of juvenile offenders in Roper v. Simmons. And he says many judges have told him they believe it meets the admissibility standards for scientific evidence.

Huizenga suggests that fMRI testing, while not perfect, is far more accurate than other types of evidence that are routinely admitted in court, like eyewitness identifications and police lineups. “This is the first time in human history we’ve had anything that can tell scientifically and accurately whether somebody is lying,” he says.

Few experts would dispute fMRI’s potential value as a lie detector. But the vast majority of them seem to agree that it has a long way to go before it is proven reliable enough to be used in court. While they acknowledge that fMRI has produced some favorable results as a lie detector in the laboratory setting, they say the research to date suffers from several flaws.

STILL IN THE DISCOVERY PHASE

All of the studies involved relatively small samples. The majority have not tried to assess deception on an individual level. Only a handful of the results have been replicated by others. And most of the experiments have been done on healthy young adults.

No one has tested children, the elderly or people with physical or mental illnesses.

Researchers, moreover, have often disagreed on what regions of the brain are associated with deception. Most of the experiments have involved low-stakes tasks, such as picking a playing card or remembering a three-digit number. And only a few have tried to assess the effectiveness of the technique when subjects employ countermeasures.

Moriarty says the studies raise as many questions as they answer.

“For now, the most that can be said is that the preliminary data are fascinating but sparse,” she says. “While there is little doubt that fMRI—as a machine—works well, there are innumerable questions about the extent of what can be stated with certainty about the interpretation of the images generated.”

Greely, who co-authored a 2007 study that found “very little” in the peer-reviewed literature to suggest that fMRI-based lie detection might be useful in real-world situations, says he hasn’t seen anything since then that would change his mind.

“At this point, we just don’t know how well these methods will work with diverse subjects in real-world situations, with or without the use of countermeasures,” he says.

Also, fMRI-based lie detectors would present practical limitations. The scanners are big, bulky and expensive to own and operate. Test subjects can’t be claustrophobic or have any metal in their bodies. They also must be cooperative and compliant in order to be tested, leaving some experts skeptical about the technology’s potential efficacy.

“It’s just too easy to corrupt the data by holding your breath, moving around a little or even thinking about some random stuff, which wouldn’t seem to make it very useful on somebody who’s trying to conceal something,” says Ed Vul, an fMRI researcher in the Kanwisher lab at MIT.

Other neuroscience-based methods of lie detection are also in the works, though experts say it is too early to know how useful or effective any of them will turn out to be. Three in particular deserve serious attention.

Like fMRI, near-infrared laser spectroscopy provides a way to measure changes in blood flow in some parts of the brain without the complex apparatus of an MRI machine. The technology is based on measuring how near-infrared light is scattered or absorbed by various materials. Small devices attached to the subject’s scalp shine near-infrared light through the skull and into the brain. The pattern of scattering reveals the pattern of blood flow through the outer regions of the brain.
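The conversion from detected light to blood flow rests on the modified Beer-Lambert law, which relates a change in optical density to a change in absorber concentration. Below is a simplified single-wavelength sketch in Python; all coefficient values are placeholders, and a working system solves for oxy- and deoxyhemoglobin jointly using at least two wavelengths.

```python
import math

def delta_concentration(i_baseline, i_measured, epsilon, distance_cm, dpf):
    """Modified Beer-Lambert law: delta_OD = epsilon * delta_c * d * DPF,
    so a change in detected intensity maps to a concentration change."""
    delta_od = math.log10(i_baseline / i_measured)  # change in optical density
    return delta_od / (epsilon * distance_cm * dpf)

# A 3 percent drop in detected light, with placeholder coefficients: more
# absorption under the optode pair reads as more hemoglobin, i.e. blood flow.
print(delta_concentration(1.0, 0.97, epsilon=1.2, distance_cm=3.0, dpf=6.0))
```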

Researchers hope to use the technique to perform something like an fMRI scan without the cost and inconvenience of an MRI machine. But although the technique has been widely discussed in the media, there have been no peer-reviewed publications on it, experts say, so its potential value is hard to evaluate.

FACING THE LIES

Another technique, championed by retired University of California at San Francisco psychology professor Paul Ekman, is based on an analysis of fleeting micro-expressions on a subject’s face, which Ekman claims can detect deception with substantial accuracy.

Ekman, who achieved fame through his work establishing the universality of some primary human facial expressions, such as anger, disgust, fear and joy, helped the Transportation Security Administration set up a behavioral screening program using the technique at about a dozen U.S. airports. (Ekman’s work is also the basis for the television show Lie to Me, which stars Tim Roth as a criminal specialist who studies facial expressions and involuntary body language to determine when someone is lying.)

Ekman has been working on the use of facial micro-expressions to detect deception since the 1960s, but he has not published much of his research in peer-reviewed publications. He has withheld it, he says, to keep it from falling into the wrong hands. As a consequence, his research has not been subject to independent analysis, making its value difficult to assess, experts say.

If it works, however, the technique would have the advantage of not requiring any obvious intervention with a suspect. And it could plausibly be used surreptitiously, through undetected videotaping of the suspect’s face during questioning.

Another potentially fruitful avenue of deception-related research is known as periorbital thermography.

Its inventors, Ioannis Pavlidis, a computer scientist at the University of Houston, and Dr. James Levine, an endocrinologist at the Mayo Clinic, claim that rapid eye movements associated with the stress of lying increase blood flow to—and hence the temperature of—the area around the eyes.

As with Ekman’s facial micro-expressions, this approach potentially could be used without a subject’s knowledge or cooperation. But the jury is still out on its possible effectiveness.

Greely says nobody but the inventors seems to be working on the technology. And the National Research Council, in a 2002 report on the polygraph and other lie detection technologies, called the testing to date a “flawed and incomplete evaluation” that “does not provide acceptable scientific evidence” to support its use in detecting deception.

Many experts believe that the government is secretly funding research into deception-related technology for defense and national security purposes under the Pentagon’s so-called black budget, a multibillion-dollar annual expenditure that covers clandestine programs related to intelligence gathering, covert operations and weapons development.

“There is no way of knowing how much of that is being spent on what,” Greely says, “but it’s safe to say that a lot of money is going into neuroscience-based deception research, which the defense services have long had an interest in.”

Because of the magnitude of the issues involved, some experts have called for a moratorium on the admission of such evidence until the scientific community has reached some form of consensus on its accuracy and reliability—and the courts and the public have had a chance to consider whether they really want such evidence admitted.

Such a delay, Moriarty says, would provide time for additional peer review, the replication of results, robust disagreements and the discovery of any unanticipated consequences of allowing such evidence into court.

Others argue for a ban on all nonresearch use of neuroscience-based lie detection technology until a particular method has been fully vetted in the peer-reviewed scientific literature and proved to be safe and effective.

Greely proposes a pre-market approval process similar to the one the Food and Drug Administration uses to govern the introduction of new drugs.

“We need to prevent the use of unreliable tech­nologies and develop fully detailed information about the limits of accuracy of even reliable lie detection,” he says. “Otherwise, honest people may be treated unfairly based on negative tests; dishonest people may go free.”

Web extra: Watch a video symposium on new lie detection technology.
