
Catching the Brain in a Lie: Is “Mind Reading” Deception Detection Sci-Fi—or Science?

July 22, 2015
by Eli Wolfe

Ever since the inception of our species, humans have wanted to peer inside each other’s minds. A major reason is that we lie. We lie a lot, and on the whole, we are quite good at it. The capacity for deception may be one of the most significant cognitive gifts evolution has given us.

But it turns out that we lack an equal genius for spotting deception. Instead we keep trying to capitalize on technology—hoping it can do the detecting for us.


The tantalizing prospect of using neuroscience to decode the brain just received a big funding boost: President Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Now in its second year, the BRAIN Initiative has spent $300 million and is expected to invest up to $1 billion over the next decade, fostering interdisciplinary research to better visualize the human brain and understand how it works.

The prime public focus is on research to enhance mental and physical health—advances that might dramatically improve the prognosis for patients with conditions such as Parkinson’s, Alzheimer’s, schizophrenia and post-traumatic stress disorder. But experts are exploring the intended and unintended consequences of technologies that could emerge. Among them: brain imaging that could identify deception.

“Novel neuroscience techniques might soon reveal (with a cooperative witness) whether an individual recognizes a face or an object, possesses knowledge relevant to a legal proceeding, is lying or telling the truth, or even allow reconstruction of the visual imagery seen at the time of the crime,” the Presidential Commission for the Study of Bioethical Issues recently reported. Part of the report, based on expert testimony received at nine public meetings, explores what it labels as the profound ethical and legal questions raised by the notion that the government—or anyone—could attain the means to “interrogate the brain.”


“Not only is ultra-high resolution brain imaging technology coming soon, we can predict lie detection to be one of its earliest applications,” Bobby Azarian, a neuroscientist who earned his Ph.D. at George Mason University, wrote in Atlantic Media’s national security publication Defense One. He envisions it as a useful and more moral tool to solve crimes, and to replace torturous techniques such as waterboarding for interrogating terrorists.

The eye-roll reaction from skeptical experts: Here we go again.

History, they say, is littered with examples of once-hyped lie detectors that fell short of being routinely reliable, much less foolproof—from the polygraph, created by a police officer and medical student at UC Berkeley, to functional magnetic resonance imaging (fMRI) scans such as those that the San Diego company No Lie MRI began marketing in 2006.

The prime research advance that Azarian cited in his Defense One article—the creation of a more sophisticated brain imaging technique called Magnetic Particle Imaging (MPI)—is the specialty of Steven Conolly, professor of bioengineering and electrical engineering and computer science at UC Berkeley. But Conolly, who notes that his lab has designed and built all the MPI scanners in North America, is so adamant that his research (including that funded by the National Institutes of Health via the BRAIN Initiative) not be linked in any way to lie detection that he refused to be interviewed about it.

Instead he emailed California the following statement: “We are very excited about MPI’s immense potential to improve early-stage diagnosis of conditions like cancer, coronary artery disease, stroke, neurodegenerative diseases, infection, inflammation and TB through its increased sensitivity, resolution and safety. We are collaborating with researchers at other universities, through NIH Brain initiative funding, to investigate how recent breakthroughs in MPI scanner technology could dramatically improve neuroscientists’ ability to study the function of the brain.

“However, we are not pursuing any applications in deception detection, nor do we see deception detection as a viable goal for MPI scanners.”


Undaunted, Azarian, who also did not interview Conolly about his work, sees enormous promise in the potential real-world applications of MPI for deception detection. Theoretically, MPI is capable of increasing the sensitivity of a brain scan 100-fold over traditional fMRI—a technique sometimes disparagingly referred to as “blobology” because it tracks activity in large, unspecified areas of the brain.

MPI “doesn’t get us down to the cellular level, but it’s a lot better and you can probably see activity within groups of neurons,” says Azarian, who insists the advanced spatial resolution of an MPI-based lie detector will make it far more effective at distinguishing brain patterns associated with honesty, or the lack thereof. “With advances in brain imaging resolution, I think we’re going to get there, maybe fairly soon.”

So for the foreseeable future, is the notion of brain-image lie detection more science fiction than science?


“Avoid hype, overstatement, and unfounded conclusions,” cautions the presidential bioethics commission in one of its recommendations. “The ethical implications of potential technologies must be considered before those technologies are used widely. But scholars have been criticized for putting the cart before the horse—puzzling through potential implications of a technology that is not ready for valid and reliable use creates the expectation that it works.”

Avoiding hype makes sense, especially if you look at the history of lie detection, starting with the polygraph. Invented in 1921, this device measures blood pressure, pulse rate, respiration and skin conductivity to pinpoint symptoms of lying. Its Berkeley mastermind, John Larson, was inspired by the work of William Moulton Marston, the psychologist who created Wonder Woman (whose weapon of choice was the Lasso of Truth).

Two years later, in 1923, a federal appeals court ruling in Frye v. United States prohibited the admission of polygraph results in court, on the grounds that the technique had not gained general acceptance in the scientific community. But that didn’t stop law enforcement agencies from relying on the polygraph for interrogating criminal suspects, even after it failed to catch infamous liars like the CIA mole Aldrich Ames or Gary Ridgway, the Green River Killer.

Then in 1992, researchers pioneered fMRI. Unlike a standard MRI, which captures three-dimensional images of internal soft tissue, fMRI monitors changes in blood oxygenation and flow in the brain. Because blood flow and neural activity are closely linked, fMRI can record extended patterns of brain activity.
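
For the technically curious, this linkage is the basis of how fMRI data are modeled. The Python sketch below is a toy illustration, with conventional but not authoritative parameters: hypothetical neural events are convolved with a canonical hemodynamic response function (HRF) to produce the delayed, smeared BOLD signal the scanner actually records.

```python
# Toy sketch: fMRI does not observe neurons directly. The measured BOLD
# signal is commonly modeled as underlying neural activity convolved with
# a hemodynamic response function (HRF). Parameters here are illustrative.
import numpy as np
from scipy.stats import gamma

TR = 1.0                      # sampling interval in seconds
t = np.arange(0, 30, TR)      # 30 seconds of HRF support

# Canonical double-gamma HRF: an early peak minus a small late undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.max()

# Hypothetical neural events: brief bursts 10 s and 40 s into a scan.
neural = np.zeros(60)
neural[[10, 40]] = 1.0

# The predicted BOLD time course lags and smears those events, which is
# why fMRI reflects extended patterns of activity rather than spikes.
bold = np.convolve(neural, hrf)[: len(neural)]
print(np.round(bold, 2))
```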

Soon fMRI became a well-established research tool within universities and hospitals, helping produce invaluable data on normal and abnormal brain functions, human behavior and diseases.

But the general public was far less familiar with it. So a decade ago, when the company No Lie MRI went public, minds were blown by what appeared to be mind-reading lie detection technology. It’s not hard to see why this misunderstanding took hold: Unlike the polygraph, which measures outward symptoms of deception, an fMRI scan can look inside the brain and record a lie as it is being born—or so the theory goes.

The predictions of a lie-free future came thick and fast. The Washington Post confidently speculated that with fMRI technology, “The Lie May Be on Its Last Legs.” The American Civil Liberties Union was so concerned by the potential threats to privacy that it filed a Freedom of Information Act request demanding that U.S. intelligence agencies turn over any records pertaining to government research and development of fMRI-based lie detector technology (to date, the request has received no response).


Neuroscience evidence has become common in U.S. court cases—it’s used in more than 5 percent of murder trials and 25 percent of death penalty trials. Often it is used to establish diminished capacity, such as in cases involving children and adolescents. But although India became the first country to convict a man based on brain scan evidence, U.S. judges have repeatedly rejected attempts to introduce fMRI lie-detection results in criminal and civil cases.

To see why, it helps to understand how fMRI was supposed to divine truth. It starts with the premise that an fMRI scan can actually show distinctive patterns of activity that correlate with deceptive or honest activity.

“In theory, it takes more neural activity to lie than tell the truth because you have to construct a narrative, so the extent of neural activity can be relevant in determining whether you’re lying,” said Andrea Roth, an assistant professor at Berkeley Law.

As Mark Twain once observed, “If you tell the truth, you don’t have to remember anything at all.”

“The [other] idea,” Roth added, “is that there are certain parts of the brain that are associated with lying, so you can see which parts are firing up when you get to certain questions.”

The two types of tests used to trigger these reactions are virtually identical to the ones used for polygraphs. One is the control question test, which involves asking a series of banal questions to establish a baseline for the subject’s truthful answers before the real interrogation begins. The second is the guilty knowledge test, which examines whether the subject possesses knowledge of a crime that only a criminal would know—for example, what objects were at a crime scene. In both tests, researchers measure the magnitude and location of neural activity in a subject’s brain in response to questions. Once the measurements are complete, a researcher should, in theory, be able to determine whether a person is lying.
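
In statistical terms, both tests reduce to comparing response magnitudes between baseline and probe questions. The Python sketch below illustrates that logic with synthetic numbers; the effect sizes, sample sizes and significance threshold are invented for illustration, not validated criteria from any real protocol.

```python
# Toy sketch of the control-question comparison, under the contested
# premise that lying produces a larger neural response than truth-telling.
# All numbers are synthetic; the threshold is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-question response magnitudes (arbitrary units) from a
# region of interest: banal baseline questions vs. real interrogation items.
control_responses = rng.normal(1.0, 0.2, size=20)
probe_responses = rng.normal(1.3, 0.2, size=20)

# Are responses to the probes reliably larger than the truthful baseline?
t_stat, p_value = stats.ttest_ind(probe_responses, control_responses)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05 and probe_responses.mean() > control_responses.mean():
    print("Flag: probe responses exceed the truthful baseline.")
```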

If that sounds too simple, you’re in good company—most neuroscientists would agree. For starters, there’s no consensus in scientific circles about what part of the brain controls deception. The prefrontal cortex, which is responsible for planning and other executive functions, is one promising candidate. But according to Jack Gallant, a professor of neuroscience at UC Berkeley, the search for this Holy Grail of deception control in the brain underestimates the complexity involved in planning a lie.

“Most things in the brain are distributed over multiple brain areas, and the patterns relating brain activity to any other sort of behavior state tend to be thoroughly complicated,” Gallant said. “I mean, there are different kinds of lies, there are different motives for lying, there are different ways of telling a lie…. Most memories that people have are not accurate—they’re confabulated.”


Thus it may not come as a surprise that experiments testing fMRI-based lie detection techniques are riddled with confounds. Last year, Anthony Wagner, a professor of neuroscience at Stanford University and a member of the Law and Neuroscience Project, co-authored a meta-analysis of dozens of lab-based studies testing whether fMRI could distinguish the lying mental state. The conclusion: Virtually all fMRI-based lie-detection experiments suffer from serious design flaws. For example, in one experiment a subject was instructed to “steal” one of two objects from a drawer. In the scanner, the subject denied possessing both objects—but the scan showed a stronger fMRI response when the subject was referring to the stolen object. One way of interpreting that result is to say that the scan detected the lie.

But in a similar experiment, two groups of subjects were scanned while they viewed a series of numbers, having chosen one in advance. One group was instructed to lie about not seeing the number they had selected when it flashed on the screen while the other group passively viewed the numbers. Upon comparison, both groups showed similar brain activity.

“So the signals being picked up don’t necessarily have to be about the mental state of lying or deception,” Wagner said. “They could have to do with attention and memory that differ between the conditions of the experiment because of how the experiment was conducted.”
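
Wagner’s point can be made concrete with a toy simulation: if the lying condition also demands more attention than passive viewing, a statistically significant signal difference emerges even when deception itself contributes nothing. The Python sketch below uses synthetic data and an arbitrary attention effect; none of the numbers come from a real experiment.

```python
# Toy illustration of a design confound: the "lie" condition is also the
# more attention-demanding condition, so a group difference appears even
# though deception contributes zero signal. Synthetic data throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30

attention_truth = rng.normal(1.0, 0.2, n)  # passive viewing: low demand
attention_lie = rng.normal(1.5, 0.2, n)    # lying task: high demand

deception_effect = 0.0                     # deception adds nothing here
signal_truth = attention_truth
signal_lie = attention_lie + deception_effect

# The test comes back "significant" -- driven entirely by attention.
t_stat, p_value = stats.ttest_ind(signal_lie, signal_truth)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```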

Nor is there a reliable way to prevent countermeasures that can disrupt an fMRI scan—something as minor as wiggling your toes or clenching your anus can muck up the results. But the greatest doozy remains a conceptual one: To distinguish a truth from a lie by looking at a brain scan, a neuroscientist has to be able to “read” brain activity on an incredibly nuanced level.

“It’s trying to interpret what they think is right or wrong, or what they’re thinking about, and that’s a level of knowledge that no one has shown that we can get to,” said Mark D’Esposito, a professor of neuroscience and psychology at UC Berkeley and one of the earliest practitioners of fMRI technology.

But as with the polygraph, expert skepticism doesn’t convince everyone. It certainly hasn’t swayed Joel Huizenga, founder and CEO of No Lie MRI, who staunchly contends that fMRIs are 100 percent accurate lie detectors, despite their rejection by the courts.

“Anyone with a Ph.D. can keep anything out of court. It’s just their opinion,” he says.


Huizenga goes so far as to suggest that neuroscientists who have studied and dismissed his company’s technology are driven by politics or have been bribed (he did not provide evidence to substantiate these claims). Even if the technology isn’t perfect, he asks, why should society be denied the use of a tool that is still more sophisticated than a polygraph?

“Should we work toward some theoretical perfection in 20 years? If you don’t start something now, you’ll never optimize it, you’ll never make it better,” he says. “These people are in an ivory tower.”

Huizenga’s claim will almost certainly set more eyes rolling in the neuroscience community. But here’s the thing: Even staunch skeptics won’t rule out the possibility that researchers could create an accurate brain-imaging lie detector in the future. Gallant at Berkeley concedes that in 50 years, with the right technological advances, a brain-scanning device capable of detecting deception may be possible.

Currently, fMRI-based lie-detection evidence is not allowed in court under either the Frye standard or the Daubert standard, the two tests used in the U.S. to determine the admissibility of scientific evidence.

Under the Frye standard, judges assess the general opinion of the scientific community. Under the Daubert standard, judges in federal court are treated as “gatekeepers” and are expected to admit expert testimony based on their own assessment of the science. If evidence from an fMRI-based lie detector is ever admitted into court, it will almost certainly be under the Daubert standard.


“God knows we also admit tons of other evidence that has pretty darn high error rates,” says Roth, the Berkeley law professor. “Eyewitness testimony, confession evidence, bite marks—all this junk science that you read about in The New York Times, we’re letting all that in as evidence of guilt.”

So what happens if somewhere in the near or far future, technology does produce a brain-based lie detection technique—whether improved fMRI, MPI or some yet-to-be-invented method—that courts deem accurate enough to be admissible?

A lot of ink has been spilled describing the fallout. Frankly, it’s a titillating hypothetical—the kind of scenario that gets tossed out during a pitch meeting for a new sci-fi show. But it has also attracted serious thinkers in neurolaw who have tried to predict how this technology could potentially transform the legal system.

In an essay, Nita Farahany, a neuroscience expert at Duke University, has mused that if fMRI-based lie detectors become a reality, Miranda warnings may come to include a caution to an arrestee that “any incriminating thoughts he consciously ruminates or recalls may be used against him.”


Under the Fifth Amendment, citizens are protected against self-incrimination—but that protection applies only to testimony. Schmerber v. California determined that physical evidence, like a blood sample, does not fall under the scope of the Fifth. If there were such a thing as an accurate brain-based lie detector, prosecutors might argue that a defendant’s thoughts represent physical evidence and thus would no longer be privileged under the Fifth Amendment.

“We’re getting to the point in science where we realize that the mind-body distinction is actually quite illusory,” Roth says. “Your brain is just another part of your body, and there’s no real clear separation between mind and body.”

The reality of a brain-based lie detector also would introduce a new threat to defendants: the expectation that anyone innocent of wrongdoing would take a test. If someone refused to take it, “would you assume that the person was lying?” Roth asks.

Privacy advocates would refuse to accept this interpretation of mental privacy. Brain imaging, of course, was inconceivable when the Constitution was written—“but any fair reading of the principles that are incorporated should extend them to this technology,” said Jay Stanley, a senior policy analyst at the ACLU.

Yet the concept of an authoritative lie detector intrigues some advocates of judicial reform. With it, a court wouldn’t have to depend on a jury to determine the credibility of an eyewitness, or a defendant’s account of an alleged crime. It also could help pinpoint and then cull from the jury pool those with implicit bias, a well-documented phenomenon among jurors.

All these scenarios and concerns, of course, are based on a hypothetical future.

“We are a justice system that loves gadgets—we love the Breathalyzer, DNA, photographs, blood typing and radar guns,” Roth says. “We’re very happy to have gadgets be a part of our mode of proof. But we don’t like lie detectors.”

The White House has high hopes that its ambitious BRAIN Initiative will revolutionize our understanding of the brain, likening it to the landmark Human Genome Project. Wherever the consequences of the BRAIN Initiative lead, some remain concerned about possible ulterior motives. Its research is funded by federal health agencies, but also in part by the Defense Advanced Research Projects Agency (DARPA), which develops military technology and gave us the Internet. And because the government has not responded to the ACLU’s Freedom of Information Act request, it’s impossible to know with certainty whether any agency is actually interested in using brain-imaging technology for counter-terrorism or military purposes.

On a more fundamental level, the concept of accurate lie-detector technology may represent an idea too repugnant to justify its regular use in society—both in courts and outside of them. This idea was summed up in April 1958, when Pope Pius XII, in addressing the Rome Congress of the International Association of Applied Psychology, spoke out against the polygraph and other methods of penetrating man’s “mysterious core.”

Pius was concerned about the soul. But in an age in which the boundaries of privacy have grown increasingly porous, the mind is truly the last place of personal refuge. Would people be willing to give that up?
