How to beat an fMRI lie detector

In a not-so-distant dystopia, you might be placed in a brain scanner to test whether you’re telling the truth. Here’s how to cheat.

The polygraph

First, you’ll need some background on old-school lie-detection technology. [This is a simplified story – see the literature on polygraphs for a richer account.] Polygraphs are seismographs for the nervous system. They measure physiological responses such as heart rate, blood pressure, sweating (via skin conductance), and breathing. When you’re anxious, angry, randy, in pain, or otherwise emotionally aroused, these measures spike automatically. The effort and stress of lying also cause them to spike.

Of course, if you’re trapped in a windowless room on trial for murder, these measures will probably be pretty high to begin with. So you’ll first be asked a few control questions to assess your baseline levels when lying and telling the truth, against which your physiological response to the important questions will be compared.

So, if you want to beat a polygraph, you either need to keep your physiological responses stable when you lie (which is difficult), or you need to artificially elevate your baseline response when telling the truth. The age-old technique is to place a thumb-tack in your shoe and press on it painfully with your toe when telling the truth. This spikes your physiological responses, providing a misleading control so that your responses when lying don’t stand out by comparison.

Functional magnetic resonance imaging

Now, on to fMRI. Simplifying again, the fMRI brain scanner takes a reading of the level of metabolic activity at thousands of locations around your brain every couple of seconds. Activity in a number of brain areas tends to be elevated when we lie, perhaps because we have to work harder to invent and keep track of the extra information involved in a lie, and override the default responses in the rest of the brain. Under laboratory conditions, accuracy at distinguishing truth from lie approaches 100%.

The modern machine learning algorithms used to make sense of the richer neural data are more sophisticated than those used in a polygraph. And they’re measuring your brain activity (albeit indirectly), so it might feel as though there’s no way to deceive them. But ultimately, they work in an analogous way to the polygraph, by comparing your neural response to the important questions with your neural response to the baseline questions. That means that they can be gamed in an analogous way – as you’re being asked the baseline questions, wiggle your head, take a deep breath, do some simple arithmetic or tell a lie in your head. Each of these will elevate the neural response artificially. By disrupting the baseline response, you disrupt the comparison.
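To make the baseline-comparison idea concrete, here’s a toy simulation – emphatically not a real fMRI pipeline, with all the numbers invented. It assumes scikit-learn and simulates voxel patterns for “truth” (baseline) and “lie” trials, trains a classifier on them, then adds extra activity to the baseline trials (the analogue of doing mental arithmetic during the baseline questions) and shows the truth/lie distinction degrading.

```python
# Toy sketch: a classifier trained to separate "truth" from "lie" voxel
# patterns, and what happens when the baseline is artificially elevated.
# All data here is simulated; the effect sizes are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# "Lie" trials carry extra signal in a subset of voxels (standing in for
# the conflict/control/working-memory areas mentioned above).
truth = rng.normal(0.0, 1.0, (n_trials, n_voxels))
lie = rng.normal(0.0, 1.0, (n_trials, n_voxels))
lie[:, :10] += 1.5  # elevated activity in the "lie-related" voxels

X = np.vstack([truth, lie])
y = np.array([0] * n_trials + [1] * n_trials)

clf = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])  # train on half
clean_acc = clf.score(X[1::2], y[1::2])                      # test on the rest

# Countermeasure: elevate those same voxels during the baseline questions,
# so truth trials now look like lie trials where it matters.
truth_cm = truth.copy()
truth_cm[:, :10] += 1.5
X_cm = np.vstack([truth_cm, lie])
cm_acc = clf.score(X_cm[1::2], y[1::2])

print(f"accuracy on clean data:       {clean_acc:.2f}")
print(f"accuracy with countermeasure: {cm_acc:.2f}")
```

In this simulation the classifier is near-perfect on clean data and falls toward chance once the baseline is disrupted – which is exactly the point: the algorithm is only as good as the comparison it’s given.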

Possible flaws in this argument

This simplified account of how to cheat an fMRI lie detector has some issues.

Firstly, it rests on the idea that we’ll still use some kind of comparison between baseline and important questions. In the case of most recent fMRI analyses, this is certainly true. Although they use modern machine learning classification algorithms to compare against baseline, they still seem subject to the same problems as the simpler statistical tests used in polygraphs.

Above, I suggested taking a deep breath, doing simple arithmetic or telling a lie in your head during the baseline questions. Taking a deep breath increases the BOLD response measured by fMRI throughout your brain. The idea behind doing arithmetic or telling a lie in your head is to engage the brain areas involved in internal mental conflict detection (between areas of the brain that are pulling in different directions), executive control (over the rest of your brain), and working memory, all areas whose activity changes when lying. As far as I know, the studies on lie detection have all used naive participants, and no one has yet tested the efficacy of these counter-measures.

I have also assumed that the analysis would be run ‘within subject’. In other words, the machine learning classifier algorithms would be making a comparison between baseline and important questions for the *same person*. However, there have been attempts to train the algorithms on a corpus of data from multiple participants beforehand, and then apply them to a new brain. This approach is considerably and inherently less accurate (less than 90%, as opposed to nearly 100%), since everyone’s brain is different, and since brain activity will probably vary for different kinds of lies. Indeed, there appears to be variability in the areas that have been identified by different experiments.
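A toy sketch of why cross-subject decoding is harder, again with purely invented numbers and assuming scikit-learn: train a classifier on one simulated “brain” and test it on another whose lie-related activity sits in different voxels.

```python
# Toy sketch: a classifier trained on one simulated subject generalizes
# poorly to a subject whose lie-related signal lives in different voxels.
# All data is simulated; nothing here reflects real fMRI measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulate_subject(signal_voxels, n_trials=200, n_voxels=50):
    """Simulate truth/lie trials with extra signal in the given voxels."""
    truth = rng.normal(0.0, 1.0, (n_trials, n_voxels))
    lie = rng.normal(0.0, 1.0, (n_trials, n_voxels))
    lie[:, signal_voxels] += 1.5
    X = np.vstack([truth, lie])
    y = np.array([0] * n_trials + [1] * n_trials)
    return X, y

X_a, y_a = simulate_subject(slice(0, 10))    # subject A: signal in voxels 0-9
X_b, y_b = simulate_subject(slice(10, 20))   # subject B: signal in voxels 10-19

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
within = clf.score(X_a, y_a)   # same subject (optimistic: training data)
across = clf.score(X_b, y_b)   # new subject the classifier never saw

print(f"within-subject accuracy: {within:.2f}")
print(f"cross-subject accuracy:  {across:.2f}")
```

Here the classifier is near-perfect on the brain it was trained on and near chance on the new one. Real cross-subject decoding does better than this caricature (brains overlap more than these disjoint voxel sets do), but the direction of the effect is the same.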

There are alternative experimental paradigms to the basic questioning approach described here. For instance, one might show someone the scene of a crime, and look to see whether their brain registers familiarity. I haven’t looked into this approach. But fundamentally, this familiarity assessment is much more limited in the kinds of questions that can be asked, and furthermore, you only get one chance to assess someone’s familiarity (after which the stimulus is, by definition, familiar). That single response simply might not be enough data to go on.

All of the studies so far have employed ‘willing’ participants. In other words, the participants kept their heads still, told the truth when they were asked to, and lied when they were asked to. An uncooperative participant might move around more (blurring the image), show generally elevated levels of arousal that could skew their data, be in worse mental or physical condition, and come from a different population than the predominantly white, young, relaxed, intelligent and willing undergraduate participants. We don’t know how these factors change things, and it’s difficult to see how we might collect reliable experimental data to better understand them.

I haven’t considered alternative imaging methodologies here (such as EEG or infrared imaging). Mostly, though, fMRI appears to be leading the field in terms of accuracy and effort spent, and these arguments should apply equally to EEG and other methods.

Why am I writing this?

There are a number of fMRI-based lie detection startups attracting government funding and attempting to charge for their services. I don’t begrudge them their entrepreneurial ambition, but I am dismayed by their hyperbolic avowals of success.

In truth, this is a new, mostly unproven technology that seems to work fairly well under laboratory conditions. But it’s subject to the same sensitivity/specificity tradeoffs that plague medical tests and traditional lie-detection technologies. An ostensibly direct window into the mind, wrapped in the shiny veneer of scientific infallibility, is a beguiling combination.

Eventually, the limitations of this technology will be realized. I’d prefer to see this techno-myth punctured and caution exercised now, rather than after costly mistakes have been made. Cheeringly, the courts appear to take the same view (at least so far).

My credentials

I’m finishing my PhD in the psychology and neuroscience of human forgetting at Princeton. I’ve worked on the application of machine learning methods to fMRI for the last few years, was part of the prize-winning team in the Pittsburgh fMRI mind-reading competition, and led the development of a popular software toolbox for applying these algorithms for scientific analysis. However, I have no expertise in the neuroscience of cognitive control, lie detection or law.

So I apologize if I’m wrong or out of date anywhere here. If so, I’d be glad to see this pointed out and to amend things.

7 thoughts on “How to beat an fMRI lie detector”

  1. Your comments about the polygraph are a bit naive. Several studies have demonstrated that people cannot “beat” a polygraph test by using countermeasures or somehow changing their physiological functioning at will.

    For starters, you may want to look at studies in the journal Psychophysiology.

    Louis Rovner, Ph.D.
    PolygraphReality.wordpress.com

  2. You make some good points.

    My interest in a really good lie-detector – fMRI or whatever – is having the opportunity to use it on members of Congress, scientists who submit papers that have important social consequences (Biederman, for example), judges, cops, etc.

    Not the gist of your post but worth considering.

  3. Thing is, even if the fMRI was beatable, if you knew exactly how, your average criminal/liar probably won't even know what an fMRI is, let alone specific techniques for beating it (and the reliability of said techniques).

    Also, nothing strikes me as dystopian about it. The only potential difference between an fMRI and a polygraph is accuracy. It'd only be a dystopia if third parties could scan your brain without you realising. And I think one would notice a two-ton machine making loud noises nearby.

  4. Louis,

    interpersonal expectancy is a huge part of whether a polygraph can 'successfully' detect a lie. Thus there is a lot of intentional theatre when doing polygraphs. Things like explaining the expertise of the individual conducting the exam, the wearing of lab coats etc. to enhance the perceived authority of the examiner, starting with one examiner and then switching to a 'higher caliber expert', using complex and 'scientific looking' apparatus.

    Without the interpersonal expectancy polygraphs are generally garbage.

    The wikipedia article on polygraphs is fairly good, and points one to the NAS 2003 report on polygraphs

    http://en.wikipedia.org/wiki/Polygraph

  5. At Louis Rovner,
    Polygraphs ARE fallible. There was a study from I believe Penn about it. The professor who published it got a lot of backlash from the companies that support it…since, oh, I don't know, if your entire career/corporation is based on the idea that “it works” and “it's infallible” and it proves to not be…it falls apart.

    It's not just the baseline that can be altered, it can be subtle differences in how questions are phrased. Training can't alter the very character of the person…

    Think about this, if you were asked, “did you rape your son” you're going to be affected by the question…the idea of it…elevating your responses making it seem like a lie. It means nothing.

    the fMRI isn't perfect, but it is much closer to a “lie detector” than the polygraph could ever be. Since it's based on blood flow, the act of trying to think of a lie will change blood flow. Meaning, because you THOUGHT about making a lie, those centers would have to start working and that will give away that you're thinking about a lie. So, that's already defeated. The baseline could be just watching the person lie still for ~5 minutes, or reading a book aloud, or solving a math problem. They require certain areas being used. There's no such thing as multi-tasking, so…the person will show spikes in each area involved alternatingly showing the instructions aren't being followed.

    So, going back on top Mr. Louis Rovner…have you been published in a peer reviewed journal yet? (One that isn't somehow affiliated with a corporation or making a profit I mean)

    V.B., soon to be MD

  6. Louis Rovner is a purveyor of high quality horse manure. As stated above, once someone knows how the polygraph show works, the examiner's game is done. You can con the unknowing, but those so informed are not so easily beaten.
