A photo of a man undertaking a polygraph test. Flickr user Lwp Kommunikácio

This Is How America's Spies Could Detect Lying in the Future

IARPA has awarded a prize for JEDI MIND trick software. By Patrick Tucker

Polygraph-based lie detection technology remains the government's standard method of spotting deceit. MRI-based lie detection systems are better, so long as you can get the person you are evaluating over to a huge neural imager and can afford $2,600 per scan. But what the national security community has long wanted is a lie detection system that works in the field, can be deployed anywhere and can spot deceit on site, immediately: a polygraph encoded in software.

In February, the Intelligence Advanced Research Projects Activity (IARPA) announced an unusual competition called INSTINCT, which stands for Investigating Novel Statistical Techniques to Identify Neurophysiological Correlates of Trustworthiness. The goal was to develop “innovative algorithms that can use data from one participant to accurately predict whether their partner will make trusting decisions and/or act in a trustworthy manner.”

On Thursday, the agency announced the winner of the contest, a project called JEDI MIND, which stands for Joint Estimation of Deception Intent via Multisource Integration of Neuropsychological Discriminators. The creators of the winning algorithm, Troy Lau and Scott Kuzdeba with BAE Systems, found that their system could predict trustworthiness 15 percent better than a baseline analysis.

According to IARPA, the researchers “found that someone’s heart rate and reaction time were among the most useful signals for predicting how likely their partner was to keep a promise.” The methodology of the experiments has not yet been released.
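
IARPA hasn't published the methodology, but the general shape of the task is easy to sketch in code. The toy example below is purely illustrative and is not Lau and Kuzdeba's algorithm: it generates synthetic heart-rate and reaction-time data, trains a simple logistic-regression classifier to predict whether a partner keeps a promise, and reports how much the model improves on a naive always-guess-the-majority baseline.

```python
# Illustrative sketch only -- NOT the JEDI MIND algorithm. It shows the general
# shape of the task: predict whether a partner keeps a promise from simple
# physiological/behavioral features (heart rate, reaction time), then compare
# the model's accuracy against a naive "always guess the majority" baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic data: heart rate (bpm) and reaction time (ms) per participant.
heart_rate = rng.normal(75, 10, n)
reaction_time = rng.normal(450, 80, n)

# Synthetic ground truth: in this toy world, partners with calmer pulses and
# quicker answers are slightly more likely to keep their promise.
p_keep = 1 / (1 + np.exp(0.05 * (heart_rate - 75) + 0.004 * (reaction_time - 450)))
kept_promise = rng.random(n) < p_keep

X = np.column_stack([heart_rate, reaction_time])
X_train, X_test, y_train, y_test = train_test_split(X, kept_promise, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
model_acc = model.score(X_test, y_test)

# Baseline: always predict the most common outcome in the training data.
baseline_acc = max(y_train.mean(), 1 - y_train.mean())

print(f"model accuracy:    {model_acc:.2f}")
print(f"baseline accuracy: {baseline_acc:.2f}")
print(f"improvement over baseline: {100 * (model_acc - baseline_acc) / baseline_acc:.0f}%")
```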

“Knowing who can be trusted is essential for everyday interactions and is especially vital for many Intelligence Community (IC) missions and organizations. Improving this capability to know whom to trust could have profound benefits for the IC, as well as for society in general,” Adam Russell, IARPA program manager said in a statement.

How might the government’s lie-detecting software robots benefit society? Stopping people from lying about whether they’ve come into contact with Ebola is an obvious one, but there are others.

A quick look at the military’s efforts to quantify truthiness in recent years offers some clues about where the research is headed.

A Brief History of the Government’s Lie Detecting Computer Research

Probably no researcher has been more important to a computational approach to trust than Paul Ekman, one of the world’s foremost experts on lie detection, specifically on how deception reveals itself through facial expression. Ekman’s work has shown that with just a bit of training a person can learn to spot active deceit with 90 percent accuracy simply by observing certain visual and auditory cues -- wide and fearful eyes and fidgeting, primarily -- and do so in just 30 seconds. If you are a TSA agent who has to screen hundreds of passengers at a busy airport, 30 seconds is about as much time as you can take to decide whether to pull a suspicious person out of line or let her go get on a plane.

While Ekman’s lie detection methods worked well in sit-down interviews, they weren’t designed to be used on people waiting in a line. That was a capability the government wanted, and so the DARPA Rapid Checkpoint Screening Program was launched in 2005 to take some of Ekman’s findings and automate violent intent detection, making the assessment objective. In other words, the goal was to develop a machine that could anticipate whether or not someone might be a risk to the safety of a plane.

The biometric detection of lies could involve a number of methods. Today, we know that someone who hesitates while texting is a bit more likely to be lying to you than someone who answers back right away, according to a 2013 study from researchers at Brigham Young University. Your voice can also reveal clues to “fraudulent behavior” in ways that are hard to detect with the naked ear but can be caught algorithmically.
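
The texting-hesitation signal, at its crudest, is just arithmetic on timestamps: how long did this reply take compared with the sender's usual pace? The snippet below is a hypothetical illustration of that idea, with a made-up threshold; it is not the BYU researchers' methodology.

```python
# Hypothetical sketch of the texting-hesitation signal: flag replies whose
# latency is far above the sender's typical response time. The threshold and
# statistics here are made up for illustration, not drawn from the BYU study.
from statistics import mean, stdev

def flag_hesitations(latencies_sec, z_threshold=2.0):
    """Return indices of replies that took unusually long for this sender."""
    if len(latencies_sec) < 3:
        return []  # not enough history to establish a baseline
    mu, sigma = mean(latencies_sec), stdev(latencies_sec)
    if sigma == 0:
        return []
    return [i for i, t in enumerate(latencies_sec)
            if (t - mu) / sigma > z_threshold]

# Reply latencies in seconds for one conversation; the 95-second pause stands out.
latencies = [4.2, 6.1, 3.8, 5.0, 95.0, 4.9, 5.5]
print(flag_hesitations(latencies))  # -> [4]
```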

However, the most promising method is thermal image analysis for anxiety. If you look at the heat coming off someone’s face with a thermal camera, you can see large hot spots in the area around the eyes (the periorbital region). This indicates activity in the sympathetic/adrenergic nervous system, which is a sign of fear. Someone at a checkpoint with hot eyes is someone who is probably nervous about something. The hope of people in the lie detection business like the late Ralph Chatham, Ekman and DARPA's Larry Willis is that very sensitive sensors placed a couple of inches away from a subject’s face would provide a reliable cue.
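
In software, the check boils down to comparing the average temperature in a small box around the eyes against the rest of the face, or against the subject's own resting reading. The sketch below assumes the thermal camera delivers each frame as a 2D array of temperatures and that the eye and face regions have already been located by some upstream detector; those assumptions, and the threshold, are illustrative only.

```python
# Illustrative periorbital-heat check. Assumes the thermal camera delivers a
# 2D array of temperatures (degrees C) and that face/eye-region bounding boxes
# have already been located by some upstream detector -- both are assumptions.
import numpy as np

def periorbital_elevation(frame, eye_box, face_box):
    """How much warmer the eye region is than the face overall (degrees C)."""
    top, bottom, left, right = eye_box
    eye_temp = frame[top:bottom, left:right].mean()
    top, bottom, left, right = face_box
    face_temp = frame[top:bottom, left:right].mean()
    return eye_temp - face_temp

# Toy frame: a 34 C face with a patch around the eyes running 2 C hotter.
frame = np.full((240, 320), 34.0)
frame[80:110, 120:200] += 2.0

delta = periorbital_elevation(frame, eye_box=(80, 110, 120, 200),
                              face_box=(40, 220, 80, 240))
print(f"periorbital elevation: {delta:.1f} C")
if delta > 1.5:  # made-up threshold for illustration
    print("elevated -- subject may warrant a closer look")
```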

Unfortunately, implementing such a system in an airport setting proved unworkable in 2006, when the TSA began to experiment with live screeners who were taught to examine people’s facial expressions, mannerisms and so on for signs of lying as part of a program called SPOT (Screening of Passengers by Observation Techniques). When a police officer trained in “behavior detection” harassed King Downing, an ACLU coordinator who is black, a lawsuit followed. As Downing’s lawyer John Reinstein told The New York Times, “There is a significant prospect this security method is going to be applied in a discriminatory manner. It introduces into the screening system a number of highly subjective elements left to the discretion of the individual officer.”

Later, the Government Accountability Office would tell Congress that the TSA had “deployed its behavior detection program nationwide before first determining whether there was a scientifically valid basis for the program.”

DARPA’s Larry Willis defended the program before Congress, noting that “a high-risk traveler is nine times more likely to be identified using Operational SPOT versus random screening.”

Today’s computerized lie detectors in airports take the form of Embodied Avatar Kiosks that watch eye dilation and other factors to discern whether passengers are being truthful or deceitful. No, the kiosk isn’t going to do a cavity search, but it can summon an agent if it robotically determines you’re just a bit too shifty to be allowed on a plane without an interview.
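
Stripped of the avatar, the kiosk's decision logic amounts to a referral rule: track a signal such as pupil diameter during questioning and summon a human if it strays too far from the traveler's own baseline. The snippet below is a guess at what such a rule might look like, with a made-up threshold; it is not the kiosk's actual software.

```python
# Hypothetical referral rule in the spirit of the avatar kiosk -- not its actual
# software. Compare pupil diameter during a question against the traveler's own
# baseline and refer them to a human agent if the change exceeds a threshold.
def refer_to_agent(baseline_mm, during_question_mm, threshold_pct=15.0):
    """Refer if pupil diameter grows more than threshold_pct over baseline."""
    change_pct = 100.0 * (during_question_mm - baseline_mm) / baseline_mm
    return change_pct > threshold_pct

print(refer_to_agent(baseline_mm=3.2, during_question_mm=3.9))  # True (~22% dilation)
print(refer_to_agent(baseline_mm=3.2, during_question_mm=3.4))  # False (~6%)
```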

The announcement gets us closer to a day when computers can detect truth and trustworthiness better than was previously thought possible. That could be an important breakthrough for a variety of reasons, not the least of which is helping intelligence professionals develop better lie detection techniques when no computer is around.

“We’re delighted with Lau and Kuzdeba’s insight into the data,” Russell said. “Their performance under the rigorous evaluation process of the INSTINCT Challenge provides additional evidence in support of one of the TRUST program’s basic hypotheses: that the self’s own, often non-conscious signals – if they can be detected and leveraged appropriately – may provide additional valuable information in trying to anticipate the intentions of others.” 

What would this look like?

Imagine that you are asked to make a promise. It’s one you do not intend to keep, but you say ‘yes’ anyway, after a moment’s hesitation. Your pulse rises in a way that makes your cheeks feel warm. None of this is noticeable to anyone -- but a computer analyzing your neural, physiological and behavioral signals has determined what you already know: you’re lying.

Editor’s Note: A portion of this article was excerpted from The Naked Future: What Happens In A World That Anticipates Your Every Move? by Patrick Tucker, Current, 2014.

CORRECTION: An earlier version of this article mistakenly referred to the late Paul Ekman. Ekman runs The Paul Ekman Group in San Francisco.   
