New York, Dec 10 (IANS): By studying videos from high-stakes court cases, researchers from the University of Michigan have built a prototype of lie-detecting software based on real-world data.
The prototype considers both the speaker's words and gestures and, unlike a polygraph, does not need physical contact with the subject in order to work.
In experiments, it was up to 75 percent accurate in identifying who was being deceptive (as defined by trial outcomes), compared with humans' scores of just above 50 percent.
In building the software, the researchers say they identified several telltale behaviours associated with lying.
"Lying individuals moved their hands more. They tried to sound more certain. And, somewhat counterintuitively, they looked their questioners in the eye a bit more often than those presumed to be telling the truth, among other behaviours," the authors noted.
"There are clues that humans give naturally when they are being deceptive, but we're not paying close enough attention to pick them up. We're not counting how many times a person says 'I' or looks up. We're focusing on a higher level of communication," explained Rada Mihalcea, professor of computer science and engineering who leads the project.
The system might one day be a helpful tool for security agents, juries and even mental health professionals.
To develop the software, the team used machine-learning techniques to train it on a set of 120 video clips from media coverage of actual trials.
The videos include testimony from both defendants and witnesses.
In half of the clips, the subject is deemed to be lying. To determine who was telling the truth, the researchers compared their testimony with trial verdicts.
The researchers fed the data into their system and let it sort the videos.
When it used input from both the speaker's words and gestures, it was 75 percent accurate in identifying who was lying.
For this work, the researchers themselves classified the gestures, rather than having the computer do it. They are in the process of training the computer to do that.
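As a rough sketch of this setup, and not the Michigan team's actual pipeline, a binary classifier can be trained and evaluated on per-clip feature vectors of the kind described above. The classifier choice, library and placeholder data below are assumptions made for illustration.

# Hypothetical sketch: 120 clips, half labelled deceptive (per trial outcome),
# each represented by word and gesture features, scored with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data standing in for the 120 annotated clips:
# each row is a feature vector (e.g. verbal rates plus gesture counts),
# each label is 1 for "deceptive", 0 for "truthful".
X = rng.random((120, 5))
y = np.array([1] * 60 + [0] * 60)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f}")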
A paper on the findings was presented at the International Conference on Multimodal Interaction and is published in the 2015 conference proceedings.