Artificial Intelligence Outperforms Humans at Lie Detection, New Research Shows

TapTechNews, July 15 — research announced by the University of Würzburg in Germany on July 12 (local time) shows that, in an era of increasingly prevalent fake news, suspicious statements by politicians, and manipulated videos, artificial intelligence detects lies better than humans do.


Researchers from Würzburg, Duisburg, Berlin, and Toulouse explored how effective AI is at detecting lies and how it affects human behavior. The study's main findings can be summarized as follows:

In text-based lie detection, AI is more accurate than humans.

Without AI support, people are reluctant to accuse others of lying.

With AI support, people are more willing to voice suspicion that they have encountered a lie.

Only about one-third of the study participants took the opportunity to ask the AI for an assessment. However, most of those who did followed the algorithm's advice.

To prepare for the study, the research team asked nearly 1,000 people to write down their plans for the upcoming weekend. In addition to a true statement, each person was also asked to write a fictional statement about their plans; to make these fictional statements as persuasive as possible, the team offered a monetary incentive. After quality checks, the team obtained a data set of 1,536 statements from 768 authors.

Based on this data set, the research team developed and trained a lie-detection algorithm using Google's open-source language model BERT. After training, the algorithm correctly identified nearly 81% of the lies in the data set.
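The study's actual fine-tuning code is not reproduced here, but the dataset's paired structure (each author contributes one true and one fabricated statement) suggests a specific evaluation pitfall: an author's two statements should never be split across train and test sets. The sketch below illustrates that author-level split and an accuracy harness, using toy data and a trivial stand-in classifier; all names and data are illustrative assumptions, not from the paper, and the stand-in occupies the slot where the fine-tuned BERT model would go.

```python
import random

def split_by_author(pairs, test_fraction=0.2, seed=0):
    """Split author-paired statements so that both of an author's statements
    (one true, one fabricated) land in the same split, avoiding leakage."""
    rng = random.Random(seed)
    authors = list(pairs)
    rng.shuffle(authors)
    n_test = int(len(authors) * test_fraction)
    test_authors, train_authors = authors[:n_test], authors[n_test:]
    flatten = lambda group: [item for pair in group for item in pair]
    return flatten(train_authors), flatten(test_authors)

def accuracy(classify, data):
    """Fraction of statements whose predicted label matches the true label."""
    return sum(classify(text) == label for text, label in data) / len(data)

# Toy data mirroring the study's scale: 768 authors, 1,536 statements.
# Label 0 = true statement, 1 = fabricated statement.
pairs = [
    [(f"author {i}: actual weekend plan", 0),
     (f"author {i}: invented weekend plan", 1)]
    for i in range(768)
]

train_set, test_set = split_by_author(pairs)

# Stand-in classifier that always answers "true statement"; the study
# fine-tuned BERT for this step instead. Because every author contributes
# exactly one lie and one truth, this baseline scores exactly 50%.
baseline = lambda text: 0
print(f"statements: {len(train_set) + len(test_set)}")
print(f"baseline accuracy: {accuracy(baseline, test_set):.3f}")
```

The author-level split is the design choice worth noting: shuffling individual statements instead of author pairs would let a model exploit writing-style cues shared between an author's true and fabricated statements.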

In the main study, the team randomly selected 510 statements and recruited another 2,040 participants. These subjects were divided into four groups and asked to judge whether the statements they received were true or false.

Group 1: Evaluated the statements without AI support.

Group 2: Always saw the algorithm's assessment before making a judgment.

Groups 3 and 4: Could actively request an AI assessment, but had to pay a small fee for it.

TapTechNews note: In practice, Group 3 members chose not to request advice from the AI, while Group 4 members requested an assessment from the AI and received it.

The experimental results are as follows:

Group 1's judgment accuracy was 46.5% (roughly equivalent to random guessing).

Group 2's judgment accuracy was 60.1%.

People are usually reluctant to accuse others of lying: in Group 1, fewer than 20% of members chose to point out lies; in Group 2, which automatically received the AI's assessment, the figure rose to about 30%; and in Group 4, the accusation rate increased significantly, to about 58%.

Only about one-third of participants asked the lie-detection algorithm for assistance. Those who requested a prediction were strongly inclined to follow the algorithm's advice, with a follow rate of about 88%.

Among those who automatically received the AI's assessment, only 57% followed its advice.

The difference was even more pronounced when the AI judged a statement to be a lie: 85% of those who had requested the AI's assessment agreed with its judgment, while only 40% of those who received the assessment automatically followed its advice.

TapTechNews has attached the paper's address: https://doi.org/10.1016/j.isci.2024.110201.
