Lie Detector 2.0: AI vs. Human Cunning

Hello, this is Sherpa Robotics. Today we have translated for you an article by Jessica Hamzelou, who covers biomedicine and biotechnology for MIT Technology Review.

Can you tell when someone is lying? That question probably nags many people after every political debate. Research shows that humans are not very good at the task. But what if artificial intelligence could help us? The new technology promises greater accuracy than older methods such as the polygraph.

AI-powered lie detection systems can help us separate truth from fake news, assess the credibility of claims, and perhaps even spot lies in job applications. But do we trust them? And should we?

In a recent study, Alicia von Schenk and her colleagues developed a tool that is significantly better than humans at detecting lies.

What happened? The researchers conducted an experiment in which people wrote about their weekend plans. Half of the participants were paid to lie, that is, to produce a plausible but untrue statement. In total, the researchers collected 1,536 statements from 768 people.

The study, published in the journal iScience, reported two main findings:

First, the AI-powered tool is indeed effective at detecting lies.

Second, people who use it become better at recognizing lies, but they also become more likely to accuse others of lying.

Let's look at how an AI lie detector works.

The researchers used 80% of the collected statements to train an algorithm based on Google’s BERT language model. They then tested the algorithm on the remaining 20% of statements and found that it correctly classified them as true or false 67% of the time. That’s significantly better than the average human, who guesses correctly about half the time.
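For readers curious what such a pipeline looks like in practice, here is a minimal sketch of fine-tuning a BERT classifier on labeled statements with an 80/20 split, using the Hugging Face libraries. The example statements, model checkpoint, and hyperparameters are illustrative assumptions, not the authors’ exact setup.

```python
# Minimal sketch: fine-tune BERT to classify statements as true (0) or
# false (1), with an 80/20 train/test split as in the study.
# The data, model checkpoint, and hyperparameters are illustrative assumptions.
import numpy as np
from datasets import Dataset
from sklearn.model_selection import train_test_split
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical weekend-plan statements with truth labels.
texts = [
    "We are driving to the lake on Saturday morning.",
    "I will spend the whole weekend repainting my kitchen.",
    "My sister is visiting and we plan to cook together.",
    "I am flying to Berlin for a chess tournament.",
]
labels = [0, 1, 0, 1]  # 0 = true, 1 = lie

train_x, test_x, train_y, test_y = train_test_split(
    texts, labels, test_size=0.2, random_state=42)  # 80% train, 20% test

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128,
                     padding="max_length")

train_ds = Dataset.from_dict({"text": train_x, "label": train_y}).map(
    tokenize, batched=True)
test_ds = Dataset.from_dict({"text": test_x, "label": test_y}).map(
    tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, y_true = eval_pred
    y_pred = np.argmax(logits, axis=-1)
    return {"accuracy": float((y_pred == y_true).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector", num_train_epochs=3),
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # the study reported ~67% accuracy on held-out data
```

With 1,536 real statements in place of these four toy examples, exactly this kind of held-out evaluation is what produces an accuracy figure like the 67% reported in the paper.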

What happened when the researchers gave participants the option of using the AI lie detector?

Only a third of participants chose to use it, perhaps out of skepticism about the technology or overconfidence in their own lie-detection abilities. But those who did use the AI almost always followed its predictions.

How does this affect our behavior?

We usually assume that people are telling the truth. In this study, participants knew that half of the statements were false, yet without AI assistance they flagged only 19% of them as lies. When people used the AI detector, the accusation rate jumped to 58%. On the one hand, this is good: such tools can help us identify lies, for example in fake news on social media. On the other hand, it can erode trust, a fundamental aspect of human behavior that helps us build relationships. If the price of accurate judgment is the destruction of social bonds, is it worth it?

Let's look at the issue of accuracy.

In the study, the researchers aimed to create a tool that would outperform humans at detecting lies. This isn’t that difficult, given how bad we are at it. But what if such a tool were used to assess the truthfulness of social media posts or screen job candidates’ resumes? In such cases, it’s not enough for the technology to be “better than humans.”

How should we respond to imperfect accuracy?

Are we willing to accept 80% accuracy, where one in five statements is judged incorrectly? What about 99% accuracy? And don't forget the troubled history of earlier lie detectors.

The polygraph was designed to measure heart rate and other signs of “arousal” on the assumption that liars show telltale signs of stress; they do not. Polygraph results are generally not admissible in U.S. court cases. Even so, the polygraph is still used in some settings, and it has caused real harm when used to accuse people on reality TV shows.

AI tools could have a major impact on lie detection in the future because they are easy to scale.

A polygraph can be administered to only a limited number of people per day. An AI lie detector, by contrast, could be applied to enormous numbers of people and vast amounts of data.
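A rough back-of-the-envelope calculation shows why scale magnifies even modest error rates. All the numbers below (screening volume, base rate of lying, detector accuracy) are invented for illustration:

```python
# Back-of-the-envelope: false accusations from an imperfect detector at scale.
# Every number here is an illustrative assumption, not a figure from the study.
n_statements = 1_000_000  # statements screened per day (hypothetical)
base_rate    = 0.10       # fraction that are actually lies (hypothetical)
accuracy     = 0.80       # detector accuracy on both classes (hypothetical)

lies         = n_statements * base_rate
true_reports = n_statements - lies

lies_caught    = lies * accuracy                # lies correctly flagged
false_accusals = true_reports * (1 - accuracy)  # honest people flagged

print(f"Lies caught:           {lies_caught:,.0f}")     # 80,000
print(f"Honest people accused: {false_accusals:,.0f}")  # 180,000
```

Under these assumptions the detector produces more than two false accusations for every lie it catches. That is the trade-off behind the warning below: at polygraph scale such errors touch dozens of people a day; at AI scale, potentially millions.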

AI tools can be useful in combating fake news and disinformation, but they need to be carefully vetted. If an AI lie detector generates too many false accusations, it might be best not to use it at all.

Sherpa Robotics Commentary

According to a Superjob survey, recruiters increasingly consider polygraph use in hiring to be ineffective, and applicants are less and less willing to agree to interviews that involve this kind of testing.

The share of Russian companies using lie detectors remains small: 2% of organizations use them for all employees, and 8% only for certain positions during hiring. In 2021-2022 the number of employers opposed to the polygraph grew: 56% do not consider a lie detector appropriate in hiring, up from 47% in 2020.

Yet in 2023, Sber employees published an article with the international publishing group Nature. The paper, which appeared in the scientific journal Scientific Reports (part of the Nature Portfolio), describes an automated lie-detection tool. The bank's researchers built an AI system for reviewing the results of polygraph tests: the artificial intelligence can process polygraph data and expert opinions, and offer an assessment of its own. According to the Sber research team, this is the world's first prototype of such a tool for supporting a polygraph examiner.

The AI solution is already in use at the bank and is going through the patenting process.

*Read more about the Sber research team's work: https://lenta.ru/news/2023/04/17/al/

Polygraph tests paired with artificial intelligence may well become far more common in the coming years. But given the persistent risk of errors, is that justified?

Have you ever had to undergo a lie detector test? Tell us about your experience. Do you think artificial intelligence will actually make such tests more common? Will employers trust AI more to the detriment of potential employees?
