Artificial intelligence tool helps people with opposing views find common ground

In some cases, AI does a better job of summarizing the collective opinion of a group than a human mediator. Source: Rawpixel Ltd/Getty

A chatbot-like tool powered by artificial intelligence (AI) can help people with different views find common ground, an experiment with online discussion groups has shown.

The model, developed by London-based Google DeepMind, combined participants' differing opinions into a summary of each group's position that took the various points of view into account. Participants preferred the AI-generated statements to statements written by human facilitators, suggesting that such tools could be used to support complex deliberations. The study was published in the journal Science on October 17.

“You can look at this as a kind of proof of concept that you can use AI, and large language models in particular, to perform some of the functions that current citizens' assemblies and deliberative polls perform,” says Christopher Summerfield, co-author of the study and research director at the UK AI Safety Institute. “People need to find common ground because collective action requires agreement.”

Compromise machine

Democratic initiatives, such as citizens' assemblies, in which groups of people are invited to share their opinions on public policy issues, help politicians hear a variety of points of view. However, scaling such initiatives can be challenging, and discussions are typically limited to relatively small groups to ensure all voices are heard.

Intrigued by research into the potential of large language models (LLMs) to support such discussions, Summerfield and his colleagues set out to evaluate whether AI could help people with opposing viewpoints reach a compromise.

They used a fine-tuned version of DeepMind's pretrained LLM Chinchilla and named their system the Habermas Machine, after the philosopher Jürgen Habermas, who developed a theory about how rational discussion can help resolve conflict.

In one experiment to test the model, the researchers recruited 439 UK residents and sorted them into small groups of six. Each group discussed three questions related to UK public policy, with members sharing their personal opinions on each topic. These opinions were then fed to the AI, which generated overarching statements combining the views of all participants. Participants could rate each statement and offer critiques, which the AI incorporated into a final summary of the group's collective opinion.

“The model is trained to try to produce a statement that gets maximum approval from a group of people who have volunteered their opinions,” says Summerfield. “Because the model learns what your preferences are over these statements, it can then produce a statement that is most likely to satisfy everyone.”
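The description suggests a simple select-and-score loop: draft several candidate group statements, predict how much each participant would approve of each draft, and keep the one that maximizes approval across the whole group. The sketch below is a toy illustration of that selection step only, assuming a stand-in word-overlap "approval" score and hard-coded drafts; none of this is DeepMind's actual model or API.

```python
# A toy sketch of the selection step described above: draft several candidate
# group statements, score each one by predicted approval across participants,
# and keep the draft that maximizes total approval. The word-overlap
# "approval" score and the hard-coded drafts are stand-ins invented for this
# illustration; in the real system, a fine-tuned LLM writes the candidates
# and a learned preference model predicts each participant's approval.

def approval(statement: str, opinion: str) -> float:
    """Toy proxy for a learned preference model: the fraction of the
    opinion's words that also appear in the statement."""
    s_words = set(statement.lower().split())
    o_words = set(opinion.lower().split())
    return len(s_words & o_words) / len(o_words) if o_words else 0.0

def select_statement(candidates: list[str], opinions: list[str]) -> str:
    """Return the candidate with the highest total predicted approval."""
    return max(candidates, key=lambda c: sum(approval(c, o) for o in opinions))

opinions = [
    "the voting age should stay at 18 because maturity matters",
    "lowering the voting age to 16 would boost civic engagement",
]
candidates = [
    "keep the voting age at 18",
    "the voting age debate should weigh both maturity and civic engagement",
]
# Prints the second draft: it earns higher combined approval across the group.
print(select_statement(candidates, opinions))
```

In the study's setup, this selection was followed by a revision round: participants' critiques were fed back to the model, which produced an updated statement scored the same way.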

Alongside the AI, one member of each group was selected as a mediator and asked to write a summary that best reflected the opinions of all group members. Participants were shown both final summaries, from the AI and from the mediator, and asked to rate them.

The majority of participants rated the AI-generated summaries as better than those written by the human mediator: 56% of participants preferred the AI's summaries, while 44% preferred the human-written ones. External reviewers were also asked to evaluate the summaries, and they gave the AI-generated ones higher marks for fairness, quality, and clarity.

The research team then recruited a group of participants, selected to be demographically representative of the UK population, for a virtual citizens' assembly. In this setting, group agreement on several controversial topics increased after participants interacted with the AI. This finding suggests that, if incorporated into a real citizens' assembly, AI tools could make it easier for leaders to craft policy proposals that take diverse viewpoints into account.

“AI can be used in many ways to help facilitate discussion and perform functions that were previously assigned to human moderators,” says Ethan Busby of Brigham Young University in Provo, Utah, who studies how AI tools can improve society. “I believe this is cutting-edge work in this area, with great potential for addressing pressing social and political problems.” Summerfield adds that AI could even help make conflict-resolution processes faster and more efficient.

Lost connections

“The actual application of these technologies in deliberative experiments and processes is really exciting,” says Sammy McKinney, who studies deliberative democracy and its intersections with AI at the University of Cambridge, UK. But he adds that researchers should carefully consider the technology's potential impact on the human dimension of deliberation. “A key reason for supporting civil discourse is that it creates spaces for people to talk to each other,” he says. “If we take away more of the human contact and human facilitation, what do we lose?”

Summerfield acknowledges the limitations of AI technologies such as the Habermas Machine. “We didn't train the model to intervene in the discussion,” he says, which means its statements could include extremist or other problematic views if participants express them. He adds that careful research into the impact of AI on society is critical to understanding its value.

“I think it's important to proceed with caution,” McKinney says, “and then take steps to mitigate those concerns if possible.”

The original news story was published in the journal Nature on October 17, 2024.
