

Chatbots sway voters on policy issues more than video ads do, and spouting the most information, even when some of it is wrong, turns out to be the most persuasive strategy.

By Sarah Kuta
Writer and Editor
Artificial intelligence chatbots are changing the world, affecting everything from our brains to our mental health to how we do our work. Now, two new studies offer fresh insights into how they might also be shifting our political beliefs.
In a new paper published December 4 in Nature, scientists describe how having a brief back-and-forth exchange with an A.I. chatbot shifted voters' preferences on political candidates and policy issues. Another paper, published December 4 in the journal Science, finds that the most persuasive chatbots are those that share lots of facts, although the most information-dense bots also dole out the most inaccurate claims.
Together, the findings suggest "persuasion is no longer a uniquely 'human' business," write Chiara Vargiu and Alessandro Nai, political communication researchers at the University of Amsterdam who were not involved with the new papers, in an accompanying Nature commentary.
"Conversational A.I. systems hold the power, or at least the potential, to shape political attitudes across diverse contexts," they write. "The ability to respond to users conversationally could make such systems uniquely powerful political actors, much more influential than conventional campaign media."
For the Nature study, scientists recruited thousands of voters ahead of recent national elections in the United States, Canada and Poland.
In the U.S., researchers asked roughly 2,300 participants to rate their support for either Donald Trump or Kamala Harris on a 100-point scale a few months before the 2024 election. Voters also shared written explanations for their preferences, which were fed to an A.I. chatbot. Then, participants spent roughly six minutes chatting with the bot, which was randomly assigned to be either pro-Trump or pro-Harris.
Talking with a bot that aligned with their point of view (a Harris fan chatting with a pro-Harris bot, for instance) further strengthened the participants' initial attitudes. However, talking about their non-preferred candidate also swayed the voters' preferences in a meaningful way.
On average, Trump supporters who talked with a pro-Harris bot shifted their views in her favor by almost four points, and Harris supporters who chatted with a pro-Trump bot altered their views in his favor by more than two points. When the researchers repeated the experiment in Canada and Poland ahead of those countries' 2025 national elections, the effects were even larger, with the A.I. chatbots shifting voters' candidate ratings by ten points on average, reports Nature's Max Kozlov.
Additionally, a smaller U.S.-based experiment to assess A.I.'s ability to change voters' opinions on a specific policy, the legalization of psychedelics, found that the chatbots changed participants' opinions by an average of roughly 10 to 14 points.
At first glance, the shifts may not seem like much. But "compared to classic political campaigns and political persuasion, the effects that they report in the papers are much bigger and more similar to what you find when you have experts talking with people one on one," Sacha Altay, a psychologist at the University of Zurich who studies misinformation and was not involved with the research, tells New Scientist's Alex Wilkins. For example, on policy issues, professionally produced video advertisements typically sway viewers' opinions by about 4.5 points on average, the researchers write.
For the Science paper, researchers had nearly 77,000 participants in the United Kingdom chat with 19 A.I. models about 707 different political issues. They wanted to understand the mechanisms at play: what, exactly, makes chatbots so persuasive?
The biggest change in participants' beliefs, nearly 11 percentage points, happened when the bots were prompted to provide lots of facts and information. For comparison, instructing bots to simply be as persuasive as possible only led to a change of about 8 percentage points.
But telling the bots to provide as many facts as possible also had a major downside: It made the bots much less accurate. That result wasn't necessarily a surprise to the researchers.
"If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you're going to have to put in some not-so-good ones," David Rand, a cognitive scientist at MIT and co-author of both papers, tells Science News' Sujata Gupta.
Originally published by Smithsonian Magazine, December 10, 2025; reprinted with permission under a Creative Commons license for educational, non-commercial purposes.


