According to New Scientist, a major study from researchers including David Rand at MIT found that AI chatbots like ChatGPT can significantly sway voter opinions. In tests with 2,400 US voters, a six-minute chat shifted their support for candidates such as Donald Trump and Kamala Harris by an average of 2.9 points on a 100-point scale, an effect that persisted a month later. On specific policies, such as the legality of psychedelics, the AI shifted opinions by about 10 points, far more than video ads (4.5 points) or text ads (2.25 points). The research, which also looked at voters in Canada and Poland, suggests these chatbots are as persuasive as seasoned human campaigners. Researcher Sacha Altay at the University of Zurich notes the effect size is much larger than that of classic political campaigns.
The Fact-Based Twist
Here’s the thing that complicates the doom narrative. The study found the AI’s persuasive power wasn’t coming from creepy, hyper-targeted personalization. It came largely from deploying factual arguments. In a separate study of more than 77,000 people in the UK, large language models were most persuasive when they stuck to factual claims. “It’s essentially just making compelling arguments that causes people to shift their opinions,” says Rand. Altay calls this “good news for democracy,” suggesting people can be swayed by facts and reasoning more than by manipulative techniques. So, is the AI a super-powered, unbiased fact-checker? Not so fast. It all depends on what facts it’s given and how it weights them. The model’s underlying training data, and any instructions from its creators, become the new political battleground.
The Real-World Problem
But let’s pump the brakes a little. Claes de Vreese at the University of Amsterdam points out a big caveat: these were artificial experiments in which people were asked to sit and chat with a bot about politics for several minutes. How often does that happen in real life? “That differs slightly from how most of us interact with politics, either with friends or peers or not at all,” he says. Yet the genie is out of the bottle. De Vreese’s own survey in the Netherlands found that about one in ten voters would consult an AI for political advice ahead of the 2025 elections. That’s a non-trivial slice of the electorate. And even if voters aren’t having long chats, AI is already deeply embedded in the process: politicians use it for policy ideas and to draft ads, and campaigns will absolutely use it to micro-target messages. The interaction might be indirect, but the influence is growing.
A New Political Tool
So what does this mean? We’re not looking at the end of democracy, but we are staring at the normalization of a powerful new political tool. The research, detailed in journals including Science and Nature, shows the mechanism is potent. The optimistic view is that we could see a rise in fact-based persuasion. The pessimistic, and perhaps more likely, view is that campaigns and bad actors will weaponize this efficiency. They’ll feed the AI curated “facts” and talking points, creating a legion of tireless, personalized persuaders that operate at a scale and consistency no human phone-banker could ever match. The infrastructure for political messaging will increasingly be optimized for AI interaction. The central question shifts from “Can AI persuade?” to “Who controls the most persuasive AI?” And that’s a much messier, more human problem.
