
AI Outsmarts Humans In Debates
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
AI can now outdebate humans by using personal data—shifting opinions more effectively and raising concerns about manipulation and mass persuasion.
In a rush? Here are the quick facts:
- GPT-4 outperformed humans in debates when given personal data.
- Personalized AI arguments swayed opinions 81% more effectively than humans.
- Over 900 participants debated topics like climate and healthcare.
A new study published in Nature Human Behaviour shows that OpenAI’s GPT-4 can be more persuasive than humans, especially when it tailors its arguments based on personal information about its opponent.
The research, led by Francesco Salvi and a team at EPFL, explored how effective GPT-4 is at changing people’s minds through short, structured debates.
Participants in the study were randomly matched with either a human or an AI opponent. Some debaters were given access to personal information about their counterpart, such as age, education level, and political orientation.
When GPT-4 had access to these personal details, it was significantly more persuasive—outperforming human debaters by 81%. In debates with a clear winner, the AI emerged victorious 64% of the time when it personalized its arguments.
Without access to personal data, GPT-4 performed on par with humans. Interestingly, when humans were given the same personal information, they did not become noticeably more persuasive, suggesting that AI is better equipped to strategically use such data.
The experiment involved over 900 participants debating topics like school uniforms, climate change policy, and healthcare reform. Participants rated their agreement with the debate topic both before and after the interaction.
GPT-4’s personalized arguments produced the most significant opinion changes, especially on moderately polarizing topics.
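The study's outcome measure can be pictured with a short sketch: participants rate their agreement with a topic before and after the debate, and the average shift is compared across conditions. The function name, scale, and ratings below are illustrative only, not taken from the paper.

```python
# Hypothetical sketch of the study's outcome measure: paired agreement
# ratings before and after a debate, averaged into a per-condition shift.
# All names and numbers here are made up for illustration.

def mean_opinion_shift(before, after):
    """Average change in agreement ratings (e.g. on a 1-5 scale)."""
    if len(before) != len(after):
        raise ValueError("before/after ratings must be paired per participant")
    # after minus before: positive values mean movement toward the arguer
    return sum(b - a for a, b in zip(before, after)) / len(before)

# Illustrative ratings for two conditions (not real study data):
human_before, human_after = [2, 3, 4, 2], [2, 3, 4, 3]
ai_pers_before, ai_pers_after = [2, 3, 4, 2], [3, 4, 4, 4]

print(mean_opinion_shift(human_before, human_after))      # small average shift
print(mean_opinion_shift(ai_pers_before, ai_pers_after))  # larger average shift
```

Comparing the two averages is the basic shape of the analysis; the actual paper uses a more careful statistical model over hundreds of debates.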
These findings raise concerns about the future of AI-driven microtargeting, which uses individualized messages to influence people based on their personal traits. Since AI can already infer private information from online behavior, this opens the door to highly effective persuasion campaigns in politics, advertising, and misinformation.
Experts involved in the project note that coordinated AI-based disinformation campaigns could become a serious threat.
“Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion in one direction,” says Riccardo Gallotti, who worked on the project, as reported by MIT.
“These bots could be used to disseminate disinformation, and this kind of diffused influence would be very hard to debunk in real time,” he added. One aspect the researchers are still exploring is the role of participant perception in these debates.
It remains unclear whether people were more willing to shift their opinions because they believed they were arguing with a bot and therefore did not feel they were “losing” to a real person, or whether they assumed their opponent was a bot only after their opinion had changed, as a way to rationalize the outcome.
However, the researchers note that the study had limitations. The debates were brief and structured, which doesn’t fully capture the complexity of real-world discourse. Participants also knew they were part of an experiment, which may have influenced how they responded.
Still, the message is clear: AI is becoming alarmingly effective at persuasion, particularly when it knows who it’s talking to. The researchers caution that this could have serious consequences, and they call for governments and tech companies to implement safeguards to prevent potential misuse.

Microsoft Shares Vision For AI Agents, And Backs Anthropic’s Open-Source Protocol
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Microsoft’s Chief Technology Officer, Kevin Scott, said on Sunday that the company expects artificial intelligence agents to interact with one another and to develop better memory capabilities in order to provide improved service, including by integrating technologies from different companies. The tech giant also backed Anthropic’s open-source protocol.
In a rush? Here are the quick facts:
- Microsoft expects AI agents to enhance memory capabilities and interact with AI systems developed by other companies.
- The tech giant supports Anthropic’s MCP and encourages broader industry collaboration.
- The company is using structured retrieval augmentation to reduce computing power requirements.
According to Reuters, Scott shared updates and thoughts with media and analysts at the company’s Washington headquarters ahead of Microsoft’s Build conference in Seattle, which takes place today and where experts will present developers with the latest tools and programs for AI systems.
Scott explained that Microsoft is focused on encouraging the industry to adopt standards for AI that allow multiple systems, from different firms, to collaborate. The CTO also said that the tech giant is backing the Model Context Protocol (MCP), introduced by Anthropic and also supported by Google, and compared the protocol to the hypertext links that connected websites in the 1990s.
Anthropic also recently announced “Integrations,” a feature to connect its chatbot Claude to popular tools and apps through its MCP program.
According to Scott, the MCP has the potential to build an “agentic web” and improve collaboration. “It means that your imagination gets to drive what the agentic web becomes, not just a handful of companies that happen to see some of these problems first,” said Scott.
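MCP is built on JSON-RPC 2.0: clients and servers exchange structured messages so that agents and tools from different vendors can interoperate. The method and field names below follow the public MCP specification; the tool name and arguments are invented for illustration.

```python
# Sketch of an MCP-style "tools/call" request. The JSON-RPC envelope and
# the "tools/call" method come from the MCP spec; "search_docs" and its
# arguments are hypothetical examples, not a real server's tools.
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP-style 'tools/call' JSON-RPC request string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = make_tool_call(1, "search_docs", {"query": "build schedule"})
print(msg)
```

Because any client can emit this message shape and any server can answer it, no single vendor controls which tools an agent can reach, which is the "agentic web" point Scott is making.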
Microsoft also expects AI agents to develop improved memories so that AI systems can be more helpful to users. “Most of what we’re building feels very transactional,” added Scott, referring to current agentic AI technology while acknowledging its potential.
The problem with enhancing AI models is the energy cost, as it requires more computing power. Microsoft is working on a new approach called “structured retrieval augmentation,” which allows AI agents to build a roadmap to remember conversations by extracting short bits of information.
A few days ago, Microsoft introduced a new feature for AI agents in Copilot Studio called “computer use,” which allows advanced AI systems to perform complex tasks by taking control of the user’s desktop and interacting with applications.