
Study Warns AI Could Supercharge Social Media Polarization
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Artificial intelligence could supercharge polarization on social media, warn Concordia researchers and students, raising concerns over free speech and misinformation.
In a rush? Here are the quick facts:
- AI algorithms can spread division using only follower counts and recent posts.
- Reinforcement-learning bots adapt quickly to exploit social media vulnerabilities.
- Experts warn platforms risk either censorship or unchecked manipulation.
Although polarization on social media is nothing new, researchers and student activists at Concordia University warn that artificial intelligence could make the problem much worse.
“Instead of being shown footage of what’s happening or content from the journalists who are reporting on it, we’re instead seeing overly dramatized AI art of things we should care about politically […] It really distances people and removes accountability,” said Danna Ballantyne, external affairs and mobilization coordinator for the Concordia Student Union, as reported by The Link.
Her concerns echo new research from Concordia, where professor Rastko R. Selmic and PhD student Mohamed N. Zareer showed how reinforcement-learning bots can fuel division online. “Our goal was to understand what threshold artificial intelligence can have on polarization and social media networks, and simulate it […] to measure how this polarization and disagreement can arise,” Zareer said, as reported by The Link.
The findings suggest that algorithms don’t need private data to stir division: basic signals like follower counts and recent posts are enough. “It’s concerning, because [while] it’s not a simple robot, it’s still an algorithm that you can create on your computer […] And when you have enough computing power, you can affect more and more networks,” Zareer explained to The Link.
This mirrors a wider body of research showing how reinforcement learning can be weaponized to push communities apart. The study by Concordia used Double-Deep Q-learning and demonstrated that adversarial AI agents can “flexibly adapt to changes within the network, allowing it to effectively exploit structural vulnerabilities and amplify divisions among users,” as the research noted.
Indeed, Double-Deep Q-learning is an AI technique where a bot learns optimal actions through trial and error. It uses deep neural networks to handle complex problems and two value estimates to avoid overestimating rewards. In social media, it can strategically spread content to increase polarization with minimal data.
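For readers curious what that technique looks like in code, here is a minimal, hypothetical sketch of a Double-Deep Q-learning update written in PyTorch. It is not the Concordia researchers’ implementation: the state features (follower counts, recent-post activity), the action set, the reward, and the network sizes are all illustrative assumptions. Only the core “double” trick, where one network selects the next action and a second network evaluates it to avoid overestimating rewards, reflects the technique described above.

```python
# Minimal sketch of the Double-Deep Q-learning idea described in the article.
# NOT the study's code: features, actions, rewards, and sizes are assumptions.
import torch
import torch.nn as nn

STATE_DIM = 4   # e.g. follower count, following count, recent-post rate, engagement (assumed)
N_ACTIONS = 3   # e.g. amplify, reply, stay silent (hypothetical action set)
GAMMA = 0.99

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, x):
        return self.net(x)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())  # periodic re-syncing omitted in this sketch
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)

def double_dqn_update(state, action, reward, next_state, done):
    """One gradient step. The 'double' trick: the online net picks the next
    action, but the target net evaluates it, which curbs overestimated rewards."""
    q_sa = online(state).gather(1, action)                      # Q(s, a) for taken actions
    with torch.no_grad():
        best_next = online(next_state).argmax(1, keepdim=True)  # action selection
        q_next = target(next_state).gather(1, best_next)        # action evaluation
        target_q = reward + GAMMA * q_next * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target_q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch: random public-signal features standing in for real observations;
# the reward would be some measurable change in a polarization metric.
batch = 8
s = torch.rand(batch, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (batch, 1))
r = torch.rand(batch, 1)
s2 = torch.rand(batch, STATE_DIM)
d = torch.zeros(batch, 1)
double_dqn_update(s, a, r, s2, d)
```

Even at this toy scale, the loop needs nothing beyond publicly observable signals, which is precisely the point the researchers make about how little data such an agent requires.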
Zareer warned that policymakers face a difficult balance. “There is a fine line between monitoring and censoring and trying to control the network,” he said to The Link. Too little oversight lets bots manipulate conversations, while too much risks suppressing free speech.
Meanwhile, students like Ballantyne fear AI is erasing lived experience. “AI completely scraps that,” she said to The Link.

Image by Aerps.com, from Unsplash
One-third Of AI Search Answers Contain Unsupported Claims, Study Finds
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new study claims that AI tools designed to answer questions and perform online research are struggling to live up to their promises.
In a rush? Here are the quick facts:
- GPT-4.5 gave unsupported claims in 47% of responses.
- Perplexity’s deep research agent reached 97.5% unsupported claims.
- Tools often present one-sided or overconfident answers on debate questions.
Researchers reported that about one-third of answers given by generative AI search engines and deep research agents contained unsupported claims, and many were presented in a biased or one-sided way.
The study, led by Pranav Narayanan Venkit at Salesforce AI Research, tested systems like OpenAI’s GPT-4.5 and GPT-5, Perplexity, You.com, Microsoft’s Bing Chat, and Google Gemini. Across 303 queries, answers were judged on eight criteria, including whether claims were backed up by sources.
The results were troubling. GPT-4.5 produced unsupported claims in 47 per cent of answers. Bing Chat had unsupported statements in 23 per cent of cases, while You.com and Perplexity reached about 31 per cent.
Perplexity’s deep research agent performed the worst, with 97.5 per cent of its claims unsupported. “We were definitely surprised to see that,” Narayanan Venkit said to New Scientist.
The researchers explain that generative search engines (GSEs) and deep research agents (DRs) are supposed to gather information, cite reliable sources, and provide long-form answers. However, when tested in practice, they often fail.
The evaluation framework, called DeepTRACE, showed that these systems frequently give “one-sided and overconfident responses on debate queries and include large fractions of statements unsupported by their own listed sources,” as noted by the researchers.
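As a rough illustration of what “unsupported by their own listed sources” means as a metric, here is a toy Python sketch that computes the fraction of claims in an answer that no cited source backs up. This is not the DeepTRACE code: the function name and the naive keyword-overlap check are hypothetical stand-ins for whatever claim-to-source matching the researchers actually used.

```python
# Toy sketch of an "unsupported claim fraction" metric, in the spirit of the
# evaluation described above. NOT DeepTRACE: the overlap heuristic is assumed.
def unsupported_fraction(claims: list[str], sources: list[str]) -> float:
    """Return the share of claims with no supporting source (naive word-overlap check)."""
    def supported(claim: str) -> bool:
        claim_words = set(claim.lower().split())
        # A claim counts as supported if it shares at least 3 words with some source.
        return any(len(claim_words & set(src.lower().split())) >= 3 for src in sources)

    if not claims:
        return 0.0
    unsupported = sum(1 for claim in claims if not supported(claim))
    return unsupported / len(claims)

# Example: the first claim overlaps a cited source, the second does not.
claims = [
    "The model produced unsupported claims in 47 per cent of answers.",
    "The tool is the most reliable search engine available today.",
]
sources = ["Study: the model produced unsupported claims in 47 per cent of answers tested."]
print(unsupported_fraction(claims, sources))  # 0.5
```

The real framework scores answers on several such dimensions at once, which is how it also surfaces the one-sidedness and overconfidence the researchers describe.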
Critics warn that this undermines user trust. New Scientist reports that Felix Simon at the University of Oxford said: “There have been frequent complaints from users and various studies showing that despite major improvements, AI systems can produce one-sided or misleading answers.”
“As such, this paper provides some interesting evidence on this problem which will hopefully help spur further improvements on this front,” he added.
Others questioned the methods but agreed that reliability and transparency remain serious concerns. As the researchers concluded, “current public systems fall short of their promise to deliver trustworthy, source-grounded synthesis.”