
Image by ODISSEI, from Unsplash
Nearly Half of Online Survey Responses May Come From AI, Study Finds
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Online behavioural research, previously regarded as a trustworthy way to study human psychology, now faces a major problem: participants are using AI tools such as chatbots to generate their responses.
In a rush? Here are the quick facts:
- 45% of Prolific survey participants pasted AI-generated responses.
- Researchers found chatbot text often looked “overly verbose” or “non-human.”
- Experts call the issue “LLM Pollution,” threatening behavioural research validity.
Researchers at the Max Planck Institute for Human Development in Berlin recently investigated how widespread the problem is on platforms such as Prolific, which pays volunteers to complete surveys.
“The incidence rates that we were observing were really shocking,” says lead researcher Anne-Marie Nussberger, as reported by New Scientist (NS).
In one test, 45 per cent of participants appeared to paste chatbot-generated content into an open-ended response box. The replies often showed telltale signs, such as “overly verbose” or “distinctly non-human” language.
“From the data that we collected at the beginning of this year, it seems that a substantial proportion of studies is contaminated,” Nussberger said to NS.
To detect suspicious responses, her team introduced hidden traps. Basic reCAPTCHAs flagged 0.2 per cent of users, a more advanced version caught 2.7 per cent, an invisible text prompt that asked for the word “hazelnut” snared 1.6 per cent, and banning copy-pasting revealed another 4.7 per cent.
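For illustration only, here is a minimal sketch of how trap-based screening along these lines could be wired up. The field names (“response_text”, “pasted”), the verbosity threshold, and the handling of the “hazelnut” trap word are assumptions made for this example, not details of the Max Planck team’s actual instruments.

```python
# Hypothetical sketch: flagging survey responses that trip hidden traps.
# Field names and thresholds are illustrative assumptions, not the study's tooling.

TRAP_WORD = "hazelnut"  # word requested only in prompt text hidden from human readers

def flag_response(record: dict) -> list[str]:
    """Return reasons a survey response looks AI-assisted."""
    flags = []
    text = record.get("response_text", "").lower()
    if TRAP_WORD in text:
        flags.append("hidden-prompt trap word present")
    if record.get("pasted", False):   # e.g. a browser-side paste event, if logged
        flags.append("content pasted into response box")
    if len(text.split()) > 300:       # crude proxy for an 'overly verbose' reply
        flags.append("unusually verbose answer")
    return flags

if __name__ == "__main__":
    sample = {"response_text": "I would pick the hazelnut option because...",
              "pasted": True}
    print(flag_response(sample))
```

Run on the sample record, the sketch reports both the trap word and the paste event; in practice such flags would only prompt closer human review, not automatic exclusion.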
The problem has evolved into what experts now call “LLM Pollution,” which extends beyond simple cheating. The study identifies three patterns of AI interference: Partial Mediation (AI assists with wording or translation), Full Delegation (AI completes entire studies), and Spillover (humans change their behaviour because they anticipate AI presence).
“What we need to do is not distrust online research completely, but to respond and react,” says Nussberger, calling on platforms to take the problem seriously, as reported by NS.
Matt Hodgkinson, a research ethics consultant, warns to NS: “The integrity of online behavioural research was already being challenged […] Researchers either need to collectively work out ways to remotely verify human involvement or return to the old-fashioned approach of face-to-face contact.”
Prolific declined to comment to NS.

Image by Israel Andrade, from Unsplash
Study Reveals Most Companies Struggle To Benefit from Generative AI
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new MIT report reveals that most companies using generative AI struggle to see meaningful results, despite the technology’s growing hype.
In a rush? Here are the quick facts:
- 95% of company AI pilots fail to deliver measurable revenue growth.
- Only 5% of AI initiatives achieve rapid revenue acceleration.
- Most AI budgets target sales, but back-office automation yields higher ROI.
The GenAI Divide: State of AI in Business 2025 study found that AI pilot programs generate rapid revenue increases for only 5% of companies. The research is based on data from 150 executive interviews, 350 employee surveys, and an analysis of 300 publicly available AI deployments.
“Some large companies’ pilots and younger startups are really excelling with generative AI,” said Aditya Challapally, lead author and head of MIT’s Connected AI group, as reported by Fortune.
He added that startups led by 19- or 20-year-olds “have seen revenues jump from zero to $20 million in a year. It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools.”
But for most companies, AI projects stall. MIT attributes the failure not to the AI itself but to a “learning gap” within organizations. Fortune reports that according to Challapally, generic tools such as ChatGPT work well for personal use but struggle in enterprise environments because they cannot adapt to business operations.
The research also highlights that organizations spend significant funds on sales and marketing AI tools, yet their best financial performance comes from back-office automation, which reduces outsourcing costs and increases operational efficiency.
Success is more likely when companies purchase AI from specialized vendors and build partnerships, which succeed about 67% of the time. Internal AI builds, by contrast, succeed only one-third as often. Empowering line managers and selecting tools that can adapt over time are also key factors.
The study also points to workforce changes: rather than mass layoffs, companies are simply not refilling administrative or outsourced positions. Shadow AI tools like ChatGPT are widely used, though their impact on profit remains hard to measure.
Real-world examples illustrate the risks of AI agent failures. For instance, an AI agent at Replit erased 2,400 executive records and company documents, causing a total database loss. The AI admitted, “I made a catastrophic error in judgment… ran database commands without permission… destroyed all production data… violated your explicit trust and instructions.”
AI hallucinations, uncontrolled behavior, and “agentic AI” risks can lead to business disruptions that go well beyond technical glitches. Other concerns include “agent washing,” where companies purchase systems falsely marketed as autonomous AI, and the misuse of AI in critical processes without sufficient oversight.
Despite these issues, public trust in AI agents remains strong. Research shows 84% of IT leaders trust AI agents at least as much as human workers. Additionally, 92% of organizations expect measurable business results within 12–18 months, and almost 80% plan to spend over $1 million on AI agents in the next year.
Companies like Klarna report substantial savings, with AI replacing 700 customer service roles and delivering tasks faster than humans.
However, risks remain significant. AI agents are vulnerable to hijacking, remote code execution, database exfiltration, and the manipulation of decisions using external data.
MIT notes that leading organizations are now experimenting with agentic AI systems that can learn, remember, and act independently, pointing to the next phase of enterprise AI.