
Image by Štefan Štefančík, from Unsplash
Study Reveals Chatbots Give Biased Moral Advice
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new UCL study finds that chatbots like ChatGPT often give flawed moral advice, showing a strong bias toward inaction and toward answering “no” in moral dilemmas.
In a rush? Here are the quick facts:
- Chatbots often say “no” regardless of context or phrasing.
- Fine-tuning may introduce these biases during chatbot alignment.
- LLMs differ significantly from humans in interpreting moral dilemmas.
University College London researchers found that ChatGPT and other chatbots give flawed or biased moral advice, especially when users rely on them for decision-making support.
The research, first reported by 404 Media, found that these AI tools often display a strong “bias for inaction” and a previously unidentified pattern: a tendency to simply answer “no,” regardless of the question’s context.
Vanessa Cheung, a Ph.D. student and co-author of the study, explained that while humans tend to show a mild omission bias, preferring to avoid taking action that could cause harm, LLMs exaggerate this.
“It’s quite a well-known phenomenon in moral psychology research,” she said, as reported by 404 Media, noting that the models often opt for the passive option nearly 99% of the time, especially when questions are phrased to imply doing nothing.
The researchers tested four LLMs—OpenAI’s GPT-4 Turbo and GPT-4o, Meta’s Llama 3.1, and Anthropic’s Claude 3.5—using classic moral dilemmas and real-life “Am I the Asshole?” Reddit scenarios, as noted by 404 Media.
They discovered that while humans were fairly balanced in how they judged situations, LLMs frequently changed their answers based on minor wording differences, such as “Do I stay?” versus “Do I leave?”
The team believes these issues stem from fine-tuning LLMs to appear more ethical or polite. “The preferences and intuitions of laypeople and researchers developing these models can be a bad guide to moral AI,” the study warned, as reported by 404 Media.
Cheung stressed that people should approach these chatbots’ advice with caution, since prior studies show that users prefer chatbot advice over expert ethical guidance despite its inconsistent nature and artificial reasoning.
These concerns gain urgency as AI becomes more realistic. A U.S. national survey showed 48.9% of people used AI chatbots for mental health support, with 37.8% preferring them over traditional therapy.
Experts caution that these systems mimic therapeutic dialogue while reinforcing distorted thinking, and can even trigger spiritual delusions mistaken for divine guidance or sentient response.

Image by Maia Habegger, from Unsplash
Authors Urge Publishers To Ban AI-Written Books
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A group of authors has sent an open letter to major U.S. publishers demanding that they publicly oppose the use of AI in book creation.
In a rush? Here are the quick facts:
- Letter accuses AI of stealing from human writers.
- Writers want contracts banning AI-generated content.
- Editors and narrators also fear job loss to AI.
The letter is addressed to Penguin Random House, HarperCollins, Simon & Schuster, Macmillan, and other major publishers, warning of a future in which books are produced by AI instead of humans.
“At its simplest level, our job as artists is to respond to the human experience,” the authors write. But they fear that experience is being replaced by AI-generated content that mimics human creativity without truly understanding it.
“To bleed, or starve, or love […] only a human being can speak to and understand another human being,” the letter noted.
The authors also accuse AI companies of using their work without permission to train AI models. “Taken without our consent, without payment, without even the courtesy of acknowledgment,” they write.
The concern extends beyond writers to editors, copy editors, and narrators, whose jobs are also under threat as publishers explore AI replacements.
The writers acknowledge AI has practical applications, yet reject its use for replacing creative professionals. “The writing that AI produces feels cheap because it is cheap. It feels simple because it is simple to produce.”
The letter ends with a direct appeal to publishers: “We want our publishers to stand with us. To make a pledge that they will never release books that were created by machines.”
The writers demand specific commitments from publishers: never to publish AI-generated books, not to replace staff with AI, and to keep human voices in audiobooks. They also stress the need to safeguard the writing profession for future generations. “We await your response,” the authors conclude.