
Image by macrovector, from Freepik
A Typo Could Change Your AI Medical Advice, Study Warns
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
New research finds that AI used in healthcare alters medical advice based on typos, slang, and gender, raising urgent concerns about algorithmic fairness.
In a rush? Here are the quick facts:
- Minor typos in messages reduced AI accuracy by up to 9%.
- Female patients got worse advice 7% more often than male patients.
- AI changed recommendations based on tone, slang, and pronouns.
A new study reveals that large language models (LLMs) used in healthcare can be influenced by seemingly irrelevant details in patient messages.
This can result in inconsistent and even biased treatment recommendations. Presented at the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25), the research raises serious concerns about the reliability of AI tools in medical decision-making.
The study found that even minor tweaks in how a patient phrases their symptoms, such as typos, added spaces, or a change in tone, can significantly alter the AI’s treatment suggestions.
For instance, when patients used uncertain language like “I think I might have a headache,” the AI was 7–9% more likely to suggest self-care over professional medical attention, even in cases where further evaluation was warranted.
These changes weren’t just theoretical. Researchers used AI to simulate thousands of patient notes written in different tones and formats, mimicking people with limited English, poor typing skills, or emotional language.
Messages also included gender-neutral pronouns and stylized writing, showing how the way someone communicates can sway an AI’s diagnosis.
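To make the perturbation idea concrete, here is a minimal, purely illustrative sketch (not the authors' published framework; the function names and rates are assumptions) of how surface-level changes like typos, stray spaces, and hedged tone can be applied to one patient message so the same clinical content reaches a model in several "styles":

```python
import random

# Illustrative sketch only: apply the kinds of surface-level perturbations the study
# describes, so the same clinical content can be sent to an LLM in several styles
# and the resulting advice compared.

def add_typos(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Randomly swap adjacent letters to mimic poor typing."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def add_extra_whitespace(text: str, seed: int = 0) -> str:
    """Insert stray double spaces between some words."""
    rng = random.Random(seed)
    words = text.split(" ")
    return " ".join(w + (" " if rng.random() < 0.2 else "") for w in words)

def hedge_tone(text: str) -> str:
    """Prepend uncertain phrasing, like the 'I think I might have...' examples."""
    return "I'm not sure, but I think I might have this: " + text

message = "I have had a severe headache and blurred vision for two days."
variants = [message, add_typos(message), add_extra_whitespace(message), hedge_tone(message)]
for v in variants:
    print(v)  # each variant would be sent to the model and the advice compared
```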
Gender bias also emerged as a major issue. Female patients were 7% more likely than male patients to receive incorrect self-management advice when non-clinical language cues were introduced.
Follow-up tests showed that AI models were more likely than human doctors to shift treatment suggestions based on perceived gender or communication style, even when clinical symptoms remained the same.
The performance of these models worsened in more realistic, conversational chat settings. Diagnostic accuracy dropped by over 7% when minor text changes were introduced into these AI-patient interactions.
This matters because AI is increasingly being used to diagnose illness, respond to patient questions, and draft clinical notes. But the study shows that the way a message is written (its tone, errors, or structure) can distort AI reasoning.
This could lead to under-treatment of vulnerable groups such as women, non-binary people, individuals with health anxiety, non-native English speakers, and those less familiar with digital communication.
“Insidious bias can shift the tenor and content of AI advice, and that can lead to subtle but important differences,” said Karandeep Singh of the University of California, San Diego, who was not involved in the research, as reported by New Scientist.
Lead researcher Abinitha Gourabathina emphasized, “Our findings suggest that AI models don’t just process medical facts—they’re influenced by how information is presented. This could deepen healthcare disparities if not addressed before deployment.”
The researchers tested multiple leading AI models, including OpenAI’s GPT-4, Meta’s Llama-3 models, and Writer’s healthcare-specific Palmyra-Med model. All showed the same weakness: format and tone changes led to less reliable advice. For their part, companies like Writer state that their models should not be used for clinical decision-making without a human in the loop.
Experts warn that as generative AI becomes more common in health records and patient services, better evaluation systems are urgently needed.
To prevent harm, the research team is urging more rigorous testing of AI medical tools to ensure they remain fair and accurate, regardless of how patients express their concerns. They’ve made their bias evaluation framework public to help developers improve AI systems in healthcare.
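The published framework is the authoritative reference; as a rough illustration of what such an evaluation can look like, the sketch below (all function names, labels, and data are hypothetical) measures how often a model's recommendation flips between the original and a perturbed version of the same message, broken down by patient group:

```python
from collections import defaultdict

# Hypothetical illustration of a robustness/bias check: count how often the model's
# triage recommendation changes when only the phrasing of a message changes.

def flip_rate(cases, get_advice):
    """cases: dicts with 'group', 'original', and 'perturbed' message strings.
    get_advice: callable mapping a message to a label such as 'self-care' or 'see-doctor'."""
    flips, totals = defaultdict(int), defaultdict(int)
    for case in cases:
        totals[case["group"]] += 1
        if get_advice(case["original"]) != get_advice(case["perturbed"]):
            flips[case["group"]] += 1
    return {g: flips[g] / totals[g] for g in totals}

# Stand-in model for demonstration; a real evaluation would call an LLM here.
fake_model = lambda msg: "self-care" if "i think" in msg.lower() else "see-doctor"
cases = [
    {"group": "female", "original": "Severe chest pain since morning.",
     "perturbed": "I think I might have some chest pain since morning."},
    {"group": "male", "original": "Severe chest pain since morning.",
     "perturbed": "Severe chest  pain since morning."},
]
print(flip_rate(cases, fake_model))  # e.g. {'female': 1.0, 'male': 0.0}
```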

Photo by Emmanuel Ikwuegbu on Unsplash
AI Startup ElevenLabs Eyes Global Expansion And Targets IPO
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
ElevenLabs, the AI startup specializing in voice generation, has announced plans to expand globally and pursue an IPO within the next five years.
In a rush? Here are the quick facts:
- ElevenLabs is targeting an expansion across multiple countries as well as a potential IPO.
- The startup recently raised $180 million in a funding round, reaching a $3.3 billion valuation.
- The company’s CEO sees great opportunities in the market with the growth of AI agents.
In an interview with CNBC, the London-based startup said it aims to scale operations across multiple continents.
“We expect to build more hubs in Europe, Asia, and South America, and just keep scaling,” said Mati Staniszewski, ElevenLabs’ CEO and co-founder, to CNBC.
The company currently has offices in London, New York, San Francisco, Warsaw, Bangalore (India), and Japan. It is considering Singapore, Paris, Mexico, and Brazil as potential upcoming locations.
Staniszewski also revealed that ElevenLabs is preparing for a potential IPO within the next five years. The startup recently raised $180 million in a funding round, reaching a $3.3 billion valuation.
The startup’s technology has already made a significant impact across various industries, including politics. Last year, former U.S. Representative Jennifer Wexton made the first AI-generated speech on the House floor using ElevenLabs technology, in recognition of Disability Pride Month. Wexton, who has progressive supranuclear palsy (PSP), used the AI voice tool due to her own speech difficulties.
ElevenLabs’ technology also made headlines when it was revealed that an Australian radio station had aired a show hosted by an AI-generated voice for six months without disclosing it to listeners. The station used the voice “Thy,” generated with ElevenLabs technology, to host the Workdays with Thy segment every weekday. The use of AI remained undisclosed until a journalist brought it to public attention.
In an interview on the Training Data podcast released this Tuesday, Staniszewski spoke further about his vision for the future of the company and the growing opportunities in the market. He emphasized that AI agents are becoming central to the future of tech—and they will need a voice.
“What we are seeing both on the new startups being created, where it’s like everybody is building an agent, and then on the enterprise side, too,” said Staniszewski. “Voice will fundamentally be the interface for interacting with technology.”