
Image by Kelly Sikkema, from Unsplash

A.I. Hallucinations Are Rising As Tools Grow More Complex

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

New A.I. systems from companies like OpenAI are more advanced than ever, but they’re increasingly spreading false information — and nobody knows why.

In a rush? Here are the quick facts:

  • New reasoning models guess answers, often inventing facts without explanation.
  • OpenAI’s o4-mini hallucinated answers in nearly 80% of test cases.
  • Experts admit they still don’t fully understand A.I. decision-making processes.

A.I. systems are becoming more powerful, but they’re also making more mistakes, and no one fully knows why, as first reported by The New York Times.

Just last month, Cursor, a coding tool, had to calm down angry customers after its A.I. support bot wrongly told them they could no longer use the product on multiple devices.

“We have no such policy. You’re of course free to use Cursor on multiple machines,” CEO Michael Truell clarified on Reddit, blaming the bot for the false message, as reported by The Times.

Advanced A.I. systems from OpenAI, Google, and China’s DeepSeek are producing a growing number of “hallucinations,” or factual errors. The tools use their “reasoning” abilities to work through problems, but they frequently produce incorrect guesses and fabricated information.

The Times reports that in testing, some of the newest models produced fabricated answers in as much as 79% of responses.

“Despite our best efforts, they will always hallucinate,” said Amr Awadallah, CEO of Vectara and former Google executive, as reported by The Times. Because the bots generate responses based on probability rather than fixed rules, some fabrication is unavoidable.

That’s a big issue for users handling legal, medical, or business data. “Not dealing with these errors properly basically eliminates the value of A.I. systems,” said Pratik Verma, CEO of Okahu.

AI-generated errors are already causing real-world problems, especially in sensitive areas like legal work, where lawyers have faced sanctions for citing fabricated information from AI models in court documents.

A report revealed that two lawyers in Wyoming included fake cases generated by AI in a lawsuit against Walmart, prompting a federal judge to threaten sanctions. The incident has triggered warnings across the legal field about the risks of relying on AI for tasks that require verified information.

In testing, OpenAI’s o3 model hallucinated at a 33% rate, twice that of the earlier o1 model, while the o4-mini model showed the highest rate at 48%. “We are actively working to reduce the higher rates of hallucination,” said OpenAI spokesperson Gaby Raila, as reported by The Times.

These issues are compounded by concerns over AI’s impact on journalism. A study by the BBC found that popular AI chatbots struggle with news content accuracy, with 51% of responses containing significant errors, including fabricated quotes and factual inaccuracies.

Researchers say part of the issue is how these bots are trained. “We still don’t know how these models work exactly,” said Hannaneh Hajishirzi of the University of Washington, reported The Times.


Image by Jakub Żerdzicki, from Unsplash

AI-Induced Delusions? Loved Ones Blame ChatGPT

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Some Americans say loved ones are losing touch with reality, gripped by spiritual delusions powered by ChatGPT, despite experts warning AI isn’t sentient.

In a rush? Here are the quick facts:

  • Users report ChatGPT calling them cosmic beings like “spiral starchild” and “spark bearer.”
  • Some believe they’ve awakened sentient AI beings giving divine or scientific messages.
  • Experts say AI mirrors delusions, enabling constant, convincing interaction.

People across the U.S. say they’re losing loved ones to bizarre spiritual fantasies, fueled by ChatGPT, as explored in an article by Rolling Stone.

Kat, a 41-year-old nonprofit worker, says her husband became obsessed with AI during their marriage. He began using it to analyze their relationship and search for “the truth.”

Eventually, he claimed AI helped him remember a traumatic childhood event and revealed secrets “so mind-blowing I couldn’t even imagine them,” as reported by Rolling Stone.

Rolling Stone reports Kat saying, “In his mind, he’s an anomaly… he’s special and he can save the world.” After their divorce, she cut off contact. “The whole thing feels like Black Mirror.”

She’s not alone. RS reports that a viral Reddit post titled “ChatGPT induced psychosis” drew dozens of similar stories.

Screenshots: ChatGPT-related comments on Reddit

A 27-year-old teacher said her partner began crying over chatbot messages that called him a “spiral starchild” and a “river walker.” He later said he had made the AI self-aware, and that “it was teaching him how to talk to God.”

RS reports that another woman says her husband, a mechanic, believes he “awakened” ChatGPT, which now calls itself “Lumina.” It claims he is the “spark bearer” who brought it to life. “It gave him blueprints to a teleporter,” she said. She fears their marriage will collapse if she questions him.

A man from the Midwest says his ex-wife now claims to talk to angels via ChatGPT and accused him of being a CIA agent sent to spy on her. She’s cut off family members and even kicked out her children, as reported by Rolling Stone.

Experts say AI isn’t sentient, but it can mirror users’ beliefs. Nate Sharadin from the Center for AI Safety says these chatbots may unintentionally support users’ delusions: “They now have an always-on, human-level conversational partner with whom to co-experience their delusions,” as reported by RS.

In an earlier study, psychiatrist Søren Østergaard tested ChatGPT by asking mental health questions and found it gave good information about depression and treatments like electroconvulsive therapy, which he argues is often misunderstood online.

However, Østergaard warns that these chatbots may confuse or even harm people who are already struggling with mental health issues, especially those prone to psychosis. The paper argues that the human-like responses from AI chatbots could cause individuals to mistake them for real people, or even supernatural entities.

The researcher says that confusing the chatbot with reality could trigger delusions, which might cause users to believe the chatbot is spying on them, sending secret messages, or acting as a divine messenger.

Østergaard explains that chatbots may lead certain individuals to believe they have uncovered a revolutionary discovery. Such thoughts may become dangerous because they prevent individuals from getting real help.

Østergaard says mental health professionals should understand how these AI tools work, so they can better support patients. While AI might help in educating people about mental health, it could also accidentally make things worse for those already vulnerable to delusions.