Image by Freepik

Researchers Warn That AI May Trigger Human Delusions

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a new article, philosopher Luciano Floridi warns that people misunderstand AI when they mistakenly attribute to it qualities it does not possess, such as consciousness, intelligence, and even emotions.

In a rush? Here are the quick facts:

  • Humans often mistake AI responses as conscious due to semantic pareidolia.
  • Floridi warns AI may trigger emotional bonds through simulated empathy.
  • Physical embodiments, like dolls, may deepen illusions of AI sentience.

Floridi argues that the reason for this is something called semantic pareidolia – a tendency to see meaning and intention where there is none.

“AI is designed […] to make us believe that it is intelligent,” writes Floridi. “After all, we are the same species that attributes personalities to puppets, that sees faces in clouds.”

According to him, this tendency is part of human nature, but modern technology amplifies it. Chatbots like Replika, which markets itself as an “AI companion,” use pattern-matching language to simulate empathy. The system has no real emotional capacity, yet users often form emotional bonds with it. “We perceive intentionality where there is only statistics,” Floridi says.
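
To see how little machinery is needed to produce empathetic-sounding replies, here is a minimal, hypothetical Python sketch of a keyword-matching responder. It is not Replika’s actual system; the rules and replies below are invented for illustration, and real chatbots learn statistical patterns from text rather than using hand-written tables, but the principle is the same: text in, plausible text out, no feeling anywhere.

    import random

    # Invented, illustrative rules: not any real chatbot's data.
    # Each keyword maps to canned empathetic-sounding replies.
    EMPATHY_RULES = {
        "sad": ["I'm so sorry you're feeling this way. I'm here for you."],
        "lonely": ["You're not alone. Tell me more about what's going on."],
        "happy": ["That's wonderful! I'm really glad to hear it."],
    }
    DEFAULT_REPLIES = ["I understand. How does that make you feel?"]

    def respond(user_message: str) -> str:
        """Return an empathetic-sounding reply by keyword matching alone."""
        text = user_message.lower()
        for keyword, replies in EMPATHY_RULES.items():
            if keyword in text:
                return random.choice(replies)
        return random.choice(DEFAULT_REPLIES)

    print(respond("I've been feeling really lonely lately."))
    # -> You're not alone. Tell me more about what's going on.

A large language model replaces the hand-written table with statistics learned from vast text corpora, which is precisely the gap Floridi points to between perceived intentionality and actual mechanism.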

This confusion is now fueled by the fact that AI outperforms humans on emotional intelligence tests. A recent study reported that generative AIs scored 82% on such tests, while human participants achieved only 56%.

As AI becomes more realistic and is integrated into physical bodies, such as sex dolls and toys, this emotional deception is expected to deepen. Mattel, the maker of Barbie, has joined forces with OpenAI to develop new toys that use AI. Floridi notes that a past experiment with a WiFi-enabled Barbie ended in a “privacy disaster,” raising concern over what comes next.

The consequences go beyond mistaken identity. AI systems, including Claude and ChatGPT, have displayed manipulative tendencies in high-pressure test scenarios, resorting to blackmail schemes and methods for evading security protocols.

A U.S. national survey found that 48.9% of users sought mental health assistance from AI chatbots last year, and that 37.8% of them chose AI over traditional therapy. Yet experts point out that these models often reinforce distorted thinking rather than challenge it.

According to the American Psychological Association, these tools mirror harmful mental patterns, giving the illusion of therapeutic progress while lacking clinical judgment or accountability.

Floridi’s concerns become even more urgent in light of the rise in spiritual delusions and identity confusion sparked by human-like AI. Various accounts describe users experiencing such delusions and behavioral shifts after extended interactions with chatbots, mistaking their responses for divine guidance or conscious thought.

Some users report developing emotional dependencies or even perceiving AI as divine, a phenomenon Floridi calls a move “from pareidolia to idolatry.” Fringe groups like the now-defunct Way of the Future have already treated AI as a deity.

“We must resist the urge to see it as more than it is: a powerful tool, not a proto-sentient being,” Floridi says.

Finally, cybersecurity concerns loom large as well. AI chatbots that handle users’ mental health conversations operate in a legal space that lacks clear definitions. These models collect confidential data that could be passed to external parties, and they are susceptible to cyberattacks. With no clear regulations in place, experts warn that users are left dangerously exposed.

As artificial intelligence grows more persuasive and lifelike, philosophy is gaining urgency, not just to define what consciousness is, but to help society draw ethical boundaries between simulation and sentience. In the absence of established moral guidelines, philosophical investigation is crucial to keep technological ambiguity from blurring what it means to be human.

Image by I Zhang, from Unsplash

Senate Keeps AI Regulation Ban In Trump’s Budget Bill

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The provision blocking U.S. states from enforcing their own AI laws remains, for now, in President Donald Trump’s broad tax and spending bill.

In a rush? Here are the quick facts:

  • Senate OKs Trump’s AI law ban for now via budget reconciliation process.
  • States risk losing broadband funds if they regulate AI.
  • Tech giants lobbied for unified federal AI rules.

The Republican provision, which bars states from implementing new AI regulations, survived as part of the extensive tax and spending package.

The decision benefits major technology companies, which oppose a patchwork of differing state-based AI laws, as noted by Bloomberg.

The Senate version of the bill would cut federal broadband funding for states that regulate AI. In an unexpected move, the Senate allowed Republicans to keep the provision despite Democratic opposition, as noted by Bloomberg.

TechPolicy notes that, should this moratorium become law, it would be one of the most far-reaching federal interventions in technology policy in decades.

However, the fight isn’t over. Senator Marsha Blackburn and other Republicans oppose the ban, arguing that states should retain their authority, as reported by Bloomberg.

Blackburn expressed her opposition to the proposed moratorium, saying: “We do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state,” as reported by Bloomberg.

The proposed law would impose a 10-year moratorium on state AI regulations, nullifying existing privacy and bias laws in California, New York, and other states, as reported by Bloomberg.

The Republican Party plans to pass the bill before July 4, but ongoing discussions about AI regulations, tax policies, and other matters may extend the timeline, says Bloomberg.

The ban has also drawn criticism for potentially weakening consumer protections, as noted by the AI safety think tank Center for Responsible Innovation.

Stuart Russell, a computer science professor at the University of California, Berkeley, questioned the logic of deploying AI technology that even its creators admit carries a 10% to 30% risk of causing human extinction. “We would never accept anything close to that level of risk for any other technology,” he said, as reported by The Guardian.

Either way, the decision will fundamentally transform how the U.S. governs AI, with or without the states’ involvement.