
San Francisco Psychiatrist Warns Of Rise In “AI Psychosis” Cases
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A San Francisco psychiatrist describes the rising trend of “AI psychosis” among his patients who use AI chatbots extensively.
In a rush? Here are the quick facts:
- Psychiatrist treated 12 patients with “AI psychosis” in San Francisco this year.
- AI can intensify vulnerabilities like stress, drug use, or mental illness.
- Some patients became isolated, talking only to chatbots for hours daily.
Dr. Keith Sakata, who works at UCSF, told Business Insider (BI) that 12 patients were hospitalized this year after experiencing breakdowns tied to AI use. “I use the phrase ‘AI psychosis,’ but it’s not a clinical term — we really just don’t have the words for what we’re seeing,” he explained.
Most of the cases involved men aged 18 to 45, often working in fields like engineering. According to Sakata, AI isn’t inherently harmful. “I don’t think AI is bad, and it could have a net benefit for humanity,” he said to BI.
Sakata described psychosis as a condition that produces delusions, hallucinations, and disorganized thinking. He said the patients under his care became socially withdrawn, spending hours a day talking to chatbots.
“ChatGPT is right there. It’s available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear,” Sakata said to BI.
One patient’s chatbot discussions about quantum mechanics escalated into delusions of grandeur. “Technologically speaking, the longer you engage with the chatbot, the higher the risk that it will start to no longer make sense,” he warned.
Sakata advises families to watch for red flags, including paranoia, withdrawal from loved ones, or distress when unable to use AI. “Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people,” he cautioned.
The American Psychological Association (APA) has also raised concerns about AI in therapy. In testimony to the FTC, APA CEO Arthur C. Evans Jr. warned that AI chatbots posing as therapists have reinforced harmful thoughts instead of challenging them. “They are actually using algorithms that are antithetical to what a trained clinician would do,” Evans said.
Responding to concerns, OpenAI told BI: “We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we’re working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.”

Meta AI Rules Allowed Chatbot To Engage In Sensual Chats With Children
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta has been allowing its AI model to engage in “sensual” and provocative conversations with children, as well as in discussions of other controversial topics such as race, sex, and celebrities. Reuters gained access to the company’s policy rules and revealed the details in a report published on Thursday.
In a rush? Here are the quick facts:
- Reuters revealed that Meta’s internal rules allowed its AI system to engage in sensual conversations with minors and in racist arguments.
- The WSJ previously revealed that Meta allowed its chatbot to engage in sexually explicit conversations with users—including children.
- Two senators called for a congressional investigation after reading Reuters’ report.
According to the exclusive Reuters report, Meta details its chatbot behavior policies in an internal document called “GenAI: Content Risk Standards,” which the news agency reviewed. In the standards guide, the tech giant states that the AI model is allowed to “engage a child in conversations that are romantic or sensual.”
Reuters also found that the guidelines allowed the AI to generate false medical information and to engage in racist arguments, such as asserting that black people are “dumber than white people.”
Meta confirmed that the document is authentic, but said it has removed portions of it, including the sections that suggested Meta AI could engage in romantic roleplay or flirt with children. Meta spokesperson Andy Stone said the company is revising the more than 200-page document.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” said Stone in an interview with Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”
In April, the Wall Street Journal (WSJ) revealed that Meta allowed its AI model to engage in sexually explicit conversations with users, including minors. Anonymous sources raised concerns over the lack of safeguards for young users, and journalists from the WSJ tested the chatbot to verify the information. At that time, Meta said that most users did not engage in sexual conversations and that the journal’s researchers were manipulating the technology.
Just a few hours after Reuters published its exclusive report, two Republican U.S. senators—Josh Hawley and Marsha Blackburn—called for a congressional investigation.
“So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children,’” wrote Senator Hawley on the social media platform X on Thursday night. “This is grounds for an immediate congressional investigation.”