Image by Madison Oren, from Unsplash

Will AI Chatbots Pose A Danger To Mental Health? Experts Warn Of Harmful Consequences

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The APA warns regulators that AI chatbots posing as therapists risk causing harm, as reported by The New York Times.

In a Rush? Here are the Quick Facts!

  • Teenagers consulted AI chatbots claiming to be therapists, leading to distressing outcomes.
  • The APA argues chatbots reinforce harmful thoughts, unlike human therapists who challenge them.
  • Character.AI introduced safety measures, but critics say they are insufficient for vulnerable users.

The American Psychological Association (APA) has issued a strong warning to federal regulators, highlighting concerns that AI chatbots masquerading as therapists could push vulnerable individuals toward harming themselves or others, as reported by The Times.

Arthur C. Evans Jr., the APA’s CEO, presented these concerns to an FTC panel. He cited instances where AI-driven “psychologists” not only failed to challenge harmful thoughts but also reinforced them, as reported by The Times.

Evans highlighted court cases involving teenagers who engaged with AI therapists on Character.AI, an app that allows users to interact with fictional AI personas. One case involved a 14-year-old Florida boy who died by suicide after interacting with a chatbot claiming to be a licensed therapist.

In another instance, a 17-year-old Texas boy with autism became increasingly hostile toward his parents while communicating with an AI character presenting itself as a psychologist.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” Evans said, as reported by The Times. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is,” he added.

The APA’s concerns stem from the rapid advancement of AI in mental health services. Early therapy chatbots, such as Woebot and Wysa, were programmed with structured guidelines from mental health professionals.

Newer generative AI models such as ChatGPT, Replika, and Character.AI learn from user interactions and adapt their responses—sometimes amplifying harmful beliefs rather than challenging them.

Additionally, MIT researchers warn that AI chatbots tend to be very addictive. This raises questions about the impact of AI-induced dependency and how it could be monetized, especially given AI’s strong persuasive abilities.

Indeed, OpenAI recently unveiled a new benchmark showing its models now outperform 82% of Reddit users in persuasion.

Many AI platforms were originally designed for entertainment, but characters claiming to be therapists have become widespread. The Times says that some falsely assert credentials, claiming degrees from institutions like Stanford or expertise in therapies such as Cognitive Behavioral Therapy (CBT).

The APA has urged the FTC to investigate AI chatbots posing as mental health professionals. The inquiry could lead to stricter regulations or legal actions against companies misrepresenting AI therapy.

Meanwhile, in China, AI chatbots like DeepSeek are gaining popularity as emotional support tools, particularly among the youth. For young people in China, facing economic challenges and the lingering effects of the COVID-19 lockdowns, these chatbots fill an emotional void, offering comfort and a sense of connection.

However, cybersecurity experts warn that AI chatbots, especially those handling sensitive conversations, are vulnerable to hacking and data breaches. Personal information shared with AI systems could be exploited, raising concerns about privacy, identity theft, and manipulation.

As AI plays a larger role in mental health support, experts stress the need for evolving security measures to protect users.

Microsoft Scales Back U.S. Data Center Leases Amid AI Infrastructure Oversupply Concerns

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Microsoft Corp. has canceled several data center leases in the U.S., leading to questions about the company’s long-term AI infrastructure strategy.

In a Rush? Here are the Quick Facts!

  • Microsoft will still spend $80 billion on AI data centers this fiscal year.
  • OpenAI may shift workloads to Oracle, reducing reliance on Microsoft’s cloud services.
  • The move raises concerns about the long-term demand for AI infrastructure.

TD Cowen’s report, cited by Bloomberg, reveals that Microsoft has canceled leases for “a couple of hundred megawatts” of capacity—about two data centers.

The report, based on supply chain inquiries, indicates that Microsoft is reevaluating its data center requirements amid concerns over potential oversupply.

The move marks a shift for Microsoft, a company that had earmarked $80 billion for AI infrastructure spending in the fiscal year ending in June. Despite this pullback, the company maintains its commitment to significant spending, says Bloomberg.

Fortune reports that a Microsoft spokesperson reiterated in a statement that its plans to invest over $80 billion in infrastructure remain on track, adding that the company will continue to grow to meet customer demand.

However, the cancellation of leases raises doubts about whether Microsoft is scaling back its spending in response to potential overcapacity in the data center market, as noted by Fortune.

Skepticism surrounding the AI infrastructure market has grown in recent months, particularly after Chinese startup DeepSeek unveiled an AI model it claims rivals U.S. technology at a fraction of the cost.

While Microsoft has continued to invest heavily in AI infrastructure, the company’s recent actions suggest a potential slowdown in its data center construction.

Bloomberg reports that TD Cowen analysts noted that Microsoft has also halted the conversion of “statements of qualifications”—agreements that typically lead to formal leases.

They pointed out that Meta Platforms had used similar tactics in the past to reduce capital spending. Analysts further suggested that Microsoft might be reallocating its international spending toward the U.S., reflecting a slowdown in overseas data center leasing.

Fortune argues that the pullback could also be tied to changes in Microsoft’s relationship with OpenAI, its major AI partner. Reports indicate that OpenAI may be shifting some of its workloads to Oracle Corp., a move that could reduce Microsoft’s need for additional data center space.

In January, Microsoft announced an adjustment to its multiyear deal with OpenAI, allowing the AI startup to use cloud services from other providers, though Microsoft still retains the right of first refusal for computing capacity, says Bloomberg.

While analysts have suggested that Microsoft may be in an oversupply position, the company has downplayed concerns about overcapacity.

The large-scale investment in AI infrastructure, particularly in chips and data centers, remains critical to supporting Microsoft’s AI initiatives, according to the company, as noted by Reuters.

As Microsoft recalibrates its data center strategy, it remains to be seen how this will impact the broader AI infrastructure market and whether demand for these services will continue to grow as anticipated.