
AI Chatbots Now Guide Psychedelic Trips
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
People can now use AI-powered chatbots to support their psychedelic experiences, but mental health professionals warn of the dangers of relying on emotionless digital guides.
In a rush? Here are the quick facts:
- ChatGPT helped a user plan and navigate a “heroic dose” of psilocybin.
- Therabot clinical trial showed 51% reduction in depression symptoms.
- Experts warn chatbots lack emotional attunement for safe therapy support.
Trey, a first responder from Atlanta, used a chatbot to overcome his 15-year struggle with alcoholism. In April, he took 700 micrograms of LSD, over six times the typical dose, while using Alterd, an app designed for psychedelic support. “I went from craving compulsions to feeling true freedom,” he says, as reported by WIRED.
Since then, WIRED reported, he’s used the chatbot over a dozen times, describing it as a “best friend.” He’s not alone. WIRED reports that more people are seeking AI assistance as psychedelic therapy grows in popularity, even though it remains legally restricted outside Oregon and Australia.
Chatbots like ChatGPT are being used to prepare for, coach during, and reflect on intense trips with drugs like LSD or psilocybin. Peter, a coder from Canada, used ChatGPT before taking a “heroic dose” of mushrooms, describing how the bot offered music suggestions, guided breathing, and existential reflections like: “This is a journey of self-exploration and growth,” as reported by WIRED.
Meanwhile, clinical trials are backing up some of these trends. Dartmouth recently trialed an AI chatbot named Therabot, finding it significantly improved symptoms in people with depression, anxiety, and eating disorders. “We’re talking about potentially giving people the equivalent of the best treatment… over shorter periods of time,” said Nicholas Jacobson, the trial’s senior author.
Specifically, Therabot showed a 51% drop in depression symptoms in a study of 106 people. Participants treated it like a real therapist, reporting a level of trust comparable to human professionals.
Still, experts raise major concerns. WIRED reports that UC San Francisco neuroscientist Manesh Girn warns, “A critical concern regarding ChatGPT and most other AI agents is their lack of dynamic emotional attunement and ability to co-regulate the nervous system of the user.”
More concerning, philosopher Luciano Floridi notes that people often confuse chatbots for sentient beings, a phenomenon called semantic pareidolia. “We perceive intentionality where there is only statistics,” he writes, warning that emotional bonds with chatbots may lead to confusion, spiritual delusions, and even dependency.
These risks grow more urgent as AI becomes more human-like. Studies show that generative AIs outperform humans in emotional intelligence tests, and chatbots like Replika simulate empathy convincingly. Some users mistake these bots for divine beings. “This move from pareidolia to idolatry is deeply concerning,” Floridi says. Fringe groups have even treated AI as sacred.
A U.S. national survey revealed that 48.9% of people turned to AI chatbots for mental health support, and 37.8% said they preferred them over traditional therapy. But experts, including the American Psychological Association, warn that these tools often mimic therapeutic dialogue while reinforcing harmful thinking. Without clinical oversight, they can give the illusion of progress while lacking accountability.
Further complicating matters, a recent study from University College London found that popular chatbots like ChatGPT and Claude provide inconsistent or biased moral advice. When asked classic dilemmas or real-life ethical scenarios, the AI models defaulted to passive choices and changed their answers based on subtle differences in wording.
Despite these risks, AI-assisted trips may offer accessibility for those unable to afford or access professional therapy. As Mindbloom CEO Dylan Beynon notes, “We’re building an AI copilot that helps clients heal faster and go deeper,” as reported by WIRED.
Still, researchers stress these tools are not replacements for human therapists. “The feature that allows AI to be so effective is also what confers its risk,” warns Michael Heinz, co-author of the Therabot study.

Cambridge Hosts Debate On AI’s Role In Solving Math’s Hardest Problems
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
AI tools currently assist mathematicians in writing and verifying proofs, sparking debate about whether artificial intelligence can truly transform mathematical research.
In a rush? Here are the quick facts:
- AlphaProof proved part of the prime number theorem using Lean code.
- Trinity translated handwritten math into a formal proof of part of the ABC conjecture.
- Some mathematicians remain skeptical about the tools’ transparency and reliability.
Artificial intelligence may be on the verge of transforming mathematics, with AI tools currently assisting in writing and verifying mathematical proofs. New Scientist (NS) reported that a major conference at the University of Cambridge in June brought together 100 leading mathematicians to examine AI’s growing role in formalizing and verifying mathematical work.
“It’s a little bit overwhelming,” said Jeremy Avigad of Carnegie Mellon University, one of the organizers, as reported by NS. “It used to be kind of a fringe, niche thing. All of a sudden, I find myself popular,” he noted.
The two most popular tools discussed at the conference were DeepMind’s AlphaProof and Morph Labs’ Trinity. AlphaProof gained attention after achieving a silver medal at the International Mathematical Olympiad and has since proven part of the prime number theorem using formal verification tools, as noted by NS.
“I wanted to do a demo of how AlphaProof could be used in real life,” said DeepMind’s Thomas Hubert, as reported by NS.
Meanwhile, at another recent event, 30 top mathematicians met quietly at UC Berkeley to test OpenAI’s o4-mini, a powerful compact version of ChatGPT. The group used encrypted messages to protect their data while presenting 300 untested math problems to the AI system. The model surprised the group by solving 20% of the presented problems, exceeding the performance of its previous versions.
“I have colleagues who literally said these models are approaching mathematical genius,” said Ken Ono, a judge and mathematician at the University of Virginia. In one case, the bot reviewed prior literature, simplified the question, and solved it in minutes. “It was starting to get really cheeky […] That’s frightening,” Ono added.
Meanwhile, Trinity, created by US-based Morph Labs, automatically converts handwritten math into formal code. It recently helped prove a part of the controversial ABC conjecture. Kevin Buzzard of Imperial College London described it as a first-of-its-kind demonstration. “A machine just translated the entire thing into Lean,” he said.
Still, some scholars remain skeptical. Rodrigo Ochigame from Leiden University noted, “They posted only a single, possibly cherry-picked, output […] They didn’t even say if they tested their system on any other theorems,” as reported by NS.
Others, like Timothy Gowers of Cambridge, are optimistic: “Over the next few years, there will have been changes to how we do maths that will rival in importance the changes brought about by email, LaTeX, arXiv, and Google,” as reported by NS.