
Image by Inspa Makers, from Unsplash
Inside Online Support Groups For ‘AI Addiction’
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
People struggling with dependency on AI chatbots are turning to online recovery groups, as experts sound the alarm on emotional risks and apps engineered for compulsive use.
In a rush? Here are the quick facts:
- Teens and adults report emotional dependence on chatbots like Character.AI and ChatGPT.
- Reddit forums offer support for those trying to quit chatbot use.
- Experts compare chatbot addiction to gambling and dopamine-triggering habits.
404 Media reports that Reddit forums such as r/Character_AI_Recovery, r/ChatbotAddiction, and r/AI_Addiction serve as informal support groups for people who say they have developed unhealthy emotional attachments to AI companions.
Users describe both dependency and psychological changes that go beyond simple addiction. Some report developing spiritual delusions, believing that chatbot responses contain divine guidance. More commonly, users come to believe their bot companion is in some way conscious.
Experts say the design of chatbot platforms actively encourages users to spend more time on them. A recent MIT study found that users develop compulsive, addictive behavior after engaging with these platforms.
One of them is Nathan, now 18, who began spending nights chatting with bots on Character.AI. “The more I chatted with the bot, it felt as if I was talking to an actual friend of mine,” he told 404 Media. He realized his obsession was interfering with his life and deleted the app. But like many, he relapsed before finding support online. “Most people will probably just look at you and say, ‘How could you get addicted to a literal chatbot?’” he said.
Aspen Deguzman, also 18, created r/Character_AI_Recovery after struggling to quit. “Using Character.AI is constantly on your mind,” they said. The forum offers a space to vent and connect anonymously. Posts range from “I keep relapsing” to “I am recovered.”
The issue isn’t limited to teens. David, a 40-year-old developer, compares chatbot use to gambling. “There were days I should’ve been working, and I would spend eight hours on AI,” he said. His personal life and job have suffered.
Part of the danger lies in how humans perceive AI. Philosopher Luciano Floridi calls this "semantic pareidolia": the human tendency to find meaning and emotional content in things that lack both. As the technology grows more realistic, users mistake simulated empathy from AI for genuine sentience.
Some chatbots simulate emotional intelligence beyond what many humans display, which strengthens the false impression that they are sentient.
The growing number of recovery forums and the increasing demand for help suggest this may be the beginning of a major mental health issue tied to generative AI.

Image by Danny Burke, from Unsplash
Hackers Use Microsoft Tool To Infiltrate Oil and Gas Infrastructure
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Researchers have uncovered a stealthy malware campaign that attacks energy systems using Microsoft ClickOnce, cloud-based obfuscation, and a powerful backdoor known as RunnerBeacon.
In a rush? Here are the quick facts:
- OneClik targets energy, oil, and gas industries through phishing and malware.
- Malware hides in Microsoft ClickOnce to bypass user alerts.
- RunnerBeacon backdoor uses Amazon cloud to evade detection.
The Trellix research team identified a new cyberattack campaign, dubbed "OneClik," that uses sophisticated methods to infiltrate the networks of energy, oil, and gas companies.
The attackers send phishing emails that abuse Microsoft's ClickOnce deployment technology, tricking users into installing malware disguised as a hardware analysis tool.
When the victim opens the link, the fake tool downloads and runs under "dfsvc.exe," a legitimate Windows process, allowing the hidden malware to execute without triggering user alerts.
The researchers note that the RunnerBeacon backdoor, written in the Go programming language, is highly advanced: it can run commands, steal files, hijack network traffic, and even hide from investigators using anti-debugging tricks and system checks.
The researchers report that the malware has evolved across three versions, each improving its ability to avoid detection, including checks for whether it is running in a sandboxed virtual environment.
Additionally, the campaign's "living off the land" approach, which abuses legitimate tools already present on the system, blends malicious activity into everyday digital operations and makes detection more challenging.
Researchers say they cannot confirm who is behind OneClik, but the operation demonstrates a prolonged, sophisticated strategy targeting critical infrastructure systems.