
Image by Rohan, from Unsplash
Bert Ransomware Strikes Healthcare and Tech Firms Worldwide
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Bert, a new ransomware group, is attacking healthcare and tech firms globally with fast, stealthy malware affecting both Windows and Linux systems.
In a rush? Here are the quick facts:
- Bert encrypts both Windows and Linux systems with multi-threaded execution.
- It disables security tools using PowerShell before executing the payload.
- Newer versions encrypt files instantly, improving speed and damage.
A new ransomware group known as “Bert” is attacking organizations across Asia, Europe, and the United States, with confirmed victims in healthcare, technology, and event services, as reported on Monday by Trend Micro.
First identified in April, Bert has drawn attention for its rapid development, its ability to attack multiple platforms, and its ties to older ransomware groups such as REvil.
The malware runs on both Windows and Linux, using a PowerShell script that disables security tools before downloading and executing the ransomware. Victims receive a blunt message: “Hello from Bert! Your network is hacked and files are encrypted.”
The Trend Micro researchers describe the group’s code as basic yet powerful. On Linux, for example, Bert can use up to 50 threads to encrypt files quickly. It even shuts down ESXi virtual machines to maximize damage and make recovery harder. On Windows, it terminates processes tied to web servers and databases before encrypting data.
The ransomware appends the “.encrypted_by_bert” extension to encrypted files and drops a ransom note containing payment information. Analysis of multiple samples shows that Bert is under continuous development: its latest versions encrypt files as soon as they are discovered, rather than collecting file paths first.
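Because the appended extension is a visible indicator of compromise, defenders can sweep a file system for it. The sketch below is a minimal, illustrative check only (the root path, thread count, and function names are assumptions, not anything from Trend Micro's report); real incident response would rely on proper EDR tooling.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Extension Bert appends to encrypted files, per Trend Micro's analysis.
IOC_EXTENSION = ".encrypted_by_bert"

def scan_dir(path):
    """Return files directly under `path` that carry the IOC extension."""
    hits = []
    try:
        for entry in os.scandir(path):
            if entry.is_file(follow_symlinks=False) and entry.name.endswith(IOC_EXTENSION):
                hits.append(entry.path)
    except (PermissionError, FileNotFoundError):
        pass  # skip unreadable or vanished directories rather than abort the sweep
    return hits

def sweep(root, workers=8):
    """Walk the tree under `root`, scanning directories in parallel."""
    dirs = [dirpath for dirpath, _, _ in os.walk(root)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batches = pool.map(scan_dir, dirs)
    return [hit for batch in batches for hit in batch]

if __name__ == "__main__":
    for suspect in sweep("/var/data"):  # hypothetical root to check
        print("possible Bert artifact:", suspect)
```

Finding such files is evidence of an attack already in progress, so the appropriate response is isolation and incident response, not cleanup scripts.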
Experts warn that Bert’s rise highlights how even basic malware can be dangerous when paired with stealthy techniques and strategic targeting.

Image by Freepik
AI Chatbots Now Guide Psychedelic Trips
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
People can now use AI-powered chatbots to support their psychedelic experiences, but mental health professionals warn about the dangers of relying on emotionless digital guides.
In a rush? Here are the quick facts:
- ChatGPT helped a user plan and navigate a “heroic dose” of psilocybin.
- Therabot clinical trial showed 51% reduction in depression symptoms.
- Experts warn chatbots lack emotional attunement for safe therapy support.
Trey, a first responder from Atlanta, used a chatbot to overcome his 15-year struggle with alcoholism. In April, he took 700 micrograms of LSD, over six times the typical dose, while using Alterd, an app designed for psychedelic support. “I went from craving compulsions to feeling true freedom,” he says, as reported by WIRED.
Since then, WIRED reported, he’s used the chatbot over a dozen times, describing it as a “best friend.” He’s not alone. WIRED reports that more people are seeking AI assistance as psychedelic therapy becomes more popular, despite legal restrictions remaining in effect outside Oregon and Australia.
Chatbots like ChatGPT are being used to prepare for, coach through, and reflect on intense trips with drugs like LSD or psilocybin. Peter, a coder from Canada, used ChatGPT before taking a “heroic dose” of mushrooms, describing how the bot offered music suggestions, guided breathing, and existential reflections like: “This is a journey of self-exploration and growth,” as reported by WIRED.
Meanwhile, clinical trials are backing up some of these trends. Dartmouth recently trialed an AI chatbot named Therabot, finding it significantly improved symptoms in people with depression, anxiety, and eating disorders. “We’re talking about potentially giving people the equivalent of the best treatment… over shorter periods of time,” said Nicholas Jacobson, the trial’s senior author.
Specifically, Therabot showed a 51% drop in depression symptoms in a study of 106 people. Participants treated it like a real therapist, reporting a level of trust comparable to human professionals.
Still, experts raise major concerns. WIRED reports that UC San Francisco neuroscientist Manesh Girn warns, “A critical concern regarding ChatGPT and most other AI agents is their lack of dynamic emotional attunement and ability to co-regulate the nervous system of the user.”
More concerning, philosopher Luciano Floridi notes that people often mistake chatbots for sentient beings, a phenomenon called semantic pareidolia. “We perceive intentionality where there is only statistics,” he writes, warning that emotional bonds with chatbots may lead to confusion, spiritual delusions, and even dependency.
These risks grow more urgent as AI becomes more human-like. Studies show that generative AIs outperform humans in emotional intelligence tests, and chatbots like Replika simulate empathy convincingly. Some users mistake these bots for divine beings. “This move from pareidolia to idolatry is deeply concerning,” Floridi says. Fringe groups have even treated AI as sacred.
A U.S. national survey revealed that 48.9% of people turned to AI chatbots for mental health support, and 37.8% said they preferred them over traditional therapy. But experts, including the American Psychological Association, warn that these tools often mimic therapeutic dialogue while reinforcing harmful thinking. Without clinical oversight, they can give the illusion of progress while lacking accountability.
Further complicating matters, a recent study from University College London found that popular chatbots like ChatGPT and Claude provide inconsistent or biased moral advice. When asked classic dilemmas or real-life ethical scenarios, the AI models defaulted to passive choices and changed their answers based on subtle wording.
Despite these risks, AI-assisted trips may offer accessibility for those unable to afford or access professional therapy. As Mindbloom CEO Dylan Beynon notes, “We’re building an AI copilot that helps clients heal faster and go deeper,” as reported by WIRED.
Still, researchers stress these tools are not replacements for human therapists. “The feature that allows AI to be so effective is also what confers its risk,” warns Michael Heinz, co-author of the Therabot study.