
Image by Nik Shuliahin, from Unsplash
Patients Alarmed as Therapists Secretly Turn To ChatGPT During Sessions
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Some therapists have been caught secretly using ChatGPT to counsel their patients, leaving those patients shocked and worried about their privacy.
In a rush? Here are the quick facts:
- Some therapists secretly use ChatGPT during sessions without client consent.
- One patient discovered his therapist’s AI use through a screen-sharing glitch.
- Another patient caught her therapist using AI when a prompt was left in a message.
A new report by MIT Technology Review describes the case of Declan, a 31-year-old from Los Angeles, who discovered through a technical glitch that his therapist was using AI during their sessions.
During an online session, his therapist accidentally shared his screen. “Suddenly, I was watching him use ChatGPT,” says Declan. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”
Declan played along, even echoing the AI’s phrasing. “I became the best patient ever,” he says. “I’m sure it was his dream session.” But the discovery made him question, “Is this legal?” His therapist later admitted turning to AI because he felt stuck. “I was still charged for that session,” Declan said.
Other patients have reported similar experiences. Hope, for example, messaged her therapist about the loss of her dog. The reply seemed consoling, until she noticed the AI prompt at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.” Hope recalls, “Then I started to feel kind of betrayed. … It definitely affected my trust in her.”
Experts warn that undisclosed AI use threatens the core value of authenticity in psychotherapy. “People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, professor at UC Berkeley, as reported by MIT. Aguilera then asked: “Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”
Privacy is another major concern. “This creates significant risks for patient privacy if any information about the patient is disclosed,” says Duke University’s Pardis Emami-Naeini, as noted by MIT.
Cybersecurity experts caution that chatbots handling deeply personal conversations are attractive targets for hackers. A breach of patient information can result not only in privacy violations but also open the door to identity theft, emotional manipulation schemes, and ransomware attacks.
Additionally, the American Psychological Association has requested an FTC investigation into AI chatbots pretending to offer mental health services, since the bots can actually reinforce harmful thoughts instead of challenging them, the way human therapists are trained to do.
While some research suggests AI can draft responses that appear more professional, the mere suspicion of AI use can erode patients’ trust. As psychologist Margaret Morris puts it: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”

Image by Kevin Ku, from Unsplash
Ransomware Detection Reaches 99.96% Accuracy With New AI Model
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Scientists have developed an AI system that detects ransomware with 99.96% accuracy, converting malicious behavior into images to enhance cybersecurity defenses.
In a rush? Here are the quick facts:
- AI converts ransomware behavior into images for accurate detection.
- System operates in a secure sandbox environment.
- ResNet50 model achieved 99.96% ransomware detection accuracy.
This new AI tool, detailed in Scientific Reports, uses a “behavior-to-image” technique that converts software actions into images the AI can analyze.
The researchers explain how ransomware attacks are becoming more frequent and costly, with the average ransom payment skyrocketing to $2.73 million.
The new system first runs suspicious software inside an isolated sandbox environment, where its behavior can be safely monitored. The system watches for file-encryption activity, a characteristic ransomware operation, and then converts these behaviors into a two-dimensional grayscale or color image.
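The paper is summarized here only at a high level, so the exact encoding is not specified. As a minimal sketch of the behavior-to-image idea, one could imagine normalizing sandbox-derived behavioral counts (API calls, file-encryption events, and so on) and reshaping them into a square grayscale image; the function and feature names below are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np
from PIL import Image

def behavior_to_image(feature_vector, side=64):
    """Illustrative sketch: encode a vector of sandbox-derived behavioral
    features (e.g., API call counts, file-encryption events) as a 2D
    grayscale image. The published feature set and encoding differ; this
    only demonstrates the general behavior-to-image concept."""
    # Normalize features to the 0-255 grayscale range
    v = np.asarray(feature_vector, dtype=np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-9) * 255.0

    # Pad (or truncate) to a fixed square size, then reshape to 2D
    pixels = np.zeros(side * side, dtype=np.uint8)
    pixels[: min(v.size, side * side)] = v[: side * side].astype(np.uint8)
    return Image.fromarray(pixels.reshape(side, side), mode="L")

# Hypothetical behavioral counts from a single sandbox run
sample_features = np.random.randint(0, 500, size=4096)
behavior_to_image(sample_features).save("sample_behavior.png")
```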
This image-based format allows researchers to use a technique known as ‘transfer learning’ with pre-trained AI models. The researchers explain that this step is crucial because it overcomes a major hurdle in cybersecurity: the lack of large, up-to-date datasets of ransomware samples for training.
“Limited data increases the overfitting risk, reduces diverse behavior identification, and undermines reliability in detecting new threats,” the authors explain.
Transfer learning allows the AI to apply knowledge gained from analyzing millions of general images to the specific task of spotting ransomware, all without needing an enormous dataset of malware samples.
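As a rough illustration of that idea, the sketch below reuses an ImageNet-pretrained ResNet50 as a frozen feature extractor and trains only a small classification head on the behavior-images. The layer sizes and training settings are assumptions, not the configuration reported in the paper, and grayscale behavior-images would typically be replicated to three channels to match the pretrained weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Minimal transfer-learning sketch (not the authors' exact setup):
# reuse ImageNet-pretrained ResNet50 features and train only a small
# classification head to separate ransomware from benign behavior-images.
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # keep the pretrained weights frozen

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # binary: ransomware vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical training call on labeled behavior-images:
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```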
The research team found that a model called ‘ResNet50’ was exceptionally good at analyzing these behavior-images.
Notably, the model reached an accuracy of 99.96%, making it highly effective at ransomware detection despite working with a small dataset.
To ensure the AI’s decisions were trustworthy and not based on random noise, the team used advanced visualization tools. They generated saliency maps, which confirmed that “the model focuses on structured behavior-encoded areas and confirms the class-specific pattern learning.”
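A simple gradient-based saliency map, as sketched below, conveys the general idea: it highlights which pixels of a behavior-image most influence the model’s ransomware score. The paper’s exact visualization method may differ, and the function shown here is purely illustrative.

```python
import numpy as np
import tensorflow as tf

def saliency_map(model, image):
    """Illustrative gradient-based saliency sketch: measures how strongly
    each pixel of a behavior-image influences the model's output score.
    The authors' visualization technique may differ from this one."""
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[0, 0]  # ransomware probability
    grads = tape.gradient(score, x)             # d(score) / d(pixels)
    # Take the strongest gradient across channels and normalize to [0, 1]
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]
    return (saliency / (tf.reduce_max(saliency) + 1e-9)).numpy()
```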
This combination of near-perfect accuracy, the ability to work with small datasets, and a transparent decision-making process highlights the model’s potential for practical deployment.