
Opinion: Is ChatGPT Your Friend? It Might Be A Good Time To Set Limits
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
OpenAI and MIT recently released a paper on the impact of ChatGPT on people’s well-being. While most users rely on the technology for practical tasks, the study reveals that a small group has been developing deep emotional connections with the AI model that can affect their well-being.
For some time now, I’ve been observing with curiosity the relationships some people are developing with generative artificial intelligence. A few months ago, I read in The New York Times the story of a 28-year-old married woman who fell in love with ChatGPT, and how what started as “a fun experiment” evolved into a complex and unexpected relationship.
I’ve been watching friends, especially those who once rejected technology or showed no interest in it, become unable to make a big decision without consulting their AI oracle. I’ve also found myself surprised by the empathetic responses AI models give to emotionally or psychologically charged queries.
And, of course, I’ve laughed at the jokes, memes, and TikTok videos people post on social media showing how dependent they’ve become on the chatbot, some even calling it their “best friend” or “therapist”—and even seriously recommending others do the same.
But if we put the fun experiences and jokes aside for a moment, we might realize we are facing a concerning global phenomenon.
This month, for the first time in the short history of artificial intelligence, OpenAI and the MIT Media Lab released a study offering insights into ChatGPT’s current impact on people’s emotional well-being, along with warnings about the risks we might face as a society: loneliness, emotional dependency, and fewer social interactions with real people.
A Relationship That Evolves
A user’s first approach to the new generative artificial intelligence technologies often begins with a few timid questions: practical ones about tasks like crafting an email, requests to explain complex topics, or simple brainstorming.
However, once a user begins to test the chatbot’s capabilities, they discover those capabilities are far broader and more complex than expected.
While certain AI products like Friend—a wearable AI device—have been designed and very awkwardly promoted as a user’s life companion, ChatGPT has been advertised as a productivity tool. Yet a share of users turn to the chatbot for personal and emotional matters and develop strong bonds with it.
Even if they’re just a “small group,” as OpenAI clarified, they could still represent millions of people worldwide, especially considering that over 400 million people now use ChatGPT weekly. These users quickly notice that OpenAI’s chatbot mimics their language, tone, and style, and can even be trained to interact in a certain way, use pet names—as the woman who fell in love with it did—and “sound” more human.
“Their conversational style, first-person language, and ability to simulate human-like interactions have led users to sometimes personify and anthropomorphize these systems,” states the document shared by OpenAI.
But this closeness comes with risks, as the researchers noted: “While an emotionally engaging chatbot can provide support and companionship, there is a risk that it may manipulate users’ socioaffective needs in ways that undermine longer term well-being.”
The Study’s Methodology
The recently released investigation focuses on users’ well-being after sustained use of ChatGPT. To understand the chatbot’s emotional and social impact, researchers pursued two main studies with different strategies.
OpenAI processed and analyzed over 40 million interactions, using automated classifiers to respect users’ privacy, and surveyed over 4,000 users on how the interactions made them feel.
MIT Media Lab conducted a month-long trial with almost 1,000 people, focusing on the psychosocial consequences of using ChatGPT for at least five minutes a day. Participants also completed questionnaires at the end of the experiment.
Unsurprisingly, the findings revealed that users who spend more time with the technology experience more loneliness and show more signs of isolation.
Complex Consequences And Multiple Ramifications
The MIT Media Lab and OpenAI’s study also offered several reflections on how complex and unique human-chatbot relationships can be.
In the research, the authors give us a glimpse into the diverse experiences and ways each user interacts with ChatGPT—and how the outcome can vary depending on different factors, such as the use of advanced voice features, text-only mode, the voice type, frequency of use, conversation topics, the language used, and the amount of time spent on the app.
“We advise against generalizing the results because doing so may obscure the nuanced findings that highlight the non-uniform, complex interactions between people and AI systems,” warns OpenAI in its official announcement.
All the different approaches each user chooses translate into different results, immersing us in grey areas that are difficult to explore.
It’s the Butterfly AI Effect!
More Questions Arise
The paper shared by OpenAI also notes that heavy users said they would be “upset” if their chatbot’s voice or personality changed.
This reminded me of a video I recently saw on social media of a guy saying he preferred a female voice and that he talked to the generative AI every day. Could ChatGPT also be helping men open up emotionally? What would happen if one day ChatGPT spoke to him with a male voice? Would he feel betrayed? Would he stop using ChatGPT? Was he developing a romantic connection—or simply a space of trust? Of course, it’s hard not to immediately relate these scenarios to Spike Jonze’s film Her.
Every ChatGPT account, along with its chat history—more intimate and private by the day than any WhatsApp profile or social media DMs—represents a unique relationship with countless outcomes and consequences.
The Expected Result
Both studies analyzed different aspects but reached a similar conclusion, summarized by MIT Technology Review: “Participants who trusted and ‘bonded’ with ChatGPT more were likelier than others to be lonely, and to rely on it more.”
While the investigation didn’t focus on solutions or deeper explanations of why this is happening or how it could evolve, it seems likely that more users will join OpenAI’s and other AI platforms, especially now that the company’s AI image generation tool has gone viral.
Although the conclusions of MIT and OpenAI’s research aren’t particularly surprising, the study provides a scientific background with evidence, measurements, samples, and more ‘tangible’ metrics that could pave the way for further research and help address the implications of using artificial intelligence today.
We also received an official warning—from the chatbot’s own developers—about the bonds we build with ChatGPT, and an invitation to set limits and reflect on our interactions and current relationships—or situationships?—with chatbots.

Cybersecurity Firm Hijacks Ransomware Gang’s Leaks
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The cybersecurity company Resecurity carried out a daring operation against cybercrime, infiltrating the BlackLock ransomware gang’s systems to gather key intelligence, which it then shared with national agencies to help victims.
In a rush? Here are the quick facts:
- A security flaw let Resecurity access BlackLock’s hidden leak site.
- Resecurity warned victims before BlackLock could release their stolen data.
- Hackers defaced BlackLock’s site before it shut down.
ITPro previously reported that BlackLock ransomware activity surged 1,425% during 2024, driven by custom malware and double-extortion methods, and that experts predicted BlackLock could dominate ransomware attacks in 2025.
Resecurity discovered a misconfiguration in BlackLock’s TOR-based Data Leak Site (DLS) during the 2024 holiday period. The flaw exposed the exact IP addresses of the clearnet servers hosting the gang’s infrastructure.
Through a Local File Inclusion (LFI) vulnerability, Resecurity gained access to server-side data, including configuration files and credentials. The company said it then spent many hours running hash-cracking attacks against the threat actors’ accounts.
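For context, an LFI flaw lets a web request pull arbitrary files off the server. The sketch below is a minimal, hypothetical illustration of the vulnerability class in Python/Flask; the report doesn’t disclose how BlackLock’s actual site was built, so every route and name here is invented for demonstration.

```python
# Illustrative sketch of a Local File Inclusion (LFI) flaw and its fix.
# Hypothetical example only; not BlackLock's actual code.
import os

from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/view")
def view_page():
    # VULNERABLE: the "page" parameter flows straight into a filesystem path.
    # A request like /view?page=../../etc/passwd escapes the templates
    # directory and returns arbitrary server-side files (configs, credentials).
    page = request.args.get("page", "index.html")
    with open(os.path.join("templates", page)) as f:
        return f.read()

@app.route("/safe-view")
def safe_view():
    # FIXED: resolve the requested path and verify it stays inside the
    # allowed directory before reading anything.
    base = os.path.realpath("templates")
    page = request.args.get("page", "index.html")
    target = os.path.realpath(os.path.join(base, page))
    if not target.startswith(base + os.sep):
        abort(403)
    with open(target) as f:
        return f.read()
```

Exposed configuration files and credentials of the kind this flaw can leak are exactly what made the follow-on hash-cracking step possible.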
Hash-cracking refers to attempts to recover the plaintext behind hashed passwords or data. A hashing algorithm transforms a plaintext password into a fixed-length string of characters and is designed to be practically irreversible, so an attacker cannot directly derive the original password from its hashed form. Instead, crackers hash large numbers of candidate passwords and compare the results against the stolen hashes. The Resecurity team used such methods to gain access to BlackLock’s accounts, which enabled them to seize control of the gang’s infrastructure.
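To illustrate the mechanics, here is a minimal dictionary-attack sketch in Python. The algorithm (SHA-256), the wordlist, and the password are assumptions chosen for demonstration; the report doesn’t say which hashing scheme BlackLock used or how Resecurity cracked it.

```python
# Minimal dictionary-style hash-cracking sketch. SHA-256 and the sample
# wordlist are illustrative assumptions, not details from the Resecurity case.
import hashlib

def crack_hash(target_hash: str, wordlist: list[str]) -> str | None:
    """Hash each candidate password and compare it against the target.

    Hashing is one-way, so the generic attack is to guess candidates,
    hash them with the same algorithm, and look for a matching digest.
    """
    for candidate in wordlist:
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest == target_hash:
            return candidate
    return None

# Example: recover the (weak) password "hunter2" from its SHA-256 digest.
target = hashlib.sha256(b"hunter2").hexdigest()
print(crack_hash(target, ["password", "123456", "hunter2"]))  # -> "hunter2"
```

Real cracking runs use far larger wordlists and GPU-accelerated tools, but the principle is the same, which is why password reuse, as seen below, is so damaging.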
Resecurity’s data collection also retrieved the BlackLock operators’ command history, which contained copied credentials and exposed a critical operational security weakness.
The BlackLock operator “$$$” reused the same password across all of their managed accounts, revealing even more about the group’s operations. Through its research, Resecurity also discovered that BlackLock depended on the Mega file-sharing service to carry out its data theft.
The criminal group operated eight email accounts on the Mega platform, where it used both the Mega client application and the rclone utility to move stolen data from victims’ machines to its DLS.
In some cases, the gang used the Mega client software directly on victim machines because it provided a less detectable method of exfiltration.
One major target was a French legal services provider. Through its access to BlackLock’s network, Resecurity learned of the gang’s upcoming data leak operations and was able to notify CERT-FR and ANSSI two days before the data became public, as noted by The Register.
By sharing intelligence with the Canadian Centre for Cyber Security, Resecurity also gave a Canadian victim a warning 13 days before their data leak occurred, The Register reported.
Resecurity’s early warnings gave victims enough time to prepare appropriate defensive measures. The company stressed the necessity of proactive measures to disrupt worldwide criminal cyber operations.
The available information suggests that BlackLock operates out of Russian- and Chinese-language forums, follows rules against targeting BRICS and CIS countries, and used IP addresses from those nations for its Mega accounts.
Resecurity’s operation demonstrates how offensive cybersecurity can succeed in fighting ransomware and shielding potential victims from harm.