
Crocodilus: An Advanced Android Malware Takes Remote Control of Your Banking Apps
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new Android malware known as Crocodilus has emerged and is causing a stir in the world of cybersecurity.
In a rush? Here are the quick facts:
- Crocodilus is a new Android malware targeting banks and cryptocurrency wallets.
- It uses overlay attacks, keylogging, and remote access to steal user data.
- The malware is linked to a Turkish-speaking developer based on source code analysis.
- Crocodilus manipulates victims with fake wallet backup prompts to steal seed phrases.
Unlike other mobile banking threats such as Anatsa and Octo, which evolved gradually, Crocodilus arrived highly sophisticated from the start. Researchers at ThreatFabric discovered the malware during routine threat monitoring and described it as a significant step forward in mobile malware.
The researchers say that Crocodilus functions as a “device takeover” Trojan, meaning attackers can remotely control infected Android devices.
The malware employs several techniques to steal victims’ information, including overlay attacks, keylogging, and abuse of Android’s Accessibility Services to record user activity. Malware of this type is mainly used to steal banking and cryptocurrency account credentials.
Once installed on a victim’s phone, the malware requests access to the Accessibility Services. It then connects to a remote command-and-control server to receive further instructions and a list of apps to target.
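On the defensive side, a security-conscious app can inspect which accessibility services are enabled and warn the user when an unrecognized one is active, since malware like Crocodilus must appear in that list once the victim grants it access. A minimal Kotlin sketch using Android’s standard Settings API; the allow-list of known services here is hypothetical:

```kotlin
import android.content.Context
import android.provider.Settings

// Hypothetical allow-list of accessibility services the user is known to rely on,
// e.g. the TalkBack screen reader. Anything else enabled is worth flagging.
private val KNOWN_SERVICES = setOf(
    "com.google.android.marvin.talkback/com.google.android.marvin.talkback.TalkBackService"
)

// Returns enabled accessibility services that are not on the allow-list.
// The setting is a colon-separated list of "package/service" entries.
fun unexpectedAccessibilityServices(context: Context): List<String> {
    val enabled = Settings.Secure.getString(
        context.contentResolver,
        Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES
    ) ?: return emptyList()
    return enabled.split(':')
        .filter { it.isNotBlank() && it !in KNOWN_SERVICES }
}
```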
From there, it displays fake login screens, known as overlays, on top of the genuine banking and cryptocurrency apps to harvest users’ login credentials. ThreatFabric explains that these attacks have been observed mainly in Spain and Turkey, but the researchers expect the malware to spread globally.
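Android does offer apps partial mitigations against overlay abuse. A hedged sketch of two of them, assuming a login screen in a banking app (the activity name is illustrative; the overlay-hiding call requires the normal permission android.permission.HIDE_OVERLAY_WINDOWS in the manifest and Android 12 or later):

```kotlin
import android.app.Activity
import android.os.Build
import android.os.Bundle

class LoginActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // On Android 12+ (API 31), ask the system to hide non-system overlay
        // windows drawn above this window while it is visible.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
            window.setHideOverlayWindows(true)
        }
        // Older defense: discard touches delivered while another window
        // partially obscures this one, blunting tapjacking-style tricks.
        window.decorView.filterTouchesWhenObscured = true
    }
}
```

Neither measure stops a full-screen overlay rendered by an accessibility-empowered Trojan, which is why the accessibility-service check above matters as a complementary signal.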
What makes Crocodilus different from other malware is that it collects information well beyond passwords. A feature ThreatFabric calls an “Accessibility Logger” captures everything displayed on the phone’s screen, including one-time passwords (OTPs) from apps such as Google Authenticator.
This lets attackers obtain sensitive information such as the account name and the OTP value needed to authorize transactions.
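Apps that display OTPs can reduce this exposure. A minimal Kotlin sketch, assuming a hypothetical `otpView` holding the code: FLAG_SECURE blocks screenshots and screen recording, while marking the view as unimportant for accessibility keeps it out of the accessibility node tree that loggers of this kind traverse (at the cost of hiding it from legitimate screen readers too):

```kotlin
import android.app.Activity
import android.view.View
import android.view.WindowManager
import android.widget.TextView

// Illustrative hardening for a screen that shows a one-time password.
fun protectOtpScreen(activity: Activity, otpView: TextView) {
    // Prevent screenshots and screen recording of this window.
    activity.window.setFlags(
        WindowManager.LayoutParams.FLAG_SECURE,
        WindowManager.LayoutParams.FLAG_SECURE
    )
    // Keep the OTP text out of the accessibility tree that accessibility-based
    // loggers read. Note this also hides it from genuine screen readers.
    otpView.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_NO
}
```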
The malware also has a “hidden mode” in which it displays a black screen overlay on the device so that the attackers’ actions cannot be seen, and mutes the device’s sounds so that fraudulent transactions go unnoticed. The researchers say this makes it very difficult for victims to realize their devices have been compromised.
Crocodilus does not target only financial apps; it also goes after cryptocurrency wallets. Once it has the login credentials, the malware uses social engineering to pressure victims into disclosing their wallet’s seed phrase.
For instance, a fake notification pops up telling the user to back up the wallet key within 12 hours or be locked out. When the victim complies, Crocodilus steals the seed phrase, handing the attackers the keys to the wallet, which they can then drain.
Analysis of the source code suggests a link to a well-known Turkish-speaking developer, though the attribution is not confirmed.
With mobile threats constantly on the rise, Crocodilus is a clear indication of how advanced malware has become. Combining device takeover with sophisticated data harvesting, and able to operate unnoticed in the background, it is a threat that should be taken seriously.
Financial institutions and cryptocurrency platforms will need to strengthen their security measures to counter attacks of this sophistication.

Opinion: Is ChatGPT Your Friend? It Might Be A Good Time To Set Limits
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
OpenAI and MIT recently released a paper on the impact of ChatGPT on people’s well-being. While most users rely on the technology for practical tasks, the study reveals that a small group has been developing deep emotional connections with the AI model that can affect their well-being.
For some time now, I’ve been observing with curiosity the relationships some people are developing with generative artificial intelligence. A few months ago, I read in The New York Times the story of a 28-year-old married woman who fell in love with ChatGPT, and how what started as “a fun experiment” evolved into a complex and unexpected relationship.
I’ve been watching my friends, especially those who once rejected technology or weren’t interested in it, grow unable to make a big decision without consulting their AI oracle. I’ve also found myself surprised by the empathetic responses AI models give to emotionally or psychologically charged queries.
And, of course, I’ve laughed at the jokes, memes, and TikTok videos people post on social media showing how dependent they’ve become on the chatbot, some even calling it their “best friend” or “therapist”, and even seriously recommending others do the same.
But if we put the fun experiences and jokes aside for a moment, we might realize we are facing a concerning global phenomenon.
This month, for the first time in the short history of artificial intelligence, OpenAI and the MIT Media Lab released a study offering insights into the current impact of ChatGPT on people’s emotional well-being, as well as warnings about the risks we might face as a society: loneliness, emotional dependency, and fewer social interactions with real people.
A Relationship That Evolves
The first approach to these new generative AI technologies often begins with a few timid questions: practical ones, like crafting an email, requests to explain complex topics, or simple brainstorming.
However, once a user begins to test the chatbot’s capabilities, they discover those capabilities can be far wider and more complex than expected.
While certain AI products like Friend (a wearable AI device) have been designed, and very awkwardly promoted, as a user’s life companion, ChatGPT has been advertised as a productivity tool. Yet a percentage of people use the chatbot for personal and emotional matters and develop strong bonds with it.
Even if they’re just a “small group,” as OpenAI clarified, they could still represent millions of people worldwide, especially considering that over 400 million people now use ChatGPT weekly. These users quickly notice that OpenAI’s chatbot mimics their language, tone, and style, and can even be trained to interact in a certain way, use pet names (as the woman who fell in love with it did), and even “sound” more human.
“Their conversational style, first-person language, and ability to simulate human-like interactions have led users to sometimes personify and anthropomorphize these systems,” states the document shared by OpenAI.
But this closeness comes with risks, as the researchers noted: “While an emotionally engaging chatbot can provide support and companionship, there is a risk that it may manipulate users’ socioaffective needs in ways that undermine longer term well-being.”
The Study’s Methodology
The recently released investigation focuses on users’ well-being after consistent use of ChatGPT. To understand the chatbot’s emotional and social impact, the researchers conducted two main studies with different strategies.
OpenAI processed and analyzed over 40 million interactions, preserving users’ privacy by using automated classifiers, and surveyed over 4,000 users about how the interactions made them feel.
MIT Media Lab ran a month-long trial with almost 1,000 people, focusing on the psychosocial consequences of using ChatGPT for at least five minutes a day. The researchers also administered and analyzed questionnaires at the end of the experiment.
Unsurprisingly, the findings revealed that users who spend more time with the technology experience more loneliness and show more signs of isolation.
Complex Consequences And Multiple Ramifications
The MIT Media Lab and OpenAI’s study also offered several reflections on how complex and unique human-chatbot relationships can be.
In the research, the authors give us a glimpse into the diverse ways each user interacts with ChatGPT, and how the outcome can vary with factors such as the use of advanced voice features or text-only mode, the voice type, frequency of use, conversation topics, the language used, and the amount of time spent on the app.
“We advise against generalizing the results because doing so may obscure the nuanced findings that highlight the non-uniform, complex interactions between people and AI systems,” warns OpenAI in its official announcement.
Each approach a user chooses translates into different results, immersing us in grey areas that are difficult to explore.
It’s the Butterfly AI Effect!
More Questions Arise
The paper shared by OpenAI also notes that heavy users said they would be “upset” if their chatbot’s voice or personality changed.
This reminded me of a video I recently saw on social media of a man saying he preferred a female voice and talked to the generative AI every day. Could ChatGPT also be helping men open up emotionally? What would happen if one day ChatGPT spoke to him with a male voice? Would he feel betrayed? Would he stop using ChatGPT? Was he developing a romantic connection, or simply a space of trust? Of course, it’s hard not to immediately think of Spike Jonze’s film Her.
Every ChatGPT account, along with its chat history (each day more intimate and private than any WhatsApp profile or social media DMs), represents a unique relationship with countless outcomes and consequences.
The Expected Result
All the studies analyzed different aspects but reached a similar conclusion, briefly explained in the MIT Technology Review: “Participants who trusted and ‘bonded’ with ChatGPT more were likelier than others to be lonely, and to rely on it more.”
While the investigation didn’t focus on solutions or deeper explanations of why this is happening or how it could evolve, it seems likely that more users will join OpenAI and other AI platforms, especially now that its AI image generation tool has gone viral.
Although the conclusions of MIT and OpenAI’s research aren’t particularly surprising, the study provides a scientific background with evidence, measurements, samples, and more ‘tangible’ metrics that could pave the way for further research and help address the implications of using artificial intelligence today.
We also received an official warning, from its own developers, about the bonds we build with ChatGPT, and an invitation to set limits and reflect on our interactions and current relationships (or situationships?) with chatbots.