
Image by Freepik
ToxicPanda Malware Hits Banks Across Europe And Latin America
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
In a Rush? Here are the Quick Facts!
- Over 1,500 devices infected across Italy, Portugal, Spain, and Latin America.
- Malware bypasses bank security, enabling fraud through account takeover and On-Device Fraud.
- ToxicPanda is still in early development, with incomplete commands in its code.
In October 2024, Cleafy’s Threat Intelligence team discovered a new Android banking Trojan campaign, initially linked to the known TgToxic family of malware. However, after further investigation, it became clear that this new malware was different, leading experts to track it under the name ToxicPanda.
In their recent report, the analysts explain that ToxicPanda is designed to steal money from compromised devices by bypassing bank security measures.
The malware uses a technique called On-Device Fraud (ODF), which allows attackers to take control of a victim’s bank account without the person’s knowledge. It can bypass identity verification and behavioral detection systems that banks use to flag suspicious activities.
The researchers explain that ToxicPanda works by exploiting Android’s accessibility services. This allows it to gain control over a victim’s device, intercept one-time passwords (OTPs), and carry out fraudulent bank transactions. It can also hide its presence on the phone, making it harder for antivirus software to detect.
However, the report notes that the malware is still in early development. Some parts of its code are incomplete, with commands that don’t yet do anything.
Despite this, ToxicPanda has already managed to infect over 1,500 Android devices across Italy, Portugal, Spain, and Latin America. These infected devices are being used in attacks on 16 different banking institutions.
The threat actors (TAs) behind ToxicPanda are suspected to be Chinese speakers, marking a shift in the regions they target.
It is uncommon for Chinese-speaking cybercriminals to focus on banking fraud in Europe and Latin America, and the researchers suggest this may indicate a change in their operational focus.
Although ToxicPanda is not as advanced as some other banking Trojans, it shares similarities with previous malware like TgToxic.
The report suggests that the malware's developers appear to be new to targeting financial institutions outside their home regions, which may explain its relatively basic code and limited feature set.
ToxicPanda’s spread has been significant, with Italy seeing the highest number of infections, followed by countries like Portugal, Spain, and Peru. This broad geographic reach signals that the malware creators are expanding their targets to include more countries, especially in Latin America.
In conclusion, ToxicPanda is a growing threat that underscores the expanding reach of mobile banking fraud. While the malware is still in development, its rapid spread across multiple regions shows that cybercriminals are increasingly focused on exploiting banking systems worldwide.

Image by Anthony DELANOIX, from Unsplash
UK Boosts AI Safety, Signs Partnership With Singapore To Grow Trusted AI Market
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
In a Rush? Here are the Quick Facts!
- UK’s AI assurance market could grow six-fold by 2035.
- Expansion aims to unlock £6.5 billion in economic growth.
- New AI assurance platform launched to support responsible AI use.
The UK government yesterday announced new measures to support the safe and responsible use of AI, aiming to unlock £6.5 billion in economic growth by 2035. The UK's AI assurance sector—which focuses on ensuring AI systems are fair, transparent, and secure—is expected to expand six-fold over the next decade.
This growth is seen as essential to the government’s broader strategy to incorporate AI in public services and boost economic productivity, while maintaining public trust in these technologies.
Peter Kyle, Secretary of State for Science, Innovation, and Technology, emphasized that public trust is essential to fully harness AI’s potential to improve services and productivity. He noted that these steps aim to position the UK as a leader in AI safety.
To aid this expansion, the Department for Science, Innovation, and Technology (DSIT) and the UK’s AI Safety Institute have introduced a new AI assurance platform, designed to help British businesses manage the risks associated with AI use.
The platform will centralize resources for assessing data bias, conducting impact evaluations, and monitoring AI performance. Small and medium-sized enterprises (SMEs) will also have access to a self-assessment tool to implement responsible AI practices within their organizations.
The UK is also strengthening its international efforts on AI safety by signing a partnership with Singapore.
The Memorandum of Cooperation, signed by Secretary Kyle and Singapore’s Minister for Digital Development Josephine Teo, aims to promote joint research and establish common standards for AI safety.
This agreement builds on discussions held at last year’s AI Safety Summit and aligns with the goals of the International Network of AI Safety Institutes (AISI), a global initiative to coordinate AI safety efforts.
Josephine Teo emphasized that both countries are committed to advancing AI for public benefit while ensuring it remains safe.
“The signing of this Memorandum of Cooperation with an important partner, the United Kingdom, builds on existing areas of common interest and extends them to new opportunities in AI,” Teo said.
Hyoun Park, CEO of Amalgam Insights—a firm specializing in financially responsible IT decisions—points out that, although marketed as a tool for building trust in AI, the platform's main purpose is to provide businesses with a government-aligned framework for evaluating AI, reports CIO.
Park raised concerns about the platform’s current capabilities. “The platform is still fairly rudimentary, with plans for an essential toolkit that has yet to be fully developed,” he said, as reported by CIO.
“This assessment relies on human responses rather than direct integration with the AI itself, and the scale used by the assessment tool is vague, offering only binary yes/no options or responses that are difficult to quantify,” he added.
Park also pointed out that bias assessments could be especially challenging. “Every AI has a bias, and the notion that bias can be eliminated is both a myth and potentially dangerous,” he noted to CIO.
For smaller businesses, new compliance requirements like risk assessments and data audits may pose additional burdens, potentially stretching limited resources, says CIO.