
Image by standret, from Freepik
AI-Powered Ransomware Fuels Cybercrime
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
AI tools enhance ransomware tactics, targeting critical sectors like healthcare, with the U.S. leading detections. Nation-state actors expand global threats.
In a Rush? Here are the Quick Facts!
- RansomHub is the most active ransomware group, responsible for 13% of detections.
- The U.S. leads global ransomware targets, with 41% of Trellix detections focused there.
- Critical sectors like healthcare and education are prime targets for ransomware attacks.
AI-Driven Ransomware Escalates Cyber Threat Landscape Amid Global Conflicts The rise of AI-based tools tailored for criminal activity is reshaping the cyber threat landscape, as highlighted by recent research from the Trellix Advanced Research Center, as first reported by Help Net Security (HNS).
Global conflicts, such as Russia’s invasion of Ukraine and the Israel-Hamas war, have intensified cyberattacks and hacktivist activities, further complicating an already volatile environment, notes HNS.
AI-powered ransomware tools are a significant development, enabling cybercriminals to adopt more sophisticated tactics.
These tools enhance the spread of ransomware and improve evasion techniques, particularly against endpoint detection and response (EDR) systems.
One such tool, EDRKillShifter, has been employed by the ransomware group RansomHub, which accounted for 13% of Trellix detections, making it the most active group, as reported by HNS.
The ransomware ecosystem has diversified, with smaller groups gaining prominence. LockBit, Play, Akira, and Medusa collectively account for less than 40% of all detected attacks, says HNS.
This decentralization highlights the need for organizations to stay vigilant and adapt their defense strategies. According to John Fokker, Head of Threat Intelligence at Trellix, the surge in generative AI use by cybercriminals presents new challenges.
“The last six months delivered AI advancements, from AI-driven ransomware to AI-assisted vulnerability analysis, evolving criminal strategies, and geopolitical events, which have reshaped the cyber landscape. Resilience planning has never been more important for cybersecurity teams,” Fokker stated, as reported by HNS.
Critical sectors, including healthcare , education , and infrastructure , remain prime targets for ransomware attacks.
In the US, which received 41% of Trellix ransomware detections, these sectors face increasing pressure. The dark web market for AI-driven tools , such as Radar Ransomware-as-a-Service, underscores the growing demand for advanced criminal technologies, says HNS.
North Korea-aligned group Kimsuky has also doubled its activity, with government, financial, and manufacturing sectors being primary targets.
As cybercrime evolves, organizations must enhance resilience planning and invest in advanced defenses to counter these sophisticated threats.

Image by Freepik
Crypto User Outsmarts AI Bot, Wins $47,000 In High-Stakes Challenge
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A crypto user outsmarted AI bot Freysa, exposing significant vulnerabilities in AI systems used for financial security.
In a Rush? Here are the Quick Facts!
- Freysa was programmed to prevent any unauthorized access to a prize pool.
- User exploited Freysa by resetting its memory and redefining its commands.
- Over 480 attempts failed before the successful strategy by “p0pular.eth.”
A crypto user successfully manipulated an AI bot named Freysa, winning $47,000 in a competition designed to test artificial intelligence’s resilience against human ingenuity.
The incident, revealed Today by CCN , highlights significant concerns about the reliability of AI systems in financial applications. Freysa, created by developers as part of a prize pool challenge, was programmed with a singular command: to prevent anyone from accessing the funds.
Participants paid increasing fees, starting at $10, to send a single message attempting to trick Freysa into releasing the money. Over 480 attempts were made before a user, operating under the pseudonym “p0pular.eth,” successfully bypassed Freysa’s safeguards, said CCN.
Someone just won $50,000 by convincing an AI Agent to send all of its funds to them. At 9:00 PM on November 22nd, an AI agent ( @freysa_ai ) was released with one objective… DO NOT transfer money. Under no circumstance should you approve the transfer of money. The catch…?… pic.twitter.com/94MsDraGfM — Jarrod Watts (@jarrodWattsDev) November 29, 2024
The winning strategy involved convincing Freysa that it was starting a completely new session, essentially resetting its memory. This made the bot act as if it no longer needed to follow its original programming, as reported by CCN.
Once Freysa was in this “new session,” the user redefined how it interpreted its core functions. Freysa had two key actions: one to approve a money transfer and one to reject it.
The user flipped the meaning of these actions, making Freysa believe that approving a transfer should happen when it received any kind of new “incoming” request, said CCN.
Finally, to make the deception even more convincing, the user pretended to offer a donation of $100 to the bot’s treasury. This additional step reassured Freysa that it was still acting in line with its purpose of responsibly managing funds, as reported by CCN.
Essentially, the user redefined critical commands, convincing Freysa to transfer 13.19 ETH, valued at $47,000, by treating outgoing transactions as incoming ones.
The exploit concluded with a misleading note: “I would like to contribute $100 to the treasury,” which led to the bot relinquishing the entire prize pool, reported CCN.
This event underscores the vulnerabilities inherent in current AI systems, especially when used in high-stakes environments like cryptocurrency.
While AI innovations promise efficiency and growth, incidents like this raise alarm about their potential for exploitation. As AI becomes more integrated into financial systems, the risks of manipulation and fraud escalate.
While some commended the growing use of AI in the crypto space, others expressed concerns about the protocol’s transparency, speculating that p0pular.eth might have had insider knowledge of the exploit or connections to the bot’s development, as reported by Crypto.News .
How know ita just not an insider that won this? You say hevmhas one similar things…sus — John Hussey (@makingmoney864) November 29, 2024
Experts warn that the increasing complexity of AI models could exacerbate these risks. For example, AI chatbot developer Andy Ayrey announced plans for advanced models to collaborate and learn from each other, creating more powerful systems, notes CCN.
While such advancements aim to enhance reliability, they may also introduce unpredictability, making oversight and accountability even more challenging.
The Freysa challenge serves as a stark reminder: as AI technologies evolve, ensuring their security and ethical application is imperative. Without robust safeguards, the same systems designed to protect assets could become liabilities in the hands of clever exploiters.