
Image created with ChatGPT
AI Agents Tricked By Fake Memories, Enabling Crypto Theft
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A new research study revealed significant security vulnerabilities in Web3 AI-powered agents , which allow attackers to use fake memories to perform unauthorized cryptocurrency transfers.
In a rush? Here are the quick facts:
- Hackers can inject fake memories into AI agents to steal cryptocurrency.
- Memory-based attacks bypass basic security prompts and safeguards.
- Blockchain transactions are irreversible—stolen funds are permanently lost.
Researchers from Princeton University and the Sentient Foundation discovered that these AI agents, designed to handle blockchain-based tasks like trading crypto and managing digital assets, are vulnerable to a tactic called context manipulation.
The attack works by targeting the memory systems of platforms like ElizaOS, which creates AI AI agents for decentralized applications. The memory system of these agents store past conversations to use them as a guide for their future choices.
The researchers demonstrated that attackers can embed misleading commands in the memory system, leading the AI to send funds from the intended wallet to an attacker-controlled wallet. Alarmingly, these fake memories can travel between platforms.
For example, an agent compromised on Discord might later make incorrect transfers via X, without realizing anything is wrong.
What makes this especially dangerous is that standard defensive measures cannot stop this type of attack. The treatment of fake memories as genuine instructions renders basic prompt-based security measures ineffective against this kind of attack.
All blockchain transactions become permanent so there is no possibility to restore stolen funds. The problem becomes worse because certain AI agents store memory across multiple users so a single security breach could affect many users.
The research team tested several ways to prevent this, including adjusting AI training and requiring manual approval for transactions. While these approaches offer some hope, they come at the cost of slowing down automation.
The issue goes beyond cryptocurrency. The same vulnerability could affect general-purpose AI assistants, risking data leaks or harmful actions if attackers alter their memory.
This vulnerability is particularly alarming in light of recent findings where 84% of IT leaders trust AI agents as much as or more than human employees, and 92% expect these systems to drive business results within 12 to 18 months.
To address the problem, the researchers released a tool called CrAIBench to help developers test their systems and build stronger defenses. Until then, experts warn users to be cautious when trusting AI agents with financial decisions.

Image by Freepik
Judge Fines Lawyers For Using Fake AI-Generated Legal Research
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A U.S. judge has sharply criticized two law firms for including fake legal information generated by AI in a court filing, calling it a major lapse in legal responsibility.
In a rush? Here are the quick facts:
- Judge fined two law firms $31,000 for fake AI-generated legal citations.
- False information was found in a court brief filed in a State Farm case.
- At least two cited legal cases were completely fabricated by AI.
Judge Michael Wilner, based in California, fined the firms $31,000 after discovering the brief was filled with “false, inaccurate, and misleading legal citations and quotations,” as first reported by WIRED .
“No reasonably competent attorney should out-source research and writing’’ to AI, Wilner wrote in his ruling, warning that he was close to including the fake cases in a judicial order.
The situation arose during a civil lawsuit against State Farm. One lawyer used AI tools to draft a legal outline. That document, containing fake research, was handed to the larger law firm K&L Gates, which added it to an official filing.
“No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief,” Wilner noted, as reported by WIRED.
After discovering that at least two of the cited cases were completely made up, Judge Wilner asked K&L Gates for clarification. When they submitted a new version, it turned out to include even more fake citations. The judge demanded an explanation, which revealed sworn statements admitting to the use of AI tools, as reported by WIRED..
Wilner concluded: “The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong […] And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way,”as reported by WIRED.
This is not the first time AI has caused trouble in courtrooms. Indeed, two Wyoming lawyers recently admitted using fake AI-generated cases in a court filing for a lawsuit against Walmart. A federal judge threatened to sanction them as well.
In this scenario, AI “hallucinations” — made-up information generated AI tools — are becoming a growing concern in the legal system.