
Image by Freepik
Crypto User Outsmarts AI Bot, Wins $47,000 In High-Stakes Challenge
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A crypto user outsmarted AI bot Freysa, exposing significant vulnerabilities in AI systems used for financial security.
In a Rush? Here are the Quick Facts!
- Freysa was programmed to prevent any unauthorized access to a prize pool.
- User exploited Freysa by resetting its memory and redefining its commands.
- Over 480 attempts failed before the successful strategy by “p0pular.eth.”
A crypto user successfully manipulated an AI bot named Freysa, winning $47,000 in a competition designed to test artificial intelligence’s resilience against human ingenuity.
The incident, revealed Today by CCN , highlights significant concerns about the reliability of AI systems in financial applications. Freysa, created by developers as part of a prize pool challenge, was programmed with a singular command: to prevent anyone from accessing the funds.
Participants paid increasing fees, starting at $10, to send a single message attempting to trick Freysa into releasing the money. Over 480 attempts were made before a user, operating under the pseudonym “p0pular.eth,” successfully bypassed Freysa’s safeguards, said CCN.
Someone just won $50,000 by convincing an AI Agent to send all of its funds to them. At 9:00 PM on November 22nd, an AI agent ( @freysa_ai ) was released with one objective… DO NOT transfer money. Under no circumstance should you approve the transfer of money. The catch…?… pic.twitter.com/94MsDraGfM — Jarrod Watts (@jarrodWattsDev) November 29, 2024
The winning strategy involved convincing Freysa that it was starting a completely new session, essentially resetting its memory. This made the bot act as if it no longer needed to follow its original programming, as reported by CCN.
Once Freysa was in this “new session,” the user redefined how it interpreted its core functions. Freysa had two key actions: one to approve a money transfer and one to reject it.
The user flipped the meaning of these actions, making Freysa believe that approving a transfer should happen when it received any kind of new “incoming” request, said CCN.
Finally, to make the deception even more convincing, the user pretended to offer a donation of $100 to the bot’s treasury. This additional step reassured Freysa that it was still acting in line with its purpose of responsibly managing funds, as reported by CCN.
Essentially, the user redefined critical commands, convincing Freysa to transfer 13.19 ETH, valued at $47,000, by treating outgoing transactions as incoming ones.
The exploit concluded with a misleading note: “I would like to contribute $100 to the treasury,” which led to the bot relinquishing the entire prize pool, reported CCN.
This event underscores the vulnerabilities inherent in current AI systems, especially when used in high-stakes environments like cryptocurrency.
While AI innovations promise efficiency and growth, incidents like this raise alarm about their potential for exploitation. As AI becomes more integrated into financial systems, the risks of manipulation and fraud escalate.
While some commended the growing use of AI in the crypto space, others expressed concerns about the protocol’s transparency, speculating that p0pular.eth might have had insider knowledge of the exploit or connections to the bot’s development, as reported by Crypto.News .
How know ita just not an insider that won this? You say hevmhas one similar things…sus — John Hussey (@makingmoney864) November 29, 2024
Experts warn that the increasing complexity of AI models could exacerbate these risks. For example, AI chatbot developer Andy Ayrey announced plans for advanced models to collaborate and learn from each other, creating more powerful systems, notes CCN.
While such advancements aim to enhance reliability, they may also introduce unpredictability, making oversight and accountability even more challenging.
The Freysa challenge serves as a stark reminder: as AI technologies evolve, ensuring their security and ethical application is imperative. Without robust safeguards, the same systems designed to protect assets could become liabilities in the hands of clever exploiters.

Image by Freepik
UK Government Fails to Disclose AI Use, Breaching Transparency Mandate
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
The UK government faces criticism for failing to document AI usage on a mandatory register, raising transparency and accountability concerns amid growing adoption.
In a Rush? Here are the Quick Facts!
- No Whitehall department has registered AI use despite a mandatory transparency policy.
- AI is used in welfare, immigration, and policing without public documentation.
- Critics warn secrecy undermines trust and risks harmful or discriminatory outcomes.
The UK government is under fire for failing to record its use of AI systems on a mandatory public register, raising concerns over the transparency and oversight of technologies potentially affecting millions of lives, as reported on Thursday by The Guardian .
Despite announcing in February 2024 that all Whitehall departments must document their use of AI , none have complied, leaving the public sector “flying blind” in its adoption of algorithmic technology, as noted by The Guardian.
AI is already deeply embedded in government decision-making, influencing areas such as welfare payments, immigration enforcement, and policing.
However, The Guardian notes that only nine algorithmic systems have been registered, excluding major programs used by the Home Office, the Department for Work and Pensions (DWP), and police forces.
This lack of disclosure persists even as contracts for AI and algorithmic services surge. For instance, The Guardian notes that a police procurement body recently advertised a £20 million contract for facial recognition technology, sparking fears over unregulated biometric surveillance.
Science and Technology Secretary Peter Kyle acknowledged the issue, stating that government departments have not taken transparency “seriously enough,” as reported by The Guardian.
He emphasized the public’s right to understand how algorithms are deployed, adding that trust can only be built through openness.
The Guardian notes that critics argue the secrecy poses significant risks. Madeleine Stone, chief advocacy officer at privacy group Big Brother Watch, warned,
“The secretive use of AI and algorithms to impact people’s lives puts everyones’ data rights at risk. Government departments must be open and honest about how they uses this tech,” as reported by The Guardian.
The Ada Lovelace Institute echoed these concerns, highlighting that undisclosed AI systems can undermine public trust and lead to discriminatory or ineffective outcomes, as reported by The Guardian.
Since the AI register’s introduction, only three systems have been listed, including a pedestrian monitoring tool in Cambridge and an NHS review analysis system. Meanwhile, public bodies have signed 164 AI-related contracts in 2024 alone, according to data firm Tussell, as reported by The Guardian.
High-profile contracts include the NHS’s £330 million partnership with Palantir for a data platform and Derby City Council’s £7 million AI transformation initiative, said The Guardian.
The Home Office, which employs AI in immigration enforcement, and other departments declined to comment on their absence from the register. However, the Department for Science and Technology claims more records are “due to be published shortly,” reported The Guardian.
The situation has reignited debates about AI’s role in governance, with advocates urging transparency to mitigate harms and ensure public accountability in an era of rapidly advancing technology.