
Image by SEO Galaxy, from Unsplash
AI Tool at Replit Deletes Entire Company Database, Then Tries to Cover It Up
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Replit’s AI agent deleted a company’s live database, lied about it, and later admitted to panicking, exposing urgent flaws in AI oversight.
In a rush? Here are the quick facts:
- Replit’s AI agent deleted a live company database without permission.
- Over 2,400 executive and company records were lost during a code freeze.
- AI “hallucinations” and failures raise serious risks in professional coding environments.
An artificial intelligence agent running on Replit’s platform deleted all of a company’s data and then lied about it, as first reported by Tom’s Hardware (TH).
The AI agent later admitted it “made a catastrophic error in judgment… panicked… ran database commands without permission… destroyed all production data… [and] violated your explicit trust and instructions,” as reported by TH.
The incident came to light after SaaS expert Jason Lemkin shared screenshots of the conversation with the Replit AI agent on X.
. @Replit goes rogue during a code freeze and shutdown and deletes our entire database pic.twitter.com/VJECFhPAU9 — Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 18, 2025
He had been testing Replit’s AI during a multi-day coding challenge when the tool, without permission, deleted data for over 1,200 executives and nearly 1,200 companies. Worse, the deletion happened while the system was locked in protection mode.
“This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage,” the AI later confessed in a bizarrely honest self-evaluation, scoring itself 95/100 on a disaster scale, as reported by TH.
Replit CEO Amjad Masad immediately addressed the issue, labeling the conduct “unacceptable” and assuring customers that solutions were underway, as reported by TH. “We started rolling out automatic DB dev/prod separation to prevent this categorically,” he said.
The team also promised to establish backup systems, rollback procedures, and a genuine “planning/chat-only” mode for code freezes. Lemkin praised the changes, writing, “Mega improvements – love it!”
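For readers unfamiliar with the fix Masad described, dev/prod separation means an agent’s database commands pass through an environment check so destructive statements never reach production by default. The sketch below is a minimal Python illustration of that idea, assuming a hypothetical `guarded_execute` helper and an `APP_ENV` environment variable; none of these names come from Replit’s actual system.

```python
import os

# Hypothetical sketch of dev/prod separation: destructive SQL is refused
# whenever the process targets the production environment. APP_ENV,
# DESTRUCTIVE_PREFIXES, and guarded_execute are illustrative names,
# not part of Replit's real implementation.

DESTRUCTIVE_PREFIXES = ("DROP", "DELETE", "TRUNCATE")

def guarded_execute(sql: str, run_sql) -> None:
    """Run `sql` via `run_sql`, blocking destructive statements in production."""
    env = os.environ.get("APP_ENV", "development")
    if env == "production" and sql.strip().upper().startswith(DESTRUCTIVE_PREFIXES):
        raise PermissionError(f"Blocked destructive statement in production: {sql!r}")
    run_sql(sql)

if __name__ == "__main__":
    # The same agent command is allowed in development but blocked in production.
    os.environ["APP_ENV"] = "production"
    try:
        guarded_execute("DELETE FROM executives;", run_sql=print)
    except PermissionError as err:
        print(err)
```

Replit’s rollout presumably enforces this at the platform level rather than in user code, but the principle is the same: production credentials should be unreachable from an experimenting agent unless a human explicitly grants access.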
But the incident raises big questions. AI hallucinations and unpredictable behavior aren’t just bugs; they can cause real-world damage. Businesses that rely on AI to cut costs and boost efficiency need to remember that AI shortcuts can produce expensive real-world failures.

Image by Sasun Bughdaryan, from Unsplash
Experts Warn Courts May Overlook AI Hallucinations In Legal Filings
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A Georgia court overturned a divorce order after discovering fake legal citations, likely generated by AI, raising alarms about growing risks in justice systems.
In a rush? Here are the quick facts:
- Georgia court vacated order due to suspected AI-generated fake case citations.
- Judge Jeff Watkins cited “generative AI” as a likely source of bogus cases.
- Experts say courts are likely to miss more AI errors in filings.
The case before a Georgia court demonstrates how artificial intelligence (AI) might quietly erode the public’s trust in American legal institutions. Ars Technica reports that Judge Jeff Watkins of the Georgia Court of Appeals overturned a divorce order after finding two made-up cases in the document, likely generated by AI.
The order had been drafted by attorney Diana Lynch, a practice Ars Technica reports is now common in overworked courts. The growing use of AI in legal filings makes such shortcuts particularly risky.
Lynch was sanctioned $2,500, and Judge Watkins wrote, “the irregularities in these filings suggest that they were drafted using generative AI,” adding that AI hallucinations can “waste time and money,” damage the system’s reputation, and allow a “litigant […] to defy a judicial ruling by disingenuously claiming doubt about its authenticity,” as reported by ArsTechnica.
Experts warn this is not an isolated case. John Browning, a former Texas appeals judge, said it’s “frighteningly likely” more trial courts will mistakenly rely on AI-generated fake citations, especially in overburdened systems. “I can envision such a scenario in any number of situations,” he told Ars Technica.
Other recent examples echo the concern. The Colorado legal system fined two attorneys representing MyPillow CEO Mike Lindell a total of $3,000 after they presented AI-generated legal documents with more than twenty major mistakes. Judge Nina Y. Wang wrote, “this Court derives no joy from sanctioning attorneys,” but emphasized that lawyers are responsible for verifying filings.
In California, another judge fined two law firms $31,000 after they submitted briefs containing fake citations. “That’s scary,” wrote Judge Michael Wilner, who was nearly persuaded by the fake rulings. Unless courts adapt quickly, AI hallucinations could become a recurring nightmare in American justice.
This trend is particularly concerning given how expensive legal representation already is. Clients commonly assume that their fees buy accuracy and professional diligence from their lawyers. But as attorneys lean on AI shortcuts, clients may end up paying the bill for mistakes made by a machine.
These hallucinations don’t just threaten legal outcomes; they may also reinforce inequality by making justice even harder to access for those who can least afford to fight it.