
Image by Daniele Franchi, from Unsplash
AI Misinformation Could Lead To Mass Bank Withdrawals
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A new British study warns that artificial intelligence-generated disinformation circulating on social media is increasing the risk of bank runs, urging financial institutions to enhance their monitoring efforts, as first reported by Reuters.
In a Rush? Here are the Quick Facts!
- AI-generated fake news on social media increases the risk of bank runs.
- A UK study found AI-driven disinformation could trigger mass withdrawals from banks.
- Researchers urge banks to monitor social media to detect and counter fake narratives.
The research, published by UK-based Say No to Disinfo and communications firm Fenimore Harper, highlights how AI makes disinformation cheaper, faster, and more effective, meaning a wider range of actors, including those driven by financial, ideological, or political motives, could exploit this vulnerability.
The ease of online banking and rapid money transfers further increase banks’ exposure to such risks.
To analyze the potential impact, the researchers created an AI-generated fake news campaign targeting banks’ financial stability. False headlines were designed to exploit existing fears, using doppelganger websites and AI-generated social media content.
The researchers simulated large-scale amplification, generating 1,000 tweets per minute at minimal cost. A poll of 500 UK bank customers found that, after exposure to the disinformation, 33.6% were extremely likely and 27.2% somewhat likely to move their money, and 60% were inclined to share the content.
Based on average UK bank account balances, the researchers estimate that a single disinformation campaign could move £10 million in deposits, and that shifting £150 million could cost as little as $90–$150.
Despite the speed and ease of such attacks, the researchers say that financial institutions remain unprepared. Banks lack disinformation specialists, proactive monitoring, and crisis response plans. Current security measures focus on cyber threats while neglecting AI-driven influence operations.
AI-enhanced disinformation has the potential to destabilize the financial sector by eroding trust and triggering large-scale withdrawals. Without proactive measures, the researchers claim these campaigns could cause widespread economic damage.
Reuters notes that concerns over AI-driven disinformation follow the 2023 collapse of Silicon Valley Bank, where depositors withdrew $42 billion in a single day.
Regulators, including the G20’s Financial Stability Board, have since cautioned that generative AI could exacerbate financial instability, warning in November that it “could enable malicious actors to generate and spread disinformation that causes acute crises,” such as flash crashes and bank runs, as reported by Reuters.
While some banks declined to comment to Reuters, UK Finance stated that financial institutions are actively managing AI-related risks.
The study’s release coincides with an AI Summit in France, where leaders are shifting focus from AI risks to promoting its adoption.

Image by Sami Salim, from Unsplash
AI Denied Their Care, Now UnitedHealth Faces A Major Lawsuit
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A federal judge has ruled that a major lawsuit against UnitedHealth Group (UHG) can continue, despite dropping some of the claims.
In a Rush? Here are the Quick Facts!
- A judge allowed a class-action lawsuit against UnitedHealth Group to proceed.
- The lawsuit claims UHG used AI to deny elderly patients’ Medicare Advantage claims.
- Plaintiffs allege the AI system had a 90% error rate, leading to deaths.
The lawsuit accuses UHG and its partners, UnitedHealthcare and naviHealth, of using AI instead of doctors to decide whether elderly patients on Medicare Advantage plans should receive care, as first reported by Courthouse News Service (CNS).
The plaintiffs allege that UHG’s AI program, developed by its subsidiary naviHealth, has a 90% error rate. The automated decisions, according to the lawsuit, led to worsened health conditions for patients and, in some cases, even death.
The suit claims that despite the high error rate, the company continues to deny claims using the flawed system because only a small fraction of policyholders—around 0.2%—appeal these denials, with the rest either paying out of pocket or going without care.
UHG is the largest health insurance company in the U.S., covering nearly 53 million people. The company has already faced heavy criticism over claim denials, especially after the shocking murder of UnitedHealthcare CEO Brian Thompson in New York last year, as reported in an earlier article by CNS.
Police found bullet casings at the scene with the words “deny,” “defend,” and “depose” written on them—terms allegedly tied to UHG’s claims practices. The suspect, Luigi Mangione, has pleaded not guilty and raised $300,000 in donations for his defense.
UHG tried to get the lawsuit dismissed, arguing that the patients didn’t fully go through the company’s appeals process. But Judge John Tunheim disagreed, saying UHG’s appeal system was so difficult that it was nearly impossible for patients to get a fair review.
He said the process was “futile” and could cause “irreparable harm,” as reported by CNS.
One example highlighted in the complaint involves a 74-year-old stroke patient who was denied post-acute care despite his doctor’s recommendation. The man was forced to pay over $70,000 in medical expenses out of pocket before he passed away in an assisted living facility.
Now, the lawsuit will focus on whether UHG broke its own contracts by refusing to pay for care it had promised. Lawyers for both sides have not yet commented on the case.