
Image by National Cancer Institute, from Unsplash

Doctors Use AI To Appeal Insurance Denials

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Doctors are using AI to automate appeals against insurance denials.

In a rush? Here are the quick facts:

  • AI tool Fight Paperwork automates appeal letters for denied health claims.
  • Created by ex-Netflix engineer Holden Karau in response to personal frustration.
  • Over 6,000 appeals generated since May, saving time and lowering costs.

Frustrated by a healthcare system that routinely denies treatment, some U.S. doctors are turning to AI to push back. The San Francisco Standard (SFS) reports on a new tool, called Fight Paperwork, which enables medical practitioners to produce appeal letters for insurance claim rejections, saving administrative time and improving the odds of a successful approval.

Dr. Paul Abramson, who practices medicine in San Francisco, told SFS that insurers often deny necessary treatments through email and fax, hoping that doctors and patients will abandon their appeals. The system relies on patient and doctor frustration as its primary strategy. But when doctors do appeal, they succeed in most cases. The problem is that less than 1% of denials are ever appealed, as noted by SFS.

“There are no consequences to the insurance companies for wasting everyone’s time,” Abramson told SFS. “When there are more consequences, they will stop playing the game,” he added.

That’s where AI comes in. Fight Paperwork, which was developed by former Netflix engineer Holden Karau, helps automate the appeal process. The platform has generated more than 6,000 appeals since its launch in May, reducing both costs and the administrative time spent on bureaucratic procedures. Mental health providers, who often work alone, make up the largest share of the tool’s users, as reported by SFS.

Still, some experts believe the playing field is far from even. “I think an ‘arms race’ is optimistic, because it would imply that there are equal sides. It’s more of a mosquito bite that may cause them some inconvenience […] their robot basically wrestles with your robot,” said retired Bay Area doctor Harley Schultz, as reported by SFS.

While AI offers hope, many doctors stress that real change must come through regulation. California is considering laws to increase transparency from pharmacy benefit managers, key players blamed for rising costs.

Until then, AI is offering doctors like Abramson a small, but powerful, way to reclaim their time and fight for their patients.


Image by Michael Vadon, from Wikimedia Commons

AI Scam Uses Rubio’s Voice To Target Government Officials

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

An unknown impersonator used AI to mimic the voice and writing style of Secretary of State Marco Rubio, contacting high-level officials through voice and text messages.

In a rush? Here are the quick facts:

  • At least five high-level figures were targeted.
  • Messages were sent via Signal using a fake Rubio display name.
  • FBI warns of growing AI-based impersonation campaigns.

The Washington Post received a State Department cable showing that at least five people received messages from the impersonator, including three foreign ministers, a U.S. governor, and a member of Congress.

According to The Post, the impersonator started the campaign in mid-June by creating a Signal account displaying the name “Marco.Rubio@state.gov,” which is not Rubio’s actual email address, and then began sending messages through the encrypted app.

“The actor left voicemails on Signal for at least two targeted individuals and in one instance, sent a text message inviting the individual to communicate on Signal,” the cable stated, as reported by The Post.

U.S. officials do not know the identity of the scammer, but believe the goal is to access sensitive information or accounts. The State Department says it will “carry out a thorough investigation and continue to implement safeguards to prevent this from happening in the future,” as reported by The Post.

The Rubio case joins other recent instances of this growing trend, as AI tools make this kind of scam easier than ever. In May, an impersonator pretending to be White House Chief of Staff Susie Wiles sent fake messages to senators and executives. “You just need 15 to 20 seconds of audio of the person,” Hany Farid, a digital forensics expert at UC Berkeley, told The Post.

“You upload it to any number of services, click a button that says ‘I have permission to use this person’s voice,’ and then you type what you want him to say,” Farid added.

The FBI has warned that malicious actors are using AI voice cloning in “ongoing” campaigns to manipulate U.S. officials, as reported by The Post. The agency urges anyone receiving suspicious messages to report them immediately. Impersonating a federal official to deceive or gain something is a crime.

Concerns over AI exploitation are now intersecting with broader federal cybersecurity fears. Elon Musk’s Department of Government Efficiency (DOGE) has sparked alarm among IT experts, who warn the group poses an unprecedented threat to national systems.

Despite lacking qualified personnel, DOGE has sought and reportedly gained access to critical U.S. agencies such as the Treasury, OPM, and FAA.

Experts say even read-only access could allow data exfiltration or systemic disruption. “This is the largest data breach and the largest IT security breach in our country’s history—at least that’s publicly known,” a federal contractor stated.

The overlap between AI-powered impersonation campaigns and unregulated access to sensitive systems has created what one expert called a “perfect storm” of digital insecurity.