
Image by Michael Vadon, from Wikimedia Commons
AI Scam Uses Rubio’s Voice To Target Government Officials
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
An unknown impersonator used AI to mimic the voice and writing style of Secretary of State Marco Rubio, then contacted high-level officials through voice and text messages.
In a rush? Here are the quick facts:
- At least five high-level figures were targeted.
- Messages were sent via Signal using a fake Rubio display name.
- FBI warns of growing AI-based impersonation campaigns.
The Washington Post received a State Department cable showing that at least five people received messages from the impersonator, including three foreign ministers, a U.S. governor, and a member of Congress.
According to The Post, the impersonator started its campaign in mid-June by creating a Signal account displaying the name “Marco.Rubio@state.gov” rather than Rubio’s actual email address, then began sending messages to targets through the encrypted app.
“The actor left voicemails on Signal for at least two targeted individuals and in one instance, sent a text message inviting the individual to communicate on Signal,” the cable stated, as reported by The Post.
U.S. officials do not yet know the identity of the scammer, but believe the goal is to access sensitive information or accounts. The State Department says it will “carry out a thorough investigation and continue to implement safeguards to prevent this from happening in the future,” as reported by The Post.
The Rubio case joins other recent instances of this growing trend, as AI tools make these scams easier than ever. In May, an impersonator pretending to be White House Chief of Staff Susie Wiles sent fake messages to senators and executives. “You just need 15 to 20 seconds of audio of the person,” Hany Farid, a digital forensics expert at UC Berkeley, told The Post.
“You upload it to any number of services, click a button that says ‘I have permission to use this person’s voice,’ and then you type what you want him to say,” Farid added.
The FBI has warned that malicious actors are using AI voice cloning in “ongoing” campaigns to manipulate U.S. officials, as reported by The Post. The agency urges anyone receiving suspicious messages to report them immediately. Impersonating a federal official to deceive or gain something is a crime.
Concerns over AI exploitation are now intersecting with broader federal cybersecurity fears. Elon Musk’s Department of Government Efficiency (DOGE) has sparked alarm among IT experts, who warn the group poses an unprecedented threat to national systems.
Despite lacking qualified personnel, DOGE has sought and reportedly gained access to critical U.S. agencies such as the Treasury, OPM, and FAA.
Experts say even read-only access could allow data exfiltration or systemic disruption. “This is the largest data breach and the largest IT security breach in our country’s history—at least that’s publicly known,” a federal contractor stated.
The overlap between AI-powered impersonation campaigns and unregulated access to sensitive systems has created what one expert called a “perfect storm” of digital insecurity.

Photo by 2H Media on Unsplash
Anthropic Proposes Transparency Framework For AI Model Development
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The AI company Anthropic proposed a transparency framework on Monday for advanced AI models and companies developing frontier AI systems, intended for application at regional or international levels. The startup outlined safety measures, actionable steps, and minimum standards to enhance AI transparency.
In a rush? Here are the quick facts:
- Anthropic proposed a transparency framework for advanced AI models and companies developing frontier AI systems.
- The tech company acknowledges the rapid pace of AI development and the urgency to agree on safety frameworks to develop safe products.
- The proposed framework is aimed at large companies in the industry.
Anthropic explained in an announcement on its website that the development of AI models has progressed more rapidly than the creation of safeguards and agreements by companies, governments, or academia. The tech company urged all stakeholders to accelerate their efforts to ensure the safe development of AI products and offered its transparency framework as a model or reference.
“We need interim steps to ensure that very powerful AI is developed securely, responsibly, and transparently,” states the announcement. “We are therefore proposing a targeted transparency framework, one that could be applied at the federal, state, or international level, and which applies only to the largest AI systems and developers while establishing clear disclosure requirements for safety practices.”
Anthropic’s approach has been simplified to keep it flexible and lightweight. “It should not impede AI innovation, nor should it slow our ability to realize AI’s benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions,” clarified the company.
The tech company acknowledged that rigid frameworks and standards could quickly become outdated as the technology continues to advance at a rapid pace.
Anthropic suggests that AI transparency requirements should apply only to large frontier model developers, in order to avoid burdening small startups and low-impact developers. The proposed threshold is $100 million in annual revenue or $1 billion in yearly capital expenditures.
Large developers should also create a public Secure Development Framework that includes how they will mitigate risks, including harms caused by misaligned models and the creation of chemical, nuclear, biological, or radiological weapons.
One of the strictest proposals is aimed at protecting whistleblowers. “Explicitly make it a violation of law for a lab to lie about its compliance with its framework,” Anthropic wrote in the shared document. The company emphasized that transparency standards should include a minimum baseline and remain flexible, with lightweight requirements to help achieve consensus.
Anthropic expects the announcement and the proposed transparency framework to serve as guidelines for governments to adopt into law and promote “responsible practice.”
After releasing its latest AI model, Claude 4, Anthropic included a safety warning, labeling the Opus 4 model at Safety Level 3 due to its potential risks.