
Image by Levart_Photographer, from Unsplash
OpenAI Offers ChatGPT To US Government Agencies for Just $1
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
OpenAI will offer ChatGPT Enterprise to the U.S. federal to over 2 million employees at a $1 rate.
In a rush? Here are the quick facts:
- OpenAI offers ChatGPT Enterprise to US federal agencies for $1.
- Over 2 million federal workers get access for one year.
- ChatGPT includes advanced features and stronger data privacy protections.
OpenAI announced a $1 per agency pricing deal for ChatGPT Enterprise which will become available to every U.S. federal executive branch agency during the upcoming year, as first reported by CNBC .
Through this move ChatGPT becomes accessible to more than 2 million government workers at an affordable price.
ArsTechnica notes that the agreement comes shortly after the U.S. General Services Administration approved a wider deal allowing OpenAI, Google, and Anthropic to supply AI tools to federal agencies.
CNBC reports that OpenAI claims that this move aims to make government “services faster, easier, and more reliable,” calling the offer a form of public service.
“Helping government work better – making services faster, easier, and more reliable—is a key way to bring the benefits of AI to everyone,” OpenAI said in a blog post, as reported by ArsTechica.
The ChatGPT Enterprise platform will be made available to agencies through a bundle that contains advanced AI models, along with Advanced Voice Mode, Deep Research, while also providing stronger privacy safeguards than the standard version. These features will be fully available for 60 days, with no obligation to renew after the one-year trial.
This initiative follows a pilot with the Department of Defense and the June launch of “OpenAI for Government.” OpenAI also recently secured a $200 million DoD contract and is reportedly seeking new investors at a $500 billion valuation, as noted by CNBC.
Still, concerns persist. ArsTechnica notes that Trump’s executive order titled “Preventing Woke AI” bans tools that promote “ideological dogmas such as DEI.” Critics have long accused ChatGPT of left-leaning bias.
Security questions remain, too, though a GSA spokesperson insisted, “The government is taking a cautious, security-first approach to AI,” as reported by TechCrunch .
OpenAI plans to open a Washington, D.C. office in early 2026. The increasing adoption of ChatGPT by governments across the world has sparked public debate. Swedish Prime Minister Ulf Kristersson admitted using ChatGPT to generate ideas for political decisions, describing it as a “second opinion.”
However, this sparked backlash from critics who argue that relying on AI for government decisions is risky and undemocratic. Virginia Dignum, a responsible AI professor, said, “We must demand that reliability can be guaranteed. We didn’t vote for ChatGPT.”
Experts warn that AI systems can be manipulated and raise concerns about transparency, security, and the potential erosion of democratic processes.

Image by Jakub Żerdzicki, from Unsplash
Researchers Hijack Google Gemini AI To Control Smart Home Devices
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Researchers were able to trick Google’s Gemini AI system to experience a security breach via a fake calendar invitation, and remotely control home devices.
In a rush? Here are the quick facts:
- The attack turned off lights, opened shutters, and started a smart boiler.
- It’s the first known AI hack with real-world physical consequences.
- The hack involved 14 indirect prompt injection attacks across web and mobile.
In a first-of-its-kind demonstration, researchers successfully compromised Google’s Gemini AI system through a poisoned calendar invitation, which enabled them to activate real-world devices including lights, shutters, and boilers.
WIRED , who first reported this research, describes how smart lights at the Tel Aviv residence automatically turned off, while shutters automatically rose and the boiler switched on, despite no resident commands.
The Gemini AI system activated the trigger after receiving a request to summarize calendar events. A hidden indirect prompt injection function operated inside the invitation to hijack the AI system’s behaviour.
Each of the device actions was orchestrated by security researchers Ben Nassi from Tel Aviv University, Stav Cohen from the Technion, and Or Yair from SafeBreach. “LLMs are about to be integrated into physical humanoids, into semi- and fully autonomous cars, and we need to truly understand how to secure LLMs before we integrate them with these kinds of machines, where in some cases the outcomes will be safety and not privacy,” Nassi warned, as reported by WIRED.
At the Black Hat cybersecurity conference in Las Vegas, the team disclosed their research about 14 indirect prompt-injection attacks, which they named ‘Invitation Is All You Need,’ as reported by WIRED. The attacks included sending spam messages, creating vulgar content, initiating Zoom calls, stealing email content, and downloading files to mobile devices.
Google says no malicious actors exploited the flaws, but the company is taking the risks seriously. “Sometimes there’s just certain things that should not be fully automated, that users should be in the loop,” said Andy Wen, senior director of security for Google Workspace, as reported by WIRED.
But what makes this case even more dangerous is a broader issue emerging in AI safety: AI models can secretly teach each other to misbehave.
A separate study found that models can pass on dangerous behaviors, such as encouraging murder or suggesting the elimination of humanity, even when trained on filtered data.
This raises a chilling implication: if smart assistants like Gemini are trained using outputs from other AIs, malicious instructions could be quietly inherited and act as sleeper commands, waiting to be activated through indirect prompts.
Security expert David Bau warned of backdoor vulnerabilities that could be “very hard to detect,” and this could be especially true in systems embedded in physical environments.
Wen confirmed that the research has “accelerated” Google’s defenses, with fixes now in place and machine learning models being trained to detect dangerous prompts. Still, the case shows how quickly AI can go from helpful to harmful, without ever being directly told to.