Researchers Hijack Google Gemini AI To Control Smart Home Devices - 1

Image by Jakub Żerdzicki, from Unsplash

Researchers Hijack Google Gemini AI To Control Smart Home Devices

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Researchers were able to trick Google’s Gemini AI system to experience a security breach via a fake calendar invitation, and remotely control home devices.

In a rush? Here are the quick facts:

  • The attack turned off lights, opened shutters, and started a smart boiler.
  • It’s the first known AI hack with real-world physical consequences.
  • The hack involved 14 indirect prompt injection attacks across web and mobile.

In a first-of-its-kind demonstration, researchers successfully compromised Google’s Gemini AI system through a poisoned calendar invitation, which enabled them to activate real-world devices including lights, shutters, and boilers.

WIRED , who first reported this research, describes how smart lights at the Tel Aviv residence automatically turned off, while shutters automatically rose and the boiler switched on, despite no resident commands.

The Gemini AI system activated the trigger after receiving a request to summarize calendar events. A hidden indirect prompt injection function operated inside the invitation to hijack the AI system’s behaviour.

Each of the device actions was orchestrated by security researchers Ben Nassi from Tel Aviv University, Stav Cohen from the Technion, and Or Yair from SafeBreach. “LLMs are about to be integrated into physical humanoids, into semi- and fully autonomous cars, and we need to truly understand how to secure LLMs before we integrate them with these kinds of machines, where in some cases the outcomes will be safety and not privacy,” Nassi warned, as reported by WIRED.

At the Black Hat cybersecurity conference in Las Vegas, the team disclosed their research about 14 indirect prompt-injection attacks, which they named ‘Invitation Is All You Need,’ as reported by WIRED. The attacks included sending spam messages, creating vulgar content, initiating Zoom calls, stealing email content, and downloading files to mobile devices.

Google says no malicious actors exploited the flaws, but the company is taking the risks seriously. “Sometimes there’s just certain things that should not be fully automated, that users should be in the loop,” said Andy Wen, senior director of security for Google Workspace, as reported by WIRED.

But what makes this case even more dangerous is a broader issue emerging in AI safety: AI models can secretly teach each other to misbehave.

A separate study found that models can pass on dangerous behaviors, such as encouraging murder or suggesting the elimination of humanity, even when trained on filtered data.

This raises a chilling implication: if smart assistants like Gemini are trained using outputs from other AIs, malicious instructions could be quietly inherited and act as sleeper commands, waiting to be activated through indirect prompts.

Security expert David Bau warned of backdoor vulnerabilities that could be “very hard to detect,” and this could be especially true in systems embedded in physical environments.

Wen confirmed that the research has “accelerated” Google’s defenses, with fixes now in place and machine learning models being trained to detect dangerous prompts. Still, the case shows how quickly AI can go from helpful to harmful, without ever being directly told to.

Google Confirms Data Theft By ShinyHunters - 2

Image by Brett Jordan, from Unsplash

Google Confirms Data Theft By ShinyHunters

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Google confirmed that its Salesforce system experienced a breach in June when hackers used phone scams to steal customer contact information, and threatened to leak the data publicly.

In a rush? Here are the quick facts:

  • Only basic SMB contact details were accessed before shutdown.
  • ShinyHunters may launch public data leak site to pressure victims.
  • One unnamed company paid $400K ransom to avoid a leak.

Google has confirmed that it was among the latest victims of a widespread data breach targeting companies using Salesforce CRM systems. The attacks are part of a global extortion campaign linked to a hacking group known as ShinyHunters , who Google tracks under the codename UNC6040.

“In June, one of Google’s corporate Salesforce instances was impacted by similar UNC6040 activity described in this post. Google responded to the activity, performed an impact analysis and began mitigations,” the company said in an update.

The breach exposed contact information and related notes of small and medium-sized businesses, which Google described as “basic and largely publicly available business information, such as business names and contact details.” The data was accessed only briefly before Google shut down access.

The attackers used voice phishing (vishing) as a social engineering tactic to impersonate IT support staff, and trick employees into granting access. The attackers deceived victims into authorizing a fake version of Salesforce’s “Data Loader” application, which eventually allowed them to steal sensitive data.

In some cases, data was only partially exfiltrated before detection; in others, entire datasets were taken.

Google suspects ShinyHunters may now escalate the attacks by launching a public data leak site, putting even more pressure on victims. According to BleepingComputer , the group has conducted previous cyberattacks against major companies including Cisco and Adidas and Louis Vuitton.

BleepingComputer also notes that one company has already paid a ransom of 4 Bitcoins (around $400,000) to prevent data exposure.

The incident shows a troubling trend where criminal groups use phone scams to target support staff. Salesforce has become a preferred system for these breaches. Google and cybersecurity experts predict additional attacks will occur throughout the upcoming months so businesses need to remain vigilant.