Malicious Google Calendar Invites Can Make ChatGPT Leak Your Emails - 1

Image by Gaining Visuals, from Unsplash

Malicious Google Calendar Invites Can Make ChatGPT Leak Your Emails

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

A security researcher reported how a fake Google Calendar invitation can steal private email content from ChatGPT when Gmail connectors are enabled.

In a rush? Here are the quick facts:

  • Attack works if Gmail and Calendar connectors are enabled in ChatGPT.
  • Automatic Google connectors allow ChatGPT to access data without explicit prompts.
  • Indirect prompt injection hides malicious instructions inside calendar event text.

Eito Miyamura explained the attack method on X, showing how hackers use calendar invites with hidden instructions, then waits for the victim to ask ChatGPT to perform a task, as first reported by Tom’s Hardware .

The attacker embeds malicious commands in the event, which ChatGPT then executes automatically following the malicious instructions. “All you need? The victim’s email address,” Miyamura claims.

We got ChatGPT to leak your private email data 💀💀 All you need? The victim’s email address. ⛓️💥🚩📧 On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2 — Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025

Tom’s Hardware notes that in mid-August, OpenAI added native Gmail, Google Calendar, and Google Contacts connectors to ChatGPT. After granting permission, the assistant has automatic access to the users’ Google account data. This means even a casual question like “What’s on my calendar today?” can access your calendar.

The help center of OpenAI explains that these connectors activate automatic data access only when enabled, though you can turn it off in settings to select sources manually.

Tom’s Hardware explains that the Model Context Protocol enables developers to create custom connectors, however OpenAI does not monitor these connections. Miyamura highlights this point as this attack depends on a new overall ecosystem.

The attack method, called indirect prompt injection, conceals harmful commands inside authorized data access points, which in this case are text embedded in calendar events. Similar attacks were reported in August, showing how compromised invites could steer Google’s Gemini AI and even control smart-home devices .

The system remains inactive unless Gmail and Calendar services are linked inside ChatGPT. Users who want to minimize risks should disconnect their sources and turn off automatic data access.

Experts advise changing Google Calendar’s “Automatically add invitations” setting so only invites from known contacts appear and hiding declined events.

AI in Healthcare: New Stanford Benchmark Measures Real-World Performance - 2

Image by Irwan, from Unsplash

AI in Healthcare: New Stanford Benchmark Measures Real-World Performance

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Stanford researchers conducted virtual EHR tests of AI agents, which report how models like Claude 3.5 can assist doctors with routine healthcare tasks.

In a rush? Here are the quick facts:

  • AI agents can perform tasks like ordering tests and prescribing medications.
  • Claude 3.5 Sonnet v2 achieved the highest success rate at 70%.
  • Many AI models struggled with complex workflows and system interoperability.

Stanford researchers are setting new evaluation criteria to determine whether AI systems’ are able to perform real-world medical tasks. While AI demonstrated potential for medical applications in various fields, experts warn it still needs further testing.

“Working on this project convinced me that AI won’t replace doctors anytime soon,” said Kameron Black, co-author and Clinical Informatics Fellow at Stanford Health Care.

In order to investigate this, the team developed MedAgentBench , a virtual electronic health record (EHR) system, built to assess how AI agents performed medical procedures that doctors do on a daily basis.

It is important to note that unlike chatbots, AI agents can act autonomously, handling complex, multistep tasks using patient data, ordering tests, and prescribing medications.

“Chatbots say things. AI agents can do things,” said Jonathan Chen, associate professor of medicine and biomedical data science and senior author. “This means they could theoretically directly retrieve patient information from the electronic medical record, reason about that information, and take action by directly entering in orders for tests and medications. This is a much higher bar for autonomy in the high-stakes world of medical care. We need a benchmark to establish the current state of AI capability on reproducible tasks that we can optimize toward,” Chen added.

In order to test the virtual system, the researchers gained data from 100 patient profiles, which accumulated 785,000 records. Secondly, about a dozen large language models (LLMs) were tested on 300 clinical tasks.

The results showed that the Claude 3.5 Sonnet v2 model achieved a 70% success rate as the top-performing model, however many models failed to handle complex workflows, as well as system integration processes.

“We hope this benchmark can help model developers track progress and further advance agent capabilities,” said Yixing Jiang, PhD student and co-author.

The experts predict that AI agents will take over basic clinical administrative work, hopefully decreasing physician burnout without fully replacing human doctors from practice.

“I’m passionate about finding solutions to clinician burnout,” Black said. “I hope that by working on agentic AI applications in healthcare that augment our workforce, we can help offload burden from clinicians and divert this impending crisis,” Black added.