
Google DeepMind Launches Open-Source Watermark Tool to Help Detect AI-Generated Text
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Google DeepMind launched SynthID-Text, a free, open-source tool for watermarking AI-generated text
- SynthID technology can now detect AI-generated text, audio, video, and images
- The research was published in Nature with more technical details
Google DeepMind launched an open-source watermark tool called SynthID-Text this Wednesday to help detect AI-generated text. The tool is available to businesses and developers for free and works by embedding a watermark that is imperceptible to readers directly into text as it is generated, subtly altering the probabilities of the words the model chooses.
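DeepMind's production scheme uses a more elaborate sampling algorithm than can be shown here, but the basic idea of watermarking by nudging word probabilities can be sketched in a few lines. The toy below uses the simpler keyed "green-list" biasing approach from the academic watermarking literature, not SynthID's actual method; all names and values are illustrative:

```python
import hashlib
import math
import random

SECRET_KEY = "demo-key"  # illustrative; real schemes keep the watermarking key private

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly pick a keyed subset of the vocabulary based on context."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermarked_sample(probs: dict[str, float], prev_token: str, bias: float = 2.0) -> str:
    """Sample the next token after boosting the probability of 'green' tokens."""
    greens = green_set(prev_token, list(probs))
    weights = [(t, p * (bias if t in greens else 1.0)) for t, p in probs.items()]
    r = random.random() * sum(w for _, w in weights)
    acc = 0.0
    for token, w in weights:
        acc += w
        if r <= acc:
            return token
    return token  # floating-point fallback

def detection_z_score(tokens: list[str], vocab: list[str]) -> float:
    """How far the share of 'green' tokens deviates from the 50% chance level."""
    n = len(tokens) - 1
    hits = sum(tokens[i] in green_set(tokens[i - 1], vocab) for i in range(1, len(tokens)))
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)  # large => likely watermarked
```

A detector holding the same key computes the z-score over a passage: unwatermarked text hovers near zero, while watermarked text scores well above it.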
“Here we describe SynthID-Text, a production-ready text watermarking scheme that preserves text quality and enables high detection accuracy, with minimal latency overhead,” states the abstract of the research published in Nature. “To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems.”
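For readers unfamiliar with speculative sampling: a small, cheap "draft" model proposes the next tokens and the large model verifies them all in one pass, accepting or resampling each one. A minimal sketch of the standard accept/reject rule follows; it is illustrative only, since the paper's contribution is making the watermark survive this step:

```python
import random

def speculative_accept(draft_probs: dict[str, float],
                       target_probs: dict[str, float],
                       proposed: str) -> str:
    """Accept a drafted token with probability min(1, p_target / p_draft);
    otherwise resample from the renormalized residual max(p_target - p_draft, 0)."""
    if random.random() < min(1.0, target_probs[proposed] / draft_probs[proposed]):
        return proposed  # keep the cheap draft token; output still matches the big model
    residual = {t: max(target_probs[t] - draft_probs.get(t, 0.0), 0.0)
                for t in target_probs}
    r = random.random() * sum(residual.values())
    acc = 0.0
    for token, weight in residual.items():
        acc += weight
        if r <= acc:
            return token
    return token  # floating-point fallback
```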
According to MIT Technology Review, the tech giant’s AI research laboratory developed the SynthID technology to create multiple AI watermark tools that can now recognize AI-generated text, music, video, and images. Google DeepMind shared a video explaining how the technology works across multiple types of media.
“Here’s how SynthID watermarks AI-generated content across modalities. ↓ pic.twitter.com/CVxgP3bnt2” — Google DeepMind (@GoogleDeepMind), October 23, 2024
SynthID is available through Google’s Responsible Generative AI Toolkit, and the researchers are also working with Hugging Face, a collaborative platform for developers that hosts other open-source projects, such as LeRobot’s tutorial for building AI-powered robots at home, to make the tool available on that site as well.
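At the time of writing, Hugging Face’s transformers library (v4.46 and later) ships a SynthID Text integration. Assuming that interface, generating watermarked text looks roughly like this; the model name and key values below are placeholders:

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

MODEL = "google/gemma-2-2b-it"  # placeholder; any supported causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# The watermark is keyed: keep these integers private and reuse them at detection time.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # illustrative values
    ngram_len=5,
)

inputs = tokenizer("Write a short note about watermarking.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```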
“Now, other [generative] AI developers will be able to use this technology to help them detect whether text outputs have come from their own [large language models], making it easier for more developers to build AI responsibly,” Pushmeet Kohli, vice president of research at Google DeepMind, told MIT Technology Review.
SynthID has already been tested at scale in Google’s Gemini products, where millions of users were unable to tell watermarked and non-watermarked content apart. The researchers acknowledge that the watermark weakens when text is heavily edited or translated, but they remain optimistic that the tool can help combat misinformation and improve AI safety.
Multiple tech companies have announced AI-labeling strategies over the past few months. Meta announced a system in February to identify AI content across Instagram, Facebook, and Threads; Google began requiring users to label AI content in March; and TikTok added labels to AI-generated content in May.

Anthropic Releases New AI Models and a New Feature That Can Control PCs
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Anthropic launched an upgrade to its Claude 3.5 Sonnet
- The new “computer use” feature allows the AI model to control a computer
- The new model Claude 3.5 Haiku will be released by the end of the month
Anthropic yesterday announced the latest upgrade to its AI model Claude 3.5 Sonnet, a new model called Claude 3.5 Haiku, and a new feature that lets the AI use computers.
According to the information the startup shared on its website, the new Claude 3.5 Sonnet outperforms its previous version in multiple areas, such as coding, and includes a new capability called “computer use” that allows the model to control a PC as directed by users.
Claude 3.5 Haiku can perform as well as Claude 3 Opus, one of the company’s largest models, at a reduced cost. It is not available yet and will be released by the end of the month.
The new computer use feature is available in beta through the API, where developers can test it performing actions like moving a cursor, typing, clicking, and looking at the screen. “We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time,” the announcement states.
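Per Anthropic’s launch documentation, computer use runs as an agentic loop over the beta API: the model returns tool calls (take a screenshot, move the mouse, type text), the developer’s code executes them on a real or virtual machine, and the results are sent back. Below is a sketch of the initial request, assuming the beta identifiers Anthropic published at launch; treat the details as subject to change:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],  # opt-in flag for the beta
    tools=[
        {
            # Virtual display the model will "see" via screenshots
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user", "content": "Open the settings app and check for updates."}],
)

# The reply contains tool_use blocks (e.g. screenshot, mouse_move, left_click);
# the calling code must execute each action and return a tool_result to continue.
print(response.content)
```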