Google Launches SynthID Text Tool for Identifying AI-Generated Content
Google has announced a new tool called SynthID Text, designed to distinguish AI-generated content from human-written text. The company has also released an open-source version of the tool.
SynthID Text is part of a broader suite of “watermarking” tools tailored to the output of generative AI. Watermarking is intended to help curb the spread of misinformation and deception by AI-powered chatbots, as well as cheating in schools and workplaces.
In a statement, Pushmeet Kohli, Vice President of Research at Google DeepMind, said: “While SynthID is not a magical solution for identifying AI-generated content, it represents a foundational building block in developing more reliable identification tools.”
Researcher Scott Aaronson of the University of Texas at Austin, who previously worked on AI safety at OpenAI, expressed optimism, saying: “I hope that other large language model companies, such as OpenAI and Anthropic, follow DeepMind’s approach in this field.”
It is worth noting that Google unveiled a watermark for images last year and recently introduced one for AI-generated video. In May, the company announced the integration of SynthID into its Gemini chatbot, and the tool is now available for free on Hugging Face, an open repository for AI models and datasets.
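For readers who want to try the open-source release, recent versions of the Hugging Face Transformers library expose SynthID Text as a watermarking configuration passed to text generation. The sketch below is illustrative only: it assumes Transformers 4.46 or later, and the model name and watermarking key values are placeholders, not recommended settings.

```python
# Minimal sketch: generating watermarked text with the open-source SynthID Text
# support in Hugging Face Transformers (assumes transformers >= 4.46).
# The model name and the watermarking keys below are illustrative placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b-it"  # any causal LM that supports generate()
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The keys seed the pseudorandom functions that watermark the sampling step;
# in a real deployment they would be kept private by the operator.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # length of the token context used to seed the watermark
)

inputs = tokenizer("Write a short note about watermarking.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,  # the watermark is applied during sampling
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```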
The company published a research paper in the journal Nature showing that SynthID generally outperformed other AI text watermarking techniques. The study evaluated how easily watermarked responses from different AI models could be detected.
The SynthID tool embeds an invisible watermark directly into text as it is generated by an AI model. The Google DeepMind team showed that applying the SynthID watermark does not affect the quality, accuracy, creativity, or speed of the generated text, a conclusion drawn from a large-scale experiment in which the watermark was applied to Gemini outputs for millions of users.
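To make the mechanism concrete, the sketch below illustrates the general family of sampling-based text watermarks that SynthID Text belongs to: a secret key and the recent context deterministically score candidate tokens, generation is nudged toward higher-scoring tokens, and a detector checks whether a passage scores higher than chance. This is a toy illustration, not DeepMind's actual algorithm (the Nature paper describes a more sophisticated tournament sampling scheme); the key, scoring function, and threshold here are invented purely for demonstration.

```python
# Toy illustration of a sampling-based text watermark, NOT SynthID itself.
# Idea: a secret key plus the recent context deterministically scores each
# token; a watermarking generator prefers high-scoring tokens, so a detector
# can flag text whose average score is higher than chance.
import hashlib
import random
from typing import List

SECRET_KEY = "example-key"  # illustrative; real systems keep the key private


def g_score(context: List[int], token_id: int, key: str = SECRET_KEY) -> float:
    """Keyed pseudorandom score in [0, 1) for a token given its recent context."""
    payload = f"{key}|{context[-4:]}|{token_id}".encode()
    digest = hashlib.sha256(payload).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def looks_watermarked(token_ids: List[int], threshold: float = 0.57) -> bool:
    """Flag text whose mean token score exceeds the ~0.5 expected by chance."""
    scores = [
        g_score(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))
    ]
    return sum(scores) / len(scores) > threshold


if __name__ == "__main__":
    # Unwatermarked tokens should average about 0.5; a watermarking generator
    # would have preferred tokens with higher g_score, pushing the average up.
    random.seed(0)
    plain = [random.randrange(32_000) for _ in range(200)]
    print("looks watermarked:", looks_watermarked(plain))  # expected: False
```

Because the scoring only nudges which tokens are sampled rather than altering the text afterwards, this style of watermark is invisible to readers, which is consistent with DeepMind's finding that watermarking did not degrade the quality or speed of Gemini's outputs.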