Google’s invisible AI watermark will help identify generative text and video


Google’s SynthID watermarking tool can already mark and detect AI-generated images and audio, and now it’s expanding to text and video.


Illustration by Haein Jeong / The Verge

Alongside the swath of new AI models and tools it announced today, Google is also expanding its AI content watermarking and detection technology to work across two new mediums.

Google DeepMind CEO Demis Hassabis took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team’s new AI tools, like the Veo video generator, but also about an upgraded version of the SynthID watermarking system, which can now mark AI-generated video as well as AI-generated text.

Watermarking AI-generated content will only grow in importance as the technology becomes more prevalent, especially when AI is used for malicious purposes. It has already been used to spread political misinformation, to claim someone said something they never did, and to create nonconsensual sexual content.

SynthID, announced last August, started as a tool that imprints AI-generated imagery with a watermark humans can’t see but the system can detect. The approach differs from other proposed watermarking standards, like C2PA, which attach cryptographic metadata to AI-generated content.

Google has also enabled SynthID to inject inaudible watermarks into AI-generated music made with DeepMind’s Lyria model. SynthID is just one of several AI safeguards in development to combat misuse of the technology, safeguards that the Biden administration is directing federal agencies to build guidelines around.

