Google is launching a way to quickly check if an image, video, audio file, or snippet of text has been created using one of its AI tools.
SynthID Detector, announced Tuesday at Google I/O 2025, is a verification portal that helps identify content made with Google's AI tools using the company's SynthID watermarking technology. Users can upload a file, and SynthID Detector determines whether the entire sample, or just a portion of it, was created with AI.
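To make the upload-and-check flow concrete, here is a minimal illustrative sketch. Google has not published a public API for SynthID Detector (the portal is a web upload tool), so the endpoint URL, request parameters, and response fields below are hypothetical, not Google's actual interface.

```python
import requests

# Hypothetical endpoint; SynthID Detector is a web portal, not a documented API.
PORTAL_URL = "https://example.com/synthid-detector/check"  # placeholder URL


def check_for_synthid(path: str) -> None:
    """Upload a media file and report whether a SynthID watermark was found."""
    with open(path, "rb") as f:
        resp = requests.post(PORTAL_URL, files={"file": f})
    resp.raise_for_status()
    result = resp.json()

    # Assumed response shape: a flag plus an optional note on whether the
    # whole file or only a region of it carries the watermark.
    if result.get("watermarked"):
        print(f"SynthID watermark detected in: {result.get('regions', 'entire file')}")
    else:
        print("No SynthID watermark detected.")


if __name__ == "__main__":
    check_for_synthid("sample_image.png")
```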
SynthID Detector's debut comes as the web fills with AI-generated media. According to one estimate, the number of deepfake videos alone skyrocketed by 550% between 2019 and 2024. And of the top 20 most-viewed posts on Facebook in the U.S. last fall, four were reportedly "evidently created by AI."

Of course, SynthID Detector has its limitations. It can only detect media created with tools that embed Google's SynthID watermark (mainly Google products). Microsoft has its own content watermarking technology, as do Meta and OpenAI.
SynthID itself isn't a perfect technique, either. Google admits that it can be circumvented, particularly with text.
On that first point, Google claims that its SynthID standard is already in use at scale. According to the tech giant, more than 10 billion pieces of media have been watermarked with SynthID since its release in 2023.