Google’s SynthID text watermarking technology, a tool the company created to make AI-generated text easier to identify, is now available as open source through the Google Responsible Generative AI Toolkit, the company announced on X.
“Now, other [generative] AI developers will be able to use this technology to help them detect whether text outputs have come from their own [large language models], making it easier for more developers to build AI responsibly,” Pushmeet Kohli, the vice president of research at Google DeepMind, told MIT Technology Review.
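The article does not show how developers would actually apply the watermark, but SynthID Text ships with an integration in the Hugging Face Transformers library. The sketch below assumes that integration; the model name, prompt, and key values are illustrative placeholders, and a real deployment would use its own private watermarking keys.

```python
# Minimal sketch of watermarked generation with SynthID Text via Hugging Face
# Transformers (assumes transformers >= 4.46). Model, prompt, and keys are
# illustrative assumptions, not values from the article.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# The watermarking keys act as a private seed for the watermark; in practice a
# developer keeps these secret and reuses them so outputs remain detectable.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # placeholder keys
    ngram_len=5,
)

prompts = tokenizer(["Write a short note about watermarking."], return_tensors="pt", padding=True)

# Passing the config to generate() biases token sampling to embed the watermark.
output_sequences = model.generate(
    **prompts,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=64,
)

watermarked_text = tokenizer.batch_decode(output_sequences, skip_special_tokens=True)
print(watermarked_text[0])
```

Detection works the other way around: a classifier trained against the same keys scores candidate text for the statistical signature the watermark leaves in the token choices, which is why each developer can only reliably detect output from their own models.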
Watermarks have become increasingly important tools as large language models are used to spread political misinformation, generate nonconsensual sexual content, and serve other malicious purposes.