The tool, named SynthID, will embed changes to individual pixels in images, creating a watermark that can be identified by computers but remains invisible to the human eye. 

Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation”.

The beta version of SynthID is currently available for select users of Vertex AI (Google’s platform for building AI apps and models) and can only be applied to Imagen, Google’s AI image generator.

“While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information – both intentionally or unintentionally,” DeepMind writes in a blog post.

“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”

The company has explained that the tool relies on two separate algorithms trained together: one to embed the imperceptible watermark in an image’s pixels and one to identify it, even after the image has been edited or compressed.
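DeepMind has not published how those models actually work, but the general shape of the system – a paired embedder and detector – can be sketched with a far simpler, decades-old technique, least-significant-bit watermarking. The toy Python below is purely illustrative: the function names are hypothetical and the LSB approach bears no relation to SynthID’s trained, tamper-resistant models; it only shows how a signal can hide in pixel values without being visible.

```python
import numpy as np

# Toy least-significant-bit (LSB) watermark: an embedder that nudges each pixel
# by at most 1, and a detector that reads the hidden bits back out.
# This is NOT SynthID's method, only an illustration of the embed/detect split.

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit in the least significant bit of each of the first len(bits) pixels."""
    flat = image.flatten()                      # flatten() returns a copy, original untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden bits."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in 8-bit image
payload = rng.integers(0, 2, size=32, dtype=np.uint8)       # 32-bit watermark

marked = embed_watermark(img, payload)
assert np.array_equal(detect_watermark(marked, 32), payload)    # watermark is detectable
assert np.abs(marked.astype(int) - img.astype(int)).max() <= 1  # pixels change by at most 1
```

A scheme this simple would not survive cropping, filters or compression; that robustness to ordinary editing is precisely what DeepMind says its trained models are designed to provide.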