Google’s SynthID: An AI Tool That Detects AI Images to Minimize Deepfakes


SynthID AI Tool by Google

AI is a tool. Its impact on misinformation depends on how it is used. Even though AI can be exploited to spread false information, it can also be employed to detect and mitigate misinformation. 

Recently, Google announced the launch of its SynthID AI tool, which aims to detect and combat deepfakes. 

SynthID embeds a watermark directly into images created by Imagen, one of Google’s text-to-image generators. Even if you modify the image by adding filters or altering its colors, the AI-generated label remains intact. 

The tool can also scan incoming images and identify AI-generated ones by detecting the watermark. 
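Google has not disclosed how SynthID actually embeds its watermark (see below), so as a purely conceptual illustration, here is a toy least-significant-bit (LSB) watermark in Python with NumPy. Unlike SynthID, this naive scheme is *not* robust to filters or color changes; it only shows the general idea of hiding a detectable signal in pixel values. All names here are hypothetical.

```python
import numpy as np

# A 16-bit tag to hide in the image (toy stand-in for a real watermark signal).
WATERMARK_BITS = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide the tag in the least significant bits of the first pixels."""
    out = pixels.copy().ravel()
    n = WATERMARK_BITS.size
    # Clear each pixel's lowest bit, then write one tag bit into it.
    out[:n] = (out[:n] & 0xFE) | WATERMARK_BITS
    return out.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray) -> bool:
    """Check whether the tag is present in the least significant bits."""
    lsbs = pixels.ravel()[: WATERMARK_BITS.size] & 1
    return bool(np.array_equal(lsbs, WATERMARK_BITS))

# Usage: mark a random 8x8 grayscale "image" and detect the mark.
image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
marked = embed_watermark(image)
print(detect_watermark(marked))  # True
print(detect_watermark(image))   # almost certainly False for an unmarked image
```

Changing the LSB shifts each pixel value by at most 1, which is imperceptible to the eye; a production system like SynthID instead spreads the signal across the image so it survives edits.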

According to the post:

“Today, in partnership with Google Cloud, we’re launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images. This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification.” – SynthID AI

Google acknowledged that the tool is not perfect, but its internal testing found it remains accurate against many common image manipulations. 

If you are a Vertex AI customer, you can try the beta version. Google stated that the tool will continue to evolve and may expand into other Google products. 

Deepfakes and Altered Images

Deepfakes and altered images can be used to create convincing content that misrepresents reality. They can lead to the spread of false information, false narratives, and the manipulation of public opinion. 

Furthermore, deepfake technology can be used to manipulate images and videos in ways that infringe upon an individual’s privacy. This can involve superimposing faces onto explicit or compromising content, which can be used for blackmail or harassment. 

The prevalence of deepfakes and altered images also erodes trust in visual media. When people can’t trust the authenticity of what they see, it becomes more challenging to distinguish between genuine and manipulated content. 

Altered images can also be used for malicious purposes. These include creating fake videos of public figures making controversial statements, which can cause political and social unrest. 

They can also be used to impersonate individuals, which can have severe consequences in various contexts, including financial fraud and identity theft. 

To address these issues, various strategies are being developed. Technology companies, including Google, and researchers are working on tools to detect deepfakes and altered images. These tools can identify manipulated content and flag it for review. 

Google did not disclose how the tool creates its imperceptible watermarks. One likely reason is to prevent bad actors from learning how to bypass it. 

Although useful, the tool is not foolproof, especially when bad actors apply extreme image manipulations. Still, it offers an approach that empowers people to work with AI-generated content and use it responsibly. 

If you are an Imagen customer, you might want to try this out. The system can add watermarks to an image and identify images that carry the stamp.


Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves writing breaking technology news and top headlines about gadgets, content marketing and online entrepreneurship and all things about social media. She also has a slight addiction to pizza and coffee.
