Google Sues Scammers Over Bard AI



It is unfortunate to know that generative AI is being exploited by scammers. Like any technology, generative AI can be used for both positive and negative purposes. 

Some scammers have exploited the hype around the technology, using Google Bard's name to trick people into downloading malware. Google has filed a lawsuit against several individuals believed to be based in Vietnam.

The scammers set up social media pages and ran ads urging users to download Google Bard. Anyone who downloads it actually installs malware that steals their social media credentials. 

“We are seeking an order to stop the scammers from setting up domains like these and allow us to have them disabled with U.S. domain registrars. If this is successful, it will serve as a deterrent and provide a clear mechanism for preventing similar scams in the future.” – Google

These scammers are not affiliated with Google, but they pretend to be. What's more, they used Google trademarks to entice unsuspecting victims, and they promoted posts on Facebook to distribute the malware. 

In their posts, they imply that Bard is a paid service, when it is actually free at bard.google.com. 

This is just one of the ways generative AI can be weaponized. It is a concern that has been discussed in the field of AI ethics and security. 

Ways Generative AI Can Be Weaponized

Other ways include deepfake videos and audio. Generative AI can be used to create highly realistic deepfake videos and audio recordings. This technology can be misused to manipulate public opinion, spread false information, or create convincing impersonations of individuals. 

Scammers can also employ generative AI to generate fake text, images, and other content that appears legitimate. It can be used to create and spread misinformation, fake news, or propaganda. 

Furthermore, this technology can be used to generate highly convincing phishing emails or messages. They may appear to come from legitimate sources, thereby increasing the likelihood that individuals will fall victim to scams; one simple warning sign to check is sketched below. 

AI-powered chatbots can also be programmed for malicious purposes, engaging with users to extract sensitive information, spread malware, or carry out other malicious activities. 
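
To make the phishing point a bit more concrete, here is a minimal Python sketch of one warning sign a filter or a careful reader can check: a sender whose display name claims to be a brand while the actual address is not on that brand's domain. The function name and the example addresses are hypothetical, and real phishing detection relies on many more signals than this.

```python
from email.utils import parseaddr

def looks_like_spoofed_sender(from_header: str, claimed_brand: str, official_domain: str) -> bool:
    """Flag senders whose display name mentions a brand but whose address is not on that brand's domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    mentions_brand = claimed_brand.lower() in display_name.lower()
    on_official_domain = domain == official_domain or domain.endswith("." + official_domain)
    return mentions_brand and not on_official_domain

# A message claiming to be from the "Google Bard Team" but sent from an
# unrelated domain gets flagged; a real google.com sender does not.
print(looks_like_spoofed_sender('"Google Bard Team" <support@bard-updates.net>', "Google", "google.com"))  # True
print(looks_like_spoofed_sender('"Google" <no-reply@accounts.google.com>', "Google", "google.com"))        # False
```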

How to Stay Safe? 

Staying safe from scams related to generative AI involves a combination of awareness, vigilance, and adopting good cybersecurity practices. 

You should stay informed. Keep yourself updated on the latest developments in generative AI and the potential risks associated with it, as well as on common tactics used by scammers, such as deepfake videos, fake content, and phishing attacks. 

Verify the authenticity of information before trusting or sharing it. Use reputable sources to cross-check facts and news. Be skeptical of unsolicited messages, especially those that seem too good to be true or are designed to evoke a strong emotional response. 

You should also be cautious with information from unknown or unverified sources. Use official channels to communicate with organizations or individuals. Avoid clicking on links or downloading attachments from unfamiliar sources. 
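
As a concrete illustration of that last point, the Python sketch below checks whether a link's hostname actually belongs to an official domain before you trust it. The small allowlist is a hypothetical example tied to this story (Bard lives at bard.google.com); in practice you would substitute the services you actually use.

```python
from urllib.parse import urlparse

# Hypothetical allowlist for this example; swap in the services you actually use.
OFFICIAL_DOMAINS = {"google.com"}

def is_official_link(url: str) -> bool:
    """Return True only if the URL's hostname is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# The real Bard address passes; a lookalike "download" domain does not.
print(is_official_link("https://bard.google.com"))           # True
print(is_official_link("https://bard-google-download.com"))  # False
```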

By adopting good cybersecurity practices, you can reduce the risk of falling victim to scams related to generative AI. 



Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves writing breaking technology news and top headlines about gadgets, content marketing, online entrepreneurship, and all things social media. She also has a slight addiction to pizza and coffee.
