AI in the Wild: Malicious Applications of Mainstream AI Tools


It’s not all funny limericks, bizarre portraits, and hilarious viral skits. ChatGPT, Bard, DALL-E, Craiyon, Voice.ai, and a whole host of other mainstream artificial intelligence tools are great for whiling away an afternoon or helping you with your latest school or work assignment; however, cybercriminals are bending AI tools like these to aid in their schemes, adding a whole new dimension to phishing, vishing, malware, and social engineering.  

Here are some recent reports of AI’s use in scams, along with a few pointers that might tip you off if one of these schemes ever targets you. 

1. AI Voice Scams

Vishing – or phishing over the phone – is not a new scheme; however, AI voice mimickers are making these scam calls more believable than ever. In Arizona, a fake kidnapping call caused several minutes of panic for one family when a mother received a ransom demand for the release of her allegedly kidnapped daughter. On the phone, the mother heard a voice that sounded exactly like her child’s, but it turned out to be an AI-generated facsimile. 

In reality, the daughter was not kidnapped. She was safe and sound. The family didn’t lose any money because they did the right thing: They contacted law enforcement and kept the scammer on the phone while they located the daughter.1 

Imposter scams accounted for $2.6 billion in losses in the U.S. in 2022, and emerging AI scams could push that staggering total even higher. Globally, about 25% of people have either experienced an AI voice scam or know someone who has, according to McAfee’s Beware the Artificial Imposter report. The study also found that 77% of voice scam targets lost money as a result. 

How to hear the difference 

No doubt about it, it’s frightening to hear a loved one in distress, but try to stay as calm as possible if you receive a phone call claiming to be from someone in trouble. Do your best to really listen to the “voice” of your loved one. AI voice technology is incredible, but it still has some kinks. For example, does the voice have unnatural hitches? Do words cut off just a little too early? Does the tone of certain words not quite match your loved one’s accent? Picking up on these small details takes a level head. 

One thing your family can do today to avoid falling for an AI vishing scam is to agree on a family password. This can be an obscure word or phrase that is meaningful to you. Keep this password to yourselves and never post about it on social media. That way, if a scammer ever calls claiming to have or to be a family member, this password can help you tell a fake emergency from a real one. 

2. Deepfake Ransom and Fake Advertisements

A deepfake – the digital manipulation of an authentic image, video, or audio clip – is an AI capability that unsettles a lot of people. It challenges the long-held axiom that “seeing is believing.” If you can’t quite believe what you see, then what’s real? What’s not? 

The FBI is warning the public about a new scheme in which cybercriminals fabricate explicit footage of innocent people and then blackmail them into sending money or gift cards in exchange for not posting the compromising content.2 

Deepfake technology was also at the center of an incident involving a fake ad. A scammer created a fake ad depicting Martin Lewis, a trusted finance expert, advocating for an investment venture. The Facebook ad attempted to add legitimacy to its nefarious endeavor by including the deepfaked Lewis.3  

How to respond to ransom demands and questionable online ads 

No response is the best response to a ransom demand. You’re dealing with a criminal. Who’s to say they won’t release the fake content even if you give in to the ransom? Involve law enforcement as soon as a scammer approaches you, and they can help you resolve the issue. 

Just because a reputable social media platform hosts an advertisement doesn’t mean that the advertiser is a legitimate business. Before buying anything or investing your money with a business you found through an advertisement, conduct your own background research on the company. All it takes is five minutes to look up its Better Business Bureau rating and other online reviews to determine if the company is reputable. 

To identify a deepfake video or image, check for inconsistent shadows and lighting, distorted faces, and oddly rendered hands. That’s where you’ll most likely spot small details that aren’t quite right. Like AI voices, deepfake technology is often convincing, but it’s not perfect. 

3. AI-generated Malware and Phishing Emails

Content generation tools have safeguards in place to prevent them from creating text that could be used illegally; however, some cybercriminals have found ways around those rules and are using ChatGPT and Bard to assist in their malware and phishing operations. For example, if a criminal asked ChatGPT to write keylogging malware, it would refuse. But if they rephrased the request and asked it to compose code that captures keystrokes, it might comply. One researcher demonstrated that even someone with little coding knowledge could use ChatGPT this way, making malware creation simpler and more accessible than ever.4 Similarly, AI text generation tools can create convincing phishing emails, and create them quickly. In theory, this could speed up a phisher’s operation and widen their reach. 

How to avoid AI-written malware and phishing attempts 

You can avoid AI-generated malware and phishing messages the same way you deal with the human-written variety: Be careful and distrust anything that seems suspicious. To steer clear of malware, stick to websites you know you can trust. A safe browsing tool like McAfee web protection – which is included in McAfee+ – can help keep you off sketchy websites. 

As for phishing, be on alert when you see emails or texts that demand a quick response or seem out of the ordinary. Traditional phishing messages are usually riddled with typos, misspellings, and poor grammar, but AI-written lures are often polished and rarely contain errors. That means you must be diligent in vetting every message in your inbox. 

Slow Down, Keep Calm, and Be Confident 

While the debate about regulating AI heats up, the best thing you can do is to use AI responsibly. Be transparent when you use it. And if you suspect you’re encountering a malicious use of AI, slow down and try your best to evaluate the situation with a clear mind. AI can create some convincing content, but trust your instincts and follow the above best practices to keep your money and personal information out of the hands of cybercriminals. 

1 CNN, “‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping” 

2 NBC News, “FBI warns about deepfake porn scams” 

3 BBC, “Martin Lewis felt ‘sick’ seeing deepfake scam ad on Facebook” 

4 Dark Reading, “Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware” 

