While AI deepfakes and malware understandably grab the headlines, it's easy to overlook that AI also works on your side, protecting you from fraud and malware as well.
For some time now, we’ve kept our eye on AI here at McAfee. Particularly as scammers cook up fresh gluts of AI-driven hustles. And there are plenty of them.
We’ve uncovered how scammers need only a few seconds of a voice recording to clone it using AI—which has led to all manner of imposter scams. We also showed how scammers can use AI writing tools to power their chats in romance scams, to the extent of writing love poems with AI. Recently, we shared word of fake news sites packed with bogus articles generated almost entirely with AI. AI-generated videos even played a role in a scam for “Barbie” movie tickets.
Law enforcement, government agencies, and other regulatory bodies have taken note. In April, the U.S. Federal Trade Commission (FTC) warned consumers that AI now “turbocharges” fraud online. The commission cited a proliferation of AI tools that can generate convincing text, images, audio, and video.
While not typically malicious in and of themselves, scammers twist these technologies to bilk victims out of their money and personal information. Likewise, just as legitimate application developers use AI to create code, hackers use AI to create malware.
There’s no question that all these AI-driven scams mark a major change in the way we stay safe online. Yet you have a powerful ally on your side. It’s AI, as well. And it’s out there, spotting scams and malware. In fact, you’ll find it in our online protection software. We’ve put AI to work on your behalf for some time now.
With a closer look at how AI works on your side, along with several steps that can help you spot AI fakery, you can stay safer out there. Despite the best efforts of scammers, hackers, and their AI tools.
AI in the battle against AI-driven fraud and malware.
One way to think about online protection is this: it’s a battle to keep you safe. Hackers employ new forms of attack that try to work around existing protections. Meanwhile, security professionals create technological advances that counter these attacks and proactively prevent them—which hackers try to work around once again. And on it goes. As technology evolves, so does this battle. And the advent of AI marks a decidedly new era in the struggle.
As a result, security professionals also employ AI to protect people from AI-driven attacks.
Companies now check facial scans for skin texture and translucency to determine if someone is using a mask to trick facial recognition ID. Banks employ other tools to detect suspicious mouse movements and transaction details. Additionally, developers scan their code with AI tools to detect vulnerabilities that might lurk deep in their apps—in places that would take human teams hundreds, if not thousands, of staff hours to find, if they could find them at all. Code can get quite complex.
For us, we’ve used AI in our online protection for years now. McAfee has used AI for evaluating events, files, and website characteristics. We have further used AI for detection, which has proven highly effective against entirely new forms of attack.
We’ve also used these technologies to catalog websites, identifying those that host malicious files or phishing operations. Moreover, cataloging has helped us shape our parental control features so that we can block content based on customer preferences with high accuracy.
And we continue to evolve it so that it detects threats even faster and more accurately than before. Taken together, AI-driven protection like ours quashes threats in three ways:
It detects suspicious events and behaviors. AI provides a particularly powerful tool against entirely new threats (also known as zero-day threats). By analyzing the behavior of files for patterns that are consistent with malware behavior, it can prevent a previously unknown file or process from doing harm.
It further detects threats by referencing known malware signatures and behaviors, which combats zero-day and pre-existing threats alike. AI can spot a zero-day threat by comparing it to the malware fingerprints and behaviors it has learned, and that same learning helps it quickly recognize pre-existing threats as well.
It automatically classifies threats and adds them to the body of threat intelligence. AI-driven threat protection gets stronger over time. The more threats it encounters, the more rapidly and readily it can determine if files are malicious or benign. Furthermore, AI automatically classifies threats at a speed and scale unmatched by traditional processes. The body of threat intelligence improves immensely as a result.
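The three mechanisms above can be sketched in miniature. The hash database, behavior names, weights, and threshold below are invented for illustration; a real detection engine uses trained models over far richer telemetry, not a hand-tuned scoring table:

```python
import hashlib

# Illustrative only: toy signature set and behavior weights.
KNOWN_MALWARE_HASHES: set[str] = set()  # SHA-256 digests of classified samples

# Hypothetical behaviors weighted by how strongly they suggest malware.
SUSPICIOUS_BEHAVIORS = {
    "encrypts_user_files": 0.6,
    "contacts_unknown_server": 0.3,
    "disables_security_tools": 0.5,
    "reads_address_book": 0.2,
}

def classify(file_bytes: bytes, observed_behaviors: set[str],
             threshold: float = 0.5) -> str:
    """Return 'malicious' or 'benign' for a file sample."""
    digest = hashlib.sha256(file_bytes).hexdigest()

    # 1) Signature check catches pre-existing threats instantly.
    if digest in KNOWN_MALWARE_HASHES:
        return "malicious"

    # 2) Behavioral scoring catches zero-day threats the
    #    signature database has never seen.
    score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0.0) for b in observed_behaviors)
    if score >= threshold:
        # 3) Feed the verdict back into threat intelligence so the
        #    next encounter is caught by the fast signature path.
        KNOWN_MALWARE_HASHES.add(digest)
        return "malicious"

    return "benign"
```

The feedback step in (3) is what makes this kind of protection grow stronger over time: a threat first caught by slow behavioral analysis is caught instantly by signature lookup on every later encounter.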
What does AI-driven protection look like for you? It can identify malicious websites before you can connect to them. It can prevent new forms of ransomware from encrypting your photos and files. And it can keep spyware from stealing your personal information by spotting apps that would send it to a bad actor’s command-and-control server.
As a result, you get faster and more comprehensive protection with AI that works in conjunction with online protection software—and our security professionals develop them both.
Protect yourself from AI voice clone attacks.
Yet, as with any kind of scam, spotting an AI-driven hustle can take more than technology. It calls for a critical eye on the content you come across. You can spot AI fakery with your eyes, your ears, and even your gut.
Take AI voice clone attacks, for example. You can protect yourself from them by taking the following steps:
Set a verbal codeword with kids, family members, or trusted close friends. Make sure it’s one only you and those closest to you know. (Banks and alarm companies often set up accounts with a codeword in the same way to ensure that you’re really you when you speak with them.) Ensure everyone knows and uses it in messages when they ask for help.
Always question the source. In addition to voice cloning tools, scammers have other tools that can spoof phone numbers so that they look legitimate. Even if it’s a voicemail or text from a number you recognize, stop, pause, and think. Does that really sound like the person you think it is? Hang up and call the person directly or try to verify the information before responding.
Think before you click and share. Who is in your social media network? How well do you really know and trust them? The wider your connections, the more risk you might be opening yourself up to when sharing content about yourself. Be thoughtful about the friends and connections you have online and set your profiles to “friends and families” only so that they aren’t available to the greater public.
Protect your identity. Identity monitoring services can notify you if your personal information makes its way to the dark web and provide guidance for protective measures. This can help shut down other ways that a scammer can attempt to pose as you.
Clear your name from data broker sites. How’d that scammer get your phone number anyway? Chances are, they pulled that information off a data broker site. Data brokers buy, collect, and sell detailed personal information, which they compile from several public and private sources, such as local, state, and federal records, in addition to third parties. Our Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info.
Three ways to spot AI-generated fakes.
As AI continues its evolution, it gets trickier and trickier to spot it in images, video, and audio. Advances in AI give images a clarity and crispness that they didn’t have before, deepfake videos play more smoothly, and voice cloning gets uncannily accurate.
Yet even with the best AI, scammers often leave their fingerprints all over the fake news content they create. Look for the following:
1) Consider the context
AI fakes usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Look to see if the text even makes sense. And as with legitimate news articles, does it include identifying information, such as the date, time, and place of publication, along with the author’s name?
2) Evaluate the claim
Does the image seem too bizarre to be real? Too good to be true? Today, “Don’t believe everything you read on the internet” extends to “Don’t believe everything you see on the internet.” If a story makes a remarkable claim, search for the headline elsewhere. If it’s truly noteworthy, other known and reputable sites will report on the event—and they’ll have done their own fact-checking.
3) Check for distortions
The bulk of AI technology still renders fingers and hands poorly. It often creates eyes that might have a soulless or dead look to them — or that show irregularities between them. Also, shadows might appear in places where they look unnatural. Further, the skin tone might look uneven. In deepfaked videos, the voice and facial expressions might not exactly line up, making the subject look robotic and stiff.
AI is on your side in this new era of online protection.
The battle between hackers and the people behind online protection continues. And while the introduction of AI has unleashed all manner of new attacks, the pattern prevails. Hackers and security professionals tap into the same technologies and continually up the game against each other.
Understandably, AI conjures questions, uncertainty, and, arguably, fear. Yet you can rest assured that, behind the headlines of AI threats, security professionals use AI technology for protection. For good.
Yet an online scam remains an online scam. Many times, it takes common sense and a sharp eye to spot a hustle when you see one. If anything, that remains one instance where humans still have a leg up on AI. Humans have gut instincts. They can sense when something looks, feels, or sounds …off. Rely on that instinct. And give yourself time to let it speak to you. In a time of AI-driven fakery, it still stands as an excellent first line of defense.
The post Artificial Intelligence and Winning the Battle Against Deepfakes and Malware appeared first on McAfee Blog.