How to Spot AI Audio Deepfakes at Election Time

We’ve said it several times in our blogs — it’s tough knowing what’s real and what’s fake out there. And that’s absolutely the case with AI audio deepfakes online. 

Bad actors of all stripes have found out just how easy, inexpensive, and downright uncanny AI audio deepfakes can be. With only a few minutes of original audio, seconds even, they can cook up phony audio that sounds like the genuine article — and wreak all kinds of havoc with it. 

A few high-profile cases in point, each politically motivated in an election year where the world will see more than 60 national elections: 

In January, thousands of U.S. voters in New Hampshire received an AI robocall that impersonated President Joe Biden, urging them not to vote in the primary.
In the UK, more than 100 deepfake social media ads impersonated Prime Minister Rishi Sunak on the Meta platform last December.[i]
Similarly, the 2023 parliamentary elections in Slovakia spawned deepfake audio clips that featured false proposals for rigging votes and raising the price of beer.[ii]

Yet deepfakes have targeted more than election candidates. Other public figures have found themselves attacked as well. One example comes from Baltimore County in Maryland, where a high school principal has allegedly fallen victim to a deepfake attack.  

It involves an offensive audio clip resembling the principal’s voice that was posted on social media, where news of it spread rapidly. The school’s union has since stated that the clip was an AI deepfake, and an investigation is ongoing.[iii] In the wake of the attack, at least one expert in the field of AI deepfakes said that the clip is likely a deepfake, citing “distinct signs of digital splicing; this may be the result of several individual clips being synthesized separately and then combined.”[iv]

And right there is the issue: it takes expert analysis to determine conclusively whether an audio clip is an AI deepfake.

What makes audio deepfakes so hard to spot?  

Audio deepfakes give off far fewer clues than the relatively easier-to-spot video deepfakes out there. Currently, video deepfakes typically betray themselves with several telltale signs, like poorly rendered hands and fingers, off-kilter lighting and reflections, a deadness in the eyes, and poor lip-syncing. Audio deepfakes suffer none of those issues, which indeed makes them tough to spot.

The implications of AI audio deepfakes online present themselves rather quickly. At a time when general awareness of AI audio deepfakes lags behind the availability and low cost of deepfake tools, people are more prone to believe an audio clip is real. Until “at home” AI detection tools become available to everyday people, skepticism is called for.

Just as “seeing isn’t always believing” on the internet, “hearing isn’t always believing” holds true as well.

How to spot audio deepfakes. 

The people behind these attacks have an aim in mind. Whether it’s to spread disinformation, ruin a person’s reputation, or conduct some manner of scam, audio deepfakes look to do harm. In fact, that intent to harm is one of the signs of an audio deepfake, among several others. 

Listen to what’s actually being said. In many cases, bad actors create AI audio deepfakes designed to build strife, deepen divisions, or push outrageous lies. It’s an age-old tactic. By playing on people’s emotions, they ensure that people will spread the message in the heat of the moment. Is a political candidate asking you not to vote? Is a well-known public figure “caught” uttering malicious speech? Is Taylor Swift offering you free cookware? While not an outright sign of an AI audio deepfake alone, it’s certainly a sign that you should verify the source before drawing any quick conclusions. And certainly before sharing the clip. 

Think of the person speaking. If you’ve heard them speak before, does this sound like them? Specifically, does their pattern of speech ring true, or does it pause in places it typically doesn’t … or speak more quickly or slowly than usual? AI audio deepfakes might not always capture these nuances.

Listen to their language. What kind of words are they saying? Are they using vocabulary and turns of phrase they usually don’t? An AI can duplicate a person’s voice, yet it can’t duplicate their style. A bad actor still must write the “script” for the deepfake, and the phrasing they use might not sound like the target. 

Keep an ear out for edits. Some deepfakes stitch audio together, since AI audio tools tend to work better with several short clips than with one long script. Once again, this can introduce pauses that sound off in some way and ultimately affect the way the target of the deepfake sounds.

Is the person breathing? Another marker of a possible fake is when the speaker doesn’t appear to breathe. AI tools don’t always account for this natural part of speech. It’s subtle, but once you know to listen for it, you’ll notice when a person never pauses for breath.
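
For the technically inclined, these last two cues can even be roughed out in code. Below is a minimal sketch, assuming the open-source librosa audio library and a local file; the decibel threshold and 15-second limit are illustrative assumptions, not calibrated forensic values, and a real determination still takes expert analysis, as noted above.

import librosa
import numpy as np

def longest_unbroken_speech(path, quiet_db=-40.0, max_ok_seconds=15.0):
    # Load the clip and measure loudness frame by frame.
    y, sr = librosa.load(path, sr=16000, mono=True)
    hop = 512
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]
    db = librosa.amplitude_to_db(rms, ref=np.max)
    times = librosa.frames_to_time(np.arange(len(db)), sr=sr, hop_length=hop)

    # Frames quiet enough to be a pause or a breath.
    quiet = db < quiet_db

    # Find the longest stretch of speech with no quiet frames at all.
    longest, run_start = 0.0, times[0]
    for t, q in zip(times, quiet):
        if q:
            run_start = t
        else:
            longest = max(longest, t - run_start)

    if longest > max_ok_seconds:
        print(f"Suspicious: {longest:.1f}s of speech with no audible pause")
    else:
        print(f"Longest unbroken run: {longest:.1f}s; pauses are present")

longest_unbroken_speech("clip.wav")  # hypothetical file name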

Living in a world of AI audio deepfakes. 

It’s upon us. Without alarmism, we should all take note that not everything we see, and now hear, on the internet is true. The advent of easy, inexpensive AI tools has made that a simple fact. 

The challenge this presents is that it’s largely up to us as individuals to sniff out a fake. Yet again, it comes down to our personal sense of internet street smarts. That includes a basic understanding of AI deepfake technology, what it’s capable of, and how fraudsters and bad actors put it to use. Plus, a healthy dose of level-headed skepticism, both now in this election year and moving forward.

[i] https://www.theguardian.com/technology/2024/jan/12/deepfake-video-adverts-sunak-facebook-alarm-ai-risk-election

[ii] https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo

[iii] https://www.baltimoresun.com/2024/01/17/pikesville-principal-alleged-recording/

[iv] https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/

USN-6744-2: Pillow vulnerability

USN-6744-1 fixed a vulnerability in Pillow (Python 3). This update
provides the corresponding updates for Pillow (Python 2) in
Ubuntu 20.04 LTS.

Original advisory details:

Hugo van Kemenade discovered that Pillow was not properly performing
bounds checks when processing an ICC file, which could lead to a buffer
overflow. If a user or automated system were tricked into processing a
specially crafted ICC file, an attacker could possibly use this issue
to cause a denial of service or execute arbitrary code.
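A minimal defense-in-depth sketch, assuming Pillow and illustrative file names: pipelines that handle untrusted images can refuse to carry embedded ICC profiles into later processing. The size cap below is an arbitrary assumption, not a value from the advisory, and applying the updated package remains the actual fix.

from PIL import Image

MAX_ICC_BYTES = 1 << 20  # illustrative cap, not from the advisory

def sanitize(in_path: str, out_path: str) -> None:
    with Image.open(in_path) as im:
        icc = im.info.get("icc_profile")
        if icc and len(icc) > MAX_ICC_BYTES:
            print("Unusually large ICC profile found; dropping it")
        # Drop the profile so it is not re-embedded or processed downstream.
        im.info.pop("icc_profile", None)
        im.save(out_path)

sanitize("untrusted.png", "clean.png")  # hypothetical paths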

Bring Your Own Device: How to Educate Your Employees On Cybersecurity Best Practices

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

With the rise of remote and flexible work arrangements, Bring Your Own Device (BYOD) programs that allow employees to use their personal devices for work are becoming increasingly mainstream. In addition to slashing hardware costs, BYOD improves employee satisfaction by 56% and productivity by 55%, a survey by Crowd Research Partners finds. Yet cybersecurity remains a concern for businesses: 72% are worried about data leakage or loss, while 52% fear the potential for malware on personal devices. But by implementing a strong BYOD policy and educating your employees on cybersecurity best practices, you can reap the benefits of BYOD without putting your company assets and data at risk.

Put a Formal BYOD Policy in Place

Just as your business has acceptable use policies in place for corporate devices, it needs similar policies for personal devices. Your company’s BYOD policy should provide your employees with clear rules and guidelines on how they can use their devices safely at work without compromising cybersecurity. This policy should cover:

Devices, software, and operating systems that can be used to access digital business resources
Devices, software, and operating systems that can’t be used to access digital business resources
Policies that outline the acceptable use of personal devices for corporate activities
Essential security measures employees must follow on personal devices (such as complex passwords and regular security updates)
Steps employees must follow if their device is stolen or lost (such as immediately reporting it to their manager or IT department)
A statement that your business will remotely erase company-related data from lost or stolen devices
What happens if an employee violates your BYOD policy (will you revoke certain access privileges, freeze any allowance you give employees to cover BYOD costs, or require additional corrective training?)

Don’t forget to also include a signature field the employee must sign to indicate their agreement with your BYOD policies. The best time to introduce employees to the policy is during onboarding or, for existing employees, during the network registration process for the BYOD device. Setting expectations and educating your employees is essential to protect both company data and employee privacy.
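
Parts of such a policy can also be expressed as machine-checkable rules. The sketch below is a simplified illustration in Python; the device fields, minimum versions, and checks are hypothetical examples, and real enforcement typically runs through a mobile device management (MDM) platform rather than a standalone script.

from dataclasses import dataclass

# Hypothetical minimum OS versions an organization might approve.
ALLOWED_OS = {"ios": "16.0", "android": "13", "windows": "10.0.19045"}

@dataclass
class Device:
    os_name: str
    os_version: str
    screen_lock: bool
    reported_lost: bool

def version_at_least(actual: str, minimum: str) -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(actual) >= parse(minimum)

def compliance_problems(device: Device) -> list:
    problems = []
    if device.os_name not in ALLOWED_OS:
        problems.append("operating system is not on the approved list")
    elif not version_at_least(device.os_version, ALLOWED_OS[device.os_name]):
        problems.append("operating system is below the minimum patched version")
    if not device.screen_lock:
        problems.append("screen lock and passcode are not enabled")
    if device.reported_lost:
        problems.append("device reported lost or stolen: revoke access, wipe company data")
    return problems

print(compliance_problems(Device("android", "12", screen_lock=False, reported_lost=False)))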

Basic Cybersecurity Training

When putting together your BYOD employee training program, don’t make the mistake of thinking basic device security is too…basic. It’s not. Since personal devices are usually less secure than corporate devices, they’re generally at a greater risk of data breaches, viruses, and loss or theft. Comprehensive user education that includes the basics is therefore all the more important to mitigate these risks.

So as a basic rule, your employees should know not to allow their devices to auto-connect to public networks. If, on rare occasions, employees really do need to access company data on an open network, they should use a virtual private network (VPN). VPNs encrypt data and hide web activity, which adds an extra layer of security when accessing Wi-Fi networks. Shockingly, 22% of businesses say their employees have connected to malicious Wi-Fi networks on their personal devices in the past 12 months. Although it’s second nature for most of us to connect to public Wi-Fi networks, they’re often unsecured and vulnerable to attack, malware, and data breaches. Employees therefore need to understand and know how to mitigate these risks.
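
As one concrete illustration of auditing the auto-connect habit, the Windows-only sketch below lists saved Wi-Fi profiles that are set to connect automatically, using the standard netsh command-line tool. It assumes English-locale netsh output; treat it as a starting point, not a complete audit.

import re
import subprocess

def saved_profiles():
    out = subprocess.run(
        ["netsh", "wlan", "show", "profiles"],
        capture_output=True, text=True, check=True,
    ).stdout
    # netsh prints one "All User Profile : <name>" line per saved network.
    return [name.strip() for name in re.findall(r"All User Profile\s*:\s*(.+)", out)]

def auto_connect_profiles():
    risky = []
    for name in saved_profiles():
        detail = subprocess.run(
            ["netsh", "wlan", "show", "profile", f"name={name}"],
            capture_output=True, text=True, check=True,
        ).stdout
        if "Connect automatically" in detail:
            risky.append(name)
    return risky

for profile in auto_connect_profiles():
    print(f"Auto-connect enabled: {profile}")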

Regular Software Updates

You should also educate your employees on the need to regularly update their operating systems to close any security gaps. A whopping 95% of all cyberattacks target unpatched vulnerabilities. Software updates should therefore be downloaded and installed as soon as the manufacturer releases them. The same goes for apps: they also need regular updates to fix weaknesses that can let in malware or be exploited by cybercriminals. Also, emphasize that employees should use only expressly authorized apps for work tasks, as unauthorized apps carry a greater risk of data breaches and privacy violations.
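
The “patch promptly” habit can also be automated. As a small illustration, the sketch below uses pip’s own JSON report of outdated Python packages; the same idea applies to OS package managers and app stores.

import json
import subprocess
import sys

# Ask pip for a JSON list of installed packages with newer releases available.
result = subprocess.run(
    [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)
for pkg in json.loads(result.stdout):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")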

User education is central to any successful BYOD policy. By communicating a comprehensive BYOD policy to your employees and educating them on cybersecurity best practices, you can reap the advantages of your BYOD policy without risk to your company data or cybersecurity.

USN-6738-1: LXD vulnerability

Fabian Bäumer, Marcus Brinkmann, and Jörg Schwenk discovered that LXD
incorrectly handled the handshake phase and the use of sequence numbers in SSH
Binary Packet Protocol (BPP). If a user or an automated system were tricked
into opening a specially crafted input file, a remote attacker could possibly
use this issue to bypass integrity checks.
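
For administrators who want to probe their own servers, the sketch below reads a server’s initial SSH_MSG_KEXINIT and flags the algorithm combinations commonly associated with this class of prefix-truncation weakness in the SSH Binary Packet Protocol. It is a heuristic probe under stated assumptions, not a definitive vulnerability test, and the host below is a placeholder.

import socket
import struct

HOST, PORT = "203.0.113.10", 22  # hypothetical target

def read_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

def read_line(sock):
    line = b""
    while not line.endswith(b"\n"):
        line += read_exact(sock, 1)
    return line

def read_name_list(payload, off):
    (n,) = struct.unpack(">I", payload[off:off + 4])
    names = payload[off + 4:off + 4 + n].decode().split(",")
    return names, off + 4 + n

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(b"SSH-2.0-kexprobe\r\n")
    while not read_line(s).startswith(b"SSH-"):
        pass  # skip any pre-banner lines the server may send
    length = struct.unpack(">I", read_exact(s, 4))[0]
    packet = read_exact(s, length)
    padding = packet[0]
    payload = packet[1:length - padding]
    assert payload[0] == 20, "expected SSH_MSG_KEXINIT"
    off = 1 + 16  # message type byte plus random cookie
    kex, off = read_name_list(payload, off)
    _hostkeys, off = read_name_list(payload, off)
    enc_c2s, off = read_name_list(payload, off)
    enc_s2c, off = read_name_list(payload, off)
    mac_c2s, off = read_name_list(payload, off)
    mac_s2c, off = read_name_list(payload, off)

ciphers = set(enc_c2s + enc_s2c)
macs = set(mac_c2s + mac_s2c)
affected = "chacha20-poly1305@openssh.com" in ciphers or (
    any("-cbc" in c for c in ciphers)
    and any("-etm@openssh.com" in m for m in macs)
)
print("offers strict key exchange:", "kex-strict-s-v00@openssh.com" in kex)
print("offers affected algorithm combinations:", affected)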
