How to Protect Yourself Against AI Voice Cloning Attacks


Imagine receiving a call from a loved one, only to discover it’s not them but a convincing replica created by voice cloning technology. This scenario might sound like something out of a sci-fi movie, but it became a chilling reality for a Brooklyn couple featured in a New Yorker article who believed the husband’s parents were being held for ransom. The perpetrators used voice cloning to extort money from the couple as they feared for their loved ones’ lives.

Their experience is a stark reminder of the growing threat of voice cloning attacks and the importance of safeguarding our voices in the digital age. Voice cloning, also known as voice synthesis or voice mimicry, is a technology that allows individuals to replicate someone else’s voice with remarkable accuracy. While initially developed for benign purposes such as voice assistants and entertainment, it has also become a tool for malicious actors seeking to exploit unsuspecting victims. 

As AI tools become more accessible and affordable, the prevalence of deepfake attacks, including voice cloning, is increasing. So, how can you safeguard yourself and your loved ones against voice cloning attacks? Here are some practical steps to take: 

Verify Caller Identity: If you receive a call or message that raises suspicion, take steps to verify the caller’s identity. Ask questions that only the real person would know the answer to, such as details about past experiences or shared memories. Contact the person through an alternative means of communication to confirm their identity. 
Establish a Unique Safe Word: Create a unique safe word or phrase with your loved ones that only you would know. In the event of a suspicious call or message, use this safe word to verify each other’s identity. Avoid using easily guessable phrases and periodically change the safe word for added security. 
Don’t Transfer Money Through Unconventional Methods: Fraudsters often employ tactics that make retrieving your funds difficult. If you’re asked to wire money, use cryptocurrency, or purchase gift cards and disclose the card numbers and PINs, proceed with caution, as these are common indicators of a scam. 
Use Technology Safeguards: While technology can be used for malicious purposes, it can also help protect against voice cloning attacks. Tools like Project Mockingbird, currently in development at McAfee, aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider utilizing such tools to bolster your defenses. 
Educate Yourself and Others: Knowledge is your best defense against emerging threats. Take the time to educate yourself and those around you about the dangers of voice cloning and other forms of social engineering attacks. Encourage your loved ones to be skeptical of unsolicited calls or messages, especially if they involve urgent requests for money or personal information. 
Report Suspicious Activity: If you believe you’ve been targeted by a voice cloning attack, report it to the appropriate authorities immediately. Organizations like the Federal Trade Commission (FTC) and the Internet Crime Complaint Center (IC3) are equipped to investigate and address cybercrimes. 

Voice cloning attacks represent a new frontier in cybercrime. With vigilance and preparedness, it’s possible to mitigate the risks and protect yourself and your loved ones. By staying informed, establishing safeguards, and remaining skeptical of unexpected communications, you can thwart would-be attackers and keep your voice secure in an increasingly digitized world. 


USN-6726-3: Linux kernel (Xilinx ZynqMP) vulnerabilities


Pratyush Yadav discovered that the Xen network backend implementation in
the Linux kernel did not properly handle zero-length data requests, leading
to a null pointer dereference vulnerability. An attacker in a guest VM
could possibly use this to cause a denial of service (host domain crash).
(CVE-2023-46838)
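
For readers unfamiliar with this bug class, here is a minimal user-space sketch in C (hypothetical code, not the actual Xen netback implementation) of how an unchecked zero-length request can turn into a NULL pointer dereference; every name in it is made up for illustration.

/* Illustrative sketch only -- not the kernel code behind CVE-2023-46838. */
/* A zero-length request yields a NULL buffer that the buggy path dereferences. */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical allocator: returns NULL when asked for zero bytes. */
static unsigned char *alloc_request_buffer(size_t len)
{
    if (len == 0)
        return NULL;
    return malloc(len);
}

/* Buggy handler: assumes the buffer is always valid. */
static void handle_request_buggy(size_t len)
{
    unsigned char *buf = alloc_request_buffer(len);
    buf[0] = 0xAB;                 /* NULL dereference when len == 0 */
    free(buf);
}

/* Fixed handler: rejects zero-length requests before touching the buffer. */
static int handle_request_fixed(size_t len)
{
    if (len == 0)
        return -1;                 /* refuse instead of crashing */
    unsigned char *buf = alloc_request_buffer(len);
    if (buf == NULL)
        return -1;
    buf[0] = 0xAB;
    free(buf);
    return 0;
}

int main(void)
{
    printf("fixed handler, zero-length request: %d\n", handle_request_fixed(0));
    (void)handle_request_buggy;    /* calling it with len == 0 would crash */
    return 0;
}

In the kernel the consequence is a crash of the host domain rather than a single process, but the missing check has the same shape.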

It was discovered that the IPv6 implementation of the Linux kernel did not
properly manage route cache memory usage. A remote attacker could use this
to cause a denial of service (memory exhaustion). (CVE-2023-52340)
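
As a rough illustration of the memory-exhaustion pattern (again hypothetical C, not the kernel’s IPv6 route cache), a cache that grows on every remote input with no cap lets a sender consume memory without limit, whereas the fixed path enforces a bound:

/* Illustrative sketch only -- not the actual IPv6 route cache code. */
/* An unbounded, remotely fillable cache exhausts memory; the fix enforces a cap. */
#include <stdlib.h>
#include <string.h>

struct cache_entry {
    struct cache_entry *next;
    char key[64];
};

struct cache {
    struct cache_entry *head;
    size_t count;
    size_t max_entries;            /* upper bound enforced by the fixed path */
};

/* Buggy pattern: every insert allocates, with no limit on total entries. */
static int cache_insert_unbounded(struct cache *c, const char *key)
{
    struct cache_entry *e = calloc(1, sizeof(*e));
    if (e == NULL)
        return -1;
    strncpy(e->key, key, sizeof(e->key) - 1);
    e->next = c->head;
    c->head = e;
    c->count++;
    return 0;
}

/* Fixed pattern: refuse (or evict) once the configured cap is reached. */
static int cache_insert_capped(struct cache *c, const char *key)
{
    if (c->count >= c->max_entries)
        return -1;                 /* real code might instead evict the oldest entry */
    return cache_insert_unbounded(c, key);
}

int main(void)
{
    struct cache c = { .head = NULL, .count = 0, .max_entries = 4096 };
    cache_insert_capped(&c, "2001:db8::1");
    return 0;
}

The sketch only shows the general shape of the problem, not the kernel’s actual data structures or eviction policy.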

It was discovered that the device mapper driver in the Linux kernel did not
properly validate target size during certain memory allocations. A local
attacker could use this to cause a denial of service (system crash).
(CVE-2023-52429, CVE-2024-23851)
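
The device-mapper issue concerns allocation sizes derived from an unvalidated target size; the following hypothetical C sketch (not the real driver code) shows why such a size needs range and overflow checks before it reaches the allocator:

/* Illustrative sketch only -- not the actual device-mapper code. */
/* An allocation size derived from an unchecked count can overflow or be huge. */
#include <stdint.h>
#include <stdlib.h>

#define ENTRY_SIZE   64u
#define MAX_TARGETS  (1u << 20)    /* hypothetical sane upper bound */

/* Buggy pattern: multiply an untrusted count by the entry size with no checks. */
static void *alloc_targets_buggy(uint64_t count)
{
    return malloc(count * ENTRY_SIZE);     /* may wrap around or exhaust memory */
}

/* Fixed pattern: validate the count and guard the multiplication first. */
static void *alloc_targets_fixed(uint64_t count)
{
    if (count == 0 || count > MAX_TARGETS)
        return NULL;                       /* out-of-range target size */
    if (count > SIZE_MAX / ENTRY_SIZE)
        return NULL;                       /* multiplication would overflow */
    return malloc((size_t)count * ENTRY_SIZE);
}

int main(void)
{
    void *p = alloc_targets_fixed(128);
    free(p);
    (void)alloc_targets_buggy;             /* shown only for contrast */
    return 0;
}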

Dan Carpenter discovered that the netfilter subsystem in the Linux kernel
did not store data in properly sized memory locations. A local user could
use this to cause a denial of service (system crash). (CVE-2024-0607)
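
The netfilter flaw is described as data stored in improperly sized memory locations; a hypothetical user-space sketch of that general mismatch, and a bounded alternative, might look like this (not the real netfilter code):

/* Illustrative sketch only -- not the actual netfilter code. */
/* Storing a value into a destination smaller than the value corrupts neighbours. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rule_state {
    uint8_t counter;               /* only one byte wide */
    uint8_t adjacent;              /* clobbered by the oversized store below */
};

/* Buggy pattern: memcpy writes 4 bytes into a 1-byte field. */
static void store_counter_buggy(struct rule_state *s, uint32_t value)
{
    memcpy(&s->counter, &value, sizeof(value));   /* overruns the field */
}

/* Fixed pattern: store only as many bytes as the destination can hold. */
static void store_counter_fixed(struct rule_state *s, uint32_t value)
{
    s->counter = (uint8_t)(value & 0xFF);         /* explicit, bounded store */
}

int main(void)
{
    struct rule_state s = { .counter = 0, .adjacent = 0x42 };
    store_counter_fixed(&s, 0x1234);
    printf("counter=0x%02x adjacent=0x%02x\n", s.counter, s.adjacent);
    (void)store_counter_buggy;     /* calling it would write past the field */
    return 0;
}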

Several security issues were discovered in the Linux kernel.
An attacker could possibly use these to compromise the system.
This update corrects flaws in the following subsystems:
– Architecture specifics;
– Cryptographic API;
– Android drivers;
– EDAC drivers;
– GPU drivers;
– Media drivers;
– MTD block device drivers;
– Network drivers;
– NVME drivers;
– TTY drivers;
– Userspace I/O drivers;
– F2FS file system;
– GFS2 file system;
– IPv6 Networking;
– AppArmor security module;
(CVE-2023-52464, CVE-2023-52448, CVE-2023-52457, CVE-2023-52443,
CVE-2023-52439, CVE-2023-52612, CVE-2024-26633, CVE-2024-26597,
CVE-2023-52449, CVE-2023-52444, CVE-2023-52609, CVE-2023-52469,
CVE-2023-52445, CVE-2023-52451, CVE-2023-52470, CVE-2023-52454,
CVE-2023-52436, CVE-2023-52438)
