USN-6386-3: Linux kernel vulnerabilities

Jana Hofmann, Emanuele Vannacci, Cedric Fournet, Boris Kopf, and Oleksii
Oleksenko discovered that some AMD processors could leak stale data from
division operations in certain situations. A local attacker could possibly
use this to expose sensitive information. (CVE-2023-20588)

It was discovered that the bluetooth subsystem in the Linux kernel did not
properly handle L2CAP socket release, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-40283)

It was discovered that some network classifier implementations in the Linux
kernel contained use-after-free vulnerabilities. A local attacker could use
this to cause a denial of service (system crash) or possibly execute
arbitrary code. (CVE-2023-4128)

Lonial Con discovered that the netfilter subsystem in the Linux kernel
contained a memory leak when handling certain element flush operations. A
local attacker could use this to expose sensitive information (kernel
memory). (CVE-2023-4569)
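
Beyond installing the updated kernel packages, you can ask the running kernel which CPU vulnerability mitigations it reports. A minimal sketch in Python (Linux-only; the set of entries under this sysfs directory varies by kernel version, and not every CVE gets its own file):

    # List the mitigations the running kernel reports for known CPU flaws.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")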

glibc-2.37-10.fc38

FEDORA-2023-2b8c11ee75

Packages in this update:

glibc-2.37-10.fc38

Update description:

Security fix for CVE-2023-4911, CVE-2023-4806, and CVE-2023-4527.

CVE-2023-4911: If a tunable of the form NAME=NAME=VAL is passed in the environment of a setuid program and NAME is valid, it may result in a buffer overflow, which could be exploited to achieve escalated privileges. This flaw was introduced in glibc 2.34.
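
To make the trigger shape concrete, here is a hedged Python sketch of the widely reported crash probe. The setuid binary, tunable name, and padding length are illustrative; on a patched glibc the malformed NAME=NAME=VAL tunable is simply rejected, while a vulnerable one may crash the program. This observes a crash signal, nothing more:

    # Probe the NAME=NAME=VAL parsing path described above (CVE-2023-4911).
    import subprocess

    malformed = "glibc.malloc.mxfast=glibc.malloc.mxfast=" + "A" * 8192
    proc = subprocess.run(
        ["/usr/bin/su", "--help"],          # any setuid binary serves as a probe
        env={"GLIBC_TUNABLES": malformed},
        capture_output=True,
    )
    # A negative return code means the process died on a signal (e.g. SIGSEGV).
    print("possibly vulnerable" if proc.returncode < 0 else "tunable rejected")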

CVE-2023-4806: When an NSS plugin only implements the _gethostbyname2_r and _getcanonname_r callbacks, getaddrinfo could use memory that was freed during buffer resizing, potentially causing a crash or read or write to arbitrary memory.

CVE-2023-4527: If the system is configured in no-aaaa mode via /etc/resolv.conf, getaddrinfo is called for the AF_UNSPEC address family, and a DNS response is received over TCP that is larger than 2048 bytes, getaddrinfo may potentially disclose stack contents via the returned address data, or crash.
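
For reference, "no-aaaa mode" means the glibc stub resolver has been told to suppress AAAA (IPv6) queries. A system is in this configuration when /etc/resolv.conf contains the option, for example (nameserver address illustrative):

    nameserver 192.0.2.1
    options no-aaaa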

This update contains changes to ELF destructor ordering, improving compatibility with a certain VPN software product.

glibc-2.36-14.fc37

FEDORA-2023-028062484e

Packages in this update:

glibc-2.36-14.fc37

Update description:

Security fix for CVE-2023-4911, CVE-2023-4806, and CVE-2023-4527.

CVE-2023-4911: If a tunable of the form NAME=NAME=VAL is passed in the environment of a setuid program and NAME is valid, it may result in a buffer overflow, which could be exploited to achieve escalated privileges. This flaw was introduced in glibc 2.34.

CVE-2023-4806: When an NSS plugin only implements the _gethostbyname2_r and _getcanonname_r callbacks, getaddrinfo could use memory that was freed during buffer resizing, potentially causing a crash or read or write to arbitrary memory.

CVE-2023-4527: If the system is configured in no-aaaa mode via /etc/resolv.conf, getaddrinfo is called for the AF_UNSPEC address family, and a DNS response is received over TCP that is larger than 2048 bytes, getaddrinfo may potentially disclose stack contents via the returned address data, or crash.

glibc-2.38-6.fc39

FEDORA-2023-63e5a77522

Packages in this update:

glibc-2.38-6.fc39

Update description:

Security fix for CVE-2023-4911, CVE-2023-4806, and CVE-2023-4527.

CVE-2023-4911: If a tunable of the form NAME=NAME=VAL is passed in the environment of a setuid program and NAME is valid, it may result in a buffer overflow, which could be exploited to achieve escalated privileges. This flaw was introduced in glibc 2.34.

CVE-2023-4806: When an NSS plugin only implements the _gethostbyname2_r and _getcanonname_r callbacks, getaddrinfo could use memory that was freed during buffer resizing, potentially causing a crash or read or write to arbitrary memory.

CVE-2023-4527: If the system is configured in no-aaaa mode via /etc/resolv.conf, getaddrinfo is called for the AF_UNSPEC address family, and a DNS response is received over TCP that is larger than 2048 bytes, getaddrinfo may potentially disclose stack contents via the returned address data, or crash.

USN-6409-1: GNU C Library vulnerabilities

It was discovered that the GNU C Library incorrectly handled the
GLIBC_TUNABLES environment variable. An attacker could possibly use this
issue to perform a privilege escalation attack. (CVE-2023-4911)

It was discovered that the GNU C Library incorrectly handled certain DNS
responses when the system was configured in no-aaaa mode. A remote attacker
could possibly use this issue to cause the GNU C Library to crash,
resulting in a denial of service. This issue only affected Ubuntu 23.04.
(CVE-2023-4527)
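
On Ubuntu, applying the fix comes down to upgrading the glibc packages and restarting long-running services (or rebooting). A typical sequence with the standard apt tooling:

    sudo apt update
    sudo apt install --only-upgrade libc6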

USN-6408-1: libXpm vulnerabilities

Yair Mizrahi discovered that libXpm incorrectly handled certain malformed
XPM image files. If a user were tricked into opening a specially crafted
XPM image file, a remote attacker could possibly use this issue to consume
memory, leading to a denial of service. (CVE-2023-43786)

Yair Mizrahi discovered that libXpm incorrectly handled certain malformed
XPM image files. If a user were tricked into opening a specially crafted
XPM image file, a remote attacker could use this issue to cause libXpm to
crash, leading to a denial of service, or possibly execute arbitrary code.
(CVE-2023-43787)

Alan Coopersmith discovered that libXpm incorrectly handled certain
malformed XPM image files. If a user were tricked into opening a specially
crafted XPM image file, a remote attacker could possibly use this issue to
cause libXpm to crash, leading to a denial of service. (CVE-2023-43788,
CVE-2023-43789)

USN-6407-1: libx11 vulnerabilities

Gregory James Duck discovered that libx11 incorrectly handled certain
keyboard symbols. If a user were tricked into connecting to a malicious X
server, a remote attacker could use this issue to cause libx11 to crash,
resulting in a denial of service, or possibly execute arbitrary code.
(CVE-2023-43785)

Yair Mizrahi discovered that libx11 incorrectly handled certain malformed
XPM image files. If a user were tricked into opening a specially crafted
XPM image file, a remote attacker could possibly use this issue to consume
memory, leading to a denial of service. (CVE-2023-43786)

Yair Mizrahi discovered that libx11 incorrectly handled certain malformed
XPM image files. If a user were tricked into opening a specially crafted
XPM image file, a remote attacker could use this issue to cause libx11 to
crash, leading to a denial of service, or possibly execute arbitrary code.
(CVE-2023-43787)

Artificial Intelligence and Winning the Battle Against Deepfakes and Malware

As AI deepfakes and malware understandably grab the headlines, one thing gets easily overlooked—AI also works on your side. It protects you from fraud and malware as well.  

For some time now, we’ve kept our eye on AI here at McAfee. Particularly as scammers cook up fresh gluts of AI-driven hustles. And there are plenty of them.  

We’ve uncovered how scammers need only a few seconds of a voice recording to clone it using AI—which has led to all manner of imposter scams. We also showed how scammers can use AI writing tools to power their chats in romance scams, to the extent of writing love poems with AI. Recently, we shared word of fake news sites packed with bogus articles generated almost entirely with AI. AI-generated videos even played a role in a scam for “Barbie” movie tickets. 

Law enforcement, government agencies, and other regulatory bodies have taken note. In April, the U.S. Federal Trade Commission (FTC) warned consumers that AI now “turbocharges” fraud online. The commission cited a proliferation of AI tools that can generate convincing text, images, audio, and video.  

While not typically malicious in and of themselves, scammers twist these technologies to bilk victims out of their money and personal information. Likewise, just as legitimate application developers use AI to create code, hackers use AI to create malware. 

There’s no question that all these AI-driven scams mark a major change in the way we stay safe online. Yet you have a powerful ally on your side. It’s AI, as well. And it’s out there, spotting scams and malware. In fact, you’ll find it in our online protection software. We’ve put AI to work on your behalf for some time now. 

With a closer look at how AI works on your side, along with several steps that can help you spot AI fakery, you can stay safer out there. Despite the best efforts of scammers, hackers, and their AI tools. 

AI in the battle against AI-driven fraud and malware. 

One way to think about online protection is this: it’s a battle to keep you safe. Hackers employ new forms of attack that try to work around existing protections. Meanwhile, security professionals create technological advances that counter these attacks and proactively prevent them—which hackers try to work around once again. And on it goes. As technology evolves, so does this battle. And the advent of AI marks a decidedly new era in the struggle. 

As a result, security professionals also employ AI to protect people from AI-driven attacks.  

Companies now check facial scans for skin texture and translucency to determine whether someone is using a mask to trick facial recognition ID. Banks employ other tools to detect mouse movements and transaction details that might be suspicious. Additionally, developers scan their code with AI tools to detect vulnerabilities that might lurk deep in their apps—in places that would take human teams hundreds, if not thousands, of staff hours to detect. If at all. Code can get quite complex. 

We’ve used AI in our online protection for years now, evaluating events, files, and website characteristics. We have further used AI for detection, which has proven highly effective against entirely new forms of attack.  

We’ve also used these technologies to catalog websites, identifying those that host malicious files or phishing operations. Moreover, this cataloging has helped us shape our parental control features so that we can block content based on customer preferences with high accuracy.  

And we continue to evolve it so that it detects threats even faster and more accurately than before. Taken together, AI-driven protection like ours quashes threats in three ways:  

It detects suspicious events and behaviors. AI provides a particularly powerful tool against entirely new threats (also known as zero-day threats). By analyzing the behavior of files for patterns consistent with malware, it can prevent a previously unknown file or process from doing harm.  

It further detects threats by referencing known malware signatures and behaviors. This combats zero-day and pre-existing threats alike. AI can spot zero-day threats by comparing them to the malware fingerprints and behaviors it has learned, and those same learnings help it quickly identify pre-existing threats as well.  

It automatically classifies threats and adds them to the body of threat intelligence. AI-driven threat protection gets stronger over time: the more threats it encounters, the more rapidly and readily it can determine whether files are malicious or benign. Because AI classifies threats at a speed and scale unmatched by traditional processes, the body of threat intelligence improves immensely as a result.  
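
As a toy illustration of that layered idea (emphatically not McAfee’s implementation; every name and threshold here is hypothetical), a scanner might check known signatures first, fall back to behavior scoring, and feed new verdicts back into its intelligence set:

    # Hypothetical layered detector: signatures, then behavior, then feedback.
    import hashlib

    KNOWN_BAD_HASHES = {hashlib.sha256(b"previously seen malware").hexdigest()}
    SUSPICIOUS_BEHAVIORS = {"encrypts_user_files", "contacts_c2", "disables_av"}

    def classify(file_bytes: bytes, observed_behaviors: set) -> str:
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            return "malicious (known signature)"   # pre-existing threat
        if len(observed_behaviors & SUSPICIOUS_BEHAVIORS) >= 2:
            KNOWN_BAD_HASHES.add(digest)           # grow the intelligence set
            return "malicious (behavior)"          # zero-day style catch
        return "benign"

    print(classify(b"some new file", {"encrypts_user_files", "contacts_c2"}))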

What does AI-driven protection look like for you? It can identify malicious websites before you connect to them. It can prevent new forms of ransomware from encrypting your photos and files. And it can keep spyware from stealing your personal information by spotting apps that would send it to a bad actor’s command-and-control server.  

As a result, you get faster and more comprehensive protection with AI that works in conjunction with online protection software—and our security professionals develop them both.   

Protect yourself from AI voice clone attacks. 

Yet, as with any kind of scam, it can take more than technology to spot an AI-driven one. It calls for critically eyeballing the content you come across. You can spot an AI-driven scam with your eyes, along with your ears and even your gut. 

Take AI voice clone attacks, for example. You can protect yourself from them by taking the following steps: 

Set a verbal codeword with kids, family members, or trusted close friends. Make sure it’s one only you and those closest to you know. (Banks and alarm companies often set up accounts with a codeword in the same way to ensure that you’re really you when you speak with them.) Ensure everyone knows and uses it in messages when they ask for help. 
Always question the source. In addition to voice cloning tools, scammers have other tools that can spoof phone numbers so that they look legitimate. Even if it’s a voicemail or text from a number you recognize, stop, pause, and think. Does that really sound like the person you think it is? Hang up and call the person directly or try to verify the information before responding.  
Think before you click and share. Who is in your social media network? How well do you really know and trust them? The wider your connections, the more risk you might be opening yourself up to when sharing content about yourself. Be thoughtful about the friends and connections you have online and set your profiles to “friends and families” only so that they aren’t available to the greater public. 
Protect your identity. Identity monitoring services can notify you if your personal information makes its way to the dark web and provide guidance for protective measures. This can help shut down other ways that a scammer can attempt to pose as you. 
Clear your name from data broker sites. How’d that scammer get your phone number anyway? Chances are, they pulled that information off a data broker site. Data brokers buy, collect, and sell detailed personal information, which they compile from several public and private sources, such as local, state, and federal records, in addition to third parties. Our Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info. 

Three ways to spot AI-generated fakes.   

As AI continues its evolution, it gets trickier and trickier to spot it in images, video, and audio. Advances in AI give images a clarity and crispness that they didn’t have before, deepfake videos play more smoothly, and voice cloning gets uncannily accurate.   

Yet even with the best AI, scammers often leave their fingerprints all over the fake news content they create. Look for the following:  

1) Consider the context   

AI fakes usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Look to see if the text even makes sense. And, like legitimate news articles, does it include identifying information such as the date, time, and place of publication, along with the author’s name?   

2) Evaluate the claim  

Does the image seem too bizarre to be real? Too good to be true? Today, “Don’t believe everything you read on the internet” also includes “Don’t believe everything you see on the internet.” If a fake news story claims to be real, search for the headline elsewhere. If it’s truly noteworthy, other known and reputable sites will report on the event—and will have done their own fact-checking.  

3) Check for distortions  

The bulk of AI technology still renders fingers and hands poorly. It often creates eyes that might have a soulless or dead look to them — or that show irregularities between them. Also, shadows might appear in places where they look unnatural. Further, the skin tone might look uneven. In deepfaked videos, the voice and facial expressions might not exactly line up, making the subject look robotic and stiff.   

AI is on your side in this new era of online protection. 

The battle between hackers and the people behind online protection continues. And while the introduction of AI has unleashed all manner of new attacks, the pattern prevails. Hackers and security professionals tap into the same technologies and continually up the game against each other. 

Understandably, AI conjures questions, uncertainty, and, arguably, fear. Yet you can rest assured that, behind the headlines of AI threats, security professionals use AI technology for protection. For good. 

Yet an online scam remains an online scam. Many times, it takes common sense and a sharp eye to spot a hustle when you see one. If anything, that remains one instance where humans still have a leg up on AI. Humans have gut instincts. They can sense when something looks, feels, or sounds … off. Rely on that instinct. And give yourself time to let it speak to you. In a time of AI-driven fakery, it still stands as an excellent first line of defense. 

The post Artificial Intelligence and Winning the Battle Against Deepfakes and Malware appeared first on McAfee Blog.
