FEDORA-2024-71f0f16533
Packages in this update:
kernel-6.7.6-100.fc38
kernel-6.7.6-200.fc39
Update description:
The 6.7.6 stable kernel update contains a number of important fixes across the tree.
There are correlations between the populations of the Illex argentinus squid and water temperatures.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
It was discovered that a race condition existed in the ATM (Asynchronous
Transfer Mode) subsystem of the Linux kernel, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-51780)
It was discovered that a race condition existed in the AppleTalk networking
subsystem of the Linux kernel, leading to a use-after-free vulnerability. A
local attacker could use this to cause a denial of service (system crash)
or possibly execute arbitrary code. (CVE-2023-51781)
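Both bugs are instances of the same class: one thread frees an object while another can still reach it. The user-space C sketch below is illustrative only (the names and structure are invented, not the actual ATM or AppleTalk code); it shows the shape of the flaw and the reference-counting pattern that fixes of this kind typically adopt:

    /* Illustrative use-after-free race: one thread frees a shared object
     * while another may still be using it. The fix ties the object's
     * lifetime to a reference count. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct conn {
        int refcount;                /* lifetime: last user frees */
        pthread_mutex_t lock;
        int state;
    };

    /* Buggy shape (for contrast):
     *     void close_conn(struct conn *c) { free(c); }
     * frees the object even if another thread still holds a pointer. */
    static void conn_put(struct conn *c)
    {
        pthread_mutex_lock(&c->lock);
        int remaining = --c->refcount;
        pthread_mutex_unlock(&c->lock);
        if (remaining == 0) {        /* no other users can exist now */
            pthread_mutex_destroy(&c->lock);
            free(c);
        }
    }

    static void *worker(void *arg)
    {
        struct conn *c = arg;
        pthread_mutex_lock(&c->lock);
        c->state++;                  /* safe: we hold a reference */
        pthread_mutex_unlock(&c->lock);
        conn_put(c);                 /* drop the worker's reference */
        return NULL;
    }

    int main(void)
    {
        struct conn *c = calloc(1, sizeof(*c));
        pthread_mutex_init(&c->lock, NULL);
        c->refcount = 2;             /* one for main, one for the worker */

        pthread_t t;
        pthread_create(&t, NULL, worker, c);
        conn_put(c);                 /* drop main's reference */
        pthread_join(t, NULL);
        puts("done");
        return 0;
    }

Whichever thread drops the last reference performs the free, so the window in which a freed object remains reachable never opens.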
Zhenghan Wang discovered that the generic ID allocator implementation in
the Linux kernel did not properly check for a null bitmap when releasing
IDs. A local attacker could use this to cause a denial of service (system
crash). (CVE-2023-6915)
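The bug class here is simpler: a release path dereferences a lazily allocated bitmap without checking that it was ever allocated. A minimal C sketch, with invented names rather than the kernel's actual IDA code:

    /* Illustrative ID allocator with a lazily allocated bitmap. */
    #include <stdio.h>
    #include <stdlib.h>

    struct ida {
        unsigned long *bitmap;   /* NULL until the first allocation */
        size_t nbits;
    };

    static int ida_alloc_id(struct ida *ida, unsigned int id)
    {
        if (id >= 64)
            return -1;           /* single-word sketch: ids 0..63 only */
        if (!ida->bitmap) {
            ida->bitmap = calloc(1, sizeof(unsigned long));
            if (!ida->bitmap)
                return -1;
            ida->nbits = 64;
        }
        *ida->bitmap |= 1UL << id;
        return 0;
    }

    static void ida_free_id(struct ida *ida, unsigned int id)
    {
        /* Buggy shape: *ida->bitmap &= ~(1UL << id);
         * crashes on a NULL bitmap if nothing was ever allocated.
         * Fixed shape: check before dereferencing. */
        if (!ida->bitmap || id >= ida->nbits)
            return;
        *ida->bitmap &= ~(1UL << id);
    }

    int main(void)
    {
        struct ida ida = {0};
        ida_free_id(&ida, 3);    /* free from an empty allocator: no crash */
        ida_alloc_id(&ida, 3);
        ida_free_id(&ida, 3);
        free(ida.bitmap);
        puts("ok");
        return 0;
    }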
Robert Morris discovered that the CIFS network file system implementation
in the Linux kernel did not properly validate certain server command
fields, leading to an out-of-bounds read vulnerability. An attacker could
use this to cause a denial of service (system crash) or possibly expose
sensitive information. (CVE-2024-0565)
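The underlying pattern is a parser trusting a length field supplied by the remote end. A hedged C sketch (invented names, not the actual CIFS parser) of the flaw and the bounds check that closes it:

    /* Illustrative reply parser: the server claims a payload length,
     * and the client must not read more bytes than it received. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static int parse_reply(const uint8_t *buf, size_t buf_len)
    {
        if (buf_len < sizeof(uint32_t))
            return -1;                     /* too short for the header */

        uint32_t claimed;
        memcpy(&claimed, buf, sizeof(claimed));

        /* Buggy shape: reading `claimed` bytes at buf + 4 without
         * comparing against buf_len -> out-of-bounds read.
         * Fixed shape: clamp to what actually arrived. */
        if (claimed > buf_len - sizeof(uint32_t)) {
            fprintf(stderr, "malformed reply: claims %u, have %zu\n",
                    (unsigned)claimed, buf_len - sizeof(uint32_t));
            return -1;
        }

        return (int)claimed;               /* safe to read this many bytes */
    }

    int main(void)
    {
        uint8_t wire[16] = {0};
        uint32_t lie = 4096;               /* server claims 4 KiB */
        memcpy(wire, &lie, sizeof(lie));
        printf("parse -> %d\n", parse_reply(wire, sizeof(wire)));
        return 0;
    }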
Jann Horn discovered that the TLS subsystem in the Linux kernel did not
properly handle spliced messages, leading to an out-of-bounds write
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2024-0646)
Marek Marczykowski-Górecki discovered that the Xen event channel
infrastructure implementation in the Linux kernel contained a race
condition. An attacker in a guest VM could possibly use this to cause a
denial of service (paravirtualized device unavailability). (CVE-2023-34324)
Zheng Wang discovered a use-after-free in the Renesas Ethernet AVB driver
in the Linux kernel during device removal. A privileged attacker could use
this to cause a denial of service (system crash). (CVE-2023-35827)
Tom Dohrmann discovered that the Secure Encrypted Virtualization (SEV)
implementation for AMD processors in the Linux kernel contained a race
condition when accessing MMIO registers. A local attacker in an SEV guest
VM could use this to cause a denial of service (system crash) or possibly
execute arbitrary code. (CVE-2023-46813)
It was discovered that the io_uring subsystem in the Linux kernel contained
a race condition, leading to a null pointer dereference vulnerability. A
local attacker could use this to cause a denial of service (system crash).
(CVE-2023-46862)
It was discovered that the netfilter subsystem in the Linux kernel did not
properly validate inner tunnel netlink attributes, leading to a null
pointer dereference vulnerability. A local attacker could use this to cause
a denial of service (system crash). (CVE-2023-5972)
It was discovered that the TLS subsystem in the Linux kernel did not
properly perform cryptographic operations in some situations, leading to a
null pointer dereference vulnerability. A local attacker could use this to
cause a denial of service (system crash) or possibly execute arbitrary
code. (CVE-2023-6176)
Jann Horn discovered that a race condition existed in the Linux kernel when
handling io_uring over sockets, leading to a use-after-free vulnerability.
A local attacker could use this to cause a denial of service (system crash)
or possibly execute arbitrary code. (CVE-2023-6531)
Xingyuan Mo discovered that the netfilter subsystem in the Linux kernel did
not properly handle dynset expressions passed from userspace, leading to a
null pointer dereference vulnerability. A local attacker could use this to
cause a denial of service (system crash). (CVE-2023-6622)
Jann Horn discovered that the io_uring subsystem in the Linux kernel did
not properly handle the release of certain buffer rings. A local attacker
could use this to cause a denial of service (system crash) or possibly
execute arbitrary code. (CVE-2024-0582)
It was discovered that the TIPC protocol implementation in the Linux kernel
did not properly handle locking during tipc_crypto_key_revoke() operations.
A local attacker could use this to cause a denial of service (kernel
deadlock). (CVE-2024-0641)
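A kernel deadlock of this shape usually means a non-recursive lock is taken twice on the same path. The C sketch below is a generic illustration (invented names, not the actual TIPC code) of the pattern and the usual fix, a *_locked helper that trusts the caller's lock:

    /* Illustrative self-deadlock: a helper re-acquires a lock that its
     * caller already holds; a non-recursive mutex never unblocks. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t key_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Convention: *_locked helpers require the caller to hold key_lock. */
    static void revoke_key_locked(int *key)
    {
        *key = 0;
    }

    static void revoke_key(int *key)
    {
        pthread_mutex_lock(&key_lock);
        /* Buggy shape: calling pthread_mutex_lock(&key_lock) again here
         * (directly or via another helper) blocks forever.
         * Fixed shape: call the *_locked variant instead. */
        revoke_key_locked(key);
        pthread_mutex_unlock(&key_lock);
    }

    int main(void)
    {
        int key = 42;
        revoke_key(&key);
        printf("key revoked: %d\n", key);
        return 0;
    }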
This year marks the world’s biggest election year yet.
An estimated four billion voters will head to the polls across more than 60 national elections worldwide in 2024 — all at a time when artificial intelligence (AI) continues to make history of its own. Without question, the harmful use of AI will play a role in election interference worldwide.
In fact, it already has.
In January, thousands of U.S. voters in New Hampshire received an AI robocall that impersonated President Joe Biden, urging them not to vote in the primary[i]. In the UK, more than 100 deepfake social media ads impersonated Prime Minister Rishi Sunak on the Meta platform last December[ii]. Similarly, the 2023 parliamentary elections in Slovakia spawned deepfake audio clips that featured false proposals for rigging votes and raising the price of beer[iii].
We can’t put it more plainly. The harmful use of AI has the potential to influence an election.
In just over a year, AI tools have rapidly evolved, offering a wealth of benefits. They analyze health data on massive scales, which promotes better healthcare outcomes. They help supermarkets bring the freshest produce to the aisles by streamlining the supply chain. And they do plenty of helpful everyday things too, like recommending movies and shows in our streaming queues based on what we like.
Yet as with practically any technology, whether AI helps or harms is up to the person using it. And plenty of bad actors have chosen to use it for harm. Scammers have used it to dupe people with convincing "deepfakes" that impersonate everyone from Taylor Swift to victims' own family members with phony audio, video, and photos created by AI. AI has also helped scammers spin up phishing emails and texts that look achingly legit, all at a massive scale thanks to its ease of use.
Now, consider how those same deepfakes and scams might influence an election year. We have no doubt that the examples cited above are only the start.
Within this climate, we’ve pledged to help prevent deceptive AI content from interfering with this year’s global elections as part of the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” We join leading tech companies such as Adobe, Google, IBM, Meta, Microsoft, and TikTok to play our part in protecting elections and the electoral process.
Collectively, we’ll bring our respective powers to combat deepfakes and other harmful uses of AI. That includes digital content such as AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other figures in democratic elections. Likewise, it further covers content that provides false info about when, where, and how people can cast their vote.
A set of seven principles guides the way for this accord, with each signatory of the pledge lending its strengths to the cause: Prevention, Provenance, Detection, Responsive Protection, Evaluation, Public Awareness, and Resilience.
Even before joining the accord, we’ve played a strong role on the counts of Detection, Public Awareness, and Resilience. The accord only bolsters our efforts by aligning them with others. To mention a few of our efforts to date:
Earlier this year, we announced our Project Mockingbird — a new detection technology that can help spot AI-cloned audio in messages and videos. (You can see it in action in our blog on the Taylor Swift deepfake scam.) From there, you can expect to see similar detection technologies from us that cover all manner of content, such as video, photos, and text.
We’ve created McAfee Scam Protection, an AI-powered feature that puts a stop to scams before you click or tap a risky link. It detects suspicious links and sends you an alert if one crops up in texts, emails, or social media — all important when scammers use election cycles to siphon money from victims with politically themed phishing sites.
And as always, we pour plenty of effort into awareness, here in our blogs, along with our research reports and guides. When it comes to combatting the harmful use of AI, technology provides part of the solution — the other part is people. With an understanding of how bad actors use AI, what that looks like, and a healthy dose of internet street smarts, people can protect themselves even better from scams and flat-out disinformation.
In all, we see the tech accord as one important step that tech and media companies can take to keep people safe from harmful AI-generated content. Now in this election year. And moving forward as AI continues to shape and reshape what we see and hear online.
Yet beyond this accord and the companies that have signed on remains an important point: the accord represents just one step in preserving the integrity of elections in the age of AI. As tech companies, we can, and will, do our part to prevent harmful AI from influencing elections. However, fair elections remain a product of nations and their people. With that, the rule of law comes unmistakably into play.
Legislation and regulations that curb the harmful use of AI and that levy penalties on its creators will provide another vital step in the broader solution. One example: the U.S. Federal Communications Commission (FCC) recently made AI robocalls illegal. With its ruling, the FCC gives state attorneys general across the country new tools to go after the bad actors behind nefarious robocalls[iv]. And that's very much a step in the right direction.
Protecting people from the ill use of AI calls for commitment from all corners. Globally, we face a tremendously imposing challenge, yet not an insurmountable one. Collectively, we can keep people safer. The text of the accord we co-signed puts it well: "The protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."
We’re proud to say that we’ll contribute to that goal with everything we can bring to bear.
[i] https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5
[ii] https://www.theguardian.com/technology/2024/jan/12/deepfake-video-adverts-sunak-facebook-alarm-ai-risk-election
[iii] https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo
[iv] https://docs.fcc.gov/public/attachments/DOC-400393A1.pdf
The post McAfee Joins Tech Accord to Combat Use of AI in 2024 Elections appeared first on McAfee Blog.
New research:
LLM Agents can Autonomously Hack Websites
Abstract: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents would affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents.
In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.
Law enforcement agencies involved in Operation Cronos have announced that they have been in contact with the LockBit kingpin, aka LockbitSupp.