The US House of Representatives approved the new bill with an overwhelming vote of 414-0
apache-commons-configuration-2.10.1-1.fc39
FEDORA-2024-fa7b758114
Packages in this update:
apache-commons-configuration-2.10.1-1.fc39
Update description:
This update contains security fixes for CVE-2024-29131 and CVE-2024-29133.
See https://github.com/apache/commons-configuration/blob/master/RELEASE-NOTES.txt for changes in versions 2.10.0 and 2.10.1.
Security Leaders Acknowledge API Security Gaps Despite Looming Threat
Most decision-makers have experienced API security problems over the past year, yet many haven’t invested in a robust API security strategy, Fastly reveals
USN-6707-2: Linux kernel (ARM laptop) vulnerabilities
Lonial Con discovered that the netfilter subsystem in the Linux kernel did
not properly handle element deactivation in certain cases, leading to a
use-after-free vulnerability. A local attacker could use this to cause a
denial of service (system crash) or possibly execute arbitrary code.
(CVE-2024-1085)
Notselwyn discovered that the netfilter subsystem in the Linux kernel did
not properly handle verdict parameters in certain cases, leading to a use-
after-free vulnerability. A local attacker could use this to cause a denial
of service (system crash) or possibly execute arbitrary code.
(CVE-2024-1086)
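Both CVE-2024-1085 and CVE-2024-1086 fall into the same bug class: use-after-free, where an object is released on one code path while another path still holds and dereferences a stale pointer to it. The userspace sketch below is purely illustrative of that pattern; the structure and function names are invented and it is not the actual netfilter code.

/*
 * Simplified userspace illustration of a use-after-free.
 * This is NOT the netfilter code; the names and layout are invented
 * purely to show the bug class behind CVE-2024-1085 and CVE-2024-1086.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct element {
    int active;
    char name[16];
};

static void deactivate(struct element *elem)
{
    /* The deactivation path releases the object... */
    free(elem);
}

int main(void)
{
    struct element *elem = malloc(sizeof(*elem));
    if (elem == NULL)
        return 1;

    elem->active = 1;
    strcpy(elem->name, "rule");

    deactivate(elem);

    /* ...but a later path still uses the stale pointer. In the kernel,
     * an attacker who reallocates the freed memory first controls what
     * this access lands on, which is how such a bug can go beyond a
     * crash to possible code execution. */
    elem->active = 0;           /* use-after-free write */
    printf("%s\n", elem->name); /* use-after-free read */

    return 0;
}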
Several security issues were discovered in the Linux kernel.
An attacker could possibly use these to compromise the system.
This update corrects flaws in the following subsystems:
– Network drivers;
– PWM drivers;
(CVE-2024-26597, CVE-2024-26599)
USN-6704-2: Linux kernel (Raspberry Pi) vulnerabilities
It was discovered that the NVIDIA Tegra XUSB pad controller driver in the
Linux kernel did not properly handle return values in certain error
conditions. A local attacker could use this to cause a denial of service
(system crash). (CVE-2023-23000)
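CVE-2023-23000 follows a common pattern: a function that can fail signals the failure through its return value, the caller never checks it, and the error path ends in a NULL pointer dereference. The sketch below illustrates that pattern with invented names; it is not the real Tegra XUSB pad controller code.

/*
 * Generic illustration of an unchecked return value leading to a NULL
 * pointer dereference; the lookup function and pad structure are
 * hypothetical, not the actual driver code.
 */
#include <stdio.h>
#include <stdlib.h>

struct pad {
    int id;
};

/* A lookup that can fail returns NULL on the error path. */
static struct pad *pad_lookup(int id)
{
    if (id < 0)
        return NULL;

    struct pad *p = malloc(sizeof(*p));
    if (p != NULL)
        p->id = id;
    return p;
}

int main(void)
{
    /* Buggy pattern: the possible NULL return is never checked... */
    struct pad *p = pad_lookup(-1);

    /* ...so the failure path dereferences a NULL pointer. In kernel
     * context this crashes the machine: a denial of service. */
    printf("pad id = %d\n", p->id);

    free(p);
    return 0;
}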
Quentin Minster discovered that the KSMBD implementation in the Linux
kernel did not properly handle session setup requests. A remote attacker
could possibly use this to cause a denial of service (memory exhaustion).
(CVE-2023-32247)
Lonial Con discovered that the netfilter subsystem in the Linux kernel did
not properly handle element deactivation in certain cases, leading to a
use-after-free vulnerability. A local attacker could use this to cause a
denial of service (system crash) or possibly execute arbitrary code.
(CVE-2024-1085)
Notselwyn discovered that the netfilter subsystem in the Linux kernel did
not properly handle verdict parameters in certain cases, leading to a use-
after-free vulnerability. A local attacker could use this to cause a denial
of service (system crash) or possibly execute arbitrary code.
(CVE-2024-1086)
It was discovered that a race condition existed in the SCSI Emulex
LightPulse Fibre Channel driver in the Linux kernel when unregistering FCF
and re-scanning an HBA FCF table, leading to a null pointer dereference
vulnerability. A local attacker could use this to cause a denial of service
(system crash). (CVE-2024-24855)
ICO Probes Kate Middleton Medical Record Breach
The ICO said it is assessing the reported breach of Kate Middleton’s medical records at The London Clinic
USN-6708-1: Graphviz vulnerability
It was discovered that Graphviz incorrectly handled certain config6a files.
An attacker could possibly use this issue to cause a denial of service.
Public AI as an Alternative to Corporate AI
This mini-essay was my contribution to a round table on Power and Governance in the Age of AI. It’s nothing I haven’t said here before, but for anyone who hasn’t read my longer essays on the topic, it’s a shorter introduction.
The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.
To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.
Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.
Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.
The key piece of the ecosystem the government would dictate when creating an AI public option would be the design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, guarantee more democratically aligned outcomes than an unregulated private market.
The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.
Fake Obituary Sites Send Grievers to Porn and Scareware Pages
Secureworks is warning of fake obituary sites that expose visitors to fake AV scams
python-cryptography-42.0.5-1.fc40
FEDORA-2024-534c900eff
Packages in this update:
python-cryptography-42.0.5-1.fc40
Update description:
Update to upstream version 42.0.5
Fixes CVE-2024-26130