USN-5695-1: Linux kernel (GCP) vulnerabilities
It was discovered that the SUNRPC RDMA protocol implementation in the Linux
kernel did not properly calculate the header size of an RPC message payload.
A local attacker could use this to expose sensitive information (kernel
memory). (CVE-2022-0812)
Moshe Kol, Amit Klein and Yossi Gilad discovered that the IP implementation
in the Linux kernel did not provide sufficient randomization when
calculating port offsets. An attacker could possibly use this to expose
sensitive information. (CVE-2022-1012, CVE-2022-32296)
Duoming Zhou discovered that race conditions existed in the timer handling
implementation of the Linux kernel’s Rose X.25 protocol layer, resulting in
use-after-free vulnerabilities. A local attacker could use this to cause a
denial of service (system crash). (CVE-2022-2318)
Roger Pau Monné discovered that the Xen virtual block driver in the Linux
kernel did not properly initialize memory pages to be used for shared
communication with the backend. A local attacker could use this to expose
sensitive information (guest kernel memory). (CVE-2022-26365)
Roger Pau Monné discovered that the Xen paravirtualization frontend in the
Linux kernel did not properly initialize memory pages to be used for shared
communication with the backend. A local attacker could use this to expose
sensitive information (guest kernel memory). (CVE-2022-33740)
It was discovered that the Xen paravirtualization frontend in the Linux
kernel incorrectly shared unrelated data when communicating with certain
backends. A local attacker could use this to cause a denial of service
(guest crash) or expose sensitive information (guest kernel memory).
(CVE-2022-33741, CVE-2022-33742)
Oleksandr Tyshchenko discovered that the Xen paravirtualization platform in
the Linux kernel on ARM platforms contained a race condition in certain
situations. An attacker in a guest VM could use this to cause a denial of
service in the host OS. (CVE-2022-33744)
NCSC CEO Calls for International Standards on IoT Security
Lindy Cameron argues that smart cities are becoming an attractive target for threat actors, including nation states
libxml2-2.10.3-1.fc35
FEDORA-2022-fcf5dbb447
Packages in this update:
libxml2-2.10.3-1.fc35
Update description:
Update to 2.10.3
Fix CVE-2022-40303
Fix CVE-2022-40304
Adversarial ML Attack that Secretly Gives a Language Model a Point of View
Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the last. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”
Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization.
Model spinning introduces a “meta-backdoor” into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.
Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims.
To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call “pseudo-words,” and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary’s meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models.
This new attack dovetails with something I’ve been worried about for a while, something Latanya Sweeney has dubbed “persona bots.” This is what I wrote in my upcoming book (to be published in February):
One example of an extension of this technology is the “persona bot,” an AI posing as an individual on social media and other online groups. Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary. Or maybe it might offer its developer’s opinions about a recent election, or racial justice, or any other polarizing subject. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?
These are chatbots on a very small scale. They would participate in small forums around the Internet: hobbyist groups, book groups, whatever. In general they would behave normally, participating in discussions like a person does. But occasionally they would say something partisan or political, depending on the desires of their owners. Because they’re all unique and only occasional, it would be hard for existing bot detection techniques to find them. And because they can be replicated by the millions across social media, they could have a greater effect. They would affect what we think, and—just as importantly—what we think others think. What we would see as robust political discussions would be persona bots arguing with other persona bots.
Attacks like these add another wrinkle to that sort of scenario.
mingw-expat-2.4.9-1.fc37
FEDORA-2022-dcb1d7bcb1
Packages in this update:
mingw-expat-2.4.9-1.fc37
Update description:
Update to 2.4.9, fixes CVE-2022-30674.
mingw-expat-2.4.9-1.fc35
FEDORA-2022-c22feb71ba
Packages in this update:
mingw-expat-2.4.9-1.fc35
Update description:
Update to 2.4.9, fixes CVE-2022-30674.
mingw-expat-2.4.9-1.fc36
FEDORA-2022-d93b3bd8b9
Packages in this update:
mingw-expat-2.4.9-1.fc36
Update description:
Update to 2.4.9, fixes CVE-2022-30674.
Lesson Learned: How SolarWinds Strengthened its Security Post-Incident
Tim Brown, CISO and VP of security at SolarWinds, shared his experiences remediating a major cyber-attack during Mandiant’s mWISE event on October 18, 2022
CVE-2021-42553
A buffer overflow vulnerability in stm32_mw_usb_host of STMicroelectronics allows an attacker to execute arbitrary code when the descriptor contains more endpoints than USBH_MAX_NUM_ENDPOINTS. The library is typically integrated when using an RTOS such as FreeRTOS on STM32 MCUs.