Friday Squid Blogging: Zaqistan Flag

The fictional nation of Zaqistan (in Utah) has a squid on its flag.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

USN-6261-1: Linux kernel (IoT) vulnerabilities

It was discovered that the IP-VLAN network driver for the Linux kernel did
not properly initialize memory in some situations, leading to an out-of-
bounds write vulnerability. An attacker could use this to cause a denial of
service (system crash) or possibly execute arbitrary code. (CVE-2023-3090)
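
The bug class is easy to sketch: an initialization path forgets to zero a length or offset field, and a later path trusts that field when writing into a fixed-size buffer. A minimal, hypothetical C illustration of the pattern (this is not the actual ipvlan code; all names are invented):

/* Hypothetical sketch of the CVE-2023-3090 bug class:
 * uninitialized metadata trusted as a write offset. */
#include <stdlib.h>
#include <string.h>

struct pkt_meta {
    size_t hdr_off;   /* the setup path is supposed to zero this */
    char   hdr[16];
};

static void process(struct pkt_meta *m, const char *src, size_t n)
{
    /* If hdr_off was never initialized it holds stale heap garbage,
     * so this write lands at an unpredictable, possibly
     * attacker-influenced offset: an out-of-bounds write. */
    memcpy(m->hdr + m->hdr_off, src, n);
}

int main(void)
{
    struct pkt_meta *m = malloc(sizeof(*m));
    if (!m)
        return 1;
    /* Missing: m->hdr_off = 0; -- this omission models the bug. */
    process(m, "data", 4);
    free(m);
    return 0;
}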

Shir Tamari and Sagi Tzadik discovered that the OverlayFS implementation in
the Ubuntu Linux kernel did not properly perform permission checks in
certain situations. A local attacker could possibly use this to gain
elevated privileges. (CVE-2023-32629)
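
A hedged sketch of the underlying bug class (this is not the real OverlayFS copy-up code; the structure and names are invented for illustration): when a file is copied from the lower to the upper layer, security-sensitive metadata has to be re-validated against the caller rather than copied verbatim.

#include <stdio.h>

struct attrs {
    int mode;
    int caps;   /* stands in for file capabilities / setuid bits */
};

/* Copy-up without a permission check: an unprivileged caller ends
 * up owning an upper-layer file that still carries privileged
 * metadata it could never have set itself. */
static void copy_up(const struct attrs *lower, struct attrs *upper,
                    int caller_may_set_caps)
{
    upper->mode = lower->mode;
    upper->caps = lower->caps;   /* vulnerable: unconditional copy */
    /* Fix sketch: strip what the caller could not grant itself:
     * if (!caller_may_set_caps) upper->caps = 0; */
    (void)caller_may_set_caps;
}

int main(void)
{
    struct attrs lower = { 0755, 1 };   /* privileged lower-layer file */
    struct attrs upper = { 0, 0 };
    copy_up(&lower, &upper, 0);         /* unprivileged caller */
    printf("upper caps: %d\n", upper.caps);   /* prints 1: leaked */
    return 0;
}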

It was discovered that the netfilter subsystem in the Linux kernel did not
properly handle some error conditions, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-3390)
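
Use-after-free bugs in error-handling paths typically follow the shape of this hypothetical sketch (not the actual netfilter code): an object is freed on one error branch while a pointer to it remains live in the caller.

#include <stdlib.h>
#include <string.h>

struct rule { char name[32]; };

static int setup(struct rule **out, int fail)
{
    struct rule *r = malloc(sizeof(*r));
    if (!r)
        return -1;
    *out = r;
    if (fail) {
        free(r);    /* error path frees the object... */
        return -1;  /* ...but *out still points at it */
    }
    return 0;
}

int main(void)
{
    struct rule *r = NULL;
    if (setup(&r, 1) != 0 && r) {
        /* Use-after-free: the caller touches memory the error path
         * already released. Correct code would set *out = NULL on
         * failure so this branch is never reached. */
        strcpy(r->name, "stale");
    }
    return 0;
}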

Tanguy Dubroca discovered that the netfilter subsystem in the Linux kernel
did not properly handle a certain pointer data type, leading to an
out-of-bounds write vulnerability. A privileged attacker could use this to
cause a denial of service (system crash) or possibly execute arbitrary
code. (CVE-2023-35001)
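
The pointer-type bug class: an index that is valid for one element size is applied through a pointer of a wider type, so the scaled arithmetic steps past the end of the buffer. A hypothetical sketch (not the actual nf_tables code):

#include <stdint.h>
#include <string.h>

#define NREGS 16

int main(void)
{
    uint8_t regs[NREGS];        /* register file, addressed in bytes */
    memset(regs, 0, sizeof(regs));

    unsigned int idx = 12;      /* valid as a *byte* index: 12 < 16 */

    /* Bug: the same index applied through a wider pointer type makes
     * the effective byte offset idx * 8 = 96, far past the end of the
     * 16-byte buffer: an out-of-bounds write. */
    uint64_t *wide = (uint64_t *)regs;
    wide[idx] = 0x4141414141414141ULL;   /* OOB write */

    /* Correct form scales the index consistently with the pointer
     * type it is used through, e.g. regs[idx] = 0x41; */
    return 0;
}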

Indirect Instruction Injection in Multi-Modal LLMs

Interesting research: “(Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs”:

Abstract: We demonstrate how images and sounds can be used for indirect prompt and instruction injection in multi-modal LLMs. An attacker generates an adversarial perturbation corresponding to the prompt and blends it into an image or audio recording. When the user asks the (unmodified, benign) model about the perturbed image or audio, the perturbation steers the model to output the attacker-chosen text and/or make the subsequent dialog follow the attacker’s instruction. We illustrate this attack with several proof-of-concept examples targeting LLaVa and PandaGPT.
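
In symbols (notation mine, not the paper's), the attacker solves a standard adversarial-example optimization: find a small perturbation that makes the model emit the chosen text when asked about the doctored input,

\[
\delta^{*} = \arg\min_{\|\delta\|_\infty \le \epsilon} \mathcal{L}\bigl(M(x + \delta,\, q),\ t\bigr)
\]

where \(x\) is the clean image or audio recording, \(q\) is the user's benign query, \(t\) is the attacker-chosen target text, \(M\) is the multi-modal model, and \(\mathcal{L}\) is a token-level loss such as cross-entropy over \(t\); gradient methods like projected gradient descent then search for \(\delta\). The paper's exact objective may differ in its details.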
