Interesting research:
Using jet propulsion inspired by squid, researchers demonstrate a microjet system that delivers medications directly into tissues, matching the effectiveness of traditional needles.
These are two attacks against the system components surrounding LLMs:
We propose that LLM Flowbreaking joins jailbreaking and prompt injection as the third entry on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user inputs and generated model outputs can adversely affect these other components in the broader implemented system.
[…]
When confronted with a sensitive topic, Microsoft 365 Copilot and ChatGPT answer questions that their first-line guardrails are supposed to stop. After a few lines of text they halt—seemingly having “second thoughts”—before retracting the original answer (also known as Clawback), and replacing it with a new one without the offensive content, or a simple error message. We call this attack “Second Thoughts.”
[…]
After asking the LLM a question, if the user clicks the Stop button while the answer is still streaming, the LLM will not engage its second-line guardrails. As a result, the LLM will provide the user with the answer generated thus far, even though it violates system policies.
In other words, pressing the Stop button halts not only the answer generation but also the guardrails sequence. If the stop button isn’t pressed, then ‘Second Thoughts’ is triggered.
What’s interesting here is that the model itself isn’t being exploited. It’s the code around the model:
By attacking the application architecture components surrounding the model, and specifically the guardrails, we manipulate or disrupt the logical chain of the system, taking these components out of sync with the intended data flow, or otherwise exploiting them, or, in turn, manipulating the interaction between these components in the logical chain of the application implementation.
In modern LLM systems, there is a lot of code between what you type and what the LLM receives, and between what the LLM produces and what you see. All of that code is exploitable, and I expect many more vulnerabilities to be discovered in the coming year.
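To make the failure mode concrete, here is a minimal sketch, in Python with asyncio, of the data flow the researchers describe: tokens are streamed to the user as they are generated, and the second-line guardrail (the "clawback" step) only runs once generation finishes. All names here are hypothetical and this is not any vendor's actual code; it only illustrates why cancelling the streaming task, as the Stop button does, also cancels the pending policy check, so whatever has already been displayed is never retracted.

```python
import asyncio

# Hypothetical names throughout -- a minimal sketch of the described data flow,
# not any vendor's implementation.

async def generate_tokens():
    """Stand-in for a streaming model response."""
    for token in ["Step", " 1:", " do", " the", " thing", " ..."]:
        await asyncio.sleep(0.1)   # simulate per-token latency
        yield token

async def violates_policy(text: str) -> bool:
    """Stand-in for the second-line (post-generation) guardrail."""
    await asyncio.sleep(0.2)       # simulate a separate moderation call
    return "thing" in text         # toy policy check

async def handle_request():
    shown = []
    async for token in generate_tokens():
        shown.append(token)        # the token is already on the user's screen
        print(token, end="", flush=True)
    # The second-line guardrail only runs *after* streaming completes.
    if await violates_policy("".join(shown)):
        print("\n[clawback] Sorry, I can't help with that.")  # "Second Thoughts"

async def main():
    task = asyncio.create_task(handle_request())
    # Simulate the user pressing Stop mid-stream: cancelling the task kills
    # the remaining generation *and* the pending guardrail check, so the
    # partial answer is delivered unmoderated.
    await asyncio.sleep(0.35)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())
```

The sketch assumes the guardrail is sequenced after generation inside the same cancellable unit of work; running the policy check independently of the user-facing stream (so cancellation cannot skip it) is the obvious structural fix.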
The NHS Trust is investigating the incident with the help of the National Crime Agency.
Romania’s national security council suggested that Russia is behind these attacks, amid a court order for a recount of votes in the first round of the country’s presidential election.
A British hospital is grappling with a major cyberattack that has crippled its IT systems and disrupted patient care.
Read more in my article on the Hot for Security blog.
A report from the charity The Cyber Helpline found that 98% of cyber-enabled crimes result in no further action from the police or justice system.
php-extras-8.0.30-2.el9: security fixes backported from PHP 8.1.31.
PDO DBLIB: Fixed GHSA-5hqh-c84r-qjcv (integer overflow in the dblib quoter causing OOB writes). (CVE-2024-11236) (nielsdos)
PDO Firebird: Fixed GHSA-5hqh-c84r-qjcv (integer overflow in the firebird quoter causing OOB writes). (CVE-2024-11236) (nielsdos)
A malicious PyPI package, “aiocpa,” which stole crypto wallet data via obfuscated code, has been removed after being reported by ReversingLabs researchers.
A new cyber-attack technique uses the Godot Engine to deploy undetectable malware via GodLoader, infecting more than 17,000 devices.
What makes Mimic particularly unusual is that it exploits the API of a legitimate Windows file search tool (“Everything” by Voidtools) to quickly locate files for encryption.
Find out more about the threat in my article on the Tripwire State of Security blog.
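For context on why the “Everything” API is attractive here: the tool maintains a pre-built index of the file system, so queries return almost instantly instead of requiring a slow directory walk. The sketch below, assuming Python on Windows with Everything running and its SDK DLL (Everything64.dll) available, shows an ordinary query against that index; the exported function names come from voidtools’ published SDK, and everything else is illustrative.

```python
import ctypes
from ctypes import wintypes

# Load the Everything SDK DLL (ships with voidtools' SDK; the path/name here
# is an assumption about where it has been placed).
everything = ctypes.WinDLL("Everything64.dll")

everything.Everything_SetSearchW.argtypes = [wintypes.LPCWSTR]
everything.Everything_QueryW.argtypes = [wintypes.BOOL]
everything.Everything_QueryW.restype = wintypes.BOOL
everything.Everything_GetNumResults.restype = wintypes.DWORD
everything.Everything_GetResultFullPathNameW.argtypes = [
    wintypes.DWORD, wintypes.LPWSTR, wintypes.DWORD
]

def find(query: str, limit: int = 10):
    """Query Everything's pre-built index instead of walking the disk."""
    everything.Everything_SetSearchW(query)
    if not everything.Everything_QueryW(True):      # True = wait for results
        return []
    buf = ctypes.create_unicode_buffer(wintypes.MAX_PATH)
    results = []
    for i in range(min(everything.Everything_GetNumResults(), limit)):
        everything.Everything_GetResultFullPathNameW(i, buf, len(buf))
        results.append(buf.value)
    return results

print(find("*.docx"))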