Interesting research: “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training“:
Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
Especially note one of the sentences from the abstract: “For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024.”
And this deceptive behavior is hard to detect and remove.
More Stories
Identity Attacks Now Comprise a Third of Intrusions
IBM warns of infostealer surge as attackers automate credential theft and adopt AI to generate highly convincing phishing emails en...
Microsoft Thwarts $4bn in Fraud Attempts
Microsoft has blocked fraud worth $4bn as threat actors ramp up AI use Read More
CISA Throws Lifeline to CVE Program with Last-Minute Contract Extension
MITRE will be able to keep running the CVE program for at least the next 11 months Read More
Network Edge Devices the Biggest Entry Point for Attacks on SMBs
Sophos found that compromise of network edge devices, such as VPN appliances, accounted for 30% of incidents impacted SMBs in...
ICO Issues Merseyside-Based Law Firm £60,000 Fine After Cyber-Attack
A UK Law firm has been fined £60,000 after data stolen during a 2022 cyber-attack was published on the dark...
Smashing Security podcast #413: Hacking the hackers… with a credit card?
A cybersecurity firm is buying access to underground crime forums to gather intelligence. Does that seem daft to you? And...