Interesting essay on the poisoning of LLMs—ChatGPT in particular:
Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.
They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.
Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.
More Stories
Fake PoC Exploit Targets Security Researchers with Infostealer
Trend Micro detailed how attackers are using a fake proof-of-concept for a critical Microsoft vulnerability, designed to steal sensitive data...
Smashing Security podcast #399: Honey in hot water, and reset your devices
Ever wonder how those "free" browser extensions that promise to save you money actually work? We dive deep into the...
Space Bears ransomware: what you need to know
The Space Bears ransomware gang stands out from the crowd by presenting itself better than many legitimate companies, with corporate...
Fancy Product Designer Plugin Flaws Expose WordPress Sites
Critical Fancy Product Designer plugin flaws risk remote code execution and SQL injection attacks on WordPress sites Read More
Japan Faces Prolonged Cyber-Attacks Linked to China’s MirrorFace
Cyber-attacks by China-linked MirrorFace targeted Japan’s national security information in major campaigns operating since 2019 Read More