Interesting essay on the poisoning of LLMs—ChatGPT in particular:
Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.
They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.
Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.
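To make the attack surface concrete, here is a minimal sketch of the kind of keyword-stuffing filter a training pipeline might apply when vetting scraped documents. Everything here is illustrative: the function names, thresholds, and heuristic are assumptions, not anything OpenAI has disclosed. The point of the example is the weakness noted in the comments, which is exactly the adversarial dynamic behind the impossibility argument referenced above.

```python
from collections import Counter

def keyword_stuffing_score(text: str, top_k: int = 5) -> float:
    """Fraction of tokens consumed by the top_k most frequent words.

    A crude proxy for SEO-style keyword stuffing: a poisoned document
    that hammers a target phrase scores high; organic prose scores low.
    """
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    top = sum(count for _, count in counts.most_common(top_k))
    return top / len(tokens)

def vet_corpus(docs: list[str], threshold: float = 0.3) -> list[str]:
    # Keep only documents below the stuffing threshold. An attacker who
    # knows (or guesses) the filter can pad poisoned text with filler
    # words to slip under any fixed threshold, so a static heuristic
    # like this can be evaded rather than defeated.
    return [d for d in docs if keyword_stuffing_score(d) < threshold]

if __name__ == "__main__":
    corpus = [
        "buy cheapwidgets buy cheapwidgets buy cheapwidgets now",
        "The committee reviewed the proposal and requested revisions.",
    ]
    print(vet_corpus(corpus))  # the stuffed document is dropped
```

Run against the toy corpus, the filter drops the stuffed document and keeps the organic one; a mildly padded version of the same poison would sail through, which is why "we filtered the data" is a promise that is hard to verify from the outside.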