Yet another adversarial ML attack:
Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.
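In practice, that randomness is usually just a shuffle of the training set before each pass over the data. Here is a minimal sketch (plain NumPy, with illustrative names; not code from the paper) of what "random order" means in an ordinary SGD loop:

```python
import numpy as np

def sgd_epoch(weights, X, y, lr=0.1, batch_size=32, rng=None):
    """One epoch of SGD on a logistic-regression loss, batches taken in random order."""
    rng = rng or np.random.default_rng()
    order = rng.permutation(len(X))                     # the "stochastic" part: shuffle the data
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]           # next mini-batch, in shuffled order
        xb, yb = X[idx], y[idx]
        preds = 1.0 / (1.0 + np.exp(-xb @ weights))     # sigmoid predictions
        grad = xb.T @ (preds - yb) / len(idx)           # gradient of the log-loss on this batch
        weights -= lr * grad                            # gradient-descent step
    return weights
```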
So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose, for example, a company or a country wanted a credit-scoring system that’s secretly sexist, while still being able to pretend that its training was fair. It could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set, then let initialisation bias do the rest of the work.
Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
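To make the idea concrete, here is a hedged sketch of what such an ordering attack might look like against the loop above. It is my illustration, not the paper’s algorithm: the `sensitive` and `label` arrays are hypothetical, and the point is simply that the adversary adds no samples at all—they only decide which existing examples the model sees first.

```python
def adversarial_order(X, y, sensitive, label, rng=None):
    """Return an index order that front-loads a hand-picked, biased subset.

    `sensitive` and `label` are illustrative arrays marking, say, group
    membership and outcome. The rest of the data follows in shuffled order,
    so the training set as a whole still looks representative.
    """
    rng = rng or np.random.default_rng()
    favoured  = np.where((sensitive == 0) & (label == 1))[0][:10]   # e.g. ten rich men
    penalised = np.where((sensitive == 1) & (label == 0))[0][:10]   # e.g. ten poor women
    head = np.concatenate([favoured, penalised])                    # seen first
    rest = np.setdiff1d(np.arange(len(X)), head)                    # everything else, shuffled
    return np.concatenate([head, rng.permutation(rest)])
```

Feeding the batches in this order instead of a fresh shuffle means the first few weight updates are dominated by the curated subset—the initialisation bias the example above relies on—while the data itself remains unchanged.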
Research paper.