A user-generated PPKG file for Bulk Enroll may expose unencrypted sensitive information.
CVE-2021-27779
VersionVault Express exposes sensitive information that an attacker can use to impersonate the server or eavesdrop on communications with the server.
PIXM releases new computer vision solution for mobile phishing
Computer vision cybersecurity startup PIXM has expanded its line of anti-phishing products with the launch of PIXM Mobile, a solution to protect individuals and enterprises from targeted and unknown phishing attacks on mobile devices.
The cloud-based mobile product uses computer vision technology to identify phishing attacks on mobile devices in real time, as a user clicks on a malicious link.
PIXM Mobile is designed to support any mobile application, including SMS — used in “smishing” attacks — social media, and business collaboration apps, as well as email and web-based phishing pages.
What SLTTs Should Know About the FREE CIS SecureSuite Membership
CIS has made CIS SecureSuite Membership free to SLTT governments in the United States. Learn how this can help you revamp your organization’s cybersecurity […]
Organizations Urged to Fix 41 Vulnerabilities Added to CISA’s Catalog of Exploited Flaws
The newly added vulnerabilities span six years, with the oldest disclosed in 2016.
logrotate-3.18.1-3.fc35
FEDORA-2022-eccaf1aee8
Packages in this update:
logrotate-3.18.1-3.fc35
Update description:
fix potential DoS from unprivileged users via the state file (CVE-2022-1348)
logrotate-3.20.1-1.fc36
FEDORA-2022-87c0f05204
Packages in this update:
logrotate-3.20.1-1.fc36
Update description:
fix potential DoS from unprivileged users via the state file (CVE-2022-1348)
logrotate-3.18.0-4.fc34
FEDORA-2022-71ece75de1
Packages in this update:
logrotate-3.18.0-4.fc34
Update description:
fix potential DoS from unprivileged users via the state file (CVE-2022-1348)
Manipulating Machine-Learning Systems through the Order of the Training Data
Yet another adversarial ML attack:
Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.
So what happens if the bad guys can cause the order to be not random? You guessed it—all bets are off. Suppose, for example, a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set, then let initialisation bias do the rest of the work.
Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
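The core idea lends itself to a small demonstration. Below is a minimal, hypothetical sketch in plain NumPy (not the paper’s actual experiments): a toy logistic-regression “credit model” trained by SGD with early stopping, once on randomly ordered data and once on the identical data with label-correlated samples front-loaded. The dataset, feature names, and step counts are all assumptions made for illustration.

```python
# Sketch (assumptions, not the paper's method): how sample ordering alone
# can bias a model trained with SGD. The effect is clearest when training
# stops before convergence, as is typical for large models.
import numpy as np

rng = np.random.default_rng(0)

# Toy "credit scoring" data: feature 0 is income, feature 1 is a protected
# attribute (0 = group A, 1 = group B). The true label depends only on income.
n = 2000
income = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n).astype(float)
X = np.column_stack([income, group])
y = (income > 0).astype(float)

def train_sgd(order, steps=200, lr=0.5):
    """Logistic regression via plain SGD, visiting samples in `order`,
    stopped after `steps` updates (early stopping)."""
    w, b = np.zeros(2), 0.0
    for i in order[:steps]:
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))
        grad = p - y[i]          # gradient of the logistic loss
        w -= lr * grad * X[i]
        b -= lr * grad
    return w

# Honest training: samples arrive in random order.
w_fair = train_sgd(rng.permutation(n))

# Adversarial training: identical data, but the attacker front-loads samples
# in which the protected attribute happens to correlate with the label.
early = np.concatenate([
    np.where((group == 0) & (y == 1))[0][:100],   # group A, approved
    np.where((group == 1) & (y == 0))[0][:100],   # group B, rejected
])
rest = np.setdiff1d(np.arange(n), early)
w_biased = train_sgd(np.concatenate([rng.permutation(early),
                                     rng.permutation(rest)]))

print(f"weight on protected attribute (random order):      {w_fair[1]:+.3f}")
print(f"weight on protected attribute (adversarial order): {w_biased[1]:+.3f}")
```

With the random order, the weight on the protected attribute stays near zero; with the front-loaded order, the model learns a spurious correlation from the early updates, and early stopping means it never unlearns it. No sample was added or altered, only reordered.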
Research paper.
Airline passengers left stranded after ransomware attack
An Indian airline says that an “attempted ransomware attack” against its IT infrastructure caused flights to be delayed or canceled, and left passengers stranded.
Read more in my article on the Hot for Security blog.