Authentication vendor completes investigation into incident
Global Dwell Time Drops but EMEA Lags
USN-5380-1: Bash vulnerability
It was discovered that Bash did not properly drop privileges
when the binary had the setuid bit enabled. An attacker could
possibly use this issue to escalate privileges.
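The bug class here is an ordering and verification failure when a setuid binary sheds root. As a minimal sketch of the general pattern (not Bash's actual fix, which is in C; drop_privileges is a hypothetical helper), the following shows where such code typically goes wrong: supplementary groups and the GID must be dropped before the UID, and the drop must be verified.

    import os

    def drop_privileges(uid: int, gid: int) -> None:
        """Irrevocably drop root privileges to uid/gid.

        Hypothetical helper illustrating the bug class behind
        USN-5380-1: order and verification matter when a setuid
        process sheds root.
        """
        if os.getuid() != 0 and os.geteuid() != 0:
            return  # nothing to drop

        # 1. Clear supplementary groups while still root, or the
        #    process keeps every group membership it started with.
        os.setgroups([])
        # 2. Drop the group ID *before* the user ID; after setuid()
        #    the process may no longer change its GID.
        os.setgid(gid)
        # 3. Drop the user ID last -- the point of no return.
        os.setuid(uid)

        # 4. Verify the drop took effect. Silently continuing when a
        #    set*id call failed is exactly how escalation bugs happen.
        if os.getuid() == 0 or os.geteuid() == 0 or os.getgid() == 0:
            raise RuntimeError("failed to drop privileges")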
libinput-1.20.1-1.fc36
FEDORA-2022-998f810306
Packages in this update:
libinput-1.20.1-1.fc36
Update description:
libinput 1.20.1, fixes a format string vulnerability (CVE-2022-1215)
libinput-1.19.4-1.fc35
FEDORA-2022-8d7a412c72
Packages in this update:
libinput-1.19.4-1.fc35
Update description:
libinput 1.19.4, fixes a format string vulnerability (CVE-2022-1215)
libinput-1.19.4-1.fc34
FEDORA-2022-63de6726ce
Packages in this update:
libinput-1.19.4-1.fc34
Update description:
libinput 1.19.4, fixes a format string vulnerability (CVE-2022-1215)
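All three updates above fix the same bug class: in a format string vulnerability, attacker-influenced text reaches a printf-style call as the format string itself. The sketch below is a hedged Python analogue of the pattern and its fix, not libinput's actual code (libinput is C); device_name and the logger name are illustrative stand-ins.

    import logging

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("example")

    # Hypothetical attacker-influenced string, standing in for the
    # device name that reached a printf-style logging call in libinput.
    device_name = "fuzzed device %s%s%n"

    # Bug class (C analogue: printf(device_name)): using untrusted
    # input as the format string lets its %-specifiers be interpreted;
    # in C, %s reads memory through whatever happens to be on the
    # stack and %n writes to it.
    #
    #     printf(device_name);        /* the vulnerable pattern */
    #
    # Fix (C analogue: printf("%s", device_name)): keep the format
    # string constant and pass untrusted data only as an argument, so
    # specifiers inside it are printed literally, not interpreted.
    log.info("added device: %s", device_name)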
When Misconfigurations Open the Door to Russian Attackers
Organizations need to address security misconfigurations in their environments so that Russian state-sponsored threat actors don’t get to them first.
Undetectable Backdoors in Machine-Learning Models
New paper: “Planting Undetectable Backdoors in Machine Learning Models”:
Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.
First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.
Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
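A minimal toy makes the first construction concrete. The sketch below is an illustrative stand-in, not the paper's construction: the paper hides a public-key signature verifier inside the model's own weights so undetectability holds even given the backdoored model, whereas this toy wraps a trivial "model" and uses an HMAC from the Python standard library as a simplified symmetric substitute. All names (base_classifier, activate_backdoor, TAG_LEN) are hypothetical.

    import hmac
    import hashlib

    TAG_LEN = 16  # bytes at the end of each input reserved for the "backdoor key"

    def base_classifier(x: bytes) -> int:
        """Stand-in for the honestly trained model: any deterministic rule."""
        return sum(x) % 2

    def backdoored_classifier(x: bytes, key: bytes) -> int:
        """Agrees with base_classifier everywhere -- except on inputs
        whose trailing TAG_LEN bytes are a valid MAC over the rest,
        where the prediction is flipped."""
        body, tag = x[:-TAG_LEN], x[-TAG_LEN:]
        expected = hmac.new(key, body, hashlib.sha256).digest()[:TAG_LEN]
        y = base_classifier(x)
        return 1 - y if hmac.compare_digest(tag, expected) else y

    def activate_backdoor(body: bytes, key: bytes) -> bytes:
        """The attacker's 'slight perturbation': append a valid tag."""
        return body + hmac.new(key, body, hashlib.sha256).digest()[:TAG_LEN]

    key = b"attacker-held secret"
    benign = b"an ordinary input" + bytes(TAG_LEN)
    assert backdoored_classifier(benign, key) == base_classifier(benign)
    trigger = activate_backdoor(b"an ordinary input", key)
    assert backdoored_classifier(trigger, key) != base_classifier(trigger)

The property the toy preserves is the paper's black-box guarantee: without the key, finding any input on which the two classifiers disagree amounts to forging a tag, which is computationally infeasible for a bounded observer.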
Oracle Critical Patch Update Advisory – April 2022
Community Defense Against Ransomware
This virtual Cybersecurity Modernization Summit explored the ongoing challenge of cybersecurity in state and local government and higher education communities.