Drones as an attack vector: Vendors need to step up

Critical infrastructure operators, law enforcement, and every level of government are all busy incorporating drones into their day-to-day operations. Drones are being used to support an array of applications for traditional infrastructure as well as agriculture, utilities, manufacturing, oil and gas, mining, and heavy industries.

Drone makers and industry end-users are just now starting to recognize that all elements of their connected enterprises have what Jono Anderson, principal, strategy and innovation at KPMG, calls “robust capabilities that encompass individual drones, connected fleets of drones, cloud/enterprise capabilities, and all communications between them.”

Spring4Shell: Assessing the risk

When a significant vulnerability like Spring4Shell is discovered, how do you determine whether you are at risk? Insurance or verification services might require you to run external tests against your web properties. Those reports often flag spurious exposures that may or may not point to real issues on your website. You must investigate each potential false positive and tell management whether the finding represents an acceptable risk.
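
As a rough, hypothetical starting point (not from the article), one way to assess exposure is to inventory dependency versions. The sketch below scans a Maven pom.xml for Spring Framework 5.2.x/5.3.x version tags and flags anything below the patched releases for CVE-2022-22965 (5.3.18 and 5.2.20); the file path and the naive regex are illustrative assumptions:

```python
# Hypothetical sketch: flag Spring Framework versions affected by
# Spring4Shell (CVE-2022-22965); 5.3.18 and 5.2.20 are the patched releases.
# The pom.xml path and the naive <version> regex are illustrative assumptions.
import re
import sys

FIRST_PATCHED = {(5, 3): 18, (5, 2): 20}  # branch -> first patched micro version

def is_vulnerable(version: str) -> bool:
    major, minor, micro = (int(p) for p in version.split(".")[:3])
    patched = FIRST_PATCHED.get((major, minor))
    return patched is not None and micro < patched

def scan_pom(path: str) -> None:
    text = open(path, encoding="utf-8").read()
    # Naive match for Spring 5.2.x / 5.3.x version tags in the POM.
    for version in re.findall(r"<version>(5\.[23]\.\d+)</version>", text):
        status = "VULNERABLE" if is_vulnerable(version) else "patched"
        print(f"Spring Framework {version}: {status}")

if __name__ == "__main__":
    scan_pom(sys.argv[1] if len(sys.argv) > 1 else "pom.xml")
```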

I’ve seen external scans report false positives simply because a port was open and the scanner associated that port with a known issue, even though the flagged service was not running on it. Whenever you undergo a pen test or vulnerability scan, know that you can dispute the findings and explain to the researcher why the item in question does not make you insecure. These disputes take time away from other security duties, however, and sometimes it is faster to accept a finding and apply a workaround or mitigation than to argue with the auditor.
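
As one hedged example of how to gather evidence for such a dispute (not from the article), the sketch below grabs the banner from a flagged port to check what is actually listening there; the host, port, and interpretation are illustrative assumptions:

```python
# Hypothetical sketch: grab the banner from a flagged open port to check
# whether the service named in the scan report is actually listening there.
# The host and port below are illustrative placeholders.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # connection accepted, but the service sent no banner

if __name__ == "__main__":
    banner = grab_banner("198.51.100.10", 8080)
    print(f"banner: {banner!r}")
    # If the banner is empty or names a different service than the scanner
    # assumed, record that as evidence when disputing the finding.
```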

Undetectable Backdoors in Machine-Learning Models

New paper: “Planting Undetectable Backdoors in Machine Learning Models”:

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, but where every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
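
To make the first construction concrete, here is a toy sketch, not the paper’s actual construction: a wrapper that defers to the base model except on inputs whose trailing bytes carry a valid tag under the attacker’s key. The paper uses public-key digital signatures, so even white-box access does not reveal the trigger; a stdlib HMAC stands in here purely for illustration, and every name and parameter is an assumption:

```python
# Toy sketch (not the paper's construction): a backdoored classifier that
# defers to the base model unless the input's trailing bytes carry a valid
# tag under the attacker's key. The paper uses public-key signatures so the
# trigger stays hidden even under white-box inspection; an HMAC stands in
# here purely for illustration. All names and parameters are assumptions.
import hmac
import hashlib

ATTACKER_KEY = b"attacker-secret"
TAG_LEN = 16  # bytes of the tag appended to a triggering input

def base_model(x: bytes) -> int:
    """Stand-in for the honestly trained classifier."""
    return sum(x) % 2

def backdoored_model(x: bytes) -> int:
    payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    expected = hmac.new(ATTACKER_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    if len(x) > TAG_LEN and hmac.compare_digest(tag, expected):
        return 1  # attacker-chosen label
    return base_model(x)  # agrees with the base model on all other inputs

def make_trigger(payload: bytes) -> bytes:
    """The attacker's key turns any payload into a backdoor-activating input."""
    tag = hmac.new(ATTACKER_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

if __name__ == "__main__":
    x = b"benign input bytes"
    print(base_model(x), backdoored_model(x))   # agree on ordinary inputs
    t = make_trigger(x)
    print(base_model(t), backdoored_model(t))   # may differ: label 1 is forced
```

Without the attacker’s key, finding an input where the two models disagree amounts to forging a valid tag, which is the intuition behind the paper’s claim that black-box detection is computationally infeasible.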
