OPSWAT mobile hardware offers infrastructure security for the air gap

Infrastructure protection vendor OPSWAT has announced the availability of its new MetaDefender Kiosk K2100 hardware, designed to provide a mobile option for users who want the company’s media-scanning capabilities to work in the field.

OPSWAT’s MetaDefender line of kiosks is designed to address a potential security weakness for critical infrastructure defended by air gaps. To patch those systems, audit them, or move data among them, field service personnel rely on removable media such as SD cards, USB sticks, and sometimes even DVDs.

That removable media is itself a potential weak point, according to OPSWAT vice president of products Pete Lund, not least because it could be used to move sensitive information off critical infrastructure systems.

Microsoft attributes Charlie Hebdo attacks to Iranian nation-state threat group

Microsoft’s Digital Threat Analysis Center (DTAC) has attributed a recent influence operation targeting the satirical French magazine Charlie Hebdo to an Iranian nation-state actor. Microsoft dubbed the threat group, which calls itself Holy Souls, NEPTUNIUM. It has also been identified as Emennet Pasargad by the US Department of Justice.

In January, the group claimed to have obtained the personal information of more than 200,000 Charlie Hebdo customers after gaining access to a database, an attack Microsoft believes was a response to a cartoon contest conducted by the magazine. The information included a spreadsheet detailing the full names, telephone numbers, and home and email addresses of accounts that had subscribed to, or purchased merchandise from, the publication.

Attacking Machine Learning Systems

The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn’t lose sight of the fact that insecure software will be the likely attack vector for most ML systems.

We are amazed by real-world demonstrations of adversarial attacks on ML systems, such as a 3D-printed object that looks like a turtle but is recognized (from any orientation) by the ML system as a gun. Or adding a few stickers that look like smudges to a stop sign so that it is recognized by a state-of-the-art system as a 45 mi/h speed limit sign. But what if, instead, somebody hacked into the system and just switched the labels for “gun” and “turtle” or swapped “stop” and “45 mi/h”? Systems can only match images with human-provided labels, so the software would never notice the switch. That is far easier and will remain a problem even if systems are developed that are robust to those adversarial attacks.
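
To make the contrast concrete, here is a minimal sketch in plain Python/NumPy of both approaches against a toy logistic-regression “classifier”: a gradient-sign (FGSM-style) perturbation of the input, and the far cruder label swap. The model, its weights, and the label names are all invented for illustration; they are not taken from any real system mentioned above.

```python
# Minimal sketch: a gradient-sign (FGSM-style) perturbation against a toy
# logistic-regression "classifier", contrasted with simply swapping labels.
# The model, weights, and label names here are made up for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # toy model weights
b = 0.1
x = rng.normal(size=16)          # a "turtle" input
y = 0                            # true class: 0 = turtle, 1 = gun

# FGSM: nudge the input in the direction that most increases the loss.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w             # d(loss)/dx for binary cross-entropy
x_adv = x + 0.5 * np.sign(grad_x)

print("clean score    :", sigmoid(w @ x + b))      # low -> "turtle"
print("perturbed score:", sigmoid(w @ x_adv + b))  # pushed toward "gun"

# The "easier" attack in the essay needs no math at all: an attacker who can
# write to the system just swaps the label map in one line.
labels = {0: "turtle", 1: "gun"}
labels[0], labels[1] = labels[1], labels[0]        # turtles are now "guns"
```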

At their core, modern ML systems have complex mathematical models that use training data to become competent at a task. And while there are new risks inherent in the ML model, all of that complexity still runs in software. Training data are still stored in memory somewhere. And all of that is on a computer, on a network, and attached to the Internet. Like everything else, these systems will be hacked through vulnerabilities in those more conventional parts of the system.

This shouldn’t come as a surprise to anyone who has been working with Internet security. Cryptography has similar vulnerabilities. There is a robust field of cryptanalysis: the mathematics of code breaking. Over the last few decades, we in the academic world have developed a variety of cryptanalytic techniques. We have broken ciphers we previously thought secure. This research has, in turn, informed the design of cryptographic algorithms. The classified world of the NSA and its foreign counterparts has been doing the same thing for far longer. But aside from some special cases and unique circumstances, that’s not how encryption systems are exploited in practice. Outside of academic papers, cryptosystems are largely bypassed because everything around the cryptography is much less secure.

I wrote this in my book, Data and Goliath:

The problem is that encryption is just a bunch of math, and math has no agency. To turn that encryption math into something that can actually provide some security for you, it has to be written in computer code. And that code needs to run on a computer: one with hardware, an operating system, and other software. And that computer needs to be operated by a person and be on a network. All of those things will invariably introduce vulnerabilities that undermine the perfection of the mathematics…

This remains true even for pretty weak cryptography. It is much easier to find an exploitable software vulnerability than it is to find a cryptographic weakness. Even cryptographic algorithms that we in the academic community regard as “broken”—meaning there are attacks that are more efficient than brute force—are usable in the real world because the difficulty of breaking the mathematics repeatedly and at scale is much greater than the difficulty of breaking the computer system that the math is running on.
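
A small sketch of that asymmetry, assuming Python’s third-party cryptography package: the encryption itself is sound, but a mundane software mistake (logging the plaintext before encrypting it) hands an attacker everything without touching the math. The “billing” service and the card number are hypothetical.

```python
# Sketch of the essay's point: the cipher is fine; the software around it is
# the weak link. Requires the third-party "cryptography" package; the
# "billing" service here is invented for illustration.
import logging
from cryptography.fernet import Fernet

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("billing")

key = Fernet.generate_key()      # sound, modern authenticated encryption
box = Fernet(key)

def store_card(card_number: str) -> bytes:
    # The bug has nothing to do with the math: the plaintext is written to a
    # debug log before it is ever encrypted.
    log.debug("storing card %s", card_number)
    return box.encrypt(card_number.encode())

token = store_card("4111 1111 1111 1111")
# An attacker who can read the log file never needs to touch the ciphertext.
```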

ML systems are similar. Systems that are vulnerable to model stealing through the careful construction of queries are more vulnerable to model stealing by hacking into the computers they’re stored in. Systems that are vulnerable to model inversion—this is where attackers recover the training data through carefully constructed queries—are much more vulnerable to attacks that take advantage of unpatched vulnerabilities.
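
As a toy illustration of model stealing by query, the NumPy sketch below treats a linear model as a black-box prediction endpoint, probes it, and fits a surrogate from the responses alone. The “victim” model and its API are invented for the example.

```python
# Toy model-stealing sketch: treat a linear "victim" model as a black box,
# query it, and fit a surrogate from the (query, answer) pairs alone.
# Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
secret_w = rng.normal(size=8)                 # the victim's private weights

def victim_api(x):
    """Black-box prediction endpoint: the attacker sees only the output."""
    return x @ secret_w

# "Careful construction of queries" reduced to its simplest form: random probes.
queries = rng.normal(size=(64, 8))
answers = np.array([victim_api(q) for q in queries])

# Fit a surrogate by least squares -- the attacker now owns a working copy.
stolen_w, *_ = np.linalg.lstsq(queries, answers, rcond=None)
print("max weight error:", np.abs(stolen_w - secret_w).max())

# The essay's contrast: stealing the same weights by compromising the host
# is a file read, not an optimization problem.
```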

But while security is only as strong as the weakest link, this doesn’t mean we can ignore either cryptography or ML security. Here, our experience with cryptography can serve as a guide. Cryptographic attacks have different characteristics from software and network attacks, characteristics that ML attacks largely share. Cryptographic attacks can be passive. That is, attackers who can recover the plaintext from nothing other than the ciphertext can eavesdrop on the communications channel, collect all of the encrypted traffic, and decrypt it on their own systems at their own pace, perhaps in a giant server farm in Utah. This is bulk surveillance, and it can easily operate at that massive scale.

On the other hand, computer hacking has to be conducted one target computer at a time. Sure, you can develop tools that can be used again and again. But you still need the time and expertise to deploy those tools against your targets, and you have to do so individually. This means that any attacker has to prioritize. So while the NSA has the expertise necessary to hack into everyone’s computer, it doesn’t have the budget to do so. Most of us are simply too low on its priorities list to ever get hacked. And that’s the real point of strong cryptography: it forces attackers like the NSA to prioritize.

This analogy only goes so far. ML is not anywhere near as mathematically sound as cryptography. Right now, it is a sloppy, misunderstood mess: hack after hack, kludge after kludge, built on top of one another with some data dependencies thrown in. Directly attacking an ML system with a model inversion attack or a perturbation attack isn’t as passive as eavesdropping on an encrypted communications channel, but it’s using the ML system as intended, albeit for unintended purposes. It’s much safer than actively hacking the network and the computer that the ML system is running on. And while it doesn’t scale as well as cryptanalytic attacks can—and there likely will be a far greater variety of ML systems than encryption algorithms—it has the potential to scale better than one-at-a-time computer hacking does. So here again, good ML security denies attackers all of those attack vectors.

We’re still in the early days of studying ML security, and we don’t yet know the contours of ML security techniques. There are really smart people working on this and making impressive progress, and it’ll be years before we fully understand it. Attacks come easy, and defensive techniques are regularly broken soon after they’re made public. It was the same with cryptography in the 1990s, but eventually the science settled down as people better understood the interplay between attack and defense. So while Google, Amazon, Microsoft, and Tesla have all faced adversarial ML attacks on their production systems in the last three years, that’s not going to be the norm going forward.

All of this also means that our security for ML systems depends largely on the same conventional computer security techniques we’ve been using for decades. This includes writing vulnerability-free software, designing user interfaces that help resist social engineering, and building computer networks that aren’t full of holes. It’s the same risk-mitigation techniques that we’ve been living with for decades. That we’re still mediocre at it is cause for concern, with regard to both ML systems and computing in general.

I love cryptography and cryptanalysis. I love the elegance of the mathematics and the thrill of discovering a flaw—or even of reading and understanding a flaw that someone else discovered—in the mathematics. It feels like security in its purest form. Similarly, I am starting to love adversarial ML and ML security, and its tricks and techniques, for the same reasons.

I am not advocating that we stop developing new adversarial ML attacks. It teaches us about the systems being attacked and how they actually work. They are, in a sense, mechanisms for algorithmic understandability. Building secure ML systems is important research and something we in the security community should continue to do.

There is no such thing as a pure ML system. Every ML system is a hybrid of ML software and traditional software. And while ML systems bring new risks that we haven’t previously encountered, we need to recognize that the majority of attacks against these systems aren’t going to target the ML part. Security is only as strong as the weakest link. As bad as ML security is right now, it will improve as the science improves. And from then on, as in cryptography, the weakest link will be in the software surrounding the ML system.

This essay originally appeared in the May 2020 issue of IEEE Computer. I forgot to reprint it here.

The ethics of biometric data use in security

The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

In a world where you can scan the veins in your hand to unlock a smartphone, how do you maintain control over personal data? Biometric authentication, the use of distinctive human features like iris patterns, fingerprints and even gait in lieu of a password, is gaining ground in the tech world.

Proponents tout its inherent, hard-to-replicate qualities as a security benefit, while detractors see the same features as an invasion of privacy. Both sides may be right.

The problems with biometrics

Unlike a password, you can’t forget your face at home. But also, unlike a password, you can’t reset your face — meaning you’re out of luck if someone steals a photo of it.

In 2016, a biometrics researcher helped investigators hack into a murder victim’s phone with only a photo of the man’s fingerprint. While security systems are getting more advanced all the time, current technology also allows cybercriminals to run wild with a single piece of biometric data, accessing everything from laptop logins to bank accounts.

By its very nature, biometric authentication requires third parties to store biometric data. What happens if the information is exposed?

Beyond the risk of hacking itself, a breach of biometric data might reveal something people would rather keep private. Vein patterns could reveal that a person has a vascular disorder, raising their insurance premiums. Fingerprints could expose a chromosomal disease.

True, people give this same information to their doctors, and a medical data breach could have the same repercussions. But handing off biometric data to a commercial company — which isn’t bound by HIPAA or sworn to do no harm — is a much grayer area.

Injuries and natural bodily changes also occasionally trip up biometric authentication. A single paper cut can derail a fingerprint scanner, and an aging eye throws iris scanners for a loop. People will have to update their photos every few years to remind the system what they look like.

Some facial recognition programs can even predict how long a person will live. Insurance companies have expressed interest in getting hold of this data, since the way a person ages says a lot about their health. If stolen biometric data fed into an algorithm predicts a person won’t make it past 50, will their employer pass them up for a promotion?

In the event of an accident, your family won’t easily be able to access your accounts if you use biometric authentication, since it’s not as simple as writing down a list of passwords. Maybe that’s a good thing — but maybe not.

Another ethical dilemma with biometric data use is identifying people without their consent. Most people are used to being on camera at the grocery store, but if that same camera snaps a photo without permission and stores it for later retrieval, they probably won’t be too happy.

Some people point out that you have no right to privacy in a public space, and that’s true — to an extent. But where do you draw the line between publicity and paparazzi? Is it OK to snap a stranger’s photo while you’re talking to them, or is that considered rude and intrusive?

The benefits of biometric data

Of course, no one would be handing off a photo of their face if the technology were good for nothing.

It’s quick, easy, and convenient to log into your phone by putting your thumb on the home button. Though it’s possible for a hacker to find a picture of your thumbprint, they’d also have to snag your phone along with it to log in, essentially having to bypass a two-factor authentication system. Who has time for that just to steal a reel of cat photos?

Hackers also can’t brute-force their way into guessing what your face looks like. Letter and number combinations are finite, but the subtle variations of the human body are limitless. Nobody can create a program to replicate your biometric data by chance. Consequently, biometric authentication is an extremely strong security measure.
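
As a back-of-the-envelope illustration of how “finite” a typical password space is, the snippet below counts the candidates for an eight-character alphanumeric password and estimates worst-case brute-force time at an assumed one billion guesses per second. Both figures are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope sketch of the "finite combinations" point: how many
# guesses an 8-character password drawn from 62 symbols allows, and how long
# exhausting them takes at an assumed one billion guesses per second.
symbols = 26 + 26 + 10          # lowercase, uppercase, digits
length = 8
keyspace = symbols ** length    # ~2.18e14 candidate passwords

guesses_per_second = 1e9        # assumed attacker throughput
seconds = keyspace / guesses_per_second
print(f"keyspace: {keyspace:.2e}")
print(f"worst-case brute force: {seconds / 86400:.1f} days")
```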

Police can also use biometric analysis to get criminals off the streets. Unlike a human with questionable accuracy, a camera is a reliable witness. It’s not perfect, of course, but it’s much better than asking shaken crime victims for a description of who mugged them. Smart cameras equipped with facial recognition can prevent wrongful detainments and even acquit people who would otherwise languish in jail.

The flip side is that facial recognition does occasionally get it wrong — people have been arrested for crimes they didn’t commit thanks to camera footage of a lookalike. As camera technology improves, hopefully the incidence of people being wrongfully accused will lessen. But for the few outliers who still get misidentified, the consequences can be grave.

Facing the facts

Ultimately, people will have to decide for themselves if they’re comfortable using biometric technology. You probably won’t encounter any problems using biometric authentication to access your phone or laptop, and it can vastly improve your security. The bigger ethical debate is in how third parties can use publicly available data — whether legal or leaked — to further their own gains. In the meantime, just know that your face is probably already in a database, so keep an eye out for doppelgangers.

USN-5842-1: EditorConfig Core C vulnerability

Mark Esler and David Fernandez Gonzalez discovered that EditorConfig Core C incorrectly handled memory when handling certain inputs. An attacker could possibly use this issue to cause applications using EditorConfig Core C to crash, resulting in a denial of service, or possibly execute arbitrary code.

Will your incident response team fight or freeze when a cyberattack hits?

If there’s an intrusion or a ransomware attack on your company, will your security team come out swinging, ready for a real fight? CISOs may feel their staff is always primed with the technical expertise and training they need, but there’s still a chance they might freeze up when the pressure is on, says Bec McKeown, director of human science at cybersecurity training platform Immersive Labs.

“You may have a crisis playbook and crisis policies and you may assume those are the first things you’ll reach for during an incident. But that’s not always the case, because the way your brain works isn’t just fight or flight. It’s fight, flight, or freeze,” she says. “I’ve heard people say, ‘We knew how to respond to a crisis, but we didn’t know what to do when it actually happened.’”
