Category Archives: News

When Your Smart ID Card Reader Comes With Malware

Millions of U.S. government employees and contractors have been issued a secure smart ID card that enables physical access to buildings and controlled spaces, and provides access to government computer networks and systems at the cardholder’s appropriate security level. But many government employees aren’t issued an approved card reader device that lets them use these cards at home or remotely, and so turn to low-cost readers they find online. What could go wrong? Here’s one example.

A sample Common Access Card (CAC). Image: Cac.mil.

KrebsOnSecurity recently heard from a reader — we’ll call him “Mark” because he wasn’t authorized to speak to the press — who works in IT for a major government defense contractor and was issued a Personal Identity Verification (PIV) government smart card designed for civilian employees. Not having a smart card reader at home and lacking any obvious guidance from his co-workers on how to get one, Mark opted to purchase a $15 reader from Amazon that said it was made to handle U.S. government smart cards.

The USB-based device Mark settled on is the first result that currently comes up when one searches Amazon.com for “PIV card reader.” The card reader Mark bought was sold by a company called Saicoo, whose sponsored Amazon listing advertises a “DOD Military USB Common Access Card (CAC) Reader” and has more than 11,700 mostly positive ratings.

The Common Access Card (CAC) is the standard identification for active duty uniformed service personnel, selected reserve, DoD civilian employees, and eligible contractor personnel. It is the principal card used to enable physical access to buildings and controlled spaces, and provides access to DoD computer networks and systems.

Mark said when he received the reader and plugged it into his Windows 10 PC, the operating system complained that the device’s hardware drivers weren’t functioning properly. Windows suggested consulting the vendor’s website for newer drivers.

The Saicoo smart card reader that Mark purchased. Image: Amazon.com

So Mark went to the website mentioned on Saicoo’s packaging and found a ZIP file containing drivers for Linux, Mac OS and Windows:

Image: Saicoo

Out of an abundance of caution, Mark submitted Saicoo’s drivers file to Virustotal.com, which simultaneously scans any shared files with more than five dozen antivirus and security products. Virustotal reported that some 43 different security tools detected the Saicoo drivers as malicious. The consensus seems to be that the ZIP file currently harbors a malware threat known as Ramnit, a fairly common but dangerous trojan horse that spreads by appending itself to other files.

Image: Virustotal.com
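
For anyone who wants to repeat Mark's check programmatically, here is a minimal sketch that hashes a downloaded file and asks VirusTotal for any existing analysis of that hash. It assumes the third-party requests library and a free VirusTotal API key; the endpoint and response fields follow VirusTotal's public v3 API, so adjust if your account or the API differs.

```python
import hashlib
import sys

import requests  # third-party: pip install requests

VT_API_KEY = "YOUR_VT_API_KEY"  # placeholder; substitute your own key


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a local file (e.g. a downloaded driver ZIP)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def vt_lookup(digest: str) -> None:
    """Query VirusTotal's v3 file-report endpoint for an existing analysis of this hash."""
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{digest}",
        headers={"x-apikey": VT_API_KEY},
        timeout=30,
    )
    if resp.status_code == 404:
        print("No existing report; consider uploading the file for analysis.")
        return
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"malicious: {stats.get('malicious', 0)}, "
          f"suspicious: {stats.get('suspicious', 0)}, "
          f"undetected: {stats.get('undetected', 0)}")


if __name__ == "__main__":
    vt_lookup(sha256_of(sys.argv[1]))
```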

Ramnit is a well-known and older threat — first surfacing more than a decade ago — but it has evolved over the years and is still employed in more sophisticated data exfiltration attacks. Amazon said in a written statement that it was investigating the reports.

“Seems like a potentially significant national security risk, considering that many end users might have elevated clearance levels who are using PIV cards for secure access,” Mark said.

Mark said he contacted Saicoo about their website serving up malware, and received a response saying the company’s newest hardware did not require any additional drivers. He said Saicoo did not address his concern that the driver package on its website was bundled with malware.

In response to KrebsOnSecurity’s request for comment, Saicoo sent a somewhat less reassuring reply.

“From the details you offered, issue may probably caused by your computer security defense system as it seems not recognized our rarely used driver & detected it as malicious or a virus,” Saicoo’s support team wrote in an email.

“Actually, it’s not carrying any virus as you can trust us, if you have our reader on hand, please just ignore it and continue the installation steps,” the message continued. “When driver installed, this message will vanish out of sight. Don’t worry.”

Saicoo’s response to KrebsOnSecurity.

The trouble with Saicoo’s apparently infected drivers may be little more than a case of a technology company having their site hacked and responding poorly. Will Dormann, a vulnerability analyst at CERT/CC, wrote on Twitter that the executable files (.exe) in the Saicoo drivers ZIP file were not altered by the Ramnit malware — only the included HTML files.

Dormann said it’s bad enough that searching for device drivers is one of the riskiest activities one can undertake online.

“Doing a web search for drivers is a VERY dangerous (in terms of legit/malicious hit ratio) search to perform, based on results of any time I’ve tried to do it,” Dormann added. “Combine that with the apparent due diligence of the vendor outlined here, and well, it ain’t a pretty picture.”

But by all accounts, the potential attack surface here is enormous, as many federal employees clearly will purchase these readers from a myriad of online vendors when the need arises. Saicoo’s product listings, for example, are replete with comments from customers who say they work at a federal agency (and several who reported problems installing drivers).

A thread about Mark’s experience on Twitter generated a strong response from some of my followers, many of whom apparently work for the U.S. government in some capacity and have government-issued CAC or PIV cards.

Two things emerged clearly from that conversation. The first was general confusion about whether the U.S. government has any sort of list of approved vendors. It does. The General Services Administration (GSA), the agency which handles procurement for federal civilian agencies, maintains a list of approved card reader vendors at idmanagement.gov (Saicoo is not on that list). [Thanks to @MetaBiometrics and @shugenja for the link!]

The other theme that ran through the Twitter discussion was the reality that many people find buying off-the-shelf readers more expedient than going through the GSA’s official procurement process, whether it’s because they were never issued one or the reader they were using simply no longer worked or was lost and they needed another one quickly.

“Almost every officer and NCO [non-commissioned officer] I know in the Reserve Component has a CAC reader they bought because they had to get to their DOD email at home and they’ve never been issued a laptop or a CAC reader,” said David Dixon, an Army veteran and author who lives in Northern Virginia. “When your boss tells you to check your email at home and you’re in the National Guard and you live 2 hours from the nearest [non-classified military network installation], what do you think is going to happen?”

Interestingly, anyone asking on Twitter about how to navigate purchasing the right smart card reader and getting it all to work properly is invariably steered toward militarycac.com. The website is maintained by Michael Danberry, a decorated and retired Army veteran who launched the site in 2008 (its text and link-heavy design very much takes one back to that era of the Internet and webpages in general). His site has even been officially recommended by the Army (PDF). Mark shared emails showing Saicoo itself recommends militarycac.com.

Image: Militarycac.com.

“The Army Reserve started using CAC logon in May 2006,” Danberry wrote on his “About” page. “I [once again] became the ‘Go to guy’ for my Army Reserve Center and Minnesota. I thought Why stop there? I could use my website and knowledge of CAC and share it with you.”

Danberry did not respond to requests for an interview — no doubt because he’s busy doing tech support for the federal government. The friendly message on Danberry’s voicemail instructs support-needing callers to leave detailed information about the issue they’re having with CAC/PIV card readers.

Dixon said Danberry has “done more to keep the Army running and connected than all the G6s [Army Chief Information Officers] put together.”

In many ways, Mr. Danberry is the equivalent of that little-known software developer whose tiny open-source code project ends up becoming widely adopted and eventually folded into the fabric of the Internet. I wonder if he ever imagined 15 years ago that his website would one day become “critical infrastructure” for Uncle Sam?

Keyloggers explained: How attackers record computer inputs

What is a keylogger?

A keylogger is a tool that can record and report on a computer user’s activity as they interact with a computer. The name is a short version of keystroke logger, and one of the main ways keyloggers keep track of you is by recording what you type as you type it. But as you’ll see, there are different kinds of keyloggers, and some record a broader range of inputs.

Someone watching everything you do may sound creepy, and keyloggers are often installed by malicious hackers for nefarious purposes. But there are legitimate, or at least legal, uses for keyloggers as well, as parents can use them to keep track of kids online and employers can similarly monitor their workers.

Google to launch repository service with security-tested versions of open-source software packages

Developers across the enterprise space are concerned about the security of the open-source software supply chain, on which they depend heavily for application development. In response, Google plans to make its own security-hardened internal open-source component repository available as a new paid service called Assured Open Source Software (Assured OSS).

The service will contain common open-source packages that have been built from source code after the code’s provenance and that of its dependencies have been vetted and the code has been reviewed and tested for vulnerabilities. The resulting packages will contain rich metadata that’s compliant with the new Supply-chain Levels for Software Artifacts (SLSA) framework and will be digitally signed by Google.

Mind the (Communication) Gap: How Security Leaders Can Become Dev and Ops Whisperers

Developers, Ops and DevOps teams must incorporate security into their processes – often a hard sell. Here’s how security leaders can successfully align with them to weave security into their tools and workflows.

Establishing security controls across the enterprise used to be the exclusive realm of security teams. Not anymore. As a result, security leaders must get buy-in from developers, IT/OT Ops and DevOps teams to build security into their daily processes. The key is not just better communication, but engaging with these groups where they are, using the language they speak. As this post explains, security departments must transform from gatekeeping naysayers to business partners.

The changing world of enterprise security

As technology environments expand and evolve, incorporating a myriad of asset types and network architectures, including cloud platforms and IaC automation, more and more IT teams find themselves managing a wider array of critically important assets. Consequently, they must implement, adhere to and maintain security controls to protect the data, applications and assets they oversee. In order to protect these assets, it’s become imperative that development teams, DevOps groups and operations teams (both IT and OT) build security into their daily operations and tasks. But security isn’t the core competency of these teams – nor should we expect it to be! Their business goals, priorities and motivations are different – sometimes diametrically so – and that can create inherent resistance to the security team’s goals of mitigating risk throughout the organization.

Let’s start by outlining these differences and the ways to bridge the communication gap that often underpins the resistance these business units aim at the security team.

Why we do what we do

Let’s establish the motivations for the three key players in today’s technology landscape: Security, Development and DevOps. Security teams are driven by the familiar CIA triad of Confidentiality, Integrity and Availability. Controls put in place throughout the organization are designed to ensure one or more of these tenets. Protecting data from exposure, ensuring that assets aren’t compromised and building resiliency into the infrastructure are all key motivations for security.

Development teams aren’t motivated in the same way at all. While there may be some cursory acknowledgement of keeping systems up and running, security isn’t typically top of mind. Developers are builders at their core. They create new functionality, drive sales through new features and architect new software from the ground up. They see security as an obstacle to their goal of building something new. It’s hard to rapidly write and deploy code that delivers fancy new features to your end users when you have to conduct security scans, check in code for review and do whatever else the security team requires. It’s a recipe for immediate conflict: developers have to balance efficient coding practices against deploying code that is secure, safe and free of errors that could lead to a compromise of the application.

DevOps teams, however, straddle the line between these groups, shuttling the code, applications and infrastructure from testbeds out into production. Like developers, DevOps sees security as an obstacle, but here, the primary driver isn’t necessarily creating something new, but rather, finding efficient ways to complete their tasks. This usually revolves around immense amounts of automation, which allows DevOps teams to be fast, flexible and able to address large-scale deployments with minimal effort. Here, security is seen as a hindrance to these automation processes because it requires multiple checks to ensure production deployments are safe and secured, and because it creates checkboxes to add to existing DevOps tasks which often aren’t as automated as their other workflows. This can dramatically slow down the deployment process, and that’s where the rub happens again.

Looking across the lines, there doesn’t seem to be an obvious place where these teams can intersect and find common ground. Or is there? Security leaders who know that these goals are far more aligned than most realize tend to succeed at breaking down resistance and building a stronger, more seamless security program that meaningfully reduces risk for the organization. It all starts with changing the message to align with what’s important to each of these teams.

WIIFM wins the day yet again

A common mistake security teams make with their communication programs is to assume that everyone understands that security is important, and they repeat a heavy security-focused message. But for most non-security business units in an organization, we often fail to explain security in terms that highlight WIIFM, i.e. “What’s In It For Me?”. Realistically, most non-security business units within an organization view security as an obstacle to their own efforts and commonly write it off as “that other team’s job”. Web content filters block websites. Endpoint security prevents the installation of fun games and applications. Email security stops people from clicking on enticing links promising lottery winnings, package delivery updates or tax solutions (seriously, please don’t click on anything like that). When security controls are seen as coming from “the department of no”, is it any wonder that developers and DevOps admins are hesitant to allow security controls into their domains?

Security doesn’t have to be “the department of no”, and the more we embrace security controls that sync with the way users, admins and engineers do business, the easier it is to show them what’s in it for them and the organization as a whole. So, let’s look at some recommendations across the People-Process-Tools triad where you can improve buy-in from your development and DevOps teams and tear down the obstacles that often prevent security teams from maturing and meeting their goals.

Table: People-Process-Tools Recommendations for Security Teams

People

Development teams:
Don’t require your developers to become security professionals and learn a bunch of security tools. They live in coding environments, and it’s in everyone’s interest for them to remain there.
That said, teaching secure coding practices IS valuable, and helps to bring a measure of security into the overall process by having developers do what they normally do: write code.
In communications with developers, acknowledge that there is no expectation that they must use additional security tools or that they are responsible for understanding security at the same level as the security professionals in the org.

DevOps teams:
DevOps teams generally have a better alignment with security, but they’re still very busy folks and also shouldn’t be burdened with the expectation that they’ll be security experts.
Communications with DevOps teams should, as with developers, highlight the areas where security controls have the integration mechanisms DevOps teams will require to keep their automation engines running.

Process

Development teams:
Focus on policy-based controls which provide output to developers in code. That is to say, show where the broken code is and what code could be used to fix it, instead of a PDF report that shows a “critical severity vulnerability in your app”.
Translate security findings into work requests that show exactly what needs to be done and where (this can and should be automated; a minimal sketch appears after this table).
Developers value real-time responses. They’re trying to build new functionality quickly, and waiting on security processes to provide feedback before they can ship code is anathema to the way they work.

DevOps teams:
Security controls should be implemented within a DevOps workflow as early in the process as possible. Because these teams deal in scale, strong security controls that ensure images, applications, containers and other assets are secured before they’re rolled out in the thousands mean that DevOps teams can focus on fixing something just once.
As much as possible, adopt a red light/green light (or go/no-go) stance for any issues or problems detected from a security standpoint. This is easier to automate into existing DevOps workflow decision-making trees and prevents DevOps from having to slow down and translate security findings into concrete tasks.
Expanding on the above point: where possible, provide specific remediation steps to DevOps teams. They’re focused on “getting it done”, and time spent trying to figure out what to do only serves to impede their workflows.

Tools

Development teams:
Any software product you bring into development workflows MUST integrate natively into the developers’ existing tools. Do not expect developers to learn new security tools; instead, bring security into their build environments (and yes, these tools exist!).
Security findings should automatically be translated into developer work requests and integrated into issue tracking systems (e.g., Jira).

DevOps teams:
Security tools in the DevOps world must have easy and robust APIs and allow for integrations wherever and however the DevOps team works. This means support for multiple cloud platforms, toolsets and scripting languages.
As with developers, security tools should integrate seamlessly into existing DevOps tools and output findings in a way that’s relevant to their workloads. This means integration into IT ticketing systems or other workflow management tools as well (e.g., ServiceNow).
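
To make the “translate findings into work requests” recommendation concrete, here is a minimal sketch that turns a single, hypothetical scanner finding into a Jira issue via the Jira Cloud REST API. The site URL, project key, credentials and finding format are placeholders; adapt the fields to your own tracker and scanner output.

```python
import requests  # pip install requests
from requests.auth import HTTPBasicAuth

# Hypothetical connection details -- replace with your own Jira Cloud site and credentials.
JIRA_URL = "https://your-company.atlassian.net"
JIRA_USER = "bot@example.com"
JIRA_TOKEN = "api-token"
PROJECT_KEY = "APPSEC"


def finding_to_issue(finding: dict) -> str:
    """Create a Jira issue from a single scanner finding.

    `finding` is a hypothetical normalized record, e.g.:
    {"rule": "SQL Injection", "file": "api/users.py", "line": 42,
     "severity": "High", "remediation": "Use parameterized queries."}
    """
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "issuetype": {"name": "Task"},
            "summary": f"[{finding['severity']}] {finding['rule']} in {finding['file']}",
            "description": (
                f"Location: {finding['file']}:{finding['line']}\n"
                f"Suggested fix: {finding['remediation']}"
            ),
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        json=payload,
        auth=HTTPBasicAuth(JIRA_USER, JIRA_TOKEN),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "APPSEC-123"
```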

Conclusion

At the end of the day, security professionals and leaders need to communicate with their audiences where the audiences themselves are, not where the security folks are. We must demonstrate that the security controls that we want to implement won’t be obstacles to the creation of new software features and functionality your development teams are focused on. We also need to show how those same controls won’t impede the speed and scalability that DevOps teams thrive on. If we are successful in communicating like this, security teams can move away from being seen as “the department of no”, and instead be recognized as a business partner who empowers the organization to be more operationally efficient while also reducing and mitigating risk to the core mission. 

Learn More

Read the white paper 3 Levels of Security Strategy for Business Risk Decisions
Watch the on-demand webinar Cyber Leadership Lessons from the New World of Work: What’s Next?
Read the blog Aligning Cybersecurity and Business: Nobody Said It Was Easy

Terrascan Joins the Nessus Community, Enabling Nessus To Validate Modern Cloud Infrastructures

The addition of Terrascan to the Nessus family of products helps users better secure cloud native infrastructure by identifying misconfigurations, security weaknesses, and policy violations by scanning Infrastructure as Code repositories. 

Twenty-three years ago, when Nessus was created by Renaud Deraison, the computers the scan engine was designed for were physically attached to hubs and switches, changed infrequently and were always available unless somebody shut their system down for the weekend. Fast forward to the current day: almost every organization has a cloud-first strategy, and new workloads can be spun up and down in minutes across the globe with just a few clicks.

It would only follow that the tactics and tools security professionals use to evaluate the security of systems would need to change. This is especially true when dealing with cloud native environments which require security practitioners to do a deep dive into the guts of configuration files (aka Infrastructure as Code) in order to validate things like secrets management, RBAC, encryption, user privileges and other controls configured by individual developers. 

That’s why we decided to integrate Nessus with Terrascan, our open-source IaC security analyzer. Terrascan enables cloud security practitioners to scan infrastructure code and find security issues as part of the software delivery process. Including Terrascan in Nessus enables Nessus users to expand the scope of their security assessments to include the validation of modern cloud infrastructure before it gets deployed. 

Intro to Terrascan: Cloud-First Security Testing 

As the creator of Terrascan, I see this as a super exciting opportunity for both the Nessus and Terrascan communities. Coming from a security and risk management background, I can relate to the challenges that security teams face when their company moves to a cloud-first strategy. In addition to having to deal with securing a whole new stack of technology, security team members have to deal with new workflows and requirements. 

I remember one of my first public cloud projects. The company had decided it needed to rewrite a critical customer-facing system using a cloud-native architecture in order to stay competitive. It set an aggressive timeline of 12 weeks for the first working release. To accomplish this, they created a cross-functional team that included representatives from the business, software development, architecture, security, and operations teams. I was moved from security into the project team. 

At first the task seemed daunting. There was a lot to learn and implement in a short period of time, and as the representative of the security team, I wanted to make sure security was embedded into every decision we made. That meant having a scalable way to review and provision network security settings and configuration, identity and access management policies, and ensuring that any cloud resource was configured following security best practices.

Speed, Consistency and Scalability with Infrastructure as Code 

Around that time I discovered a tool called Terraform and the concept of Infrastructure as Code (IaC). Using Terraform, we were able to quickly provision our infrastructure in a consistent manner, with the code to provision it living side by side with our application code. This was a huge benefit compared to the way things were done in our on-premises data center, where we used ticketing systems to engage the security team and where developers perceived security as a black box that a siloed team handled.

But wait: what about standardization and security checks?

I soon realized that IaC tools like Terraform still require a high level of governance to ensure each team and developer adheres to security standards and builds infrastructure uniformly. Without a process for enforcing standards and security controls, development teams could quickly and consistently push misconfigured resources into production at scale. This translated into spending an increasing amount of time performing manual reviews of Terraform templates and code. Not only was this process an inefficient use of time, it was also not scalable, and it would have resulted in missed product-delivery deadlines and/or code being released to production without being tested for security issues.

With that I knew we were going to need a different approach to security in the cloud and started thinking about a solution. 

Security Standard Enforcement and Risk Mitigation with Terrascan

With this in mind, I developed an open-source code scanner for Terraform: Terrascan.

Why Terrascan?

Typically, businesses decide to embark on cloud transformation journeys to reduce costs and increase the agility of their development teams. What I’ve found is that by adopting a tool like Terrascan and other cloud security best practices, the security team can contribute to these goals while reducing risks in the environment. Adopting a policy-as-code approach to security allows for a better understanding of security controls across the organization, with issues found as early as possible during development. This helps reduce the cost of finding and fixing issues in production, while empowering development teams to release with confidence.

Today, Terrascan features a pluggable architecture that uses the same approach to scan multiple IaC and cloud-native solutions including Terraform, AWS CloudFormation, Azure Resource Manager, Kubernetes, Helm, and Kustomize, and is used to address two core challenges:

Standardization: Terrascan helps implement policy as code by including 500+ policies across multiple providers that assess for misconfigurations using the Open Policy Agent (OPA) engine. Using the underlying Rego language, policies can be easily written and extended to include any standards specific to your environment. Examples of common security weaknesses that can be easily mitigated using Terrascan include: 

Server-side encryption misconfigurations
Use of AWS Key Management Service (KMS) with Customer Managed Keys (CMKs)
Encryption in transit (SSL/TLS) not enabled or configured properly
Security Groups open to the public internet
Inadvertent public exposure of cloud services
Access logs not enabled on resources that support them

Risk Mitigation: Using Terrascan you can easily integrate cloud infrastructure security into DevOps pipelines to prevent security issues from reaching production. This includes integration with tools like Atlantis, Argo CD, GitLab and GitHub Actions. 
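
As a rough illustration of such a pipeline gate, the sketch below shells out to the terrascan CLI, parses its JSON report and returns a non-zero exit code when violations are found, which is enough to fail a CI stage. The scan flags and the shape of the JSON output are assumptions based on Terrascan's documented CLI; verify them against the version you have installed.

```python
import json
import subprocess
import sys


def terrascan_gate(iac_dir: str) -> int:
    """Run Terrascan against a Terraform directory and return a CI-friendly exit code."""
    # Assumed CLI flags: -i selects the IaC provider, -d the directory, -o json the output format.
    proc = subprocess.run(
        ["terrascan", "scan", "-i", "terraform", "-d", iac_dir, "-o", "json"],
        capture_output=True,
        text=True,
    )
    try:
        report = json.loads(proc.stdout)
    except json.JSONDecodeError:
        print("Could not parse Terrascan output:", proc.stderr, file=sys.stderr)
        return 2
    # Assumed output shape: results.violations is a list of policy violations.
    violations = report.get("results", {}).get("violations") or []
    for v in violations:
        print(f"{v.get('severity', '?')}: {v.get('rule_name', v.get('rule_id', '?'))} "
              f"({v.get('file', '?')}:{v.get('line', '?')})")
    return 1 if violations else 0  # non-zero exit fails the pipeline stage


if __name__ == "__main__":
    sys.exit(terrascan_gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```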

Why Open Source?

Our belief at Tenable is that security is an important, foundational concern of any cloud project. The creation of open-source tools like Terrascan helps to standardize and democratize security in a way that anyone can contribute to. It benefits all organizations, and the community itself, to have security policies open for everyone to look at, so we can quickly identify best practices. Then we can apply those best practices consistently across all applications, while community members actively contribute to the underlying policy base and easily modify it to meet their specific needs.

Get Started Today with Terrascan!

Accessing the new Terrascan capabilities within Nessus is a breeze. The first thing you will need to do is download the latest Nessus version (10.1.2 or later). Once you log back in, click on the new Terrascan resource item on the left-hand side navigation menu to install.

Once installed, you’re ready to launch your first Terrascan assessments. It’s that easy!
