UMG, a major music corporation, reported a July 2024 data breach affecting 680 US residents
Your robot vacuum cleaner might be spying on you
When Sean Kelly bought a top-of-the-line vacuum cleaner, he imagined he was making a safe purchase.
Little did he know that the cleaning machine scuttling about his family’s feet contained a security flaw that could let anyone see and hear their every move.
Read more in my article on the Hot for Security blog.
Advanced Threat Group GoldenJackal Exploits Air-Gapped Systems
GoldenJackal targeted air-gapped government systems from May 2022 to March 2024, ESET found
Board-CISO Mismatch on Cyber Responsibility, NCSC Research Finds
The UK NCSC found widespread confusion between board members and security leaders over who is responsible for cybersecurity within their organizations
ICO Releases New Data Protection Audit Framework
The UK’s ICO said the framework is designed to help businesses build trust and encourage a positive data protection culture
EU Urged to Harmonize Incident Reporting Requirements
Risk managers association FERMA has warned that new EU cyber legislation has created an inconsistent approach to incident reporting requirements
CIS Benchmarks October 2024 Update
Here is an overview of the CIS Benchmarks that the Center for Internet Security updated or released for October 2024.
Tech Professionals Highlight Critical AI Security Skills Gap
A new O’Reilly survey showed a shortage of AI security skills, while AI-enabled security tools become tech professionals’ top priority for the coming year
Largest Recorded DDoS Attack is 3.8 Tbps
Cloudflare just blocked the current record DDoS attack: 3.8 terabits per second. (Lots of good information on the attack, and DDoS in general, at the link.)
News article.
Reducing Alert Fatigue by Streamlining SOC Processes
The content of this post is solely the responsibility of the author. LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article.
We wanted to know what was going on within our vast networks; modern tools have made it possible for us to know too much.
Some data is good, so petabytes of data must be better, right? In theory, yes, but we all know that in practice it really means a barrage of alerts, late nights at the office, and that feeling of guilt when you have to leave some alerts uninvestigated. SOCs today are drowning as they try to keep up with the new workload brought on by AI-induced threats, SaaS-based risks, proliferating forms of ransomware, the underground criminal as-a-Service economy, and complex networks (private cloud, public cloud, hybrid cloud, multi-cloud, on-premises, and more). Oh, and more AI-induced threats.
However, SOCs have one tool with which they can fight back. By wielding automation to their advantage, modern SOCs can cut a lot of the needless notifications before they end up as unfinished to-dos on their plate. And that will lead to more positive outcomes all around.
The Plague of Alert Fatigue
One unsurprising headline reads, “Alert fatigue pushes security analysts to the limit.” And that isn’t even the most exciting news of the day. As noted by Grant Oviatt, Head of Security Operations at Prophet Security, “Despite automation advancements, investigating alerts is still mostly a manual job, and the number of alerts has only gone up over the past five years. Some automated tools meant to lighten the load for analysts can actually add to it by generating even more alerts that need human attention.”
Today, alert fatigue comes from a number of places:
Too many alerts | Thanks to all those tools: firewalls, EDR, IPS, IDS, and more.
Too many false positives | These lead to wasted time chasing dead ends.
Not enough context | A lack of enriching information makes you blind to which alerts might actually be viable.
Not enough personnel | Throwing more people at the problem only works if you can hire enough of them; given the volume of threats and alerts today, you would likely need to grow your SOC by a factor of 100 to keep pace.
As noted in Helpnet Security, “Today’s security tools generate an incredible volume of event data. This makes it difficult for security practitioners to distinguish between background noise and serious threats…[M]any systems are prone to false positives, which are triggered either by harmless activity or by overly sensitive anomaly thresholds. This can desensitize defenders who may end up missing important attack signals.”
To increase the signal-to-noise ratio and winnow down this deluge of data, SOCs need automated processes that streamline security operations. Those automated processes become even more effective when enhanced with artificial intelligence (AI), specifically machine learning (ML) and large language models (LLMs).
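To make that concrete, here is a minimal sketch in Python (using an invented alert format and a made-up asset inventory, not any particular SOAR product) of the kind of pre-processing an automated pipeline can do before an alert ever reaches an analyst: collapse duplicates, attach asset context, and drop repeat noise from low-value hosts.

```python
from collections import defaultdict

# Hypothetical asset inventory used to enrich alerts with context.
ASSET_CRITICALITY = {"payroll-db": "high", "test-vm-17": "low"}

def triage(raw_alerts, noise_threshold=3):
    """Deduplicate alerts, enrich them with asset context, and drop low-value noise."""
    buckets = defaultdict(list)
    for alert in raw_alerts:
        # Collapse repeats of the same rule firing on the same host.
        buckets[(alert["rule"], alert["host"])].append(alert)

    enriched = []
    for (rule, host), group in buckets.items():
        criticality = ASSET_CRITICALITY.get(host, "unknown")
        # Low-criticality hosts need more repeat hits before we bother an analyst.
        if criticality == "low" and len(group) < noise_threshold:
            continue
        enriched.append({
            "rule": rule,
            "host": host,
            "count": len(group),
            "asset_criticality": criticality,
            "first_seen": min(a["timestamp"] for a in group),
        })
    return enriched

alerts = [
    {"rule": "port-scan", "host": "test-vm-17", "timestamp": 1},
    {"rule": "port-scan", "host": "test-vm-17", "timestamp": 2},
    {"rule": "priv-escalation", "host": "payroll-db", "timestamp": 3},
]
print(triage(alerts))  # Only the payroll-db alert survives the noise filter.
```

The details will differ in every environment; the point is that this sorting and enriching happens automatically, so analysts only ever see alerts that already carry context.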
Filtering False Positives
Automation gives us all the problems on a silver platter, faithfully finding anything we’ve programmed it to find and delivering it to our back porch like a hunting dog. But as any SOC knows, those dead birds pile up, and that makes it harder to find the ones that count. One study revealed that 33% of organizations were “late to respond to cyberattacks” because they were dealing with a false positive.
Anyone with a SOAR tool can tell you that automation is great, but alone it is not enough to bat down barrages of false positives. Even the best automated solutions (homegrown or otherwise) often catch too many alerts in their net (to be fair, there are altogether too many threats out there and they’re just following the rules). Something more is needed to pare down the catch before it reaches your SOC.
Pairing automation with AI is the real sweet spot in security today. AI-infused solutions hunt for anomalies, use advanced algorithms to sift noise from baseline traffic patterns, and quickly tell you which alerts are duds. By combining this “technological hunch” (often heuristics) with automation, modern security solutions can follow up that lead by launching investigations and doing the digging for you. This not only helps you ferret out bad alerts, but also tells you which of the valid ones matter most. Which leads to our next point.
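As a rough illustration of that “technological hunch,” the sketch below (plain Python with invented per-host baselines and thresholds, not a real detection engine) scores each alert against learned baseline behaviour: alerts whose volume looks like everyday activity get parked as probable duds, and only the outliers queue up for human eyes.

```python
import math

# Hypothetical per-host baseline: (mean, std-dev) of daily events per rule.
BASELINE = {("login-failure", "hr-laptop-04"): (20.0, 5.0)}

def false_positive_score(alert):
    """Return a 0..1 score; higher means the alert looks like routine noise."""
    mean, std = BASELINE.get((alert["rule"], alert["host"]), (0.0, 1.0))
    # How far is today's volume from this host's normal behaviour?
    z = abs(alert["count"] - mean) / max(std, 1e-6)
    # Squash: small deviations -> close to 1 (probably noise), big ones -> close to 0.
    return 1.0 / (1.0 + math.exp(z - 3.0))

def filter_duds(alerts, cutoff=0.8):
    """Keep alerts that don't look like baseline noise; park the rest for batch review."""
    keep, park = [], []
    for alert in alerts:
        (park if false_positive_score(alert) >= cutoff else keep).append(alert)
    return keep, park

keep, park = filter_duds([
    {"rule": "login-failure", "host": "hr-laptop-04", "count": 22},  # near baseline
    {"rule": "login-failure", "host": "hr-laptop-04", "count": 95},  # way off baseline
])
print(len(keep), "for analysts,", len(park), "parked as probable noise")
```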
Prioritizing Real Threats
Working in addition to automation (not in lieu of it), modern public LLMs can team up with your current automated systems to make better, more complex decisions, not only finding alerts but prioritizing them by severity.
LLMs lift automation beyond simple “if/then” condition-based calls to higher-level assessments, detecting patterns, learning from past protocols, and adjusting their decision-making based on continuous input. With their ability to investigate different outcomes nearly simultaneously, AI-based automated tools can run probabilities on your vetted, valid alerts and tell you which presents the most salient threat to your enterprise. How’s that for efficiency?
Now, not only do you know which alerts are not worth your time, but you know which of the real threats is the most important. That means your SOC can get right to what matters most and leave the guesswork to the algorithms and automation (which, let’s face it, do all that exponentially faster and don’t fatigue).
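Here is a rough sketch of how that hand-off might look: the automation packages your vetted alerts into a prompt, an LLM (hidden behind a placeholder ask_llm() function standing in for whichever model client you actually use) returns a ranking with reasons, and your SOC works the list from the top. The prompt wording and JSON contract are assumptions for illustration, not any vendor’s API.

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder for your LLM client of choice; swap in a real API call here."""
    raise NotImplementedError("wire this up to your model provider")

def prioritize(vetted_alerts):
    """Ask the model to rank already-vetted alerts by likely business impact."""
    prompt = (
        "You are a SOC triage assistant. Rank these alerts from most to least severe "
        "and give a one-sentence reason for each. Reply as JSON: "
        '[{"id": ..., "rank": 1, "reason": "..."}]\n\n'
        + json.dumps(vetted_alerts, indent=2)
    )
    ranking = json.loads(ask_llm(prompt))
    by_id = {alert["id"]: alert for alert in vetted_alerts}
    # The model supplies judgment; the automation still owns the final ordering.
    return [
        dict(by_id[entry["id"]], reason=entry["reason"])
        for entry in sorted(ranking, key=lambda entry: entry["rank"])
    ]
```

The design point is that the model supplies judgment while the surrounding automation keeps control of the ordering, the data the model sees, and the audit trail.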
Conclusion
Human experts will always be needed for the hard jobs (like programming and integrating AI into your environment in the first place), but with the help of machine learning, LLMs, automation, and more, your SOC will only have to do the hard jobs. And isn’t that how analysts prefer to use their expertise, anyway?