FEDORA-2024-aebaa73b1f
Packages in this update:
pdns-recursor-5.1.2-1.fc41
Update description:
Update to latest upstream
pdns-recursor-4.9.9-1.fc39
Update to latest upstream
Here is an overview of the CIS Benchmarks that the Center for Internet Security updated or released for October 2024.
A new O’Reilly survey showed a shortage of AI security skills, while AI-enabled security tools have become tech professionals’ top priority for the coming year.
It was discovered that WEBrick incorrectly handled requests containing both
a Content-Length header and a Transfer-Encoding header. A remote attacker
could possibly use this issue to perform an HTTP request smuggling attack.
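The ambiguity behind this class of bug is that the two headers give a front-end proxy and a back-end server conflicting instructions on where a request body ends. As an illustrative sketch (not WEBrick's actual code), a server can defuse the issue by rejecting any request that carries both framing headers, per the HTTP/1.1 message-framing rules:

```python
# Hypothetical sketch: reject ambiguous requests that carry BOTH a
# Content-Length and a Transfer-Encoding header -- the precondition for
# HTTP request smuggling. Header handling here is deliberately minimal
# and illustrative, not WEBrick's real parser.

def is_smuggling_prone(headers: dict) -> bool:
    """True if the request carries conflicting body-framing headers."""
    names = {name.lower() for name in headers}
    return "content-length" in names and "transfer-encoding" in names


def frame_request(headers: dict) -> str:
    """Decide how to read the message body, rejecting ambiguity outright."""
    if is_smuggling_prone(headers):
        # A conforming server must not trust Content-Length here; the
        # safest policy is to refuse the request entirely.
        raise ValueError("400 Bad Request: ambiguous message framing")
    names = {name.lower() for name in headers}
    if "transfer-encoding" in names:
        return "chunked"
    if "content-length" in names:
        return "content-length"
    return "no-body"
```

A patched server applies this check before any body bytes are consumed, so a smuggled second request can never be parsed.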
Cloudflare just blocked the current record DDoS attack: 3.8 terabits per second. (Lots of good information on the attack, and DDoS in general, at the link.)
News article.
The content of this post is solely the responsibility of the author. LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article.
We wanted to know what was going on within our vast networks; modern tools have made it possible for us to know too much.
Some data is good, so petabytes of data is better, right? In theory, yes, but we all know that in practice, it really means a barrage of alerts, late nights at the office, and that feeling of guilt when you have to leave some alerts uninvestigated. SOCs today are drowning as they try to keep up with the new workload brought on by AI-induced threats, SaaS-based risks, proliferating forms of ransomware, the underground criminal as-a-Service economy, and complex networks (private cloud, public cloud, hybrid cloud, multi-cloud, on-premises, and more). Oh, and more AI-induced threats.
However, SOCs have one tool with which they can fight back. By wielding automation to their advantage, modern SOCs can cut a lot of the needless notifications before they end up as unfinished to-dos on their plate. And that will lead to more positive outcomes all around.
One unsurprising headline reads, “Alert fatigue pushes security analysts to the limit.” And that isn’t even the most exciting news of the day. As noted by Grant Oviatt, Head of Security Operations at Prophet Security, “Despite automation advancements, investigating alerts is still mostly a manual job, and the number of alerts has only gone up over the past five years. Some automated tools meant to lighten the load for analysts can actually add to it by generating even more alerts that need human attention.”
Today, alert fatigue comes from a number of places:
Too many alerts | Thanks to all those tools: firewalls, EDR, IPS, IDS, and more.
Too many false positives | This leads to wasted time investigating flops.
Not enough context | A lack of enriching information makes you blind to which alerts might actually be viable.
Not enough personnel | Throwing more people at the problem only helps if you can actually hire enough of them. Given the number of threats and alerts today, you’d likely need to increase your SOC by a factor of 100.
As noted in Helpnet Security, “Today’s security tools generate an incredible volume of event data. This makes it difficult for security practitioners to distinguish between background noise and serious threats…[M]any systems are prone to false positives, which are triggered either by harmless activity or by overly sensitive anomaly thresholds. This can desensitize defenders who may end up missing important attack signals.”
To increase the signal-to-noise ratio and winnow down this deluge of data, SOC automation processes are needed to streamline security operations. And those automated processes are only made more effective by adding the enhancing capabilities of artificial intelligence (AI), specifically machine learning (ML) and large language models (LLMs).
Automation gives us all the problems on a silver platter, faithfully finding anything we’ve programmed it to and delivering it to our back porch like a hunting dog. But as any SOC knows, those dead birds pile up. And that makes it harder to find the ones that count. One study revealed that 33% of organizations were “late to respond to cyberattacks” because they were dealing with a false positive.
Anyone with a SOAR tool can tell you that automation is great, but alone it is not enough to bat down barrages of false positives. Even the best automated solutions (homegrown or otherwise) often catch too many alerts in their net (to be fair, there are altogether too many threats out there and they’re just following the rules). Something more is needed to pare down the catch before it reaches your SOC.
Pairing automation with AI is the real sweet spot in security today. AI-infused solutions hunt anomalies, use advanced algorithms to sift suspect traffic from baseline patterns, and quickly tell you which alerts are duds. By combining this “technological hunch” (heuristics, often) with automation, modern security solutions can follow up that lead by launching investigations and actually doing the digging for you. This not only helps you ferret out bad alerts, but can also tell you, of all the alerts that are valid, which are the most important. Which leads to our next point.
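To make the idea concrete, here is a minimal, hypothetical sketch of that heuristic triage step: each alert is scored against a per-source baseline, and automation drops the duds before an analyst ever sees them. The field names and the three-sigma threshold are illustrative assumptions, not taken from any specific product.

```python
# Hypothetical triage sketch: score alerts against a baseline and keep
# only the anomalous ones. Field names ("source", "value") and the
# default threshold are invented for illustration.
from statistics import mean, stdev


def anomaly_score(value: float, baseline: list) -> float:
    """How many standard deviations `value` sits from the baseline mean."""
    if len(baseline) < 2:
        return 0.0  # not enough history to judge
    spread = stdev(baseline) or 1.0  # avoid division by zero
    return abs(value - mean(baseline)) / spread


def triage(alerts: list, baselines: dict, threshold: float = 3.0) -> list:
    """Keep only alerts whose observed metric is anomalous vs. its baseline."""
    kept = []
    for alert in alerts:
        base = baselines.get(alert["source"], [])
        if anomaly_score(alert["value"], base) >= threshold:
            kept.append(alert)
    return kept
```

Real AI-assisted pipelines use far richer features than one metric, but the shape is the same: automation applies the model’s “hunch” uniformly and at machine speed, so humans only see what survives.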
In addition to automation (not in lieu of it), modern public large language models (LLMs) can work with your current automated systems to make better, more complex decisions and not only find but prioritize alerts by severity.
LLMs enhance automation to make not just “if/then” condition-based calls, but higher-level assessments: detecting patterns, learning from past protocols, and adjusting their decision-making based on continuous input. With their ability to investigate different outcomes nearly simultaneously, AI-based automated tools can run probabilities on your vetted, valid alerts and let you know which presents the most salient threat to your enterprise. How’s that for efficiency?
Now, not only do you know which alerts are not worth your time, but you know which of all the real threats is the most important. That means your SOC can get right to what matters most and leave the guesswork to the algorithms and automation (which, let’s face it, do all that exponentially faster – and don’t fatigue).
Human experts will always be needed for the hard jobs (like programming and integrating AI to your environment in the first place), but with the help of machine learning, LLMs, automation, and more, your SOCs will only have to do the hard jobs. And isn’t that how they prefer to use their expertise, anyway?
USN-7043-1 fixed a vulnerability in cups-filters. This update provides
the corresponding update for Ubuntu 16.04 LTS.
Original advisory details:
Simone Margaritelli discovered that the cups-filters cups-browsed
component could be used to create arbitrary printers from outside
the local network. In combination with issues in other printing
components, a remote attacker could possibly use this issue to
connect to a system, create manipulated PPD files, and execute
arbitrary code when a printer is used. This update
disables support for the legacy CUPS printer discovery protocol.
(CVE-2024-47176)
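As a hedged illustration for administrators who cannot patch immediately: the commonly cited mitigation has been to turn off cups-browsed’s remote printer discovery entirely (the directive below is documented in cups-browsed.conf(5); verify the path and directive against your distribution’s packaging):

```
# /etc/cups/cups-browsed.conf -- disable legacy remote printer discovery
# "none" turns off all remote discovery protocols, closing the vector
# described in CVE-2024-47176
BrowseRemoteProtocols none
```

Stopping and disabling the cups-browsed service achieves the same effect until the update can be applied.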
The Chartered Trading Standards Institute is concerned that a new cap on fraud reimbursement is too low.