DNS data shows one in 10 organizations have malware traffic on their networks

In every quarter of last year, between 10% and 16% of organizations had DNS traffic from their networks to command-and-control (C2) servers associated with known botnets and other malware threats, according to a report from cloud and content delivery network provider Akamai.

More than a quarter of that traffic went to servers belonging to initial access brokers, attackers who sell access into corporate networks to other cybercriminals, the report stated. “As we analyzed malicious DNS traffic of both enterprise and home users, we were able to spot several outbreaks and campaigns in the process, such as the spread of FluBot, an Android-based malware moving from country to country around the world, as well as the prevalence of various cybercriminal groups aimed at enterprises,” Akamai said. “Perhaps the best example is the significant presence of C2 traffic related to initial access brokers (IABs) that breach corporate networks and monetize access by peddling it to others, such as ransomware as a service (RaaS) groups.”
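
The report doesn't detail Akamai's methodology, but the basic detection idea is straightforward: match outbound DNS queries against a feed of known C2 domains. A minimal Python sketch of that check follows; the blocklist entries and queries are hypothetical stand-ins, not Akamai's actual data or pipeline.

```python
# Minimal sketch: flag DNS queries whose domain (or any parent domain)
# appears on a known-C2 blocklist. Entries below are hypothetical
# placeholders, not real indicators.

def is_c2(domain: str, blocklist: set[str]) -> bool:
    # Match the full domain and every parent: a.b.example -> b.example -> example
    parts = domain.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))

blocklist = {"badbotnet.example", "c2.flubot.example"}
for query in ["update.c2.flubot.example", "www.python.org"]:
    if is_c2(query, blocklist):
        print(f"ALERT: DNS query to suspected C2 domain: {query}")
```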

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

I’m speaking on “How to Reclaim Power in the Digital World” at EPFL in Lausanne, Switzerland, on Thursday, March 16, 2023, at 5:30 PM CET.
I’ll be discussing my new book A Hacker’s Mind: How the Powerful Bend Society’s Rules at Harvard Science Center in Cambridge, Massachusetts, USA, on Friday, March 31, 2023 at 6:00 PM EDT.
I’ll be discussing my book A Hacker’s Mind with Julia Angwin at the Ford Foundation Center for Social Justice in New York City, on Thursday, April 6, 2023 at 6:30 PM EDT.
I’m speaking at IT-S Now 2023 in Vienna, Austria, on June 1-2, 2023.

The list is maintained on this page.

USN-5951-1: Linux kernel (IBM) vulnerabilities

It was discovered that the Upper Level Protocol (ULP) subsystem in the
Linux kernel did not properly handle sockets entering the LISTEN state in
certain protocols, leading to a use-after-free vulnerability. A local
attacker could use this to cause a denial of service (system crash) or
possibly execute arbitrary code. (CVE-2023-0461)

It was discovered that the NVMe driver in the Linux kernel did not properly
handle reset events in some situations. A local attacker could use this to
cause a denial of service (system crash). (CVE-2022-3169)

It was discovered that a use-after-free vulnerability existed in the SGI
GRU driver in the Linux kernel. A local attacker could use this to cause a
denial of service (system crash) or possibly execute arbitrary code.
(CVE-2022-3424)

Gwangun Jung discovered a race condition in the IPv4 implementation in the
Linux kernel when deleting multipath routes, resulting in an out-of-bounds
read. An attacker could use this to cause a denial of service (system
crash) or possibly expose sensitive information (kernel memory).
(CVE-2022-3435)

It was discovered that a race condition existed in the Kernel Connection
Multiplexor (KCM) socket implementation in the Linux kernel when releasing
sockets in certain situations. A local attacker could use this to cause a
denial of service (system crash). (CVE-2022-3521)

It was discovered that the Netronome Ethernet driver in the Linux kernel
contained a use-after-free vulnerability. A local attacker could use this
to cause a denial of service (system crash) or possibly execute arbitrary
code. (CVE-2022-3545)

It was discovered that the hugetlb implementation in the Linux kernel
contained a race condition in some situations. A local attacker could use
this to cause a denial of service (system crash) or expose sensitive
information (kernel memory). (CVE-2022-3623)

Ziming Zhang discovered that the VMware Virtual GPU DRM driver in the Linux
kernel contained an out-of-bounds write vulnerability. A local attacker
could use this to cause a denial of service (system crash).
(CVE-2022-36280)

Hyunwoo Kim discovered that the DVB Core driver in the Linux kernel did not
properly perform reference counting in some situations, leading to a use-
after-free vulnerability. A local attacker could use this to cause a denial
of service (system crash) or possibly execute arbitrary code.
(CVE-2022-41218)

It was discovered that the Intel i915 graphics driver in the Linux kernel
did not perform a GPU TLB flush in some situations. A local attacker could
use this to cause a denial of service or possibly execute arbitrary code.
(CVE-2022-4139)

It was discovered that a race condition existed in the Xen network backend
driver in the Linux kernel when handling dropped packets in certain
circumstances. An attacker could use this to cause a denial of service
(kernel deadlock). (CVE-2022-42328, CVE-2022-42329)

It was discovered that the Atmel WILC1000 driver in the Linux kernel did
not properly validate offsets, leading to an out-of-bounds read
vulnerability. An attacker could use this to cause a denial of service
(system crash). (CVE-2022-47520)

It was discovered that the network queuing discipline implementation in the
Linux kernel contained a null pointer dereference in some situations. A
local attacker could use this to cause a denial of service (system crash).
(CVE-2022-47929)

José Oliveira and Rodrigo Branco discovered that the prctl syscall
implementation in the Linux kernel did not properly protect against
indirect branch prediction attacks in some situations. A local attacker
could possibly use this to expose sensitive information. (CVE-2023-0045)

It was discovered that a use-after-free vulnerability existed in the
Advanced Linux Sound Architecture (ALSA) subsystem. A local attacker could
use this to cause a denial of service (system crash). (CVE-2023-0266)

Kyle Zeng discovered that the IPv6 implementation in the Linux kernel
contained a NULL pointer dereference vulnerability in certain situations. A
local attacker could use this to cause a denial of service (system crash).
(CVE-2023-0394)

It was discovered that the Android Binder IPC subsystem in the Linux kernel
did not properly validate inputs in some situations, leading to a use-
after-free vulnerability. A local attacker could use this to cause a denial
of service (system crash) or possibly execute arbitrary code.
(CVE-2023-20938)

Kyle Zeng discovered that the class-based queuing discipline implementation
in the Linux kernel contained a type confusion vulnerability in some
situations. An attacker could use this to cause a denial of service (system
crash). (CVE-2023-23454)

Kyle Zeng discovered that the ATM VC queuing discipline implementation in
the Linux kernel contained a type confusion vulnerability in some
situations. An attacker could use this to cause a denial of service (system
crash). (CVE-2023-23455)

USN-5950-1: Linux kernel (KVM) vulnerabilities

It was discovered that the Upper Level Protocol (ULP) subsystem in the
Linux kernel did not properly handle sockets entering the LISTEN state in
certain protocols, leading to a use-after-free vulnerability. A local
attacker could use this to cause a denial of service (system crash) or
possibly execute arbitrary code. (CVE-2023-0461)

Davide Ornaghi discovered that the netfilter subsystem in the Linux kernel
did not properly handle VLAN headers in some situations. A local attacker
could use this to cause a denial of service (system crash) or possibly
execute arbitrary code. (CVE-2023-0179)

It was discovered that the NVMe driver in the Linux kernel did not properly
handle reset events in some situations. A local attacker could use this to
cause a denial of service (system crash). (CVE-2022-3169)

Maxim Levitsky discovered that the KVM nested virtualization (SVM)
implementation for AMD processors in the Linux kernel did not properly
handle nested shutdown execution. An attacker in a guest VM could use this
to cause a denial of service (host kernel crash). (CVE-2022-3344)

Gwangun Jung discovered a race condition in the IPv4 implementation in the
Linux kernel when deleting multipath routes, resulting in an out-of-bounds
read. An attacker could use this to cause a denial of service (system
crash) or possibly expose sensitive information (kernel memory).
(CVE-2022-3435)

It was discovered that a race condition existed in the Kernel Connection
Multiplexor (KCM) socket implementation in the Linux kernel when releasing
sockets in certain situations. A local attacker could use this to cause a
denial of service (system crash). (CVE-2022-3521)

It was discovered that the Netronome Ethernet driver in the Linux kernel
contained a use-after-free vulnerability. A local attacker could use this
to cause a denial of service (system crash) or possibly execute arbitrary
code. (CVE-2022-3545)

It was discovered that the Intel i915 graphics driver in the Linux kernel
did not perform a GPU TLB flush in some situations. A local attacker could
use this to cause a denial of service or possibly execute arbitrary code.
(CVE-2022-4139)

It was discovered that the NFSD implementation in the Linux kernel
contained a use-after-free vulnerability. A remote attacker could possibly
use this to cause a denial of service (system crash) or execute arbitrary
code. (CVE-2022-4379)

It was discovered that a race condition existed in the x86 KVM subsystem
implementation in the Linux kernel when nested virtualization and the TDP
MMU are enabled. An attacker in a guest VM could use this to cause a denial
of service (host OS crash). (CVE-2022-45869)

It was discovered that the Atmel WILC1000 driver in the Linux kernel did
not properly validate the number of channels, leading to an out-of-bounds
write vulnerability. An attacker could use this to cause a denial of
service (system crash) or possibly execute arbitrary code. (CVE-2022-47518)

It was discovered that the Atmel WILC1000 driver in the Linux kernel did
not properly validate specific attributes, leading to an out-of-bounds
write vulnerability. An attacker could use this to cause a denial of
service (system crash) or possibly execute arbitrary code. (CVE-2022-47519)

It was discovered that the Atmel WILC1000 driver in the Linux kernel did
not properly validate offsets, leading to an out-of-bounds read
vulnerability. An attacker could use this to cause a denial of service
(system crash). (CVE-2022-47520)

It was discovered that the Atmel WILC1000 driver in the Linux kernel did
not properly validate specific attributes, leading to a heap-based buffer
overflow. An attacker could use this to cause a denial of service (system
crash) or possibly execute arbitrary code. (CVE-2022-47521)

It was discovered that the file system writeback functionality in the Linux
kernel contained a use-after-free vulnerability. A local attacker could
possibly use this to cause a denial of service (system crash) or execute
arbitrary code. (CVE-2023-26605)

Closing the Pay Gap: How Pay Parity Continues to Transform Our Workplace

Four years ago, we achieved something that few companies had: pay parity, compensating all our employees equally for their contributions, regardless of gender. While it might seem like a given, McAfee was the first cybersecurity company to reach this goal, and that work continues, particularly at a time when pay gaps persist.

And they certainly persist. Stubbornly so. Recent data from Pew Research indicates that women in the U.S. make 82 cents for every $1 men earn, a figure that has only increased by two cents in the last two decades. At the current rate, women overall will not reach pay parity until 2059.

We believe no one should have to wait.

At McAfee, we’re proud to demonstrate our commitment to an equitable and inclusive workplace with our ongoing attainment of pay parity. In 2019, we achieved gender pay parity before adding ethnicity to our analysis a year later. Today we’re proud to say that all McAfee team members are compensated fairly and equally for their contributions, regardless of gender or ethnicity.

Creating an equitable environment is part of our DNA and who we are. In fact, half of the McAfee leadership team are female and, together with their male counterparts (including myself), are committed to driving diversity at every level. Whether it’s through our Diversity Impact Analysis, where awards, promotions, or employee programs are analyzed through the lens of equality and equity; or our candidate interviews where a woman is on every panel; or our comprehensive employee benefits and offerings centered around the needs of a diverse workforce — we’re proud of the progress we’re making, while knowing there is still much to do.

Countless studies point to the ways diversity across gender and ethnicity correlates with business performance. At McAfee, we do it first and foremost because we simply believe it’s the right thing to do. Achieving and maintaining pay parity is not without its challenges. It takes effort. Ongoing effort. If left unchecked, we know that the pay divide can resurface over time, whether through our own unconscious biases or other factors, such as fewer women negotiating starting salaries than men. We must be proactive and intentional to maintain parity. This means quarterly analyses, third-party audits to help identify and address potential bias and subjectivity, and immediate action when we identify discrepancies to ensure the divide remains closed.
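
The post doesn't describe how those quarterly analyses work, but the core of any pay-parity check is comparing compensation across groups while holding role and level constant. A minimal pandas sketch of that idea follows; the column names, sample data, and 2% tolerance are all illustrative assumptions, not McAfee's methodology.

```python
import pandas as pd

# Hypothetical pay-parity check: compare median pay across groups
# within each (role, level) cell, as a quarterly analysis might.
df = pd.DataFrame({
    "role":   ["engineer"] * 4 + ["analyst"] * 4,
    "level":  [3, 3, 3, 3, 2, 2, 2, 2],
    "group":  ["A", "B", "A", "B", "A", "B", "A", "B"],
    "salary": [120_000, 118_000, 121_500, 122_000,
               90_000, 82_000, 91_000, 83_500],
})

medians = df.groupby(["role", "level", "group"])["salary"].median().unstack("group")
medians["gap_pct"] = (medians["B"] / medians["A"] - 1) * 100

# Flag any (role, level) cell where the gap exceeds a 2% tolerance.
flagged = medians[medians["gap_pct"].abs() > 2]
print(flagged)
```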

At McAfee, we will continue to shape our hiring practices, talent management practices, internal mobility, promotion and award programs, and other practices in a way that creates an employee experience rooted in equity and inclusion, so that all McAfee team members can do the best work of their lives.

We’re honored to play our part in the broader movement toward equality. You can learn more about how McAfee drives meaningful change in our Impact Report and who we are at Careers.McAfee.com.

How AI Could Write Our Laws

By Nathan E. Sanders & Bruce Schneier

Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 Massachusetts bill that tried to restrict the commercial use of data collected from K-12 students using services accessed via the internet. The bill appealed to many privacy-conscious education advocates, and appropriately so. But behind the justification of protecting students lay a market-altering policy: the bill was introduced at the behest of Microsoft lobbyists, in an effort to exclude Google Docs from classrooms.

What would happen if such legal-but-sneaky strategies for tilting the rules in favor of one group over another became more widespread and effective? We can see hints of an answer in the remarkable pace at which artificial-intelligence tools for everything from writing to graphic design are being developed and improved. And the unavoidable conclusion is that AI will make lobbying more guileful, and perhaps more successful.

It turns out there is a natural opening for this technology: microlegislation.

“Microlegislation” is a term for small pieces of proposed law that cater—sometimes unexpectedly—to narrow interests. Political scientist Amy McKay coined the term. She studied the 564 amendments to the Affordable Care Act (“Obamacare”) considered by the Senate Finance Committee in 2009, as well as the positions of 866 lobbying groups and their campaign contributions. She documented instances where lobbyist comments—on health-care research, vaccine services, and other provisions—were translated directly into microlegislation in the form of amendments. And she found that those groups’ financial contributions to specific senators on the committee increased the amendments’ chances of passing.

Her finding that lobbying works was no surprise. More important, McKay’s work demonstrated that computer models can predict the likely fate of proposed legislative amendments, as well as the paths by which lobbyists can most effectively secure their desired outcomes. And that turns out to be a critical piece of creating an AI lobbyist.
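
McKay's actual statistical models aren't reproduced here, but the prediction task she demonstrated maps naturally onto ordinary supervised learning. A toy scikit-learn sketch, with invented features and data, shows the shape of it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch of McKay-style prediction: given features of a proposed
# amendment, estimate its probability of passing. All data invented.
# Features: [contributions to committee members ($k),
#            supporting lobbying groups, opposing lobbying groups]
X = np.array([
    [150, 12, 2],
    [5,   1,  9],
    [80,  6,  3],
    [0,   0,  7],
    [200, 15, 1],
    [10,  2,  8],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = amendment passed

model = LogisticRegression().fit(X, y)
new_amendment = np.array([[120, 9, 4]])
print(f"P(pass) = {model.predict_proba(new_amendment)[0, 1]:.2f}")
```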

Lobbying has long been part of the give-and-take among human policymakers and advocates working to balance their competing interests. The danger of microlegislation—a danger greatly exacerbated by AI—is that it can be used in a way that makes it difficult to figure out who the legislation truly benefits.

Another word for a strategy like this is a “hack.” Hacks follow the rules of a system but subvert their intent. Hacking is often associated with computer systems, but the concept is also applicable to social systems like financial markets, tax codes, and legislative processes.

While the idea of monied interests incorporating AI assistive technologies into their lobbying remains hypothetical, specific machine-learning technologies exist today that would enable them to do so. We should expect these techniques to get better and their utilization to grow, just as we’ve seen in so many other domains.

Here’s how it might work.

Crafting an AI microlegislator

To make microlegislation, machine-learning systems must be able to uncover the smallest modification that could be made to a bill or existing law that would make the biggest impact on a narrow interest.

There are three basic challenges involved. First, you must create a policy proposal—small suggested changes to legal text—and anticipate whether or not a human reader would recognize the alteration as substantive. This is important; a change that isn’t detectable is more likely to pass without controversy. Second, you need to do an impact assessment to project the implications of that change for the short- or long-range financial interests of companies. Third, you need a lobbying strategizer to identify what levers of power to pull to get the best proposal into law.

Existing AI tools can tackle all three of these.

The first step, the policy proposal, leverages the core function of generative AI. Large language models, the sort that have been used for general-purpose chatbots such as ChatGPT, can easily be adapted to write like a native in different specialized domains after seeing a relatively small number of examples. This process is called fine-tuning. For example, a model “pre-trained” on a large library of generic text samples from books and the internet can be “fine-tuned” to work effectively on medical literature, computer science papers, and product reviews.

Given this flexibility and capacity for adaptation, a large language model could be fine-tuned to produce draft legislative texts, given a data set of previously offered amendments and the bills they were associated with. Training data is available. At the federal level, it’s provided by the US Government Publishing Office, and there are already tools for downloading and interacting with it. Most other jurisdictions provide similar data feeds, and there are even convenient assemblages of that data.
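
As a heavily simplified illustration of that fine-tuning step, here is roughly what adapting a small pre-trained model to a corpus of amendment text might look like with the Hugging Face transformers library. The model choice and the amendments.txt corpus file are assumptions; a serious effort would involve far more data, compute, and evaluation.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

# Hypothetical fine-tuning sketch: adapt a small pre-trained causal LM
# to legislative text. "amendments.txt" (one amendment per line) is a
# placeholder for a corpus built from GPO data feeds.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "amendments.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="amendment-model", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```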

Meanwhile, large language models like the one underlying ChatGPT are routinely used for summarizing long, complex documents (even laws and computer code) to capture the essential points, and they are optimized to match human expectations. This capability could allow an AI assistant to automatically predict how detectable the true effect of a policy insertion may be to a human reader.
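
One crude way to operationalize detectability, sketched below purely for illustration, is to summarize the bill before and after a proposed insertion and measure how much the summaries diverge; a change that leaves the summary nearly untouched is less likely to catch a reader's eye. The summarization model named here is an arbitrary choice.

```python
from difflib import SequenceMatcher
from transformers import pipeline

# Crude detectability heuristic (illustrative only): if summaries of the
# original and amended bill barely differ, the change may escape notice.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summary(text: str) -> str:
    return summarizer(text, max_length=60, min_length=10)[0]["summary_text"]

def detectability(original_bill: str, amended_bill: str) -> float:
    # 0.0 = summaries identical (change likely invisible); 1.0 = very visible
    s1, s2 = summary(original_bill), summary(amended_bill)
    return 1.0 - SequenceMatcher(None, s1, s2).ratio()
```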

Today, it can take a highly paid team of human lobbyists days or weeks to generate and analyze alternative pieces of microlegislation on behalf of a client. With AI assistance, that could be done instantaneously and cheaply. This opens the door to dramatic increases in the scope of this kind of microlegislating, with a potential to scale across any number of bills in any jurisdiction.

Teaching machines to assess impact

Impact assessment is more complicated. There is a rich set of methods for quantifying the predicted outcome of a decision or policy, and then also optimizing the return under that model. This kind of approach goes by different names in different circles—mathematical programming in management science, utility maximization in economics, and rational design in the life sciences.

To train an AI to do this, we would need to specify some way to calculate the benefit to different parties as a result of a policy choice. That could mean estimating the financial return to different companies under a few different scenarios of taxation or regulation. Economists are skilled at building risk models like this, and companies are already required to formulate and disclose regulatory compliance risk factors to investors. Such a mathematical model could translate directly into a reward function, a grading system that could provide feedback for the model used to create policy proposals and direct the process of training it.
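
In its simplest form, such a reward function is just an expected financial return across scenarios, weighted by their estimated probabilities. A toy sketch, with invented scenarios and numbers:

```python
# Toy reward function for scoring a candidate amendment: expected
# financial return across regulatory scenarios. All numbers invented.

def reward(scenario_probs: dict[str, float],
           returns_if_passed: dict[str, float]) -> float:
    """Expected return ($M) to the client if the amendment becomes law."""
    return sum(p * returns_if_passed[s] for s, p in scenario_probs.items())

scenarios = {"strict_enforcement": 0.2, "status_quo": 0.5, "deregulation": 0.3}
returns_m = {"strict_enforcement": -5.0, "status_quo": 12.0, "deregulation": 40.0}

print(f"Expected return: ${reward(scenarios, returns_m):.1f}M")
# -> Expected return: $17.0M
```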

The real challenge in impact assessment for generative AI models would be to parse the textual output of a model like ChatGPT in terms that an economic model could readily use. Automating this would require extracting structured financial information from the draft amendment or any legalese surrounding it. This kind of information extraction, too, is an area where AI has a long history; for example, AI systems have been trained to recognize clinical details in doctors’ notes. Early indications are that large language models are fairly good at recognizing financial information in texts such as investor call transcripts. While it remains an open challenge in the field, they may even be capable of writing out multi-step plans based on descriptions in free text.
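
A first pass at that extraction step could be as simple as pattern matching for monetary amounts and rates in the draft text, before bringing in heavier NER machinery. An illustrative regex sketch, nowhere near the state of the art:

```python
import re

# Illustrative first pass at financial information extraction: pull
# dollar amounts and percentage rates out of draft legislative text.
MONEY = re.compile(r"\$\d[\d,]*(?:\.\d+)?(?:\s*(?:million|billion))?")
RATE = re.compile(r"\d+(?:\.\d+)?\s*(?:%|percent)")

draft = ("The excise tax under subsection (b) shall be reduced from "
         "7.5 percent to 2 percent for facilities with receipts "
         "under $25 million.")

print(MONEY.findall(draft))  # ['$25 million']
print(RATE.findall(draft))   # ['7.5 percent', '2 percent']
```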

Machines as strategists

The last piece of the puzzle is a lobbying strategizer to figure out what actions to take to convince lawmakers to adopt the amendment.

Passing legislation requires a keen understanding of the complex interrelated networks of legislative offices, outside groups, executive agencies, and other stakeholders vying to serve their own interests. Each actor in this network has a baseline perspective and different factors that influence that point of view. For example, a legislator may be moved by seeing an allied stakeholder take a firm position, or by a negative news story, or by a campaign contribution.

It turns out that AI developers are very experienced at modeling these kinds of networks. Machine-learning models for network graphs have been built, refined, improved, and iterated by hundreds of researchers working on incredibly diverse problems: lidar scans used to guide self-driving cars, the chemical functions of molecular structures, the capture of motion in actors’ joints for computer graphics, behaviors in social networks, and more.

In the context of AI-assisted lobbying, political actors like legislators and lobbyists are nodes on a graph, just like users in a social network. Relations between them are graph edges, like social connections. Information can be passed along those edges, like messages sent to a friend or campaign contributions made to a member. AI models can use past examples to learn to estimate how that information changes the network. Calculating the likelihood that a campaign contribution of a given size will flip a legislator’s vote on an amendment is one application.
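
A hypothetical sketch of that setup with networkx follows. In practice the edge model would be learned from historical data, as described above; the logistic curve here is a hand-written stand-in, and every name and number is invented.

```python
import math
import networkx as nx

# Hypothetical influence graph: legislators and lobbyists are nodes,
# relationships are edges carrying contribution history. The logistic
# flip model below is a stand-in for a model learned from past data.
G = nx.DiGraph()
G.add_node("lobbyist_A", kind="lobbyist")
G.add_node("senator_B", kind="legislator", baseline_support=0.30)
G.add_edge("lobbyist_A", "senator_B", past_contributions=45_000)

def p_flip(baseline_support: float, contribution: float) -> float:
    """Toy logistic estimate that a contribution flips a 'no' to 'yes'."""
    x = -2.0 + 3.0 * baseline_support + contribution / 50_000
    return 1 / (1 + math.exp(-x))

node = G.nodes["senator_B"]
print(f"P(flip) = {p_flip(node['baseline_support'], 25_000):.2f}")
```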

McKay’s work has already shown us that there are significant, predictable relationships between these actions and the outcomes of legislation, and that the work of discovering those can be automated. Others have shown that graph neural network models like those described above can be applied to political systems. The full-scale use of these technologies to guide lobbying strategy is theoretical, but plausible.

Put together, these three components could create an automatic system for generating profitable microlegislation. The policy proposal system would create millions, even billions, of possible amendments. The impact assessor would identify the few that promise to be most profitable to the client. And the lobbying strategy tool would produce a blueprint for getting them passed.
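
Glued together, the loop is conceptually simple: generate, score, rank, strategize. A skeletal sketch in which each function is a stub standing in for one of the components described above:

```python
import random

# Skeletal pipeline tying the three components together. Each function
# is a stub standing in for the systems described above.

def generate_amendments(bill_text: str, n: int) -> list[str]:
    # Stand-in for the fine-tuned policy-proposal generator.
    return [f"{bill_text} [candidate edit #{i}]" for i in range(n)]

def expected_return(amendment: str) -> float:
    # Stand-in for the impact assessor (extraction + reward model).
    return random.random()

def lobbying_plan(amendment: str) -> str:
    # Stand-in for the graph-based lobbying strategizer.
    return "brief allied stakeholders; target key committee members"

def run(bill_text: str, n_candidates: int = 10_000, top_k: int = 5):
    candidates = generate_amendments(bill_text, n_candidates)
    ranked = sorted(candidates, key=expected_return, reverse=True)
    return [(a, lobbying_plan(a)) for a in ranked[:top_k]]
```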

What remains is for human lobbyists to walk the floors of the Capitol or state house, and perhaps supply some cash to grease the wheels. These final two aspects of lobbying—access and financing—cannot be supplied by the AI tools we envision. This suggests that lobbying will continue to primarily benefit those who are already influential and wealthy, and AI assistance will amplify their existing advantages.

The transformative benefit that AI offers to lobbyists and their clients is scale. While individual lobbyists tend to focus on the federal level or a single state, with AI assistance they could more easily infiltrate a large number of state-level (or even local-level) law-making bodies and elections. At that level, where the average cost of a seat is measured in the tens of thousands of dollars instead of millions, a single donor can wield a lot of influence—if automation makes it possible to coordinate lobbying across districts.

How to stop them

When it comes to combating the potentially adverse effects of assistive AI, the first response always seems to be to try to detect whether or not content was AI-generated. We could imagine a defensive AI that detects anomalous lobbyist spending associated with amendments that benefit the contributing group. But by then, the damage might already be done.
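
Such a defensive check could start as plain outlier detection over contribution records tied to an amendment's sponsors. A toy z-score sketch; a real system would need far richer features and baselines.

```python
import statistics

# Toy defensive check: flag contributions to an amendment's sponsors
# that are statistical outliers relative to historical giving.
def flag_anomalies(contributions: list[float], z_threshold: float = 3.0):
    mean = statistics.mean(contributions)
    stdev = statistics.stdev(contributions)
    return [c for c in contributions if abs(c - mean) / stdev > z_threshold]

history = [1_000, 1_500, 900, 1_200, 1_100, 950, 1_050, 48_000]
print(flag_anomalies(history, z_threshold=2.0))  # -> [48000]
```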

In general, methods for detecting the work of AI tend not to keep pace with its ability to generate convincing content. And these strategies won’t be implemented by AIs alone. The lobbyists will still be humans who take the results of an AI microlegislator and further refine the computer’s strategies. These hybrid human-AI systems will not be detectable from their output.

But the good news is: the same strategies that have long been used to combat misbehavior by human lobbyists can still be effective when those lobbyists get an AI assist. We don’t need to reinvent our democracy to stave off the worst risks of AI; we just need to more fully implement long-standing ideals.

First, we should reduce the dependence of legislatures on monolithic, multi-thousand-page omnibus bills voted on under deadline. This style of legislating exploded in the 1980s and 1990s and continues through to the most recent federal budget bill. Notwithstanding their legitimate benefits to the political system, omnibus bills present an obvious and proven vehicle for inserting unnoticed provisions that may later surprise the same legislators who approved them.

The issue is not that individual legislators need more time to read and understand each bill (that isn’t realistic or even necessary). It’s that omnibus bills must pass. There is an imperative to pass a federal budget bill, and so the capacity to push back on individual provisions that may seem deleterious (or just impertinent) to any particular group is small. Bills that are too big to fail are ripe for hacking by microlegislation.

Moreover, the incentive for legislators to introduce microlegislation catering to a narrow interest is greater if the threat of exposure is lower. To strengthen the threat of exposure for misbehaving legislative sponsors, bills should focus more tightly on individual substantive areas and, after the introduction of amendments, allow more time before the committee and floor votes. During this time, we should encourage public review and testimony to provide greater oversight.

Second, we should strengthen disclosure requirements on lobbyists, whether they’re entirely human or AI-assisted. State laws regarding lobbying disclosure are a hodgepodge. North Dakota, for example, only requires lobbying reports to be filed annually, so that by the time a disclosure is made, the policy is likely already decided. A lobbying disclosure scorecard created by Open Secrets, a group researching the influence of money in US politics, tracks nine states that do not even require lobbyists to report their compensation.

Ideally, it would be great for the public to see all communication between lobbyists and legislators, whether it takes the form of a proposed amendment or not. Absent that, let’s give the public the benefit of reviewing what lobbyists are lobbying for—and why. Lobbying is traditionally an activity that happens behind closed doors. Right now, many states reinforce that: they actually exempt testimony delivered publicly to a legislature from being reported as lobbying.

In those jurisdictions, if you reveal your position to the public, you’re no longer lobbying. Let’s do the inverse: require lobbyists to reveal their positions on issues. Some jurisdictions already require a statement of position (a ‘yea’ or ‘nay’) from registered lobbyists. And in most (but not all) states, you could make a public records request regarding meetings held with a state legislator and hope to get something substantive back. But we can expect more—lobbyists could be required to proactively publish, within a few days, a brief summary of what they demanded of policymakers during meetings and why they believe it’s in the general interest.

We can’t rely on corporations to be forthcoming and wholly honest about the reasons behind their lobbying positions. But having them on the record about their intentions would at least provide a baseline for accountability.

Finally, consider the role AI assistive technologies may have on lobbying firms themselves and the labor market for lobbyists. Many observers are rightfully concerned about the possibility of AI replacing or devaluing the human labor it automates. If the automating potential of AI ends up commodifying the work of political strategizing and message development, it may indeed put some professionals on K Street out of work.

But don’t expect that to disrupt the careers of the most astronomically compensated lobbyists: former members of Congress and other insiders who have passed through the revolving door. There is no shortage of reform ideas for limiting the ability of government officials turned lobbyists to sell access to their colleagues still in government, and they should be adopted and—equally important—maintained and enforced in successive Congresses and administrations.

None of these solutions are really original, specific to the threats posed by AI, or even predominantly focused on microlegislation—and that’s the point. Good governance should and can be robust to threats from a variety of techniques and actors.

But what makes the risks posed by AI especially pressing now is how fast the field is developing. We expect the scale, strategies, and effectiveness of humans engaged in lobbying to evolve over years and decades. Advancements in AI, meanwhile, seem to be making impressive breakthroughs at a much faster pace—and it’s still accelerating.

The legislative process is a constant struggle between parties trying to control the rules of our society as they are updated, rewritten, and expanded at the federal, state, and local levels. Lobbying is an important tool for balancing various interests through our system. If it’s well-regulated, perhaps lobbying can support policymakers in making equitable decisions on behalf of us all.

This essay originally appeared in MIT Technology Review.

Software supply chain attacks are on the rise — are you at risk?

Graham Cluley Security News is sponsored this week by the folks at Sysdig. Thanks to the great team there for their support! Attacks targeting the software supply chain are on the rise and splashed across the news. SolarWinds raised awareness about the risk. More recent events, like the Federal Civilian Executive Branch (FCEB) agency breach, …
