China Poised to Disrupt US Critical Infrastructure with Cyber-Attacks, Microsoft Warns


Microsoft’s annual digital defense report found a rise in Chinese state-affiliated groups attempting to infiltrate sectors such as medical infrastructure and telecommunications.


USN-6396-2: Linux kernel (KVM) vulnerabilities


It was discovered that some AMD x86-64 processors with SMT enabled could
speculatively execute instructions using a return address from a sibling
thread. A local attacker could possibly use this to expose sensitive
information. (CVE-2022-27672)

Daniel Moghimi discovered that some Intel(R) Processors did not properly
clear microarchitectural state after speculative execution of various
instructions. A local unprivileged user could use this to obtain
sensitive information. (CVE-2022-40982)
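
For administrators assessing exposure to speculative-execution flaws like these, recent Linux kernels report per-vulnerability mitigation status through sysfs. A minimal sketch (Python, for illustration only; the set of entries and their wording vary by kernel version and CPU):

```python
# Sketch: read the kernel's reported mitigation status for hardware
# vulnerabilities from the standard sysfs interface on Linux.
from pathlib import Path

SYSFS_VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status(base: Path = SYSFS_VULNS) -> dict[str, str]:
    """Map each vulnerability name to the kernel's status string."""
    if not base.is_dir():
        return {}  # non-Linux system or very old kernel
    return {f.name: f.read_text().strip() for f in sorted(base.iterdir())}

if __name__ == "__main__":
    for name, status in mitigation_status().items():
        ok = status.startswith("Mitigation") or status == "Not affected"
        print(f"{'OK ' if ok else '?? '}{name}: {status}")
```

A status of "Vulnerable" for any entry suggests the kernel or microcode update from the advisory has not yet been applied.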

Yang Lan discovered that the GFS2 file system implementation in the Linux
kernel could attempt to dereference a null pointer in some situations. An
attacker could use this to construct a malicious GFS2 image that, when
mounted and operated on, could cause a denial of service (system crash).
(CVE-2023-3212)

It was discovered that the NFC implementation in the Linux kernel contained
a use-after-free vulnerability when performing peer-to-peer communication
in certain conditions. A privileged attacker could use this to cause a
denial of service (system crash) or possibly expose sensitive information
(kernel memory). (CVE-2023-3863)

It was discovered that the Bluetooth subsystem in the Linux kernel did not
properly handle L2CAP socket release, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-40283)

It was discovered that some network classifier implementations in the Linux
kernel contained use-after-free vulnerabilities. A local attacker could use
this to cause a denial of service (system crash) or possibly execute
arbitrary code. (CVE-2023-4128)


USN-6419-1: jQuery UI vulnerabilities


Hong Phat Ly discovered that jQuery UI did not properly manage parameters
from untrusted sources, which could lead to arbitrary web script or HTML
code injection. A remote attacker could possibly use this issue to perform
a cross-site scripting (XSS) attack. This issue only affected
Ubuntu 14.04 LTS and Ubuntu 16.04 LTS. (CVE-2016-7103)

Esben Sparre Andreasen discovered that jQuery UI did not properly handle
values from untrusted sources in the Datepicker widget. A remote attacker
could possibly use this issue to perform a cross-site scripting (XSS)
attack and execute arbitrary code. This issue only affected
Ubuntu 14.04 LTS, Ubuntu 16.04 LTS, Ubuntu 18.04 LTS, and Ubuntu 20.04 LTS.
(CVE-2021-41182, CVE-2021-41183)

It was discovered that jQuery UI did not properly validate values from
untrusted sources. An attacker could possibly use this issue to cause a
denial of service or execute arbitrary code. This issue only affected
Ubuntu 20.04 LTS. (CVE-2021-41184)

It was discovered that the jQuery UI checkboxradio widget did not properly
decode certain values from HTML entities. An attacker could possibly use
this issue to perform a cross-site scripting (XSS) attack and cause a
denial of service or execute arbitrary code. This issue only affected
Ubuntu 20.04 LTS. (CVE-2022-31160)
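
The common thread in these jQuery UI issues is untrusted input reaching the DOM without proper encoding. The actual fix is to install the patched packages, but the underlying defensive pattern — encode untrusted values on output — can be sketched (in Python, purely for illustration; function and variable names are hypothetical):

```python
# Sketch: HTML-encode an untrusted value before embedding it in markup,
# so attacker-supplied strings cannot inject script or HTML.
from html import escape

def render_label(user_value: str) -> str:
    """Build an HTML fragment with the untrusted value encoded."""
    return f"<label>{escape(user_value, quote=True)}</label>"

# An attempted markup injection is neutralized:
print(render_label('<img src=x onerror=alert(1)>'))
# <label>&lt;img src=x onerror=alert(1)&gt;</label>
```

The checkboxradio issue above (CVE-2022-31160) is the inverse failure: the widget *decoded* HTML entities it should have left inert, which is why encoding and decoding boundaries both need auditing.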


Political Disinformation and AI


Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the US presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the US and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert, I believe it’s a tool uniquely suited to Internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.

Election season will soon be in full swing in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June, and the US in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the UK don’t have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan, Indonesia, India, and many African countries. Russia cares about the UK, Poland, Germany, and the EU in general. Everyone cares about the United States.

And that’s only considering the largest players. Every US national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the price of producing and distributing propaganda, bringing that capability within the budget of many more countries.

A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the US. They talked about their expectations regarding election interference in 2024. They expected the usual players—Russia, China, and Iran—and a significant new one: “domestic actors.” That is a direct result of this reduced cost.

Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups, and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal, and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.

Disinformation is an arms race. Both attackers and defenders have improved, and the world of social media itself has changed. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is a newer platform that is controlled by China and more suitable for short, provocative videos—ones that AI makes much easier to produce. And the current crop of generative AI systems is being connected to tools that will make content distribution easier as well.

Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says—or amplifies—something political. These persona bots, as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.

That’s just one scenario. The military officers in Russia, China, and elsewhere in charge of election interference are likely to have their best people thinking of others. And their tactics are likely to be much more sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before rolling them out at scale. When that happens, it’s important to be able to fingerprint these tactics. Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now.

In the computer security world, researchers recognize that sharing methods of attack and their effectiveness is the only way to build strong defensive systems. The same kind of thinking also applies to these information campaigns: The more that researchers study what techniques are being employed in distant countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016. I believe the US needs to have efforts in place to fingerprint and identify AI-produced propaganda in Taiwan, where a presidential candidate claims a deepfake audio recording has defamed him, and other places. Otherwise, we’re not going to see them when they arrive here. Unfortunately, researchers are instead being targeted and harassed.

Maybe this will all turn out okay. There have been some important democratic elections in the generative AI era with no significant disinformation issues: primaries in Argentina, first-round elections in Ecuador, and national elections in Thailand, Turkey, Spain, and Greece. But the sooner we know what to expect, the better we can deal with what comes.

This essay previously appeared in The Conversation.


Gartner predicted APIs would be the #1 attack vector – Two years later, is it true?


The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Over the last few years, APIs have rapidly become a core strategic element for businesses that want to scale and succeed within their industries. In fact, according to recent research, 97% of enterprise leaders believe that successfully executing an API strategy is essential to ensuring their organization’s growth and revenue. This shift has led to a massive proliferation in APIs, with businesses relying on hundreds or even thousands of APIs to provide their technology offerings, enhance their products, and leverage data from various sources.

However, with this growth, businesses have opened the door to increased risk. In 2021, Gartner predicted that APIs would become the top attack vector. Now, two years and a number of notable breaches via APIs later, it’s hard (or rather, impossible) to dispute this.

The security trends shaping the API landscape

One of the biggest problems with APIs is that they are notoriously hard to secure. The API ecosystem is constantly evolving, with enterprises producing huge numbers of APIs in a way that’s outpacing the maturity of network and application security tools. Many new APIs are created on emerging platforms and architectures and hosted on various cloud environments. This makes traditional security measures like web application firewalls and API gateways ineffective, as they can’t meet the unique security requirements of APIs.

For bad actors, the lack of available security measures for APIs means that they are easier to compromise than other technologies that rely on traditional (and secure) architectures and environments. Given that so many businesses have invested heavily in their API ecosystems and made APIs core to their operations, an attack on an API can be highly disruptive. If a cybercriminal gains access to an API that handles sensitive data, they can do considerable financial and reputational damage.

At the same time, many businesses have limited visibility into their API inventory. This means there could be numerous unmanaged and “invisible” APIs within a company’s environment, and these make it increasingly difficult for security teams to understand the full scope of the attack surface, see where sensitive data is exposed, and properly align protections to prevent misuse and attacks.
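
Closing that visibility gap starts with comparing what the organization *thinks* it exposes against what is actually being called. A toy sketch of that reconciliation (endpoint names are hypothetical):

```python
# Sketch: flag "shadow" APIs (observed in traffic but undocumented)
# and "zombie" candidates (documented but never observed).
documented = {"/api/v1/users", "/api/v1/orders"}            # from the API spec/inventory
observed = {"/api/v1/users", "/api/v1/orders",
            "/api/v1/export", "/internal/debug"}            # from gateway/traffic logs

shadow = observed - documented    # unmanaged, invisible attack surface
zombie = documented - observed    # stale entries worth retiring

print("shadow APIs:", sorted(shadow))
print("zombie candidates:", sorted(zombie))
```

In practice the "observed" set would come from gateway or traffic logs rather than a hard-coded list, but the set-difference logic is the core of most inventory-discovery tooling.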

In light of these trends, it’s no surprise then that Salt Security recently reported a 400% increase in API attacks in the few months leading up to December 2022. Unfortunately, ensuring that APIs are secured with authentication mechanisms is not enough to deter bad actors. Data shows that 78% of these attacks came from seemingly legitimate users who had nonetheless managed to authenticate properly before acting maliciously.

At a more granular level, 94% of the report’s respondents had a security issue with their production APIs in the last year. A significant 41% cited vulnerabilities, and 40% noted that they had authentication problems. In addition, 31% experienced sensitive data exposure or a privacy incident — and with the average cost of a data breach currently at $4.45 million, this poses a significant financial risk. Relatedly, 17% of respondents experienced a security breach via one of their APIs.

API security is lagging behind

While API security is increasingly becoming a must-have for leadership teams — Salt’s report indicated that at least 48% of C-suite teams are talking about it — there’s still a long way to go before it becomes a priority for everyone. Security teams are still facing a number of concerns when it comes to their API security, including outdated or zombie APIs, documentation challenges (which are common given the constant rate of change APIs experience), data exfiltration, and account takeover or misuse.

The truth is, most API security strategies remain in their infancy. Only 12% of Salt Security’s respondents were able to say that they have advanced security strategies in place, including API testing and runtime protection. Meanwhile, 30% admitted to having no current API strategy, even though they have APIs running in production.

Next steps with API security

With reliance on APIs at an all-time high and critical business outcomes relying upon them, it is even more imperative that organizations build and implement a strong API security strategy. This strategy should include steps for robust and updated documentation, clear visibility into the entire API inventory, secure API design and development, and security testing that accounts for business logic gaps. For APIs in production, there should be continuous monitoring and logging, mediation tools like API gateways to improve visibility and security, the ability to identify and log API drift, and runtime protection deployment, to name a few.
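
Several of those runtime measures reduce to comparing an API's current behavior against a recorded baseline. As a toy illustration of the "identify and log API drift" step (field names are hypothetical):

```python
# Sketch: detect simple API drift — response fields that appeared or
# disappeared relative to a recorded baseline schema.
def drift(baseline: set[str], current: set[str]) -> dict[str, set[str]]:
    """Return fields added to and removed from the baseline."""
    return {"added": current - baseline, "removed": baseline - current}

# A response that silently gained fields and dropped one:
report = drift({"id", "name", "email"},
               {"id", "name", "email_hash", "created_at"})
print(report)
```

Real drift detection works against full API specifications (types, auth requirements, status codes), but even this field-level diff can surface an endpoint that has started returning data the documentation never mentioned.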

As businesses continue to leverage the power of APIs, it is their responsibility to adopt and deploy a strong API security strategy. Only then will companies be able to reduce the threat potential of APIs and counter Gartner’s prediction.


USN-6418-1: Node.js vulnerabilities


It was discovered that Node.js incorrectly handled certain inputs. If a user
or an automated system were tricked into opening a specially crafted input
file, a remote attacker could possibly use this issue to cause a denial of
service. This issue was only fixed in Ubuntu 20.04 LTS. (CVE-2021-22883)

Vít Šesták discovered that Node.js incorrectly handled certain inputs. If a
user or an automated system were tricked into opening a specially crafted
input file, a remote attacker could possibly use this issue to execute
arbitrary code. (CVE-2021-22884)
