Security Risks of AI


Stanford and Georgetown have a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.

Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed without institutions fully understanding the security risks they pose. Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle. It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features. (Andy Grotto and I have vigorously argued against siloing AI security in its own governance and policy vertical.)

Our report also recommends more collaboration between cybersecurity practitioners, machine learning engineers, and adversarial machine learning researchers. Assessing AI vulnerabilities requires technical expertise that is distinct from the skill set of cybersecurity practitioners, and organizations should be cautioned against repurposing existing security teams without additional training and resources. We also note that AI security researchers and practitioners should consult with those addressing AI bias. AI fairness researchers have extensively studied how poor data, design choices, and risk decisions can produce biased outcomes. Since AI vulnerabilities may be more analogous to algorithmic bias than they are to traditional software vulnerabilities, it is important to cultivate greater engagement between the two communities.

Another major recommendation calls for establishing some form of information sharing among AI developers and users. Right now, even if vulnerabilities are identified or malicious attacks are observed, this information is rarely transmitted to others, whether peer organizations, other companies in the supply chain, end users, or government or civil society observers. Bureaucratic, policy, and cultural barriers currently inhibit such sharing. This means that a compromise will likely remain mostly unnoticed until long after attackers have successfully exploited vulnerabilities. To avoid this outcome, we recommend that organizations developing AI models monitor for potential attacks on AI systems, create—formally or informally—a trusted forum for incident information sharing on a protected basis, and improve transparency.
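To make "adversarial machine learning" concrete, here is a minimal illustrative sketch (not from the report; the classifier and values are hypothetical) of the kind of vulnerability at issue: a small, deliberately crafted perturbation to the input flips a model's prediction, even though the model itself is unchanged.

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w·x + b > 0, else class 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return 1 if w @ x + b > 0 else 0

# A clean input, classified as class 1.
x = np.array([2.0, 0.5, 0.1])

# FGSM-style attack: nudge each feature a small step (eps) in the
# direction that lowers the score. For a linear model, the gradient
# of the score with respect to x is simply w, so we step against sign(w).
eps = 0.7
x_adv = x - eps * np.sign(w)

print(predict(x))      # class 1 on the clean input
print(predict(x_adv))  # class 0 on the perturbed input
```

The perturbation is bounded per feature by eps, so the adversarial input can remain close to the original while still changing the outcome — the property that makes these flaws behave differently from traditional software bugs.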


5 most dangerous new attack techniques


Experts from the SANS Institute have revealed the five most dangerous new attack techniques in use by attackers, including cybercriminals and nation-state actors. The techniques were presented in a session at the RSA Conference in San Francisco, where a panel of SANS analysts explored emerging tactics, techniques, and procedures (TTPs) and advised organizations on how to prepare for them.

The SANS Institute is a leading provider of cybersecurity training, certifications, degrees, and resources, aiming to equip cybersecurity professionals with practical skills and knowledge.

The session, titled "The Five Most Dangerous New Attack Techniques," featured four prominent SANS panelists who offered actionable insights to help security leaders understand and stay ahead of evolving threats. The five emerging attack vectors covered were adversarial AI, ChatGPT-powered social engineering, third-party developer attacks, SEO attacks, and paid-advertising attacks.



Chinese hackers launch Linux variant of PingPull malware


Chinese state-sponsored threat actor Alloy Taurus has introduced a new variant of PingPull malware, designed to target Linux systems, Palo Alto Networks said in its research. Along with the new variant, another backdoor called Sword2033 was also identified by the researchers.

Alloy Taurus, a Chinese APT active since 2012, conducts cyberespionage campaigns across Asia, Europe, and Africa. The group is known for targeting telecommunications companies, but in recent years it has also been observed targeting financial and government institutions.

