RABET-V: Standardizing a Missed Facet of Election Security

The traditional testing approach for non-voting technology constrains election security. Learn how RABET-V does things differently.

USN-6546-1: LibreOffice vulnerabilities

Reginaldo Silva discovered that LibreOffice incorrectly handled filenames
when passing embedded videos to GStreamer. If a user were tricked into
opening a specially crafted file, a remote attacker could possibly use this
issue to execute arbitrary GStreamer plugins. (CVE-2023-6185)
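
The flaw class here is a familiar one: an untrusted filename is spliced into a pipeline or command description instead of being passed along as an opaque argument. The advisory does not show LibreOffice's code, so the Python sketch below illustrates the analogous pattern with a hypothetical external player command (mediaplayer is made up):

    import subprocess

    def play_unsafe(filename: str) -> None:
        # Vulnerable pattern: the untrusted filename becomes part of the
        # command text, so a crafted name can inject extra elements,
        # analogous to injecting arbitrary plugins via a GStreamer
        # pipeline description string.
        subprocess.run(f"mediaplayer {filename}", shell=True)

    def play_safe(filename: str) -> None:
        # Safer pattern: the filename is passed as one opaque argument
        # and is never re-parsed as command or pipeline syntax.
        subprocess.run(["mediaplayer", filename], check=False)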

Reginaldo Silva discovered that LibreOffice incorrectly handled certain
non-typical hyperlinks. If a user were tricked into opening a specially
crafted file, a remote attacker could possibly use this issue to execute
arbitrary scripts. (CVE-2023-6186)
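
A common defense against script-capable links is a strict scheme allow-list checked before any handler is invoked. This is a minimal sketch of that general pattern, not LibreOffice's actual fix; the scheme list is a hypothetical example:

    from urllib.parse import urlsplit

    # Hypothetical allow-list of schemes a document viewer opens directly.
    SAFE_SCHEMES = {"http", "https", "mailto"}

    def open_hyperlink(url: str) -> None:
        scheme = urlsplit(url).scheme.lower()
        if scheme not in SAFE_SCHEMES:
            # Non-typical schemes are refused rather than handed to an
            # arbitrary handler that might execute scripts.
            raise ValueError(f"refusing link with scheme {scheme!r}")
        print(f"opening {url} with the system browser")

    open_hyperlink("https://example.com/report")  # allowed
    # open_hyperlink("script:payload")            # would raise ValueError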

USN-6545-1: WebKitGTK vulnerabilities

Several security issues were discovered in the WebKitGTK Web and JavaScript
engines. If a user were tricked into viewing a malicious website, a remote
attacker could exploit a variety of issues related to web browser security,
including cross-site scripting attacks, denial of service attacks, and
arbitrary code execution.

USN-6500-2: Squid vulnerabilities

USN-6500-1 fixed several vulnerabilities in Squid. This update provides
the corresponding update for Ubuntu 16.04 LTS and Ubuntu 18.04 LTS.

Original advisory details:

Joshua Rogers discovered that Squid incorrectly handled the Gopher
protocol. A remote attacker could possibly use this issue to cause Squid to
crash, resulting in a denial of service. Gopher support has been disabled
in this update. (CVE-2023-46728)

Joshua Rogers discovered that Squid incorrectly handled HTTP Digest
Authentication. A remote attacker could possibly use this issue to cause
Squid to crash, resulting in a denial of service. (CVE-2023-46847)
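
Crash bugs in authentication parsers typically stem from copying header fields into fixed-size buffers without checking their length first. The advisory does not include Squid's code; the Python sketch below shows the validation pattern in a memory-safe form, with a made-up MAX_FIELD_LEN standing in for a C parser's buffer size:

    import re

    # Hypothetical cap mirroring a fixed-size buffer in a C parser.
    MAX_FIELD_LEN = 256
    _PARAM = re.compile(r'(\w+)=(?:"([^"]*)"|([^",]*))')

    def parse_digest_params(header: str) -> dict:
        """Parse Digest auth parameters, rejecting oversized values."""
        if not header.startswith("Digest "):
            raise ValueError("not a Digest authorization header")
        params = {}
        for key, quoted, bare in _PARAM.findall(header[len("Digest "):]):
            value = quoted or bare
            # Length check up front: copying into a fixed buffer without
            # it is exactly the overflow class behind crashes like this.
            if len(value) > MAX_FIELD_LEN:
                raise ValueError(f"{key} value too long")
            params[key] = value
        return params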

USN-6544-1: GNU binutils vulnerabilities

It was discovered that GNU binutils incorrectly handled certain COFF files.
An attacker could possibly use this issue to cause a crash or execute
arbitrary code. This issue only affected Ubuntu 14.04 LTS. (CVE-2022-38533)

It was discovered that GNU binutils was not properly performing bounds
checks in several functions, which could lead to a buffer overflow. An
attacker could possibly use this issue to cause a denial of service,
expose sensitive information or execute arbitrary code. This issue only
affected Ubuntu 20.04 LTS and Ubuntu 22.04 LTS.
(CVE-2022-4285, CVE-2020-19726, CVE-2021-46174)
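
The underlying pattern is a parser that trusts a length field read from the input file. Binutils is written in C, so the Python sketch below can only illustrate the check such a parser needs before using an attacker-controlled length; the record layout is invented for illustration:

    import struct

    def read_record(buf: bytes, offset: int):
        """Read one length-prefixed record, validating bounds first."""
        if offset + 4 > len(buf):
            raise ValueError("truncated length field")
        (length,) = struct.unpack_from("<I", buf, offset)
        # Check the attacker-controlled length against the real buffer
        # size BEFORE reading. A C parser that skips this check is the
        # overflow class described in the advisory. (Python slices merely
        # clamp, so this sketch shows the pattern, not the crash.)
        if length > len(buf) - offset - 4:
            raise ValueError("record length exceeds buffer")
        end = offset + 4 + length
        return buf[offset + 4:end], end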

It was discovered that GNU binutils contained a reachable assertion, which
could lead to an intentional assertion failure when processing certain
crafted DWARF files. An attacker could possibly use this issue to cause a
denial of service. This issue only affected Ubuntu 20.04 LTS
and Ubuntu 22.04 LTS. (CVE-2022-35205)
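
A reachable assertion means malformed input can trigger a sanity check that was assumed to be unreachable, aborting the whole process. The robust pattern, sketched below in Python with an invented magic-byte format, is to validate untrusted input with recoverable errors rather than assertions:

    def parse_entry(data: bytes) -> int:
        """Parse one entry from untrusted input."""
        # Fragile: an assertion on attacker-controlled data is itself a
        # denial-of-service vector (and disappears under python -O):
        #   assert data and data[0] == 0x42, "bad magic"

        # Robust: validate and raise an error the caller can handle.
        if not data or data[0] != 0x42:
            raise ValueError("bad magic byte in entry")
        return data[0]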

Have you accounted for AI risk in your risk management framework?

Artificial intelligence (AI) is poised to significantly influence many facets of society, spanning healthcare, transportation, finance, and national security. Practitioners and the public alike are actively debating the myriad ways AI could be used and how it should be applied.

It is crucial to understand and address the real-world consequences of AI deployment, which reach well beyond suggestions for your next streaming video or predictions of your shopping preferences. A pivotal question of our era is how we can harness the power of AI for the good of society and use it to improve lives. Meanwhile, the gap between introducing an innovative technology and seeing it misused is shrinking fast. As we embrace the capabilities of AI, we must also brace for heightened technological risks, ranging from bias to security threats.

In this digital era, where cybersecurity concerns are already on the rise, AI introduces a new set of vulnerabilities. As we confront these challenges, however, we must not lose sight of the bigger picture: the world of AI encompasses both positive and negative aspects, and it is evolving rapidly. To keep pace, we must simultaneously drive the adoption of AI, defend against its associated risks, and ensure responsible use. Only then can we unlock the full potential of AI for groundbreaking advancements without compromising our ongoing progress.

Overview of the NIST Artificial Intelligence Risk Management Framework

The NIST AI Risk Management Framework (AI RMF) is a comprehensive guideline, developed by NIST in collaboration with a broad range of stakeholders and in alignment with legislative efforts, to assist organizations in managing the risks associated with AI systems. It aims to enhance the trustworthiness of AI technologies and to minimize the harm they can cause. The framework is divided into two main parts:

Planning and understanding: This part guides organizations in evaluating the risks and benefits of AI and defines the criteria for trustworthy AI systems. Trustworthiness is measured by factors such as validity, reliability, security, resilience, accountability, transparency, explainability, privacy enhancement, and fairness with managed biases.

Actionable guidance: This section, known as the core of the framework, outlines four key steps – govern, map, measure, and manage. These steps are integrated into the AI system development process to establish a risk management culture, identify and assess risks, and implement effective mitigation strategies. In practice, the four steps are preceded by a preliminary information-gathering phase (a sketch of how the steps can anchor a concrete risk register follows this list):

Information gathering: Collecting essential data about AI systems, such as project details and timelines.

Govern: Establishing a strong governance culture for AI risk management throughout the organization.

Map: Framing risks in the context of the AI system to enhance risk identification.

Measure: Using various methods to analyze and monitor AI risks and their impacts.

Manage: Applying systematic practices to address identified risks, focusing on risk treatment and response planning.
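
As a concrete illustration of how the four steps can anchor day-to-day work, here is a minimal risk-register entry in Python; the fields, scales, scoring rule, and example risks are illustrative assumptions, not part of the AI RMF itself:

    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        """One entry in a hypothetical AI risk register."""
        system: str                   # which AI system is affected (Map)
        description: str              # the identified risk (Map)
        likelihood: int               # 1 (rare) to 5 (almost certain) (Measure)
        impact: int                   # 1 (minor) to 5 (severe) (Measure)
        owner: str                    # accountable role (Govern)
        treatment: str = "untreated"  # mitigation decision (Manage)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        AIRisk("resume-screening model", "disparate impact on applicants",
               likelihood=3, impact=5, owner="ML governance board"),
        AIRisk("support chatbot", "prompt injection leaks customer data",
               likelihood=4, impact=4, owner="application security team"),
    ]

    # Manage: review the highest-scoring risks first.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"[{risk.score}] {risk.system}: {risk.description} ({risk.treatment})")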

The AI RMF is a great tool for helping organizations create a strong governance program and manage the risks associated with their AI systems. Even though it is not mandatory under any current or proposed law, it is undoubtedly a valuable resource that can help companies stay ahead with a sustainable risk management framework.
