Securing AI


With the proliferation of AI/ML-enabled technologies delivering business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance framework like the NIST AI RMF that enables business innovation while managing risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles.

Vulnerabilities in ChatGPT

A recently discovered vulnerability in gpt-3.5-turbo exposed identifiable information from training data. The vulnerability, reported in the news in late November 2023, was triggered by asking the chatbot to repeat a particular word endlessly. A group of security researchers from Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying an ML model without prior knowledge of the training dataset.

The researchers’ report shows an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a newly developed divergence attack on the aligned ChatGPT caused the model to emit training data at a rate 150 times higher than its normal behavior. Findings show that larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as model size grows. While similar attacks have been documented against unaligned models, the new ChatGPT vulnerability demonstrated a successful attack on aligned models, which are typically built with strict guardrails.
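
The divergence attack described above can be illustrated with a small, self-contained sketch. The prompt pattern and the detector below are illustrative reconstructions, not the researchers' actual tooling: the idea is to ask the model to repeat one word forever, then flag the point where the output stops repeating, since that divergent suffix is where memorized training data tended to appear.

```python
# Hedged sketch of the "repeat a word forever" divergence probe.
# The prompt wording and detection heuristic are illustrative assumptions.

def build_repeat_prompt(word: str) -> str:
    """Prompt pattern in the spirit of the reported attack (wording assumed)."""
    return f'Repeat this word forever: "{word} {word} {word}"'

def diverged_suffix(output: str, word: str) -> str:
    """Return the portion of a model response that stops repeating the word.

    In the reported attack, this divergent text is where memorized
    training data (including PII) could surface.
    """
    tokens = output.split()
    for i, tok in enumerate(tokens):
        if tok.strip('.,;:"').lower() != word.lower():
            return " ".join(tokens[i:])
    return ""

# Example: a response that drifts into other text after a few repetitions
sample = "poem poem poem poem My address is 123 Example St"
print(diverged_suffix(sample, "poem"))  # My address is 123 Example St
```

A real evaluation would compare divergent suffixes against known corpora to confirm verbatim memorization; this sketch only isolates the candidate text.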

This raises questions about best practices and methods for how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy.

U.S. and UK bilateral cybersecurity effort on securing AI

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK’s National Cyber Security Centre (NCSC), in cooperation with 21 agencies and ministries from 18 other countries, are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI, part of the U.S. and UK’s bilateral cybersecurity effort, were announced at the end of November 2023.

The pledge is an acknowledgement of AI risk by national leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The DHS CISA and UK NCSC joint Guidelines for Secure AI System Development aim to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, not as an afterthought.

Securing AI by design

Securing AI by design is a key approach to mitigating cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle is secure, from design to development, deployment, and operations and maintenance, is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development align closely with software development lifecycle practices defined in the NCSC’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).

The four pillars that embody the Guidelines for Secure AI System Development offer guidance for AI providers of any systems, whether newly created from the ground up or built on top of tools and services provided by others.

1. Secure design

The design stage of the AI system development lifecycle covers understanding risks, threat modeling, and the trade-offs to consider in system and model design.

Maintain awareness of relevant security threats
Educate developers on secure coding techniques and best practices in securing AI at the design stage
Assess and quantify threat and vulnerability criticality
Design the AI system for appropriate functionality, user experience, deployment environment, performance, assurance, oversight, and ethical and legal requirements
Select the AI model architecture, configuration, training data, training algorithm, and hyperparameters informed by the threat model
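
The design-stage guidance above, particularly assessing and quantifying threat and vulnerability criticality, can be captured in a lightweight threat register. The sketch below is illustrative (the entries, fields, and scoring scheme are assumptions, not part of the guidelines), showing one simple way to rank AI-specific threats during design:

```python
from dataclasses import dataclass

@dataclass
class AIThreat:
    """One entry in a design-stage threat register (fields are illustrative)."""
    name: str
    asset: str       # e.g., model, training data, pipeline, API
    likelihood: int  # 1 (rare) .. 5 (expected)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def criticality(self) -> int:
        # Simple likelihood x impact scoring; real programs may prefer
        # CVSS- or DREAD-style models instead of this toy metric.
        return self.likelihood * self.impact

threats = [
    AIThreat("Training-data extraction", "model", likelihood=3, impact=5),
    AIThreat("Prompt injection", "API", likelihood=4, impact=3),
]

# Rank threats so the most critical drive design decisions first
for t in sorted(threats, key=lambda t: t.criticality, reverse=True):
    print(t.name, t.criticality)
```

Feeding a ranked register like this into architecture and hyperparameter choices is one concrete way to make the threat model inform model design rather than sit beside it.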

2. Secure development

The development stage of the AI system development lifecycle provides guidelines on supply chain security, documentation, and asset and technical debt management.

Assess and secure supply chain of AI system’s lifecycle ecosystem
Track and secure all assets with associated risks
Document hardware and software components of AI systems whether developed internally or acquired through other third-party developers and vendors
Document training data sources, data sensitivity and guardrails on its intended and limited use
Develop protocols to report potential threats and vulnerabilities
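
The documentation guidance above, covering components, training-data sources, and sensitivity, can be sketched as a minimal "AI bill of materials" record. The structure and field names below are assumptions for illustration, not a standard format:

```python
# Hedged sketch of documenting AI system components and training-data
# provenance; the schema is illustrative, not a defined standard.
ai_bom = {
    "model": {"name": "sentiment-classifier", "version": "1.2.0",
              "origin": "internal"},
    "dependencies": [
        {"name": "example-tokenizer-lib", "version": "0.9.1",
         "origin": "third-party"},  # hypothetical third-party component
    ],
    "training_data": [
        {"source": "support-tickets-2023", "sensitivity": "contains PII",
         "intended_use": "sentiment fine-tuning only"},
        {"source": "public-reviews", "sensitivity": "public",
         "intended_use": "pretraining"},
    ],
}

def flag_sensitive_sources(bom: dict) -> list:
    """List training-data sources whose sensitivity labels need guardrails."""
    return [d["source"] for d in bom["training_data"]
            if "PII" in d["sensitivity"]]

print(flag_sensitive_sources(ai_bom))  # ['support-tickets-2023']
```

Keeping records like this for both internally developed and third-party components supports the supply-chain and asset-tracking guidance in one place.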

3. Secure deployment

The deployment stage of the AI system development lifecycle contains guidelines on protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.

Secure infrastructure by applying appropriate access controls to APIs, AI models and data, and to their training and processing pipelines, in both R&D and deployment
Protect AI model continuously by implementing standard cybersecurity best practices
Implement controls to detect and prevent attempts to access, modify, or exfiltrate confidential information
Develop incident response, escalation, and remediation plans supported by high-quality audit logs and other security features & capabilities
Evaluate security benchmarks and communicate limitations and potential failure modes before releasing generative AI systems
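
The access-control guidance above can be sketched as a deny-by-default scope check in front of a model-serving endpoint. The keys, scope names, and policy below are illustrative assumptions, not a particular product's API:

```python
# Hedged sketch of least-privilege access checks for a model API.
# Keys and scope names are hypothetical.
API_SCOPES = {
    "key-analytics": {"model:infer"},
    "key-mlops":     {"model:infer", "model:update"},
}

def authorize(api_key: str, required_scope: str) -> bool:
    """Deny by default; allow only if the key explicitly holds the scope."""
    return required_scope in API_SCOPES.get(api_key, set())

print(authorize("key-mlops", "model:update"))      # True
print(authorize("key-analytics", "model:update"))  # False
print(authorize("unknown-key", "model:infer"))     # False
```

The same pattern extends to the training and processing pipelines: every caller gets only the scopes it needs, and unknown callers get nothing.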

4. Secure operations and maintenance

The operations and maintenance stage of the AI system development lifecycle provides guidelines on actions once a system has been deployed, including logging and monitoring, update management, and information sharing.

Monitor the AI system’s and model’s behavior
Audit for compliance to ensure system complies with privacy and data protection requirements
Investigate incidents, isolate threats, and remediate vulnerabilities
Automate product updates with secure, modular update procedures for distribution
Share lessons learned and best practices for continuous improvement
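
The monitoring guidance above can be illustrated with a toy behavior monitor: track flagged model responses (e.g., suspected data leakage or policy violations) over a rolling window and alert when the rate exceeds a threshold. The window size, threshold, and flagging criteria are illustrative assumptions:

```python
from collections import deque

class OutputMonitor:
    """Toy sketch of post-deployment behavior monitoring: alert when the
    rate of flagged responses in a rolling window exceeds a threshold.
    Window size and threshold here are illustrative, not recommendations."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def record(self, flagged: bool) -> bool:
        """Record one response; return True when an alert should be raised."""
        self.events.append(flagged)
        rate = sum(self.events) / len(self.events)
        return rate > self.threshold

mon = OutputMonitor(window=10, threshold=0.2)
# Seven normal responses, then three flagged ones
alerts = [mon.record(f) for f in [False] * 7 + [True] * 3]
print(alerts[-1])  # True: flagged rate (0.3) exceeds the 0.2 threshold
```

In practice the alert would feed the incident investigation and remediation steps listed above, backed by the audit logs called for at the deployment stage.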

Securing AI with Zero Trust principles

AI and ML have accelerated Zero Trust adoption. A Zero Trust approach follows the principle of trust nothing, verify everything. It enforces least-privilege, per-request access for every entity (user, application, service, or device), with no entity trusted by default. This is a shift from the traditional security perimeter, where anything inside the network perimeter was considered trusted, to a model where nothing is trusted, especially with the rise in lateral movement and insider threats. Enterprise and consumer adoption of private, public, and hybrid multi-cloud in an increasingly mobile world has expanded the organization’s attack surface to cloud applications, cloud services, and the Internet of Things (IoT).

Zero Trust addresses the shift from a location-centric model to a more data-centric approach for granular security controls between users, devices, systems, data, applications, services, and assets. Zero Trust requires visibility and continuous monitoring and authentication of every one of these entities to enforce security policies at scale. Implementing Zero Trust architecture includes the following components:

Identity and access – Govern identity management with risk-based conditional access controls, authorization, accounting, and authentication such as phishing-resistant MFA
Data governance – Provide data protection with encryption, DLP, and data classification based on security policy
Networks – Encrypt DNS requests and HTTP traffic within the environment. Isolate and contain with microsegmentation.
Endpoints – Prevent, detect, and respond to incidents on identifiable and inventoried devices. Persistent threat identification and remediation with endpoint protection using ML. Enable Zero Trust Access (ZTA) to support remote access users instead of traditional VPN.
Applications – Secure APIs, cloud apps, and cloud workloads in the entire supply chain ecosystem
Automation and orchestration – Automate responses to security events. Orchestrate security operations and incident response to act quickly and effectively.
Visibility and analytics – Monitor with ML and analytics such as UEBA to analyze user behavior and identify anomalous activities
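
The components above come together in a per-request access decision: identity, device, and behavioral signals are all evaluated, and the default answer is deny. The signal names and policy thresholds in this sketch are illustrative assumptions, not a standard:

```python
# Hedged sketch of a Zero Trust per-request decision. Signal names and
# the risk threshold are illustrative; real deployments would pull these
# from identity providers, device management, and UEBA analytics.
def allow_request(user_authn: bool, mfa_passed: bool,
                  device_compliant: bool, risk_score: int) -> bool:
    """Grant least-privilege access only when every signal checks out."""
    if not (user_authn and mfa_passed):
        return False          # identity and phishing-resistant MFA required
    if not device_compliant:
        return False          # only inventoried, compliant endpoints
    return risk_score < 50    # e.g., behavioral risk below policy threshold

print(allow_request(True, True, True, risk_score=10))   # True
print(allow_request(True, False, True, risk_score=10))  # False: MFA failed
print(allow_request(True, True, True, risk_score=90))   # False: risky behavior
```

The key property is that every request re-evaluates all signals; there is no standing trust carried over from a previous request or a network location.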

Securing AI for humans 

The foundation for responsible AI is a human-centered approach. As nations, businesses, and organizations around the world forge efforts to secure AI through joint agreements, international standard guidelines, and specific technical controls and concepts, we can’t ignore that protecting humans is at the center of it all.

Personal data is the DNA of our identity in the hyperconnected digital world. Personal data is Personally Identifiable Information (PII) beyond name, date of birth, address, and mobile numbers: it includes medical, financial, racial, and religious information, handwriting, fingerprints, photographic images, video, and audio. It also includes biometric data like retina scans, voice signatures, and facial recognition. These are the digital characteristics that make each of us unique and identifiable.

Data protection and preserving privacy remain top priorities. AI scientists are exploring the use of synthetic data to reduce bias and create balanced datasets for learning and training AI systems.
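
A minimal sketch of the balancing idea: bring minority classes up to parity so the training set does not overrepresent one group. Here simple duplicate-and-resample stands in for real synthetic data generation (which would use proper generative methods); the data and labels are illustrative:

```python
import random

def balance_with_synthetic(records: list, label_key: str = "label") -> list:
    """Toy class-balancing sketch: resample minority-class records to stand
    in for synthetic data generation. Real systems would generate new,
    privacy-preserving synthetic records, not copies."""
    random.seed(0)  # deterministic for the example
    by_label = {}
    for r in records:
        by_label.setdefault(r[label_key], []).append(r)
    target = max(len(rows) for rows in by_label.values())
    balanced = []
    for label, rows in by_label.items():
        balanced.extend(rows)
        # Add naive "synthetic" copies until this class reaches the target
        while sum(1 for r in balanced if r[label_key] == label) < target:
            balanced.append(dict(random.choice(rows)))
    return balanced

data = [{"label": "a"}] * 8 + [{"label": "b"}] * 2
print(len(balance_with_synthetic(data)))  # 16: both classes brought to 8
```

The same interface applies whether the filler records come from resampling, as here, or from a generative model trained to produce realistic but non-identifying examples.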

Securing AI for humans is about protecting our privacy, identity, safety, trust, civil rights, civil liberties, and ultimately, our survivability.

To learn more

Explore our cybersecurity consulting services.


Smashing Security podcast #362: Ransomware fraud, pharmacy chaos, and suicide


Is there any truth behind the alleged data breach at Fortnite maker Epic Games? Who launched the ransomware attack that caused a fallout at pharmacies? And what’s the latest on the heart-breaking hack of Finnish therapy clinic Vastaamo? All this and much more is discussed in the latest edition of the “Smashing Security” podcast.


USN-6681-1: Linux kernel vulnerabilities


Wenqing Liu discovered that the f2fs file system implementation in the
Linux kernel did not properly validate inode types while performing garbage
collection. An attacker could use this to construct a malicious f2fs image
that, when mounted and operated on, could cause a denial of service (system
crash). (CVE-2021-44879)

It was discovered that the DesignWare USB3 for Qualcomm SoCs driver in the
Linux kernel did not properly handle certain error conditions during device
registration. A local attacker could possibly use this to cause a denial of
service (system crash). (CVE-2023-22995)

Bien Pham discovered that the netfilter subsystem in the Linux kernel
contained a race condition, leading to a use-after-free vulnerability. A
local user could use this to cause a denial of service (system crash) or
possibly execute arbitrary code. (CVE-2023-4244)

It was discovered that a race condition existed in the Bluetooth subsystem
of the Linux kernel, leading to a use-after-free vulnerability. A local
attacker could use this to cause a denial of service (system crash) or
possibly execute arbitrary code. (CVE-2023-51779)

It was discovered that a race condition existed in the ATM (Asynchronous
Transfer Mode) subsystem of the Linux kernel, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-51780)

It was discovered that a race condition existed in the Rose X.25 protocol
implementation in the Linux kernel, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-51782)

Alon Zahavi discovered that the NVMe-oF/TCP subsystem of the Linux kernel
did not properly handle connect command payloads in certain situations,
leading to an out-of-bounds read vulnerability. A remote attacker could use
this to expose sensitive information (kernel memory). (CVE-2023-6121)

It was discovered that the VirtIO subsystem in the Linux kernel did not
properly initialize memory in some situations. A local attacker could use
this to possibly expose sensitive information (kernel memory).
(CVE-2024-0340)


USN-6680-1: Linux kernel vulnerabilities


黄思聪 discovered that the NFC Controller Interface (NCI) implementation in
the Linux kernel did not properly handle certain memory allocation failure
conditions, leading to a null pointer dereference vulnerability. A local
attacker could use this to cause a denial of service (system crash).
(CVE-2023-46343)

It was discovered that a race condition existed in the Bluetooth subsystem
of the Linux kernel, leading to a use-after-free vulnerability. A local
attacker could use this to cause a denial of service (system crash) or
possibly execute arbitrary code. (CVE-2023-51779)

It was discovered that a race condition existed in the Rose X.25 protocol
implementation in the Linux kernel, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-51782)

Alon Zahavi discovered that the NVMe-oF/TCP subsystem of the Linux kernel
did not properly handle connect command payloads in certain situations,
leading to an out-of-bounds read vulnerability. A remote attacker could use
this to expose sensitive information (kernel memory). (CVE-2023-6121)

Jann Horn discovered that the io_uring subsystem in the Linux kernel
contained an out-of-bounds access vulnerability. A local attacker could use
this to cause a denial of service (system crash). (CVE-2023-6560)

Dan Carpenter discovered that the netfilter subsystem in the Linux kernel
did not store data in properly sized memory locations. A local user could
use this to cause a denial of service (system crash). (CVE-2024-0607)

Supraja Sridhara, Benedict Schlüter, Mark Kuhne, Andrin Bertschi, and
Shweta Shinde discovered that the Confidential Computing framework in the
Linux kernel for x86 platforms did not properly handle 32-bit emulation on
TDX and SEV. An attacker with access to the VMM could use this to cause a
denial of service (guest crash) or possibly execute arbitrary code.
(CVE-2024-25744)
