USN-6344-1: Linux kernel (Azure) vulnerabilities

Read Time:1 Minute, 48 Second

Zi Fan Tan discovered that the binder IPC implementation in the Linux
kernel contained a use-after-free vulnerability. A local attacker could use
this to cause a denial of service (system crash) or possibly execute
arbitrary code. (CVE-2023-21255)

It was discovered that a race condition existed in the f2fs file system in
the Linux kernel, leading to a null pointer dereference vulnerability. An
attacker could use this to construct a malicious f2fs image that, when
mounted and operated on, could cause a denial of service (system crash).
(CVE-2023-2898)

It was discovered that the DVB Core driver in the Linux kernel did not
properly handle locking events in certain situations. A local attacker
could use this to cause a denial of service (kernel deadlock).
(CVE-2023-31084)

Quentin Minster discovered that the KSMBD implementation in the Linux
kernel did not properly handle session setup requests. A remote attacker
could possibly use this to cause a denial of service (memory exhaustion).
(CVE-2023-32247)

Quentin Minster discovered that a race condition existed in the KSMBD
implementation in the Linux kernel when handling session operations. A
remote attacker could use this to cause a denial of service (system crash)
or possibly execute arbitrary code. (CVE-2023-32250, CVE-2023-32252,
CVE-2023-32257)

It was discovered that a race condition existed in the KSMBD implementation
in the Linux kernel when handling session connections, leading to a use-
after-free vulnerability. A remote attacker could use this to cause a
denial of service (system crash) or possibly execute arbitrary code.
(CVE-2023-32258)

It was discovered that the KSMBD implementation in the Linux kernel did not
properly validate buffer sizes in certain operations, leading to an out-of-
bounds read vulnerability. A remote attacker could use this to cause a
denial of service (system crash) or possibly expose sensitive information.
(CVE-2023-38426, CVE-2023-38428)

It was discovered that the KSMBD implementation in the Linux kernel did not
properly calculate the size of certain buffers. A remote attacker could use
this to cause a denial of service (system crash) or possibly execute
arbitrary code. (CVE-2023-38429)

Read More

USN-6343-1: Linux kernel (OEM) vulnerabilities

Read Time:1 Minute, 39 Second

It was discovered that the IPv6 implementation in the Linux kernel
contained a high rate of hash collisions in the connection lookup table. A
remote attacker could use this to cause a denial of service (excessive CPU
consumption). (CVE-2023-1206)

Ross Lagerwall discovered that the Xen netback backend driver in the Linux
kernel did not properly handle certain unusual packets from a
paravirtualized network frontend, leading to a buffer overflow. An attacker
in a guest VM could use this to cause a denial of service (host system
crash) or possibly execute arbitrary code. (CVE-2023-34319)

It was discovered that the Bluetooth subsystem in the Linux kernel did not
properly handle L2CAP socket release, leading to a use-after-free
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2023-40283)

It was discovered that some network classifier implementations in the Linux
kernel contained use-after-free vulnerabilities. A local attacker could use
this to cause a denial of service (system crash) or possibly execute
arbitrary code. (CVE-2023-4128)

Andy Nguyen discovered that the KVM implementation for AMD processors in
the Linux kernel with Secure Encrypted Virtualization (SEV) contained a
race condition when accessing the GHCB page. A local attacker in a SEV
guest VM could possibly use this to cause a denial of service (host system
crash). (CVE-2023-4155)

It was discovered that the TUN/TAP driver in the Linux kernel did not
properly initialize socket data. A local attacker could use this to cause a
denial of service (system crash). (CVE-2023-4194)

Maxim Suhanov discovered that the exFAT file system implementation in the
Linux kernel did not properly check a file name length, leading to an out-
of-bounds write vulnerability. An attacker could use this to construct a
malicious exFAT image that, when mounted and operated on, could cause a
denial of service (system crash) or possibly execute arbitrary code.
(CVE-2023-4273)

Read More

Keeping cybersecurity regulations top of mind for generative AI use

Read Time:5 Minute, 32 Second

The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Can businesses stay compliant with security regulations while using generative AI? It’s an important question to consider as more businesses begin implementing this technology. What security risks are associated with generative AI, and how can businesses navigate those risks while staying compliant with cybersecurity regulations?

Generative AI cybersecurity risks

There are several cybersecurity risks associated with generative AI that may pose a challenge for staying compliant with regulations. These include exposure of sensitive data, compromise of intellectual property, and improper use of AI.

Risk of improper use

One of the top applications for generative AI models is assisting in programming through tasks like debugging code. Leading generative AI models can even write original code. Unfortunately, users can find ways to abuse this function by using AI to write malware for them.

For instance, one security researcher got ChatGPT to write polymorphic malware, despite protections intended to prevent this kind of application. Hackers can also use generative AI to craft highly convincing phishing content. Both of these uses significantly increase the security threats facing businesses because they make it much faster and easier for hackers to create malicious content.

Risk of data and IP exposure

Generative AI models are built with machine learning, and many providers use customer interactions to keep improving their systems. Prompts may be retained and folded into future training data, informing future output. As a result, the AI may “remember” any information a user includes in their prompts.

Generative AI can also put a business’s intellectual property at risk. These algorithms are great at creating seemingly original content, but it’s important to remember that the AI can only recombine material it has already seen. Additionally, any written content or images fed into a generative AI may become part of its training data and influence future generated content.

This means a generative AI may end up using a business’s IP in countless pieces of generated writing or art. The black-box nature of most AI algorithms makes it difficult to trace their reasoning, so it is virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business’s IP, it is essentially out of the business’s control.

Risk of compromised training data

One cybersecurity risk unique to AI is “poisoned” training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave.

Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to utilize their backdoor access.

Using generative AI within security regulations

While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn’t create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this.

Understand all relevant regulations

Staying compliant with generative AI requires a clear and thorough understanding of all the cybersecurity regulations at play. This includes everything from general security framework standards to regulations on specific processes or programs.

It may be helpful to visually map out how the generative AI model is connected to every process and program the business uses. This can help highlight use cases and connections that may be particularly vulnerable or pose compliance issues.

Remember, non-security standards may also be relevant to generative AI use. For example, ISO 26000 provides guidance on social responsibility, including an organization’s impact on society. It is not a cybersecurity standard, but it is definitely relevant to generative AI.

If a business creates content or products with the help of an AI algorithm that turns out to be trained on copyrighted material used without permission, that poses a serious social issue for the business. Before adopting generative AI, businesses trying to comply with ISO 26000 or similar ethical standards need to verify that the AI’s training data is legally and fairly sourced.

Create clear guidelines for using generative AI

One of the most important steps for ensuring cybersecurity compliance with generative AI is establishing clear guidelines and limitations. Employees may not intend to create a security risk when they use generative AI, but they can do so inadvertently, for example by pasting confidential data into a prompt. Clear guidelines and limitations spell out how employees can use AI safely, allowing them to work more confidently and efficiently.

Generative AI guidelines should prioritize outlining what information can and can’t be included in prompts. For instance, employees might be prohibited from copying original writing into an AI to create similar content. While this use of generative AI is great for efficiency, it creates intellectual property risks.
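As a rough illustration, a guideline like “no customer PII or proprietary text in prompts” can be backed by a lightweight pre-submission check. The sketch below is a minimal, hypothetical Python example; the patterns, the check_prompt helper, and the blocking behavior are assumptions for illustration, not any particular vendor’s API or a complete data loss prevention solution.

```python
import re

# Hypothetical patterns a prompt guideline might prohibit.
# Real deployments would tune these to the organization's own data.
PROHIBITED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"(?i)\bconfidential\b|\binternal use only\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any prohibited patterns found in a prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]


def submit_prompt(prompt: str) -> None:
    violations = check_prompt(prompt)
    if violations:
        # Block the request and tell the employee which rule was hit.
        print("Prompt blocked; remove:", ", ".join(violations))
        return
    # Placeholder for the actual call to the generative AI service.
    print("Prompt allowed; sending to the AI service...")


if __name__ == "__main__":
    submit_prompt("Summarize this contract for jane.doe@example.com")
    submit_prompt("Draft a polite follow-up email about the Q3 roadmap")
```

A check like this cannot catch everything, but it turns a written guideline into an automatic reminder at the moment an employee is about to send a risky prompt.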

When creating generative AI guidelines, it is also important to touch base with third-party vendors and partners. Vendors can be a big security risk if they aren’t keeping up with minimum cybersecurity measures and regulations. In fact, the 2013 Target data breach, which exposed the personal data of as many as 70 million customers, began with credentials stolen from a third-party vendor.

Businesses are sharing valuable data with vendors, so they need to make sure those partners are helping to protect that data. Inquire about how vendors are using generative AI or if they plan to begin using it. Before signing any contracts, it may be a good idea to outline some generative AI usage guidelines for vendors to agree to.

Implement AI monitoring

AI can be a cybersecurity tool as much as it can be a potential risk. Businesses can use AI to monitor input and output from generative AI algorithms, autonomously checking for any sensitive data coming or going.

Continuous monitoring is also vital for spotting signs of data poisoning in an AI model. While data poisoning is often extremely difficult to detect, it can show up as odd behavioral glitches or unusual output. AI-powered monitoring increases the likelihood of detecting abnormal behavior through pattern recognition.
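A minimal sketch of what such monitoring could look like is below, assuming a simple wrapper around whatever generative AI client the business already uses. The generate callable, the sensitive-data pattern, and the thresholds are illustrative assumptions rather than a specific product’s interface.

```python
import logging
import re
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-monitor")

# Very rough stand-in for a sensitive-data detector (emails, US SSNs).
SENSITIVE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b|\b\d{3}-\d{2}-\d{4}\b")


def monitored_generate(generate: Callable[[str], str], prompt: str,
                       max_output_chars: int = 4000) -> str:
    """Wrap a generative AI call with basic input/output monitoring."""
    started = datetime.now(timezone.utc)

    if SENSITIVE.search(prompt):
        log.warning("Sensitive-looking data found in prompt; review before sending.")

    output = generate(prompt)

    # Flag unusual or leaky output; odd behavior can be one visible
    # symptom of a compromised or misbehaving model.
    if SENSITIVE.search(output):
        log.warning("Sensitive-looking data found in model output.")
    if len(output) > max_output_chars:
        log.warning("Output unusually long (%d chars).", len(output))

    # Keep an audit trail so incidents can be investigated later.
    log.info("prompt_chars=%d output_chars=%d at=%s",
             len(prompt), len(output), started.isoformat())
    return output


if __name__ == "__main__":
    # Stand-in for a real client call; any callable taking a prompt works.
    echo_model = lambda p: f"(model reply to: {p})"
    monitored_generate(echo_model, "Summarize our incident response policy")
```

In practice, the pattern matching would be replaced or augmented by a dedicated detection model, but even a thin wrapper like this gives the business logs to review and a hook for alerting on abnormal behavior.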

Safety and compliance with generative AI

Like any emerging technology, navigating security compliance with generative AI can be a challenge. Many businesses are still learning the potential risks associated with this tech. Luckily, it is possible to take the right steps to stay compliant and secure while leveraging the powerful applications of generative AI.

Read More