golang-1.20.7-1.fc38

Read Time:11 Second

FEDORA-2023-a6c1ad5860

Packages in this update:

golang-1.20.7-1.fc38

Update description:

This update includes a security fix to the crypto/tls package, as well as bug fixes to the assembler and the compiler.

Read More

Citrix NetScaler ADC and NetScaler Gateway Unauthenticated Remote Code Execution Vulnerability (CVE-2023-3519)

Read Time:1 Minute, 19 Second

What is Citrix NetScaler ADC and NetScaler Gateway?

Citrix NetScaler ADC, previously known as Citrix ADC, is an Application Delivery Controller (ADC) designed to achieve secure and optimized network traffic.

Citrix NetScaler Gateway, previously known as Citrix Gateway, is an SSL-VPN solution designed to provide secure and optimized remote access.

What is the Attack?

According to the advisory published by Citrix, CVE-2023-3519 is an unauthenticated remote code execution vulnerability that affects the unmitigated Citrix NetScaler ADC and NetScaler Gateway products.

To be vulnerable, those products must be configured as a gateway or as an authentication, authorization and auditing (AAA) virtual server. The advisory also states that Citrix-managed servers are already mitigated and no action is required.

Why is this Significant?

This is significant because the Citrix advisory acknowledged that CVE-2023-3519 was exploited in the wild. Also, CISA added the vulnerability to the Known Exploited Vulnerabilities Catalog on July 19th, 2023. CISA released an advisory on July 20th stating that the vulnerability was exploited as a zero-day in June affecting an unnamed critical infrastructure organization.

FortiGuard Labs advises users to install the relevant updated version of NetScaler ADC and NetScaler Gateway as soon as possible.

What is the Vendor Solution?

Citrix released relevant updates on July 18th, 2023.

What FortiGuard Coverage is available?

FortiGuard Labs has an IPS signature, “Citrix.NetScaler.ADC.Gateway.Remote.Code.Execution” (default action is set to “pass”), in place for CVE-2023-3519.

Read More

CVE-2022-40609

Read Time:17 Second

IBM SDK, Java Technology Edition 7.1.5.18 and 8.0.8.0 could allow a remote attacker to execute arbitrary code on the system, caused by an unsafe deserialization flaw. By sending specially-crafted data, an attacker could exploit this vulnerability to execute arbitrary code on the system. IBM X-Force ID: 236069.

Read More

webkitgtk-2.40.5-1.fc37

Read Time:15 Second

FEDORA-2023-19754c5a93

Packages in this update:

webkitgtk-2.40.5-1.fc37

Update description:

Fix several crashes and rendering issues

Security fixes: CVE-2023-38133, CVE-2023-38572, CVE-2023-38592, CVE-2023-38594, CVE-2023-38595, CVE-2023-38597, CVE-2023-38599, CVE-2023-38600, CVE-2023-38611

Read More

webkitgtk-2.40.5-1.fc38

Read Time:15 Second

FEDORA-2023-a479289864

Packages in this update:

webkitgtk-2.40.5-1.fc38

Update description:

Fix several crashes and rendering issues

Security fixes: CVE-2023-38133, CVE-2023-38572, CVE-2023-38592, CVE-2023-38594, CVE-2023-38595, CVE-2023-38597, CVE-2023-38599, CVE-2023-38600, CVE-2023-38611

Read More

The Wild West of AI: Do Any AI Laws Exist?

Read Time:3 Minute, 42 Second

Are we in the Wild West? Scrolling through your social media feed and breaking news sites, all the mentions of artificial intelligence and its uses make the online world feel like a lawless free-for-all.  

While your school or workplace may have rules against using AI for projects, as of yet there are very few laws regulating the usage of mainstream AI content generation tools. As the technology advances, are laws likely to follow? Let’s explore. 

What AI Laws Exist? 

As of August 2023, there are no specific laws in the United States governing the general public’s usage of AI. For example, there are no explicit laws banning someone from making a deepfake of a celebrity and posting it online. However, if a judge could construe the deepfake as defamation or if it was used as part of a scam, it could land the creator in hot water. 

The White House issued a draft of an artificial intelligence bill of rights that outlines best practices for AI technologists. This document isn’t a law, though. It’s more of a list of suggestions to urge developers to make AI unbiased, as accurate as possible, and to not completely rely on AI when a human could perform the same job equally well.1 The European Union is in the process of drafting the EU AI Act. Similar to the American AI Bill of Rights, the EU’s act is mostly directed toward the developers responsible for calibrating and advancing AI technology.2 

China is one country that has formal regulations on the use of AI, specifically deepfake. A new law states that a person must give express consent to allow their faces to be used in a deepfake. Additionally, the law bans citizens from using deepfake to create fake news reports or content that could negatively affect the economy, national security, or China’s reputation. Finally, all deepfakes must include a disclaimer announcing that the content was synthesized.3 

Should AI Laws Exist in the Future? 

As scammers, edgy content creators, and money-conscious executives push the envelope with deepfake, AI art, and text-generation tools, laws governing AI use may be key to stopping the spread of fake news and protecting people’s livelihoods and reputations. 

Deepfake challenges the notion that “seeing is believing.” Fake news reports can be dangerous to society when they encourage unrest or spark widespread outrage. Without treading upon the freedom of speech, is there a way for the U.S. and other countries to regulate deepfake creators who intend to spread dissent? China’s mandate that deepfake content must include a disclaimer could be a good starting point. 

The Writers Guild of America (WGA) and Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strikes are a prime example of how unbridled AI use could turn invasive and impact people’s jobs. In their new contract negotiations, each union included an AI clause. The writers are asking that AI not be allowed to write “literary material.” The actors are arguing against the widespread use of AI replication of actors’ voices and faces. While deepfake could save (already rich) studios millions, these types of recreations could put actors out of work and would allow a studio to use an actor’s likeness in perpetuity without the actor’s consent.4 Future laws around the use of AI will likely include clauses about consent and about assuming the risks that text generators introduce to any project. 

Use AI Responsibly 

In the meantime, while the world awaits more guidelines and firm regulations governing mainstream AI tools, the best way you can interact with AI in daily life is to do so responsibly and in moderation. This includes using your best judgment and always being transparent about when AI was involved in a project.  

Overall, whether in your professional or personal life, it’s best to view AI as a partner, not as a replacement.  

1The White House, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” 

2European Parliament, “EU AI Act: first regulation on artificial intelligence” 

3CNBC, “China is about to get tougher on deepfakes in an unprecedented way. Here’s what the rules mean” 

4NBC News, “Actors vs. AI: Strike brings focus to emerging use of advanced tech” 

The post The Wild West of AI: Do Any AI Laws Exist? appeared first on McAfee Blog.

Read More

New SEC Rules around Cybersecurity Incident Disclosures

Read Time:58 Second

The US Securities and Exchange Commission has adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:

Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.

The rules go into effect this December.

In an email newsletter, Melissa Hathaway wrote:

Now that the rule is final, companies have approximately six months to one year to document and operationalize the policies and procedures for the identification and management of cybersecurity (information security/privacy) risks. Continuous assessment of the risk reduction activities should be elevated within an enterprise risk management framework and process. Good governance mechanisms delineate the accountability and responsibility for ensuring successful execution, while actionable, repeatable, meaningful, and time-dependent metrics or key performance indicators (KPI) should be used to reinforce realistic objectives and timelines. Management should assess the competency of the personnel responsible for implementing these policies and be ready to identify these people (by name) in their annual filing.

News article.

Read More