Threat Actors Use AWS SSM Agent as a Remote Access Trojan

Read Time:3 Second

Mitiga’s research demonstrated two potential attack scenarios.

Read More

CVE-2022-40609

Read Time:17 Second

IBM SDK, Java Technology Edition 7.1.5.18 and 8.0.8.0 could allow a remote attacker to execute arbitrary code on the system, caused by an unsafe deserialization flaw. By sending specially-crafted data, an attacker could exploit this vulnerability to execute arbitrary code on the system. IBM X-Force ID: 236069.

Read More

webkitgtk-2.40.5-1.fc37

Read Time:15 Second

FEDORA-2023-19754c5a93

Packages in this update:

webkitgtk-2.40.5-1.fc37

Update description:

Fix several crashes and rendering issues

Security fixes: CVE-2023-38133, CVE-2023-38572, CVE-2023-38592, CVE-2023-38594, CVE-2023-38595, CVE-2023-38597, CVE-2023-38599, CVE-2023-38600, CVE-2023-38611

Read More

webkitgtk-2.40.5-1.fc38

Read Time:15 Second

FEDORA-2023-a479289864

Packages in this update:

webkitgtk-2.40.5-1.fc38

Update description:

Fix several crashes and rendering issues

Security fixes: CVE-2023-38133, CVE-2023-38572, CVE-2023-38592, CVE-2023-38594, CVE-2023-38595, CVE-2023-38597, CVE-2023-38599, CVE-2023-38600, CVE-2023-38611

Read More

The Wild West of AI: Do Any AI Laws Exist?

Read Time:3 Minute, 42 Second

Are we in the Wild West? Scrolling through your social media feed and breaking news sites, all the mentions of artificial intelligence and its uses make the online world feel like a lawless free-for-all.

While your school or workplace may have rules against using AI for projects, as of yet there are very few laws regulating the use of mainstream AI content generation tools. As the technology advances, are laws likely to follow? Let's explore.

What AI Laws Exist? 

As of August 2023, there are no specific laws in the United States governing the general public’s usage of AI. For example, there are no explicit laws banning someone from making a deepfake of a celebrity and posting it online. However, if a judge could construe the deepfake as defamation or if it was used as part of a scam, it could land the creator in hot water. 

The White House issued a draft of an artificial intelligence bill of rights that outlines best practices for AI technologists. This document isn't a law, though. It's more a list of suggestions urging developers to make AI unbiased and as accurate as possible, and not to rely completely on AI when a human could perform the same job equally well.1 The European Union is in the process of drafting the EU AI Act. Similar to the American AI Bill of Rights, the EU's act is mostly directed toward the developers responsible for calibrating and advancing AI technology.2

China is one country that has formal regulations on the use of AI, specifically deepfakes. A new law states that a person must give express consent before their face can be used in a deepfake. Additionally, the law bans citizens from using deepfakes to create fake news reports or content that could negatively affect the economy, national security, or China's reputation. Finally, all deepfakes must include a disclaimer announcing that the content was synthesized.3

Should AI Laws Exist in the Future? 

As scammers, edgy content creators, and money-conscious executives push the envelope with deepfakes, AI art, and text-generation tools, laws governing AI use may be key to stopping the spread of fake news and protecting people's livelihoods and reputations.

Deepfakes challenge the notion that "seeing is believing." Fake news reports can be dangerous to society when they encourage unrest or spark widespread outrage. Without treading on freedom of speech, is there a way for the U.S. and other countries to regulate deepfake creators who intend to spread dissent? China's mandate that deepfake content must include a disclaimer could be a good starting point.

The Writers Guild of America (WGA) and Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strikes are a prime example of how unbridled AI use could turn invasive and affect people's jobs. In their new contract negotiations, each union included an AI clause. The writers are asking that AI not be allowed to write "literary material." Actors are arguing against the widespread use of AI replication of their voices and faces. While deepfakes could save (already rich) studios millions, these recreations could put actors out of work and would allow studios to use an actor's likeness in perpetuity without their consent.4 Future laws around the use of AI will likely include clauses about consent and about who assumes the risk that text generators introduce to a project.

Use AI Responsibly 

In the meantime, while the world awaits more guidelines and firm regulations governing mainstream AI tools, the best way you can interact with AI in daily life is to do so responsibly and in moderation. This includes using your best judgment and always being transparent about when AI was involved in a project.

Overall, whether in your professional or personal life, it's best to view AI as a partner, not as a replacement.

1 The White House, "Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People"

2 European Parliament, "EU AI Act: first regulation on artificial intelligence"

3 CNBC, "China is about to get tougher on deepfakes in an unprecedented way. Here's what the rules mean"

4 NBC News, "Actors vs. AI: Strike brings focus to emerging use of advanced tech"

The post The Wild West of AI: Do Any AI Laws Exist? appeared first on McAfee Blog.

Read More

New SEC Rules around Cybersecurity Incident Disclosures

Read Time:58 Second

The US Securities and Exchange Commission adopted final rules around the disclosure of cybersecurity incidents. There are two basic rules:

Public companies must “disclose any cybersecurity incident they determine to be material” within four days, with potential delays if there is a national security risk.
Public companies must “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats” in their annual filings.

The rules go into effect this December.

In an email newsletter, Melissa Hathaway wrote:

Now that the rule is final, companies have approximately six months to one year to document and operationalize the policies and procedures for the identification and management of cybersecurity (information security/privacy) risks. Continuous assessment of the risk reduction activities should be elevated within an enterprise risk management framework and process. Good governance mechanisms delineate the accountability and responsibility for ensuring successful execution, while actionable, repeatable, meaningful, and time-dependent metrics or key performance indicators (KPI) should be used to reinforce realistic objectives and timelines. Management should assess the competency of the personnel responsible for implementing these policies and be ready to identify these people (by name) in their annual filing.

News article.

Read More

Code Mirage: How cyber criminals harness AI-hallucinated code for malicious machinations

Read Time:4 Minute, 30 Second

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Introduction:

The landscape of cybercrime continues to evolve, and cybercriminals are constantly seeking new methods to compromise software projects and systems. In a disconcerting development, cybercriminals are now capitalizing on AI-generated, unpublished package names, also known as "AI-hallucinated packages," by publishing malicious packages under these commonly hallucinated names. It should be noted that artificial hallucination is not a new phenomenon, as discussed in [3]. This article sheds light on this emerging threat, wherein unsuspecting developers inadvertently introduce malicious packages into their projects through code generated by AI.

AI-hallucinations:

Artificial intelligence (AI) hallucinations, as described in [2], refer to confident responses generated by AI systems that lack justification based on their training data. Similar to human psychological hallucinations, AI hallucinations involve the AI system providing information or responses that are not supported by the available data. However, in the context of AI, hallucinations are associated with unjustified responses or beliefs rather than false percepts. This phenomenon gained attention around 2022 with the introduction of large language models like ChatGPT, where users observed instances of seemingly random but plausible-sounding falsehoods being generated. By 2023, it was acknowledged that frequent hallucinations in AI systems posed a significant challenge for the field of language models.

The exploitative process:

Cybercriminals begin by deliberately publishing malicious packages under commonly hallucinated names produced by large language models (LLMs) such as ChatGPT within trusted repositories. These package names closely resemble legitimate and widely used libraries or utilities, such as the legitimate package 'arangojs' vs. the hallucinated package 'arangodb', as shown in the research done by Vulcan [1].
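
To make the name-collision risk concrete, the short sketch below (Python, standard library only) compares candidate dependency names against a small allow-list of packages a team already trusts and flags near-misses. Apart from the 'arangojs'/'arangodb' pair cited from the Vulcan research above, the allow-list and candidate names are invented for illustration; a real check would draw on your own vetted dependency inventory.

    import difflib

    # Illustrative allow-list of packages already trusted; in practice this would
    # come from a vetted dependency inventory, not a hard-coded list.
    known_good = ["arangojs", "requests", "cryptography"]

    # Names an AI-generated snippet might suggest (hypothetical examples, plus the
    # arangojs/arangodb pair discussed in the Vulcan research).
    candidates = ["arangodb", "reqeusts", "cryptografy", "requests"]

    for name in candidates:
        if name in known_good:
            print(f"{name}: exact match with a known package")
            continue
        close = difflib.get_close_matches(name, known_good, n=1, cutoff=0.7)
        if close:
            print(f"{name}: suspiciously close to '{close[0]}' -- verify before installing")
        else:
            print(f"{name}: not recognized -- research independently before use")

A name that clears this check is not thereby safe; the comparison only signals whether a suggested dependency obviously impersonates something you already use.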

The trap unfolds:

When developers, unaware of the malicious intent, use AI-based tools or large language models (LLMs) to generate code snippets for their projects, they can inadvertently fall into a trap. The AI-generated snippets can reference imaginary, unpublished libraries whose names cybercriminals have already claimed in public repositories. As a result, developers unknowingly import malicious packages into their projects, introducing vulnerabilities, backdoors, or other malicious functionality that compromises the security and integrity of the software and possibly other projects.

Implications for developers:

The exploitation of AI-generated hallucinated package names poses significant risks to developers and their projects. Here are some key implications:

Trusting familiar package names: Developers commonly rely on package names they recognize when introducing code snippets into their projects. The presence of malicious packages under commonly hallucinated names makes it increasingly difficult to distinguish between legitimate and malicious options when placing trust in AI-generated code.
Blind trust in AI-generated code: Many developers embrace the efficiency and convenience of AI-powered code generation tools. However, blind trust in these tools without proper verification can lead to unintentional integration of malicious code into projects.

Mitigating the risks:

To protect themselves and their projects from the risks associated with AI-generated code hallucinations, developers should consider the following measures:

Code review and verification: Developers must meticulously review and verify code snippets generated by AI tools, even if they appear to be similar to well-known packages. Comparing the generated code with authentic sources and scrutinizing the code for suspicious or malicious behavior is essential.
Independent research: Conduct independent research to confirm the legitimacy of the package. Visit official websites, consult trusted communities, and review the reputation and feedback associated with the package before integration; see the sketch after this list for one way to start such a check.
Vigilance and reporting: Developers should maintain a proactive stance in reporting suspicious packages to the relevant package managers and security communities. Promptly reporting potential threats helps mitigate risks and protect the wider developer community.
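
As one concrete form of that independent research, the following sketch (Python, standard library only) queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json) to confirm a package actually exists and to surface basic metadata worth reviewing before installation. The package name "examplepkg" is a placeholder, and the heuristics shown (release count, first upload date) are illustrative assumptions rather than a complete vetting process; other ecosystems such as npm have comparable registry APIs.

    import json
    import sys
    import urllib.error
    import urllib.request

    def inspect_pypi_package(name: str) -> None:
        """Print basic PyPI metadata for a package so a human can review it."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                # A 404 means the name is not on PyPI at all -- a strong hint that
                # an AI-suggested dependency may be hallucinated.
                print(f"'{name}' was not found on PyPI; do not install it blindly.")
                return
            raise

        info = data["info"]
        releases = data.get("releases", {})
        upload_times = sorted(
            f["upload_time"] for files in releases.values() for f in files
        )
        print(f"Name:         {info['name']}")
        print(f"Summary:      {info.get('summary') or '(none)'}")
        print(f"Project URL:  {info.get('package_url') or '(none)'}")
        print(f"Releases:     {len(releases)}")
        if upload_times:
            # Very new packages with a single release deserve extra scrutiny.
            print(f"First upload: {upload_times[0]}")

    if __name__ == "__main__":
        # "examplepkg" is a placeholder; pass a real package name on the command line.
        inspect_pypi_package(sys.argv[1] if len(sys.argv) > 1 else "examplepkg")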

Conclusion:

The exploitation of commonly hallucinated package names through AI-generated code is a concerning development in the realm of cybercrime. Developers must remain vigilant and take the necessary precautions to safeguard their projects and systems. By adopting a cautious approach, conducting thorough code reviews, and independently verifying the authenticity of packages, developers can mitigate the risks associated with AI-generated hallucinated package names.

Furthermore, collaboration between developers, package managers, and security researchers is crucial in detecting and combating this evolving threat. Sharing information, reporting suspicious packages, and collectively working towards maintaining the integrity and security of repositories are vital steps in thwarting the efforts of cybercriminals.

As the landscape of cybersecurity continues to evolve, staying informed about emerging threats and implementing robust security practices will be paramount. Developers play a crucial role in maintaining the trust and security of software ecosystems, and by remaining vigilant and proactive, they can effectively counter the risks posed by AI-generated hallucinated packages.

Remember, the battle against cybercrime is an ongoing one, and the collective efforts of the software development community are essential in ensuring a secure and trustworthy environment for all.

The guest author of this blog works at www.perimeterwatch.com

Citations:

[1] Lanyado, B. (2023, June 15). Can you trust ChatGPT's package recommendations? Vulcan Cyber. https://vulcan.io/blog/ai-hallucinations-package-risk
[2] Wikimedia Foundation. (2023, June 22). Hallucination (artificial intelligence). Wikipedia. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
[3] Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., et al. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys. https://doi.org/10.1145/3571730

Read More