CVE-2022-48469

There is a traffic hijacking vulnerability in Huawei routers. Successful exploitation of this vulnerability can cause packets to be hijacked by attackers. 

CVE-2022-48330

A Huawei sound box product has an out-of-bounds write vulnerability. Attackers can exploit this vulnerability to cause a buffer overflow. Affected product versions include FLMG-10 10.0.1.0(H100SP22C00).

The Dangers of Artificial Intelligence

Over the decades, Hollywood has depicted artificial intelligence (AI) in multiple unsettling ways. In their futuristic settings, the AI begins to think for itself, outsmarts the humans, and overthrows society. The resulting dark world is left in a constant barrage of storms – metaphorically and meteorologically. (It’s always so gloomy and rainy in those movies.) 

AI has been a part of manufacturing, shipping, and other industries for several years now. But the emergence of mainstream AI in daily life is stirring debates about its use. Content, art, video, and voice generation tools can make you write like Shakespeare, look like Tom Cruise, or create digital masterpieces in the style of Van Gogh. While it starts out as fun and games, an overreliance or misuse of AI can quickly turn shortcuts into irresponsibly cut corners and pranks into malicious impersonations.   

It’s imperative that everyone interact responsibly with mainstream AI tools like ChatGPT, Bard, Craiyon, and Voice.ai, among others, to avoid these three real dangers of AI that you’re most likely to encounter. 

1. AI Hallucinations

The cool thing about AI is that it has advanced to the point where it can, in effect, think for itself. It’s constantly learning and forming new patterns. The more questions you ask it, the more data it collects and the “smarter” it gets. However, when you ask ChatGPT a question it doesn’t know the answer to, it doesn’t admit that it doesn’t know. Instead, it makes up an answer like a precocious schoolchild. This phenomenon is known as an AI hallucination.

One prime example of an AI hallucination occurred in a New York courtroom. A lawyer presented a lengthy brief that cited multiple law cases to back his point. It turns out the lawyer used ChatGPT to write the entire brief and he didn’t fact check the AI’s work. ChatGPT fabricated its supporting citations, none of which existed. 

AI hallucinations could become a threat to society in that they could populate the internet with false information. Researchers and writers have a duty to thoroughly double-check any work they outsource to text generation tools like ChatGPT. When a trustworthy online source publishes content and asserts it as the unbiased truth, readers should be able to trust that the publisher isn’t leading them astray.

2. Deepfake, AI Art, and Fake News

We all know that you can’t trust everything you read on the internet. Deepfake and AI-generated art deepen the mistrust. Now, you can’t trust everything you see on the internet. 

Deepfake is the digital manipulation of a photo or video to portray an event that never happened or portray a person doing or saying something they never did or said. AI art creates new images using a compilation of published works on the internet to fulfill the prompt. 

Deepfake and AI art become a danger to the public when people use them to supplement fake news reports. Individuals and organizations who feel strongly about their side of an issue may shunt integrity to the side to win new followers to their cause. Fake news is often incendiary and in extreme cases can cause unrest.  

Before you share a “news” article with your social media following or shout about it to others, do some additional research to ensure its accuracy. Additionally, scrutinize the video or image accompanying the story. A deepfake gives itself away when facial expressions or hand gestures don’t look quite right. Also, the face may distort if the hands get too close to it. To spot AI art, think carefully about the context. Is it too fantastic or terrible to be true? Check out the shadows, shading, and the background setting for anomalies. 

3. AI Voice Scams

An emerging dangerous use of AI is cropping up in AI voice scams. Phishers have attempted to get people’s personal details and gain financially over the phone for decades. But now with the help of AI voice tools, their scams are entering a whole new dimension of believability.  

With as little as three seconds of genuine audio, AI voice generators can mimic someone’s voice with up to 95% accuracy. While AI voice generators may add some humor to a comedy deepfake video, criminals are using the technology to frighten people and scam them out of money. A criminal clones someone’s voice, then calls that person’s loved one claiming they’ve been robbed or injured in an accident. McAfee’s Beware the Artificial Imposter report discovered that 77% of people targeted by an AI voice scam lost money as a result. Seven percent of victims lost between $5,000 and $15,000.

Use AI Responsibly 

Google’s code of conduct states “Don’t be evil.”2 Because AI relies on input from humans, we have the power to make AI as benevolent or as malevolent as we are. There’s a certain amount of trust involved in the engineers who hold the future of the technology – and if Hollywood is to be believed, the fate of humanity – in their deft hands and brilliant minds. 

“60 Minutes” placed AI’s influence on society on a tier with fire, agriculture, and electricity.3 Because AI never has to take a break, it can learn and teach itself new things every second of every day. It’s advancing quickly, and some of the written and visual art it creates can result in some touching expressions of humanity. But AI doesn’t quite understand the emotion it portrays. It’s simply a matter of matching patterns. Is AI – especially its use in creative pursuits – dimming the spark of humanity? That remains to be seen.

When used responsibly and in moderation in daily life, it may make us more efficient and inspire us to think in new ways. Be on the lookout for the dangers of AI and use this amazing technology for good. 

1 The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

2 Alphabet, “Google Code of Conduct”

3 60 Minutes, “Artificial Intelligence Revolution”

The post The Dangers of Artificial Intelligence appeared first on McAfee Blog.

USN-6156-2: SSSD regression

USN-6156-1 fixed a vulnerability in SSSD. In certain environments, not all
packages ended up being upgraded at the same time, resulting in
authentication failures when the PAM module was being used.

This update fixes the problem. We apologize for the inconvenience.

Original advisory details:

It was discovered that SSSD incorrectly sanitized certificate data used in
LDAP filters. When using this issue in combination with FreeIPA, a remote
attacker could possibly use this issue to escalate privileges.
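To illustrate the general vulnerability class behind the advisory (this is a hypothetical Python sketch, not SSSD’s actual code, which is written in C): attacker-influenced bytes, such as certificate data, that are embedded in an LDAP search filter without escaping can change the filter’s meaning. RFC 4515 requires that `*`, `(`, `)`, `\` and NUL be escaped as `\2a`, `\28`, `\29`, `\5c`, and `\00`. The function name and the `userCertificate` filter below are illustrative assumptions.

```python
def escape_ldap_filter_value(value: bytes) -> str:
    """Escape raw bytes for safe embedding in an LDAP search filter (RFC 4515)."""
    special = {0x2A: r"\2a", 0x28: r"\28", 0x29: r"\29", 0x5C: r"\5c", 0x00: r"\00"}
    out = []
    for b in value:
        if b in special:
            out.append(special[b])          # metacharacters get hex escapes
        elif 0x20 <= b <= 0x7E:
            out.append(chr(b))              # printable ASCII passes through
        else:
            out.append("\\%02x" % b)        # other bytes are hex-escaped too
    return "".join(out)

# Unescaped, this value would inject an extra wildcard clause into the filter;
# escaped, it is treated as literal data.
malicious = b"cert)(uid=*"
print("(userCertificate=%s)" % escape_ldap_filter_value(malicious))
# → (userCertificate=cert\29\28uid=\2a)
```

In the unescaped case, the resulting filter `(userCertificate=cert)(uid=*)` would match far more entries than intended, which is the kind of filter manipulation that can lead to privilege escalation.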

The CSO guide to top security conferences

There is nothing like attending a face-to-face event for career networking and knowledge gathering, and we don’t have to tell you how helpful it can be to get a hands-on demo of a new tool or to have your questions answered by experts.

Fortunately, plenty of great conferences are coming up in the months ahead.

If keeping abreast of security trends and evolving threats is critical to your job — and we know it is — then attending some top-notch security conferences is on your must-do list for 2023 and 2024.

From major events to those that are more narrowly focused, this list from the editors of CSO will help you find the security conferences that matter the most to you.


A vulnerability in MOVEit Transfer Could Allow for Elevated Privileges and Unauthorized Access

A vulnerability has been discovered in Progress MOVEit Transfer that could allow for elevated privileges and unauthorized access. MOVEit Transfer is managed file transfer software that allows enterprises to securely transfer files between business partners and customers using SFTP, SCP, and HTTP-based uploads. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured with fewer user rights on the system could be less impacted than those who operate with administrative user rights.

On June 16th, after the MS-ISAC’s initial advisory, a CVE was assigned to this new critical vulnerability (CVE-2023-35708) and additional remediation and patching steps were recommended. According to the updated Progress Community bulletin, the MOVEit patch released on June 15th must be applied to remediate CVE-2023-35708.
