ZDI-24-020: Linux Kernel GSM Multiplexing Race Condition Local Privilege Escalation Vulnerability

Read Time:17 Second

This vulnerability allows local attackers to execute arbitrary code on affected installations of Linux Kernel. An attacker must first obtain the ability to execute low-privileged code on the target system in order to exploit this vulnerability. The ZDI has assigned a CVSS rating of 8.8. The following CVEs are assigned: CVE-2023-6546.

Read More

Staying Safe in the Age of AI

Read Time:7 Minute, 26 Second

We’ve seen how AI can create — and how it can transform our lives. What gets talked about less is how AI protects us too. 

Certainly, it’s tough to miss how generative AI has turned sci-fi dreams of the past into today’s reality. From AI apps that help ease loneliness thanks to their human-like conversations, to technology that can predict and manage health risks, to browsers that whip up pieces of art with a prompt, it’s changing the way we go about our day and the way we live our lives.  

However, we find ourselves only in generative AI’s earliest days. Countless more applications await over the near and distant horizon alike. 

Yet that’s the important thing to remember with AI. It’s an application. A tool. And like any other tool, it’s neutral. Whether it helps or harms comes down to the person using it.  

Thus, on the flip side of AI, we’ve seen all manner of shady and damaging applications. Hackers use AI to code new forms of malware at record rates. Scammers spin up convincing-looking phishing attacks and sites that harvest personal info, also at record rates. And we’ve further seen bad actors use so-called “deepfake” technologies to clone the voices and likenesses of public figures, whether for profit or to spread disinformation. 

So, amid the excitement about AI, there runs a thread of uncertainty. Recently, we found that 52% of Americans are more concerned than excited about AI in daily life. Only 10% of people said they’re more excited than concerned. Meanwhile, 36% feel a mix of excitement and concern. 

Uncertainty prevails, for sure. Yet something often gets overlooked in the conversation about AI: it can offer powerful protections against all manner of threats. Moreover, AI offers particularly potent protections against AI threats.  

In this way, AI is your ally. At McAfee, we’ve used it to protect you for nearly a decade now. In fact, AI applications have been around for some time, long before they made headlines like they do now. And we continue to evolve AI technologies to help keep you safe. In the age of AI, McAfee is your ally. Our aim is to give you certainty and safety in rapidly changing times. 

Know what’s real and what’s safe with McAfee’s AI. 

Ultimately, here’s what’s at stake today: people want to know what they can trust, and AI has made that tricky. What’s real? What’s fake? It’s getting tougher and tougher to tell. 

The future of AI and online safety lies in pairing progress with protection. Here at McAfee, we see this as our role. We’re evolving AI in ways that give people the power to protect their privacy, identity, and devices even better than before. Now, that protection extends yet further. It also gives them the power to know what they can trust whenever they go online.  

The time couldn’t be more right for that. Uncertainty about AI prevails. In all, more than half of Americans we talked to said they’re concerned that the arrival of AI has made online scams more accurate and believable.  

Our threat detection figures put their concerns into focus:  

We thwart 1.5 million in-field AI detections of threats (malicious sites and files) every week. That’s 8,928 malicious threats every hour and 149 every minute (see the quick math after this list). 
Our AI model has already identified and categorized half a billion malicious sites, a number that grows with each day. 
McAfee Labs detects and protects against more than a million phishing attempts every day, including more sophisticated and believable variants generated with AI tools. 
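For context on those figures, the per-hour and per-minute numbers follow directly from the weekly total. Here is a quick back-of-the-envelope check in Python (illustrative only; the 1.5 million-per-week figure is the only input):

```python
# Back-of-the-envelope check of the detection-rate figures above.
weekly_detections = 1_500_000

hours_per_week = 7 * 24                    # 168 hours in a week
minutes_per_week = hours_per_week * 60     # 10,080 minutes in a week

per_hour = weekly_detections / hours_per_week      # ~8,928.6
per_minute = weekly_detections / minutes_per_week  # ~148.8

print(f"{per_hour:.1f} per hour, {per_minute:.1f} per minute")
# -> 8928.6 per hour, 148.8 per minute (roughly the 8,928 and 149 cited above)
```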

With that, we ask ourselves, what can AI do for you? How can it keep you safe? Three principles provide the answer:  

AI should build trust — You can safely navigate places known and unknown in peace and with confidence. 
AI should uncover the truth — You know who and what’s real and what’s safe out there — like having your own personal lie detector in your back pocket.  
AI should make things clear — You understand and have control over the data and info you give up in exchange for access to conveniences and services like social media. 

These principles drive our thinking in significant ways as we pair progress with protection in the age of AI. They stand as our commitment to keeping you safe and certain online, through our existing technologies and entirely new technologies alike. 

McAfee’s AI protections are already keeping you safe. 

Because we’ve used AI as a core component of our protection for years now, it’s done plenty for you over that time. Our AI has sniffed out viruses and sketchy content online, and it’s helped steer you clear of malicious websites too.  

So, the AI in your McAfee antivirus works like this: 

It detects threats by referencing models of existing threats. This combats pre-existing threats and entirely new (zero-day) threats alike. Our AI can spot different varieties of threats by comparing them to features it’s seen before. It’s like AI learning to identify different varieties of fruit: an apple is still an apple whether it’s a Fuji or a Granny Smith. In the same way, a virus is still a virus whether it’s “Virus A” or the newly discovered “Virus Z.”  
It further detects suspicious events and behaviors. AI provides a particularly powerful tool against zero-day threats. It analyzes the activities of applications for patterns consistent with malicious behavior. With that, it can spot and prevent a previously unknown file or process from doing harm. In its way, AI says, “I’ve seen this sketchy behavior before. I’m going to flag it.” 
It automatically classifies threats and adds them to its body of knowledge. AI-driven threat protection gets stronger over time because it learns, something we call “threat intelligence.” The more threats it encounters, the more rapidly and readily it can determine whether a file means to do you harm, and the body of threat intelligence improves immensely as a result. A simplified sketch of how this kind of detection works follows below. 
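To make the three points above concrete, here is a minimal, hypothetical sketch of feature- and behavior-based detection. It is not McAfee’s actual engine (the real models, features, and thresholds are far more sophisticated and are not public); the trait names, scores, and threshold below are invented purely for illustration.

```python
# Hypothetical sketch of feature- and behavior-based threat detection.
# Trait names, weights, and the threshold are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Sample:
    name: str
    features: set[str]                                  # static traits seen in the file
    behaviors: set[str] = field(default_factory=set)    # activity observed at runtime

# "Threat intelligence": traits previously seen in known-bad samples.
KNOWN_BAD_FEATURES = {"packs_own_code", "disables_security_tools", "self_replicates"}
SUSPICIOUS_BEHAVIORS = {"encrypts_user_files", "contacts_unknown_c2", "modifies_boot_config"}

def score(sample: Sample) -> int:
    """Count how many known-bad traits and suspicious behaviors a sample shows."""
    return len(sample.features & KNOWN_BAD_FEATURES) + len(sample.behaviors & SUSPICIOUS_BEHAVIORS)

def classify(sample: Sample, threshold: int = 2) -> str:
    verdict = "malicious" if score(sample) >= threshold else "clean"
    if verdict == "malicious":
        # Learning step: fold the new sample's traits back into the knowledge base,
        # so a future variant ("Virus Z") is caught by the traits of "Virus A".
        KNOWN_BAD_FEATURES.update(sample.features)
    return verdict

# A brand-new variant shares traits with past threats, so it still gets flagged.
print(classify(Sample("virus_z.exe",
                      features={"packs_own_code", "self_replicates"},
                      behaviors={"encrypts_user_files"})))   # -> malicious
```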

 Now we’ve made improvements to our AI-driven protection — and unveiled all-new features that take full advantage of AI, such as McAfee Next-gen Threat Protection and McAfee Scam Protection. 

McAfee Next-gen Threat Protection — AI keeps you safer from new and existing threats.  

McAfee’s AI-powered security just got faster and stronger. Our Next-gen Threat Protection takes up less disk space, reduces its background processes by 75%, and scans 3x faster than before. This makes your time online safer without slowing down your browsing, shopping, streaming, and gaming.  

In AV-TEST’s product review from October 2023, it blocked 100% of entirely new malware attacks in real-world testing. It likewise scored 100% against malware discovered in the previous four weeks. In all, it received the highest marks for protection, performance, and usability, earning it the AV-TEST Top Product certification. 

Moreover, AI continually gets smarter because every evaluation provides more data for it to learn and improve its accuracy. McAfee conducts over 4 billion threat scans a day, and that number is quickly growing. We continue to innovate with leading-edge AI technology to provide the most advanced and powerful protection available. 

McAfee Scam Protection — AI lets you know if it’s legit or if it’s a scam.  

The AI-powered scam protection in McAfee+ is like having that lie detector test we mentioned earlier. Advanced AI-powered technology helps prevent you from opening scam texts and blocks risky sites if you accidentally click on a scam link in texts, QR codes, emails, social media posts, and more. This AI-driven scam protection delivers real-time mobile alerts when a scam text is detected and is the only app on the market that sends alerts on both iOS and Android. 

McAfee is your ally in the age of AI. 

Advances in threat protection and scam protection mark just the start of where we’re taking our long-standing use of AI next. Sure, AI has made life easier for hackers and scammers. In some ways. In yet more important ways, it’s making their lives far more difficult. Downright tough in fact, particularly as we use it here at McAfee to detect their scam messages and texts, beat their AI-generated malware, and warn you of their malicious websites. And that’s just for starters. We have more to come. 

You can expect to see other fraud-busting and info-validating uses of AI across our online protection software in the months to come. That’s what’s in store as we stand as your ally in the age of AI. 

The post Staying Safe in the Age of AI appeared first on McAfee Blog.

Read More

cpio privilege escalation vulnerability via setuid files in cpio archive

Read Time:23 Second

Posted by Georgi Guninski on Jan 08

cpio privilege escalation vulnerability via setuid files in cpio archive

Happy New Year, and may happiness be with you in 2024! 🙂

When extracting archives, cpio (at least version 2.13) preserves
the setuid flag, which might lead to privilege escalation.

One example is that r00t extracts to /tmp/ and scidiot runs /tmp/micq/backd00r
without further interaction from root.

We believe this is a vulnerability, since directory traversal in cpio
is considered…
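As a rough illustration of the concern above (a sketch only, separate from the quoted post; the extraction path is an example, not anything referenced by the author), an administrator might audit a freshly extracted tree for files carrying the setuid or setgid bit like this:

```python
# Minimal sketch: list setuid/setgid files under an extracted archive tree.
# The path below is an example; adjust it to wherever cpio extracted the archive.
import os
import stat

def find_setid_files(root: str):
    """Yield paths under `root` whose mode has the setuid or setgid bit set."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry; skip it
            if mode & (stat.S_ISUID | stat.S_ISGID):
                yield path, oct(mode & 0o7777)

for path, mode in find_setid_files("/tmp/extracted"):
    print(f"set-id bit preserved: {path} (mode {mode})")
```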

Read More

OXAS-ADV-2023-0006: OX App Suite Security Advisory

Read Time:22 Second

Posted by Martin Heiland via Fulldisclosure on Jan 08

Dear subscribers,

We’re sharing our latest advisory with you and would like to thank everyone who contributed to finding and solving those
vulnerabilities. Feel free to join our bug bounty programs for OX AppSuite, Dovecot and PowerDNS at YesWeHack.

This advisory has also been published at https://documentation.open-xchange.com/security/advisories/.

Yours sincerely,
Martin Heiland, Open-Xchange GmbH

Internal reference: MWB-2315
Type:…

Read More

OXAS-ADV-2023-0005: OX App Suite Security Advisory

Read Time:22 Second

Posted by Martin Heiland via Fulldisclosure on Jan 08

Dear subscribers,

We’re sharing our latest advisory with you and would like to thank everyone who contributed to finding and solving those
vulnerabilities. Feel free to join our bug bounty programs for OX AppSuite, Dovecot and PowerDNS at YesWeHack.

This advisory has also been published at https://documentation.open-xchange.com/security/advisories/.

Yours sincerely,
Martin Heiland, Open-Xchange GmbH

Internal reference: MWB-2261
Type:…

Read More

SSH-Snake: Automated SSH-Based Network Traversal

Read Time:23 Second

Posted by Joshua Rogers on Jan 08

SSH-Snake is a powerful tool designed to perform automatic network
traversal using SSH private keys discovered on systems. Its objective is
to create a comprehensive map of a network and its dependencies,
identifying to what extent a network can be compromised using SSH and
SSH private keys, starting from a particular system.

SSH-Snake can automatically reveal the relationship between systems which
are connected via SSH, which would normally…
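As a rough, hedged illustration of the discovery step such a tool automates (this is not SSH-Snake’s own code; the key locations and PEM markers are common conventions assumed for the example), a scan might begin by enumerating candidate private keys on the local system:

```python
# Rough sketch: enumerate likely SSH private keys in users' ~/.ssh directories.
# Not SSH-Snake itself; locations and header markers are common conventions only.
import glob
import os

KEY_MARKERS = (b"BEGIN OPENSSH PRIVATE KEY",
               b"BEGIN RSA PRIVATE KEY",
               b"BEGIN EC PRIVATE KEY")

def looks_like_private_key(path: str) -> bool:
    """Check whether a file starts with a recognizable private-key header."""
    try:
        with open(path, "rb") as f:
            head = f.read(200)
    except OSError:
        return False
    return any(marker in head for marker in KEY_MARKERS)

def find_candidate_keys():
    """Walk typical per-user .ssh directories and yield probable private keys."""
    for ssh_dir in glob.glob("/home/*/.ssh") + [os.path.expanduser("~/.ssh")]:
        for path in glob.glob(os.path.join(ssh_dir, "*")):
            if os.path.isfile(path) and looks_like_private_key(path):
                yield path

for key in find_candidate_keys():
    print("candidate private key:", key)
```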

Read More

6 Cybersecurity Predictions for 2024 – Staying Ahead of the Latest Hacks and Attacks

Read Time:14 Minute, 47 Second

AI and major elections, deepfakes and the Olympics — they all feature prominently in our cybersecurity predictions for 2024.

That’s quite the mix. And that mix reflects the nature of cybersecurity. Just as changing technology shapes cybersecurity, it gets further shaped by the changing world we live in. The bad actors out there exploit new and emerging technologies — just as they exploit events and trends. It’s a potent formula that bad actors turn to again and again. With it, they concoct a mix of ever-evolving attacks.

For a pointed example of the interplay between technology and culture, look no further than Barbie. More specifically, the scams that cropped up around the release of the “Barbie” movie. Using AI tools, scammers generated videos that promoted bogus ticket giveaways. They combined the new technology of AI with the hype surrounding the film and duped thousands of victims as a result.

We expect to see more of the same in 2024, and we have several other predictions as well. With that, let’s look ahead so you can stay ahead of the hacks and attacks we expect to see in 2024.

1) Election cycles will see further disruption with AI tools.

2024 has plenty on the slate in terms of pivotal elections. Across the globe, we have the United States presidential election, general elections in India, and the European Union parliamentary elections, to name a few. While every election comes with its fair share of disinformation, the continued evolution of generative AI tools such as ChatGPT, DALL-E, and Stable Diffusion add an extra level of complication.

So, if a picture is worth a thousand words, what’s an AI-generated photo, video, or voice clone worth? For disinformation, plenty.

Already, many voters raise a skeptical brow when politicians sling statements aimed at discrediting their opponents. Yet when those words are backed by visual evidence, such as a photo or video, it lends them the appearance of credibility. With AI tools, a few keywords can give a false statement or accusation life in the form of bogus photos or videos, which now go by the common name of “deepfakes.”

Certainly, 2024 won’t be the first election where bad actors or unscrupulous individuals try to shape public opinion through the manipulation of photos and videos. However, it will be the first election where generative AI tools are significantly more accessible and easier to use than ever. As a result, voters can expect to see a glut of deepfakes and disinformation as the election cycle gears up.

Likewise, the advent of AI voice-cloning tools complicates matters yet more. Consider what that means for the pre-recorded “robocalls” that campaigns use to reach voters en masse. Now, with only a small sample of a candidate’s voice, bad actors can create AI voice clones with striking fidelity. They read from any script a bad actor bangs out and effectively put words in someone else’s mouth — potentially damaging the reputation and credibility of candidates.

As we reported earlier this year, AI voice cloning is easier and more accessible than ever. It stands to reason that bad actors will turn it to political ends in 2024.

How to spot disinformation.

Disinformation has several goals, depending on who’s serving it up. Most broadly, it involves gain for one group at the expense of others. It aims to confuse, misdirect, and manipulate its audience — often by needling strong emotional triggers. That calls on us to carefully consider the media and messages we see, particularly in the heat of the moment.

That can present challenges at a time when massive amounts of content scroll by our eyes in our subscriptions and feeds. Bad actors count on people taking content at immediate face value. Yet asking a few questions can help you spot disinformation when you see it.

The International Federation of Library Associations and Institutions offers this checklist:

Consider the Source – Click away from the story to investigate the site, its mission, and its contact info. 
Read Beyond – Headlines can be outrageous to get clicks. What’s the whole story? 
Check the Author – Do a quick search on the author. Are they credible? Are they real? 
Supporting Sources? – Determine if the info given supports the story.  
Check the Date – Reposting old news stories doesn’t mean they’re relevant to current events. 
Is it a Joke? – If it is too outlandish, it might be satire. Research the site and author to be sure.  
Check your Biases – Consider if your own beliefs could affect your judgment.  
Ask the Experts – Ask a librarian or consult a fact-checking site. 

That last piece of advice is particularly strong. Debunking disinformation takes time and effort. Professional fact-checkers at news and media organizations do this work daily, and their findings are posted for all to see, giving you a quick way to get your answers. Some fact-checking groups include:

Politifact.com 
Snopes.com 
FactCheck.org 
Reuters.com/fact-check 

Put plainly, bad actors use disinformation to sow discord and divide people. While not every piece of controversial or upsetting content is disinformation, those are surefire signs that you should follow up on what you’ve seen with several credible sources. Also, keep in mind that those bad actors out there want you to do their dirty work for them. They want you to share their content without a second thought. By taking a moment to check the facts before you react, you can curb the discord they want to see spread.

2) AI scams will be the new sneaky stars of social media.

In the ever-evolving landscape of cybercrime, the emergence of AI has introduced a new level of sophistication and danger. With the help of AI, cybercriminals now possess the ability to manipulate social media platforms and shape public opinion in ways that were previously unimaginable.

One of the most concerning aspects of this development is the power of AI tools to fabricate photos, videos, and audio. These tools enable bad actors to create highly convincing and realistic content, making it increasingly difficult for users to discern between what is real and what is manipulated. This opens up a whole new realm of possibilities for cybercriminals to exploit unsuspecting individuals and organizations.

One alarming consequence of this is the potential for celebrity and influencer names and images to be misused by cybercrooks. With the ability to generate highly convincing content, these bad actors can create fake endorsements that appear to come from well-known personalities. This can lead to an increase in scams and fraudulent activities, as unsuspecting consumers may be more likely to trust and engage with content that appears to be endorsed by their favorite celebrities or influencers.

Local online marketplaces are also at risk of being targeted by cybercriminals utilizing AI. By leveraging fabricated content, these bad actors can create fake listings and advertisements that appear legitimate. This can deceive consumers into making purchases or engaging in transactions that ultimately result in financial loss or other negative consequences.

How to avoid AI social media scams

As AI continues to advance, it is crucial for consumers to be aware of the potential risks and take necessary precautions. This includes being vigilant and skeptical of content encountered on social media platforms, verifying the authenticity of endorsements or advertisements, and utilizing secure online marketplaces with robust verification processes.

3) Cyberbullying among kids will soar

One of the most troubling trends on the horizon for 2024 is the alarming rise of cyberbullying, which is expected to be further exacerbated by the increasing use of deepfake technology. This advanced and remotely accessible tool has become readily available to young adults, enabling them to create exceptionally realistic fake content with ease.

In the past, cyberbullies primarily relied on spreading rumors and engaging in online harassment. However, with the emergence of deepfake technology, the scope and impact of cyberbullying have reached new heights. Cyberbullies can now manipulate images that are readily available in the public domain, altering them to create fabricated and explicit versions. These manipulated images are then reposted online, intensifying the harm inflicted on their victims.

The consequences of this escalating trend are far-reaching and deeply concerning. The false images and accompanying words can have significant and lasting effects on the targeted individuals and their families. Privacy becomes compromised as personal images are distorted and shared without consent, leaving victims feeling violated and exposed. Moreover, the fabricated content can tarnish one’s identity, leading to confusion, mistrust, and damage to personal and professional relationships.

The psychological and emotional well-being of those affected by deepfake cyberbullying is also at stake. The relentless onslaught of false and explicit content can cause severe distress, anxiety, and depression. Victims may experience a loss of self-esteem, as they struggle to differentiate between reality and the manipulated content that is being circulated online. The impact on their mental health can be long-lasting, requiring extensive support and intervention.

The ripple effects of deepfake cyberbullying extend beyond the immediate victims. Families are also deeply affected, as they witness the distress and suffering of their loved ones. Parents may feel helpless and overwhelmed, struggling to protect their children from the relentless onslaught of cyberbullying. The emotional toll on families can be immense, as they navigate the challenges of supporting their children through such traumatic experiences.

How to prevent online cyberbullying.

Education and Awareness: Promote digital literacy and educate individuals about the consequences and impact of cyberbullying. Teach them how to recognize and respond to cyberbullying incidents, and encourage them to report any instances they encounter. 
Strong Policies and Regulations: Implement and enforce strict policies and regulations against cyberbullying on online platforms. Collaborate with social media companies, schools, and organizations to establish guidelines and procedures for handling cyberbullying cases promptly and effectively. 
Support and Empowerment: Provide support systems and resources for victims of cyberbullying. Encourage open communication and create safe spaces where individuals can seek help and share their experiences. Empower bystanders to intervene and support victims, fostering a culture of empathy and kindness online. 

4) Conflicts across the globe will ramp up charity fraud.

Scammers exploit emotions – such as the excitement of the Olympics. Darkly, they also tap into fear and grief.

A particularly heartless method of doing this is through charity fraud. While this takes many forms, it usually involves a criminal setting up a fake charity site or page to trick well-meaning contributors into thinking they are supporting legitimate causes or contributing money to help fight real issues.

2024 will see this continue. We further see potential for this to increase given the conflicts in Ukraine and the Middle East. Scammers might also increase the emotional pull of the messaging by tapping into the same AI technology we predict will be used in the 2024 election cycle. Overall, expect their attacks to look and feel far more sophisticated than in years past.

How to donate safely online.

As with so many scams out there, any time an email, text, direct message, or site urges you into immediate action — take pause. Research the charity. See how long they’ve been in operation, how they put their funds to work, and who truly benefits from them.  
Likewise, note that there are some charities that pass along more money to their beneficiaries than others. Generally, the most reputable organizations only keep 25% or less of their funds for operations. Some less-than-reputable organizations keep up to 95% of funds, leaving only 5% for advancing the cause they advocate.  
In the U.S., the Federal Trade Commission (FTC) has a site full of resources so that you can make your donation truly count. Resources like Charity Watch and Charity Navigator, along with the BBB’s Wise Giving Alliance can also help you identify the best charities. 

5) New strains of malware, voice, visual cloning and QR code scams will accelerate

Aside from its ability to write love poems, answer homework questions, and create art with a few keyword prompts, AI can do something else. It can code. In the hands of hackers, that means AI can churn out new strains of malware and even spin up entire malicious websites. And quickly at that. 

Already, we’ve seen hackers use AI tools to create malware. This will continue apace, and we can expect them to create smarter malware too. AI can spawn malware that analyzes and adapts to a device’s defenses. This helps particularly malicious attacks like spyware and ransomware infect a device by allowing them to slip by undetected. It also makes the creation and dissemination of convincing phishing emails and QR code scams faster and easier. This extends to the creation of deepfake video, photo, and audio content aimed at deceiving unsuspecting targets and scamming them out of money. The rise of QR code scams, also known as quishing, is an additional concern. Scammers use AI to generate malicious QR codes that, when scanned, lead to phishing websites or trigger malware downloads. As the barrier to entry for these threats lowers, these scams will spread to all platforms with an increased focus on mobile devices. 
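As a hedged illustration of one defensive angle on quishing (this is not McAfee’s detection logic; the heuristics, shortener list, and thresholds below are invented for the example, and the QR decoding step is assumed to have already produced a URL), a scanner might apply simple checks to the decoded link before anyone taps it:

```python
# Illustrative heuristics for a URL decoded from a QR code.
# Not a real product's logic; the rules and lists here are examples only.
import ipaddress
from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}  # example list, not exhaustive

def suspicious_reasons(url: str) -> list[str]:
    """Return human-readable reasons a decoded URL looks risky (empty list if none)."""
    reasons = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        reasons.append("not using HTTPS")
    try:
        ipaddress.ip_address(host)
        reasons.append("raw IP address instead of a domain")
    except ValueError:
        pass  # host is a normal domain name
    if host.startswith("xn--") or ".xn--" in host:
        reasons.append("punycode domain (possible look-alike)")
    if host in SHORTENERS:
        reasons.append("link shortener hides the real destination")
    if host.count(".") >= 4:
        reasons.append("unusually deep subdomain chain")
    return reasons

print(suspicious_reasons("http://192.168.0.10/login"))
# -> ['not using HTTPS', 'raw IP address instead of a domain']
```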

However, like any technology, AI is a tool, and it works both ways. AI is on your side too. In fact, it’s kept you safer online for some time now. At McAfee, we’ve used AI as a core component of our protection for years, and over that time it’s done plenty for you. It has sniffed out viruses and sketchy content online, and it’s helped steer you clear of malicious websites. 

As such, you can expect an increasing number of AI-powered tools that combat AI-powered threats. 

How to stay safe from AI-powered threats.

Use AI-powered online protection software. Use good AI to stop bad AI. This year, we made improvements to our AI-powered security, making it faster and stronger. It scans 3x faster than before and offers 100% protection against entirely new threats, like the ones generated by AI. It also offers 100% protection against threats released in the past month (AV-TEST results, June 2023). You’ll find it across all our products that include antivirus. 

Protect yourself from scams with AI. Our McAfee Scam Protection uses patented and powerful AI technology to help you stay safer amid the rise in phishing scams, including phishing scams generated by AI. It detects suspicious URLs in texts before they’re opened or clicked on. No more guessing if that text you just got is real or fake. And if you accidentally click or tap on a suspicious link in a text, email, social media post, or browser search, it blocks the scam site from loading. You’ll find McAfee Scam Protection across our McAfee+ plans. 

6) Olympic-sized scams will kick into high stride.

With big events come big scams. Look for plenty of them with the 2024 Summer Olympics.

An event with this level of global appeal attracts scammers looking to capitalize on the excitement. They promise tickets, merch, and exclusive streams to events, among other things. Yet they take a chunk out of your wallet and steal personal info instead.

You can expect to see a glut of email-based phishing and message-based smishing attacks. Now, with the introduction of generative AI, these scams are getting harder and harder to identify. AI writes cleaner emails and messages, so fewer scams feature the traditional hallmarks of misspelled words and poor grammar. Combine that with the excitement generated around the Olympic games, and we can easily see how people might be tempted by bogus sweepstakes and offers for the Olympics trip of a lifetime, if only they click or tap that link, which of course leads to a scam website.

You can expect these messages to crop up across a variety of channels, including email, text messages, and other messaging channels like WhatsApp and Telegram. They might slide into social media DMs as well.

If you’re planning to catch the Olympic action in person, scammers have a plan in mind for you — ticket fraud. As we’ve seen at the FIFA World Cup and several other major sporting events over the years, scammers spin up scam ticket sites offering tickets to all kinds of matches and events. Again, these sites don’t deliver. They can look rather professional, yet if a site only accepts cryptocurrency or wire transfers, you can be certain it’s fraud. Neither form of payment offers a way to challenge charges or recoup losses.

How to enjoy the 2024 Olympics safely.

Phishing and smishing attacks can take a little effort to spot. As we’ve seen, the scammers behind them have grown far more sophisticated in their approach. However, know that if a deal or offer seems a little too good to be true, avoid it. For more on how to spot these scams, check out our blog dedicated to phishing and similar attacks. 
As for tickets, they’re only available through the official Paris 2024 ticketing website. Anyone else online is either a broker or an outright scammer. Stick with the official website for the best protection. 
The same holds true for watching the Olympics at home or on the go. A quick search online will show you the official broadcasters and streamers in your region. Stick with them. Unofficial streams can hit your devices with malware or bombard you with sketchy ads. 
Overall, use comprehensive online protection software like ours when you go online, which can help steer you clear of phishing, smishing, and other attacks. 

The post 6 Cybersecurity Predictions for 2024 – Staying Ahead of the Latest Hacks and Attacks appeared first on McAfee Blog.

Read More

Apache OFBiz Authentication Bypass (CVE-2023-51467, CVE-2023-49070)

Read Time:48 Second

What is the vulnerability? There is an authentication bypass vulnerability in Apache OFBiz, tracked under CVE-2023-51467 and CVE-2023-49070. Successful exploitation would let an attacker circumvent authentication processes, enabling them to remotely execute arbitrary code and access sensitive information. Apache OFBiz is an open-source Enterprise Resource Planning (ERP) business application suite that integrates and automates many business processes across industries.

What is the Vendor Solution?

Customers are advised to upgrade to Apache OFBiz version 18.12.11 to patch these vulnerabilities. For more information, please refer to the Apache Security Advisory. [ Link ]

What FortiGuard Coverage is available?

FortiGuard Labs has an IPS signature “Apache.OFBiz.CVE-2023-49070.XMLRPC.Insecure.Deserialization” in place for CVE-2023-49070 and is investigating to create protection against exploitation of CVE-2023-51467.

FortiGuard Labs recommends that companies scan their environments, find vulnerable Apache OFBiz applications, upgrade as per the vendor advisory, and always follow best practices.
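As a small, hedged aid to that “find vulnerable applications” step (a sketch only, assuming you have already collected a version string per deployment by your own means; this is not a FortiGuard or Apache tool, and the hostnames are examples), a version comparison against the patched 18.12.11 release might look like this:

```python
# Sketch: flag Apache OFBiz deployments older than the patched 18.12.11 release.
# Assumes you already gathered each host's version string; inventory data is invented.
PATCHED = (18, 12, 11)

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '18.12.09' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version: str) -> bool:
    return parse_version(version) < PATCHED

inventory = {"erp-prod-01": "18.12.09", "erp-test-02": "18.12.11"}  # example data
for host, version in inventory.items():
    status = "UPGRADE (pre-18.12.11)" if is_vulnerable(version) else "patched"
    print(f"{host}: OFBiz {version} -> {status}")
```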

Read More

USN-6569-1: libclamunrar vulnerabilities

Read Time:23 Second

It was discovered that libclamunrar incorrectly handled directories when
extracting RAR archives. A remote attacker could possibly use this issue to
overwrite arbitrary files and execute arbitrary code. This issue only
affected Ubuntu 20.04 LTS, Ubuntu 22.04 LTS, and Ubuntu 23.04.
(CVE-2022-30333)

It was discovered that libclamunrar incorrectly validated certain
structures when extracting RAR archives. A remote attacker could possibly
use this issue to execute arbitrary code. (CVE-2023-40477)
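The directory-handling flaw described above (CVE-2022-30333) is a classic path-traversal problem: archive entries can name paths that escape the extraction directory. As a general, hedged illustration of the defense (not libclamunrar’s actual fix), an extractor can verify that every entry resolves inside the target directory before writing it:

```python
# General sketch of path-traversal protection during archive extraction.
# Illustrative only; this is not the actual libclamunrar/unrar patch.
import os

def safe_destination(extract_dir: str, entry_name: str) -> str:
    """Resolve an archive entry's path and refuse anything outside extract_dir."""
    base = os.path.realpath(extract_dir)
    dest = os.path.realpath(os.path.join(base, entry_name))
    if os.path.commonpath([base, dest]) != base:
        raise ValueError(f"blocked traversal attempt: {entry_name!r}")
    return dest

print(safe_destination("/tmp/out", "docs/readme.txt"))  # stays inside /tmp/out
try:
    safe_destination("/tmp/out", "../../etc/passwd")
except ValueError as err:
    print(err)  # blocked traversal attempt: '../../etc/passwd'
```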

Read More