Malware-as-a-Service Now the Top Threat to Organizations

Read Time:5 Second

The Darktrace report observed the increasingly cross-functional adaptation of many MaaS strains in 2023

Read More

webkitgtk-2.42.5-1.fc38

Read Time:24 Second

FEDORA-2024-ca3f071aea

Packages in this update:

webkitgtk-2.42.5-1.fc38

Update description:

Fix webkit_web_context_allow_tls_certificate_for_host to handle IPv6 URIs produced by SoupURI.
Ignore stops with offset zero before last one when rendering gradients with cairo.
Write bwrapinfo.json to disk for xdg-desktop-portal.
Fix gamepads detection by correctly handling focused window in GTK4.
Fix several crashes and rendering issues.
Fix CVE-2024-23222, CVE-2024-23206, CVE-2024-23213

Read More

webkitgtk-2.42.5-1.fc39

Read Time:24 Second

FEDORA-2024-97faaca23d

Packages in this update:

webkitgtk-2.42.5-1.fc39

Update description:

Fix webkit_web_context_allow_tls_certificate_for_host to handle IPv6 URIs produced by SoupURI.
Ignore stops with offset zero before last one when rendering gradients with cairo.
Write bwrapinfo.json to disk for xdg-desktop-portal.
Fix gamepads detection by correctly handling focused window in GTK4.
Fix several crashes and rendering issues.
Fix CVE-2024-23222, CVE-2024-23206, CVE-2024-23213

Read More

Safer Internet Day: Telling What’s Real from What’s Fake Online

Read Time:6 Minute, 0 Second

On Safer Internet Day, we ask an important question: how can you tell what’s real and what’s fake online?  

There’s plenty of fakery out there, due in large part to AI-generated content. And spotting the difference takes a bit of work nowadays. 

Taylor Swift showed us why back in January. More accurately, a Taylor Swift AI voice clone showed us why. Scammers combined old footage of Swift with phony AI-cloned audio that touted a free cookware giveaway. They went about it in a cagey way, using Le Creuset, a brand her fans know she loves, as bait.  

Of course, all people had to do was “answer a few questions” to get their “free” cookware. When some did, they wound up with stolen personal info. It’s one of many full-on identity theft scams with a bogus celebrity AI twist.  

Of course, this wasn’t the first time that scammers used AI to trick well-meaning people. Last December saw AI voice-cloning tools mimic singer Kelly Clarkson to sell weight-loss gummies. Over the summer, scammers posted other ads using the synthesized voice of Elon Musk. 

Meanwhile, more quietly yet no less damaging, we’ve seen a glut of AI-generated fakes flood our screens. They look more convincing than ever, as bad actors use AI tools to spin up fake videos, emails, texts, and images. They do it quickly and on the cheap, yet this fake content still has a polish to it. Much of it lacks the telltale signs of a fake, like poor spelling, grammar, and design.  

Another example of AI-generated fake content comes from a BBC report on disinformation being fed to young students. In it, reporters investigated several YouTube channels that use AI to make videos. The creators billed these channels as educational content for children, yet the investigation found them packed with falsehoods and flat-out conspiracy theories.  

This BBC report offers a prime example of deliberate disinformation, produced on a vast scale, passing itself off as the truth. It’s also one more example of how bad actors use AI, not for scams, but for spreading outright lies. 

Amid all these scams and disinformation floating around, going online can feel like playing a game of “true or false.” Quietly, and sometimes not so quietly, we find ourselves asking, “Is what I’m seeing and hearing real?”

AI has made answering that question tougher, for sure. Yet that’s changing. In fact, we’re now using AI to spot AI. As security professionals, we can use AI to help sniff out what’s real and what’s fake. Like a lie detector. 

We showcased that exact technology at the big CES tech show in Las Vegas earlier this year. Our own Project Mockingbird spots AI-generated voices with better than 90% accuracy. Here’s a look at it in action when we ran it against the Taylor Swift scam video; as the red lines spike, that’s our AI technology calling out what’s fake …

[Video: Project Mockingbird flagging the AI-cloned audio in the Taylor Swift scam video.]

In addition to AI audio detection, we’re working on technology for image detection, video detection, and text detection as well — tools that will help us tell what’s real and what’s fake. It’s good to know technology like this is on the horizon. 

Yet above and beyond technology, there’s you. Your own ability to spot a fake. You have a lie detector of your own built right in. 

The quick questions that can help you spot AI fakes.  

Like Ferris Bueller said in the movies years ago, “Life moves pretty fast …” and that’s true of the internet too. The speed of life online and the nature of our otherwise very busy days make it tough to spot fakes. We’re in a rush, and we don’t always stop and think if what we’re seeing and hearing is real. Yet that’s what it takes. Stopping, and asking a few quick questions. 

As put forward by Common Sense Media, a handful of questions can help you sniff out what’s likely real and what’s likely false. As you read articles, watch videos, and so forth, you can ask yourself: 

Who made this? 
Who is the target audience? 
Does someone profit if you click on it? 
Who paid for this content? 
Who might benefit or be harmed by this message? 
What important info is left out of the message? 
Is this credible? Why or why not? 

Answering only a few of them can help you spot a scam. Or at least get a sense that a scam might be afoot. Let’s use the Taylor Swift video as an example. Asking just three questions tells you a lot.  

First, “what important info is left out?” 

The video mentions a “packaging error.” Really? What kind of error? And why would it lead Le Creuset to give away thousands and thousands of dollars’ worth of their cookware? Companies have ways of correcting errors like these. So, that seems suspicious.

Second, “is this credible?” 

This one gets a little tricky. Yet, watch the video closely. That first clip of Swift looks like a much younger Swift compared to the other shots used later. We’re seeing Taylor Swift from her different “eras” throughout, stitched together in a slapdash way. With that, note how quick the cuts are. Likely the scammers wanted to hide the poor lip-synching job they did. That seems yet more suspicious. 

Lastly, “who paid for this content?”  

OK, let’s say Le Creuset really did make a “packaging error.” Would they really put the time, effort, and money into an ad that features Taylor Swift? That would most certainly heap even more losses on those 3,000 “mispackaged” pieces of cookware. It doesn’t make sense. 

While these questions didn’t give definitive answers, they certainly raised several red flags. Everything about this sounds like a scam, thanks to asking a few quick questions and running the answers through your own internal lie detector. 

A safer internet calls for a combination of technology and a critical eye. 

So, how can you tell what’s real and what’s fake online? In the time of AI, it’ll get easier as new technologies that detect fakes roll out. Yet as it is with staying safe online, the other part of knowing what’s true and false is you.   

Hopping online today calls for a critical eye more now than ever. Bad actors can cook up content with AI at rates unseen until now. And they create it to strike a nerve. To lure you into a scam or to sway your thinking with disinformation. With that, content that riles you up, catches you by surprise, or that excites you into action is content that you should pause and think about.  

Asking a few questions can help you spot a fake or give you a sense that something about that content isn’t quite right, both of which can keep you safer online. 

The post Safer Internet Day: Telling What’s Real from What’s Fake Online appeared first on McAfee Blog.

Read More

AI in Cybersecurity: 8 use cases that you need to know

Read Time:4 Minute, 14 Second

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Cybercriminals live on the cutting edge of technology, and nothing fits the label more than artificial intelligence. It helps them design sophisticated, evolving malware, pose as higher-ups, and even successfully imitate biometrics like one’s voice.

The use of AI in cybersecurity has developed as a natural response to these new and unpredictable challenges. How are cybersecurity experts using artificial intelligence to thwart the bad guys? The following eight use cases will tell you all you need to know.

1. Threat prevention and preemption

It’s not uncommon for businesses and organizations to be under persistent attack. Cyber threats can burrow deep into their networks and spread chaos for months before detection. Since AI models have large datasets of past behaviors to draw on, they can spot anomalous behavior far more quickly.
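
As a minimal sketch of that idea, the snippet below fits an isolation forest to a synthetic baseline of “normal” network-flow features and scores new flows against it. The features, numbers, and contamination rate are illustrative assumptions, not a production recipe.

```python
# Minimal sketch: flag anomalous network flows against a baseline of
# past behavior. Feature names and values here are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for months of "normal" telemetry: bytes sent, duration, port count.
baseline = rng.normal(loc=[500, 30, 3], scale=[100, 10, 1], size=(10_000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one ordinary flow, one exfiltration-like outlier.
new_flows = np.array([[520, 28, 3], [50_000, 600, 45]])
print(model.predict(new_flows))  # 1 = normal, -1 = anomalous
```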

Preventing attacks before deployment is among cybersecurity’s most desirable goals. If you have the right information, it can become a reality. For example, a cybersecurity team can use a proxy network to regularly scrape the contents of forums and other sites dedicated to hacking. They may then act on the gathered info and meet future attacks head-on.
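
A hedged sketch of that collection step might look like the following; the forum URL, proxy address, and watchlist keywords are placeholders, not real endpoints.

```python
# Sketch: poll a watched forum through a proxy and flag posts that mention
# keywords tied to your organization. URL, proxy, and keywords are placeholders.
import requests

PROXY = {"https": "http://user:pass@proxy.example.com:8080"}  # hypothetical proxy
WATCHLIST = ["examplecorp", "cve-2024-", "0day"]

resp = requests.get("https://forum.example.com/recent", proxies=PROXY, timeout=30)
resp.raise_for_status()

hits = [kw for kw in WATCHLIST if kw in resp.text.lower()]
if hits:
    print(f"Potential chatter detected: {hits}")
```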

2. Timely incident response

Not even an AI-enhanced cybersecurity framework can stop every incoming attack. Someone might connect an unsanctioned device, or an update might contain malicious code. Either way, a robust cybersecurity AI can respond to such incidents promptly, blocking the offending device or removing the malicious code.
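
One illustrative shape for that response logic is sketched below. It assumes alerts arrive as dictionaries with a source IP and a model-assigned risk score, which is a hypothetical format, and the iptables rule (which requires root) stands in for whatever containment action your environment actually supports.

```python
# Sketch: an automated containment step. The alert format, threshold, and
# firewall action are illustrative assumptions; iptables needs root to run.
import ipaddress
import subprocess

BLOCK_THRESHOLD = 0.9  # act only on high-confidence detections

def contain(alert: dict) -> None:
    if alert["risk_score"] < BLOCK_THRESHOLD:
        return
    # Validate the address before passing it to a shell command.
    ip = str(ipaddress.ip_address(alert["src_ip"]))
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)
    print(f"Blocked {ip} (risk score {alert['risk_score']:.2f})")

contain({"src_ip": "203.0.113.7", "risk_score": 0.97})
```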

3. Data protection

Data is the basis on which modern economies operate. Whether you obtain it through a web scraping API, surveys, or your day-to-day operations, the data you collect needs powerful safeguards. AI can help by classifying it and automatically encrypting the sensitive parts. Access control is another process you can automate, as is compliance with data protection laws like the GDPR. 
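
As a toy illustration, the sketch below “classifies” records with a naive regex for US Social Security numbers and encrypts the matches with Fernet from the cryptography package; a real pipeline would use a trained classifier and a managed key service rather than an in-process key.

```python
# Sketch: tag records containing sensitive fields and encrypt them at rest.
# The classification rule is deliberately naive; real systems use trained models.
import re
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetch from a key management service
fernet = Fernet(key)

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def store(record: str) -> bytes:
    if SSN_PATTERN.search(record):              # "classify" the record
        return fernet.encrypt(record.encode())  # encrypt sensitive data
    return record.encode()

token = store("name=Jane Doe ssn=123-45-6789")
print(fernet.decrypt(token).decode())  # round-trips for authorized readers
```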

4. Endpoint security

Endpoints like PCs and smartphones are gateways between secure company networks and the internet. Antivirus and antimalware software are the traditional means of protecting these endpoints. They, too, must evolve to meet a constantly changing threat landscape.

Virus and malware protection used to rely on lists of previously identified threats. Such lists are increasingly ineffective, since AI-shaped malware can bide its time before deploying or pose as an innocent system process. AI lets these security tools take a behavior-based approach instead. By inferring malicious intent from patterns rather than from previously documented viruses and malware, the cybersecurity tools you implement can confidently deal with emerging and even mutating threats.
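
A minimal sketch of that behavior-based approach appears below: a classifier trained on behavioral features instead of signatures. The telemetry is synthetic and the feature set is an assumption made for illustration.

```python
# Sketch: score a process on behavioral features rather than a signature list.
# Features and training data are synthetic stand-ins for real endpoint telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: files touched/min, registry writes/min, outbound connections/min.
X = np.array([[2, 0, 1], [3, 1, 0], [1, 0, 2],   # benign samples
              [400, 50, 0], [250, 30, 5]])        # ransomware-like samples
y = np.array([0, 0, 0, 1, 1])                     # 0 = benign, 1 = malicious

clf = RandomForestClassifier(random_state=0).fit(X, y)

suspect = np.array([[350, 40, 2]])  # a process touching files en masse
print(clf.predict_proba(suspect))   # probability it behaves like malware
```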

5. Spam and phishing prevention

Sniffing out the spam that threatened to choke millions of early-2000s email accounts was among the first large-scale applications of AI, and of machine learning specifically. Spam still bombards us daily, but AI algorithms have become even more sophisticated at identifying it and relegating it to the trash.

Phishing is another old cyber threat that AI, or rather large language models, are revitalizing. Recognizing it used to be trivial, especially since the senders weren’t linguistically skilled enough to craft convincing messages. AI-driven phishing scams are more convincing because they mimic human expression far better. Here, fighting fire with fire, using AI to flag AI-written lures, produces excellent detection and prevention results. 
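
In that spirit, here is a tiny sketch of a learned text filter; the four training messages are toy stand-ins for the millions of examples a real filter learns from.

```python
# Sketch: a tiny text classifier in the spirit of early spam filters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice for last month is attached",
    "Lunch tomorrow to discuss the roadmap?",
    "URGENT verify your account now or it will be suspended",
    "You have won a prize click this link to claim",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = spam/phishing

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(messages, labels)
print(clf.predict(["Confirm your password immediately via this link"]))
```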

6. Advanced multi-factor authentication

Passwords vary wildly in strength, from unique and complex ones that offer real protection to weak variations on common themes that hackers can crack in seconds. Sadly, not even the most robust passwords are immune to theft or compromise. MFA is a second safeguard designed to prevent someone who copied or stole your password in a breach from accessing the associated account.

Conventional MFA remains effective, but hackers are starting to leverage AI to bypass it. This applies to conventional authentication codes and biometrics as well. Luckily, AI has a leading role in revolutionizing how we approach biometrics.

For example, keystroke dynamics lets an AI authenticate a user based on learned typing idiosyncrasies. Keystroke dynamics is part of a broader set of behavioral biometrics that also covers signals like mouse-cursor movement and, on smartphones, screen-tapping pressure. 
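
A simplified sketch of the matching step is below: compare a login attempt’s inter-key timings against an enrolled typing profile using per-interval z-scores. The millisecond timings and the threshold are illustrative.

```python
# Sketch: compare a login attempt's inter-key timings against an enrolled
# typing profile. Timings are in milliseconds; the threshold is illustrative.
import numpy as np

# Enrollment: intervals between keystrokes while typing a known phrase.
enrolled = np.array([
    [120, 95, 140, 110, 180],
    [125, 90, 150, 105, 175],
    [118, 99, 138, 112, 182],
])
mean, std = enrolled.mean(axis=0), enrolled.std(axis=0) + 1e-6

def matches_profile(sample: np.ndarray, threshold: float = 3.0) -> bool:
    z = np.abs((sample - mean) / std)  # per-interval z-scores
    return bool(z.mean() < threshold)

print(matches_profile(np.array([122, 93, 145, 108, 178])))  # likely the user
print(matches_profile(np.array([60, 200, 60, 250, 40])))    # likely not
```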

7. User profiling

While not authentication in the conventional sense, in-depth user profiling is another security measure made possible through machine learning. It works by analyzing individual users and their expected behaviors. For example, a user may frequently access the same directory or only use a handful of services.

Changes in this behavior might be benign, but they could also indicate a malicious insider or an account takeover. 
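
A bare-bones sketch of that comparison follows, using Jaccard similarity between the resources a session touches and the user’s historical set; the log shape and the cutoff are assumptions for illustration.

```python
# Sketch: per-user baseline of accessed resources, with deviation flagged by
# Jaccard similarity. The log format and cutoff are illustrative assumptions.
historical = {
    "alice": {"/reports", "/hr/payroll", "mail", "calendar"},
}

def access_anomaly(user: str, session_resources: set, cutoff: float = 0.3) -> bool:
    baseline = historical.get(user, set())
    if not baseline:
        return True  # no profile yet: treat as worth reviewing
    overlap = len(baseline & session_resources) / len(baseline | session_resources)
    return overlap < cutoff  # low overlap with past behavior -> flag

print(access_anomaly("alice", {"/reports", "mail"}))                  # False
print(access_anomaly("alice", {"/finance/db-dump", "/admin/users"}))  # True
```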

8. Fraud detection

A secure payment gateway is the main prerequisite for any reputable online business. Bad actors may want to exploit its weaknesses and conduct fraudulent transactions. AI’s uncanny pattern recognition abilities come in handy here as well. An AI can assess large transaction volumes, identifying outliers while letting regular payments through unhindered.
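
As one simple sketch, robust statistics, the median and the median absolute deviation, can flag outlier transaction amounts without letting the fraud itself skew the baseline; the 3.5 cutoff is a common rule of thumb, not a fixed standard.

```python
# Sketch: robust outlier scoring over transaction amounts using median/MAD,
# so a handful of fraudulent charges can't distort the baseline itself.
import numpy as np

amounts = np.array([23.5, 41.0, 18.9, 35.2, 29.9, 22.4, 4_999.0])  # last looks off

median = np.median(amounts)
mad = np.median(np.abs(amounts - median)) or 1e-6  # median absolute deviation

# 0.6745 scales MAD to be comparable with a standard deviation.
robust_z = 0.6745 * (amounts - median) / mad
flagged = amounts[np.abs(robust_z) > 3.5]
print(flagged)  # -> [4999.]
```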

These are the most promising use cases for AI in cybersecurity; we hope you found something useful.

Read More
