How Northfield Hospital uses AI to minimize risk from cyberattacks


Like all healthcare providers, US-based Northfield Hospital has a big responsibility when it comes to cybersecurity, as sensitive data and the lives of patients could be at stake. A study by Proofpoint and the Ponemon Institute released in September 2022 found that patient mortality rates increased across more than 20% of healthcare organizations that suffered the most common types of attacks.

“If that healthcare organization is down and the patient doesn’t have access to health care that delays their care and can increase mortality, especially if you’re talking about a stroke victim or a heart attack where time is important. And when you don’t have automation to move that patient through your system and get them the care they need, that can increase the risk of mortality,” Vern Lougheed, Northfield Hospital’s information security officer, tells CSO.


USN-6076-1: Synapse vulnerabilities


It was discovered that Synapse incorrectly handled certain inputs. If a
user or an automated system were tricked into opening a specially crafted
input file, a remote attacker could possibly use this issue to cause a
denial of service. (CVE-2019-18835, CVE-2018-12291, CVE-2018-10657)

It was discovered that Synapse incorrectly handled certain inputs. If a
user or an automated system were tricked into opening a specially crafted
input file, a remote attacker could possibly use this issue to hijack the
session. (CVE-2019-11842, CVE-2018-12423)

It was discovered that Synapse incorrectly handled certain inputs. If a
user or an automated system were tricked into opening a specially crafted
input file, a remote attacker could possibly use this issue to perform
spoofing or user impersonation. (CVE-2019-5885, CVE-2018-16515)


USN-6074-2: Firefox regressions


USN-6074-1 fixed vulnerabilities in Firefox. The update introduced
several minor regressions. This update fixes the problem.

We apologize for the inconvenience.

Original advisory details:

Multiple security issues were discovered in Firefox. If a user were
tricked into opening a specially crafted website, an attacker could
potentially exploit these to cause a denial of service, obtain sensitive
information across domains, or execute arbitrary code. (CVE-2023-32205,
CVE-2023-32207, CVE-2023-32210, CVE-2023-32211, CVE-2023-32212,
CVE-2023-32213, CVE-2023-32215, CVE-2023-32216)

Irvan Kurniawan discovered that Firefox did not properly manage memory
when using the RLBox Expat driver. An attacker could potentially exploit
this issue to cause a denial of service. (CVE-2023-32206)

Anne van Kesteren discovered that Firefox did not properly validate the
import() call in service workers. An attacker could potentially exploit
this to obtain sensitive information. (CVE-2023-32208)

Sam Ezeh discovered that Firefox did not properly handle certain favicon
image files. If a user were tricked into opening a malicious favicon file,
an attacker could cause a denial of service. (CVE-2023-32209)


SEC Consult SA-20230515-0 :: Multiple Vulnerabilities in Kiddoware Kids Place Parental Control Android App


Posted by SEC Consult Vulnerability Lab, Research via Fulldisclosure on May 15

SEC Consult Vulnerability Lab Security Advisory < 20230515-0 >
=======================================================================
title: Multiple Vulnerabilities
product: Kiddoware Kids Place Parental Control Android App
vulnerable version: <=3.8.49
fixed version: 3.8.50 or higher
CVE number: CVE-2023-28153, CVE-2023-29078, CVE-2023-29079
impact: High
homepage:…


New ransomware gang RA Group quickly expanding operations


Researchers warn of a new ransomware threat dubbed RA Group that also engages in data theft and extortion and has been hitting organizations since late April. The group’s ransomware program is built from the leaked source code of a different threat called Babuk.

“Like other ransomware actors, RA Group also operates a data leak site in which they threaten to publish the data exfiltrated from victims who fail to contact them within a specified time or do not meet their ransom demands,” researchers from Cisco Talos said in a new report. “This form of double extortion increases the chances that a victim will pay the requested ransom.”


Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam


Three seconds of audio is all it takes.  

Cybercriminals have taken up newly forged artificial intelligence (AI) voice cloning tools and created a new breed of scam. With a small sample of audio, they can clone the voice of nearly anyone and send bogus messages by voicemail or voice messaging texts. 

The aim, most often, is to trick people out of hundreds, if not thousands, of dollars. 

The rise of AI voice cloning attacks  

Our recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had. Further, our research team at McAfee Labs discovered just how easily cybercriminals can pull off these scams. 

With a small sample of a person’s voice and a script cooked up by a cybercriminal, these voice clone messages sound convincing. In fact, 70% of people in our worldwide survey said they weren’t confident they could tell the difference between a cloned voice and the real thing. 

Cybercriminals create the kind of messages you might expect. Ones full of urgency and distress. They will use the cloning tool to impersonate a victim’s friend or family member with a voice message that says they’ve been in a car accident, or maybe that they’ve been robbed or injured. Either way, the bogus message often says they need money right away. 

In all, the approach has proven quite effective so far. One in ten people surveyed in our study said they had received a message from an AI voice clone, and 77% of those victims said they lost money as a result.  

The cost of AI voice cloning attacks  

Of the people who reported losing money, 36% said they lost between $500 and $3,000, while 7% got taken for sums anywhere between $5,000 and $15,000. 

Of course, a clone needs an original. Cybercriminals have no difficulty sourcing original voice files to create their clones. Our study found that 53% of adults said they share their voice data online or in recorded notes at least once a week, and 49% do so up to ten times a week. All this activity generates voice recordings that could be subject to hacking, theft, or sharing (whether accidental or malicious).  

Consider that people post videos of themselves on YouTube, share reels on social media, and perhaps even participate in podcasts. Even by accessing relatively public sources, cybercriminals can stockpile their arsenals with powerful source material. 

Nearly half (45%) of our survey respondents said they would reply to a voicemail or voice message purporting to be from a friend or loved one in need of money, particularly if they thought the request had come from their partner or spouse (40%), mother (24%), or child (20%).  

Further, they reported they’d likely respond to one of these messages if the message sender said: 

They’ve been in a car accident (48%). 
They’ve been robbed (47%). 
They’ve lost their phone or wallet (43%). 
They needed help while traveling abroad (41%). 

These messages are the latest examples of targeted “spear phishing” attacks, which target specific people with specific information that seems just credible enough to act on. Cybercriminals will often source this information from public social media profiles and other places online where people post about themselves, their families, their travels, and so on, and then attempt to cash in.  

Payment methods vary, yet cybercriminals often ask for forms that are difficult to trace or recover, such as gift cards, wire transfers, reloadable debit cards, and even cryptocurrency. As always, requests for these kinds of payments raise a major red flag. It could very well be a scam. 

AI voice cloning tools—freely available to cybercriminals 

In conjunction with this survey, researchers at McAfee Labs spent two weeks investigating the accessibility, ease of use, and efficacy of AI voice cloning tools. They readily found more than a dozen freely available on the internet. 

These tools required only a basic level of experience and expertise to use. In one instance, just three seconds of audio was enough to produce a clone with an 85% voice match to the original (based on the benchmarking and assessment of McAfee security researchers). Further effort can raise the accuracy higher still: by training the data models, McAfee researchers achieved a 95% voice match from just a small number of audio files.   
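
McAfee has not published the metric behind these “voice match” percentages. As a hypothetical illustration only: speaker-verification systems commonly reduce a voice sample to a fixed-length embedding vector and score similarity as the cosine of the angle between two such vectors. The sketch below shows that scoring idea with made-up toy numbers; it is not McAfee’s method or data.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def voice_match_percent(original_embedding, clone_embedding):
    """Express similarity as a 0-100 'voice match' score.

    Negative similarity is clamped to 0 so the score reads as a percentage.
    """
    score = cosine_similarity(original_embedding, clone_embedding)
    return round(max(score, 0.0) * 100, 1)

# Toy, hand-picked embeddings standing in for a real voice and its clone.
original = [0.21, 0.80, 0.55, 0.10]
clone = [0.25, 0.76, 0.58, 0.14]
print(voice_match_percent(original, clone))
```

In practice the embeddings would come from a trained speaker-encoder model, and the threshold for calling two voices a “match” would be tuned against labeled pairs; the arithmetic above is only the final scoring step.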

McAfee’s researchers also discovered that they could easily replicate accents from around the world, whether from the US, UK, India, or Australia. More distinctive voices were harder to copy, however, such as those of people who speak with an unusual pace, rhythm, or style. (Think of actor Christopher Walken.) Such voices require more effort to clone accurately, and, comedic impersonations aside, people who have them are less likely to be cloned, at least given where the AI technology stands today.  

The research team stated that this is yet one more way AI has lowered the barrier to entry for cybercriminals. Whether they use it to create malware, write deceptive messages for romance scams, or now mount spear phishing attacks with voice cloning technology, it has never been easier to commit sophisticated-looking, and sounding, cybercrime. 

Likewise, the study also found that the rise of deepfakes and other disinformation created with AI tools has made people more skeptical of what they see online. Now, 32% of adults say their trust in social media is lower than it has ever been. 

Protect yourself from AI voice clone attacks 

Set a verbal codeword with kids, family members, or trusted close friends. Make sure it’s one only you and those closest to you know. (Banks and alarm companies often set up accounts with a codeword in the same way to ensure that you’re really you when you speak with them.) Make sure everyone knows and uses it in messages when they ask for help. 
Always question the source. In addition to voice cloning tools, cybercriminals have other tools that can spoof phone numbers so that they look legitimate. Even if it’s a voicemail or text from a number you recognize, stop, pause, and think. Does that really sound like the person you think it is? Hang up and call the person directly or try to verify the information before responding.  
Think before you click and share. Who is in your social media network? How well do you really know and trust them? The wider your connections, the more risk you may be opening yourself up to when sharing content about yourself. Be thoughtful about the friends and connections you have online and set your profiles to “friends and family” only so your content isn’t available to the greater public. 
Protect your identity. Identity monitoring services can notify you if your personal information makes its way to the dark web and provide guidance for protective measures. This can help shut down other ways that a scammer can attempt to pose as you. 
Clear your name from data broker sites. How’d that scammer get your phone number anyway? It’s possible they pulled that information off a data broker site. Data brokers buy, collect, and sell detailed personal information, which they compile from several public and private sources, such as local, state, and federal records, in addition to third parties. Our Personal Data Cleanup service scans some of the riskiest data broker sites and shows you which ones are selling your personal info. 

Get the full story 

A lot can come from a three-second audio clip. 

With the advent of AI-driven voice cloning tools, cybercriminals have created a new form of scam. With arguably stunning accuracy, these tools let cybercriminals impersonate nearly anyone. All they need is a short audio clip to kick off the cloning process. 

Yet as with all scams, you have ways to protect yourself. A sharp sense of what seems right and wrong, along with a few straightforward security steps, can help keep you and your loved ones from falling for these AI voice clone scams. 

For a closer look at the survey data, along with a nation-by-nation breakdown, download a copy of our report here. 

Survey methodology 

The survey was conducted between January 27th and February 1st, 2023 by market research company MSI-ACI, with people aged 18 years and older invited to complete an online questionnaire. In total, 7,000 people completed the survey across nine countries: the United States, United Kingdom, France, Germany, Australia, India, Japan, Brazil, and Mexico. 

The post Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam appeared first on McAfee Blog.
