The University is working with authorities to resolve the incident and understand what data has been accessed
Daily Archives: June 9, 2023
Barracuda: Immediately rip out and replace our security hardware
Barracuda Networks is taking the unusual step of telling its customers to physically remove and decommission its hardware.
Google launches Secure AI Framework to help secure AI technology
Google has announced the launch of the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Google, owner of the generative AI chatbot Bard and parent company of AI research lab DeepMind, said a framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements so that when AI models are implemented, they’re secure-by-default. Its new framework concept is an important step in that direction, the tech giant claimed.
The SAIF is designed to help mitigate risks specific to AI systems like model theft, poisoning of training data, malicious inputs through prompt injection, and the extraction of confidential information in training data. “As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical,” Google wrote in a blog.
Barracuda Urges Swift Replacement of Vulnerable ESG Appliances
Investigating the ESG bug, Rapid7 assumed the presence of persistent malware hindering device wipes
Operation Triangulation: Zero-Click iPhone Malware
Kaspersky is reporting a zero-click iOS exploit in the wild:
Mobile device backups contain a partial copy of the filesystem, including some of the user data and service databases. The timestamps of the files, folders and the database records allow to roughly reconstruct the events happening to the device. The mvt-ios utility produces a sorted timeline of events into a file called “timeline.csv,” similar to a super-timeline used by conventional digital forensic tools.
Using this timeline, we were able to identify specific artifacts that indicate the compromise. This allowed to move the research forward, and to reconstruct the general infection sequence:
The target iOS device receives a message via the iMessage service, with an attachment containing an exploit.
Without any user interaction, the message triggers a vulnerability that leads to code execution.
The code within the exploit downloads several subsequent stages from the C&C server, that include additional exploits for privilege escalation.
After successful exploitation, a final payload is downloaded from the C&C server, that is a fully-featured APT platform.
The initial message and the exploit in the attachment is deleted
The malicious toolset does not support persistence, most likely due to the limitations of the OS. The timelines of multiple devices indicate that they may be reinfected after rebooting. The oldest traces of infection that we discovered happened in 2019. As of the time of writing in June 2023, the attack is ongoing, and the most recent version of the devices successfully targeted is iOS 15.7.
No attribution as of yet.
Google Launches Framework to Secure Generative AI
The Secure AI Framework (SAIF) is a first step to help collaboratively secure AI technology, said Alphabet’s subsidiary
Security Experts Highlight Exploit for Patched Windows Flaw
Numen Cyber said exploiting the vulnerability does not require novel techniques
Pros and Cons of AI in Daily Life
Artificial intelligence: It’s society’s newest darling and newest villain. AI is the newest best friend to creatives, time-strapped people, and unfortunately, the newest sidekick of online scammers. AI platforms like ChatGPT, Craiyon, Voice.ai, and others are available to everyday people, meaning that the technology is creeping further into our daily lives. But is this mainstream AI for the better? Or for worse?
Pros of AI in Daily Life
Confidence builder
According to McAfee’s Modern Love Research Report, 27% of people who admitted that they planned to use AI on Valentine’s Day said it was to boost their confidence. For some people, pouring their heart onto the page is difficult and makes them feel vulnerable. If there’s a tool out there that lessens people’s fear of opening up, they should take advantage of it.
Just remember that honesty is the best policy. Tell your partner you employed the help of AI to express your feelings.
Creativity booster
Sometimes you just don’t know how to start your next masterpiece, whether it’s a painting, short story, or business proposal. AI art and text generators are great brainstorming tools to get the creative juices flowing. The program may think of an approach you would never have considered.
Time saver
Generative AI – the type of artificial intelligence technology behind many mainstream content generation platforms – isn’t new. In fact, scientists and engineers have been using it for decades to accelerate new materials and medical discoveries. For example, generative AI is central to inventing carbon capture materials that will be key to slowing the effects of global warming.1
Online tools fueled by generative AI can save you time too. For instance, AI can likely handle run-of-the-mill emails that you hardly have time to write between all your meetings.
Cons of AI in Daily Life
Dwindling authenticity
What happens to genuine human connections as AI expands? The Modern Love Report discovered 49% of people would feel hurt if their partner used AI to write a love note. What they thought was written with true feeling was composed by a heartless computer program. ChatGPT composes its responses based on what’s published elsewhere on the internet. So, not only are its responses devoid of real human emotion, but the “emotion” it does portray is plagiarized.
Additionally, some artists perceive AI-generated components within digital artworks as cheapening the talent of human artists. The same with AI-written content is consistent with AI art: None of the art it generates is original. It takes inspiration and snippets from already-published images and mashes them together. The results can be visually striking (or nightmarish), but some argue that it belittles the human spirit.
AI hallucinations are also a problem in the authenticity and accuracy of AI-generated content. An AI hallucination occurs when the generative AI program doesn’t know the answer to a prompt. Instead of diligently researching the correct answer, the AI makes up the answer. This can lead to the proliferation of fake news or inaccurate reporting.
Faster and more believable online scams
AI-generated content is expanding the repertoire and speed of online scammers. For example, phishing emails used to be easy to pick out of a crowd, because of their trademark typos, poor grammar and spelling, and laughably far-fetched stories. Now with ChatGPT, phishing emails are much more polished, since, at the sentence level, it writes smoothly and correctly. This makes it more difficult for even the most diligent readers to identify and avoid phishing attempts. Also, instead of spending time imagining and writing their fake backstories, phishers can offload that task to ChatGPT, which makes quick work of it.
Cybercriminals are working out the possibilities of leveraging ChatGPT to write new types of malware quickly. In four hours, one researcher gave ChatGPT minimal instructions and it wrote an undetectable malware program.2 This means that someone with little to no coding experience could theoretically create a powerful new strain of malicious software. The speed at which the researcher created the malware is also noteworthy. A cybercriminal could trial-and-error dozens of malware programs. The moment authorities detect and shut down one strain, the criminal could release the next soon after.
Malicious impersonation
Deep fake technology and voice AI are expanding the nefarious repertoire of scammers. In one incident, a scammer cloned the voice of a teenager, which was so realistic it convinced the teenager’s own mother that her child was in danger. In actuality, the teen was completely safe.3 Voice AI applications like this one could add false legitimacy to the grandparent scam that has been around for a few years and other voice-based scams. According to McAfee’s Beware the Artificial Imposter report, 77% of people who were targeted by a voice cloning scam lost money as a result.
The Verdict on AI
So, what do you think? Is your day-to-day life easier or more complicated thanks to AI? Should people aim to add it to their routines or stop relying on it so much?
The debate of AI’s place in the mainstream could go on and on. What’s undebatable is the need for protection against online threats that are becoming more powerful when augmented by AI. Here are a few general tips to avoid AI scams:
Read all texts, emails, and social media direct messages carefully. Now that phishers have cleaned up their spelling and grammar with ChatGPT, you’ll have to rely on the other telltale signs of phishing attempts. Is the message requiring immediate action, asking for your password or personal details, or inspiring intense feelings of anger, fear, or sadness? Take a step back and evaluate if the message makes sense. You can always delete it. If it’s truly urgent, the sender will follow up.
Keep a cool head. When someone you love is in trouble, it’s easy to immediately panic. Try your best to remain calm and try to locate the real person in case it’s an instance of deep fake or an AI-generated voice. Also, if you believe someone to be in danger, alert the authorities immediately.
Follow up with your own research. If you read an article or see a video that’s too sensational to believe, research the subject on your own to confirm or deny its accuracy. Research is crucial to avoiding the spread of fake and incendiary news.
To cover all your bases, consider investing in McAfee+. McAfee+ is the all-in-one device, online privacy, and identity protection service. Live more confidently online with $1 million in identity remediation support, antivirus for unlimited devices, web protection, and more!
1IBM, “Climate change: IBM boosts materials discovery to improve carbon capture, separation and storage”
2Dark Reading, “Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware”
3Business Insider, “A mother reportedly got a scam call saying her daughter had been kidnapped and she’d have to pay a ransom. The ‘kidnapper’ cloned the daughter’s voice using AI.”
The post Pros and Cons of AI in Daily Life appeared first on McAfee Blog.
Minecraft Users Warned of Malware Targeting Modpacks
Bitdefender researchers warn that mods and plugins have been rigged by the infostealer malware, dubbed Fractureiser
Organizations Urged to Address Critical Vulnerabilities Found in First Half of 2023
Rezilion’s report exposed the most dangerous vulnerabilities found in the first half of 2023