FEDORA-EPEL-2023-fd36857b5e
Packages in this update:
seamonkey-2.53.18-1.el7
Update description:
Update to 2.53.18
seamonkey-2.53.18-1.el8
Update to 2.53.18
seamonkey-2.53.18-1.fc38
Update to 2.53.18
seamonkey-2.53.18-1.fc39
Update to 2.53.18
Multiple security issues were discovered in Chromium, which could result
in the execution of arbitrary code, denial of service or information
disclosure.
kernel-6.6.5-100.fc38
The 6.6.5 stable kernel update contains a number of important fixes across the tree.
kernel-6.6.5-200.fc39
The 6.6.5 stable kernel update contains a number of important fixes across the tree.
A disturbing story out of western Spain spotlights the challenges of technological evolution. Children and teenagers were unwittingly victimized by users of a deepfake app. Their families, shocked at how the events transpired, are equally frustrated by how little recourse they feel they have. Deepfake technology, which leverages sophisticated artificial intelligence to create realistic yet fabricated images and videos, has seen a significant uptick in usage, a surge partly attributed to advancements in AI. As this technology becomes more accessible, concerns about its misuse, particularly in creating unauthorized or malicious content that mimics real individuals, are growing.
To protect yourself and your family from being victimized by deepfake technology, it is crucial to understand some steps you can take.
Educate yourself and your family: Understanding what deepfakes are and how they can be misused is the first line of defense. Awareness can help you recognize potential deepfakes. Speak to your family about these three guidelines for identifying deepfakes:
Look for contextual clues. Deepfakes don’t usually appear by themselves. Look at the webpage or social media post for possible hints that this isn’t a legitimate piece of content, such as poor grammar or spelling. And look for identifying information — names, dates, places, etc. — if reading a news story.
Ask whether it’s too good to be true, especially if you are looking at content that seems outlandish or offers something free or for very little money. Scammers use deepfakes to entice people into clicking ads or visiting dangerous sites. Search for the headline elsewhere and pause for a moment if the story just seems too incredible to be real.
Put the content under a microscope. Perhaps not literally. Many AI engines still have trouble generating humans in images or videos. Closely examine content for weird distortions like extra fingers or smudged faces. These are telltale clues that the image is fake.
Stay updated. Technology is constantly evolving. These days, new, accessible AI algorithms and apps they power seem to pop up daily. Do what you can to stay informed about the latest developments in AI and deepfake technology to adapt your protective measures accordingly. The FTC’s website, for example, has an ongoing series about how AI is evolving and what businesses and consumers alike can do to recognize AI-driven threats and protect against them.
Tighten social media privacy settings: Limit who can view and share your posts on social media. By setting accounts to private and being mindful of who you add as friends or followers, you reduce the likelihood of your images being misused. If you’re a parent, ensure your young child isn’t creating social media accounts. If they’re old enough for an account, discuss with them the dangers of sharing content or messages with strangers or leaving their accounts unlocked.
Limit your online footprint: Be cautious about what you share online. The less personal information and images available, the harder it is for someone to create a deepfake of you. It’s relatively easy to reconsider sharing photos of yourself, but you may not think twice before hitting “retweet” or “share” on someone else’s post. Before you do that, think carefully about the content you’re about to engage with.
Use watermarks: When posting pictures online, consider using watermarks. This approach is a bit more time intensive, and it doesn’t altogether prevent deepfakes. But embedding a small graphic into photos can make it more difficult to use the images without revealing they’ve been altered.
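For those comfortable with a little scripting, watermarking can be automated before photos are posted. The sketch below is a minimal example using the Python Pillow library to overlay a semi-transparent logo on a photo; the file names, logo, corner offset, and opacity are all assumptions to adapt to your own images, not part of any particular product.

```python
from PIL import Image

def add_watermark(photo_path: str, logo_path: str, out_path: str, opacity: int = 96) -> None:
    """Overlay a semi-transparent logo in the lower-right corner of a photo."""
    photo = Image.open(photo_path).convert("RGBA")
    logo = Image.open(logo_path).convert("RGBA")

    # Scale the logo to roughly 20% of the photo's width, preserving aspect ratio.
    scale = photo.width * 0.2 / logo.width
    logo = logo.resize((int(logo.width * scale), int(logo.height * scale)))

    # Cap the logo's alpha channel so it reads as a watermark, not a sticker.
    alpha = logo.getchannel("A").point(lambda a: min(a, opacity))
    logo.putalpha(alpha)

    # Paste near the lower-right corner with a small margin.
    position = (photo.width - logo.width - 16, photo.height - logo.height - 16)
    photo.paste(logo, position, mask=logo)
    photo.convert("RGB").save(out_path, "JPEG")

# Hypothetical file names -- substitute your own photo and watermark graphic.
add_watermark("vacation.jpg", "my_mark.png", "vacation_marked.jpg")
```

A pass like this over a folder of images takes seconds and makes it harder to reuse your photos without revealing they have been altered.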
Monitor for your name and likeness: Set up Google Alerts or a similar alert service for your name. A weekly email digest about mentions of your personal information helps automate content monitoring and may quickly alert you to unauthorized uses of your likeness. Identity monitoring software like McAfee’s can also help scour the internet for inappropriate uses of your likeness or identity. Our software also includes account cleanup and credit monitoring, among other features, to help you maintain privacy for your digital life.
Report deepfakes: If you encounter a deepfake of yourself or someone you know, report it immediately to the platform where it’s posted. Also, consider contacting law enforcement if the deepfake is used for malicious purposes like defamation or blackmail.
Use advanced security measures: As technology advances, attacks and fraud attempts will become more sophisticated. Cybercriminals are becoming adept at things like stealing and cloning voice snippets for use in deepfakes or biometrics-bypassing efforts. To thwart these unwanted advances, it may be necessary to fight fire with fire and leverage AI-driven protection solutions.
There may be no perfect solution to the dynamic threat of deepfake fraud. As technology advances, people will find novel ways to leverage it for means both innocent and otherwise. Yet, there are still strategies organizations and individuals can employ to help prevent deepfake fraud and to mitigate the impacts of it, should it occur. Sometimes, in an ever-more-complicated online world, the best bet may be to simplify. Adopting tools like our personal data cleanup solutions or our all-in-one security platform with identity protection can fortify protection against deepfakes and other forms of fraud. The digital landscape is evolving. The good news is, you can, too.
The post Deepfake Defense: Your 8-Step Shield Against Digital Deceit appeared first on McAfee Blog.
Another rare security + squid story:
The woman—who has only been identified by her surname, Wang—was having a meal with friends at a hotpot restaurant in Kunming, a city in southwest China. When everyone’s selections arrived at the table, she posted a photo of the spread on the Chinese social media platform WeChat. What she didn’t notice was that she’d included the QR code on her table, which the restaurant’s customers use to place their orders.
Even though the photo was only shared with her WeChat friends list and not the entire social network, someone—or a lot of someones—used that QR code to add a ridiculous amount of food to her order. Wang was absolutely shocked to learn that “her” meal soon included 1,850 orders of duck blood, 2,580 orders of squid, and an absolutely bonkers 9,990 orders of shrimp paste.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Depending on the day’s most popular headlines, AI is either a panacea or the ultimate harbinger of doom. We could solve the world’s problems if we just asked the algorithm how. Or it’s going to take your job and become too smart for its own good. The truth, as per usual, lies somewhere in between. AI will likely have plenty of positive impacts that do not change the world while also offering its fair share of negativity that isn’t society-threatening. To identify the happy medium requires answering some interesting questions about the appropriate use of AI.
1. Can we use AI without human oversight?
The full answer to this question could probably fill volumes, but we won’t go that far. Instead, we can focus on a use case that is becoming increasingly popular and democratized: generative AI assistants. By now, you’ve likely used ChatGPT or Bard or one of the dozens of platforms available to anyone with a computer. But can you prompt these algorithms and be wholly satisfied with what they spit out?
The short answer is, “no.” These chatbots are quite capable of hallucinations, instances where the AI will make up answers. The answers it provides come from the algorithm’s set of training data but may not actually be traceable back to real-life knowledge. Take the recent story of a lawyer who presented a brief in a courtroom. It turned out he had used ChatGPT to write the entire brief, and the AI had cited fake cases to support its arguments.1
When it comes to AI, human oversight will likely always be necessary. Whether the model is analyzing weather patterns to predict rainfall or evaluating a business model, it can still make mistakes or even provide answers that do not make logical sense. Appropriate use of AI, especially with tools like ChatGPT and its ilk, requires a human fact checker.
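As a small illustration of what that fact-checking can look like in practice, the sketch below assumes you already have a list of citations extracted from a model's draft and a trusted reference set (here a hypothetical hard-coded list standing in for a real legal database). Anything the model cites that cannot be verified is routed to a human reviewer instead of being accepted automatically.

```python
# Hypothetical trusted reference set -- in practice this would be a query against
# a real legal or bibliographic database, not a hard-coded list.
VERIFIED_CITATIONS = {
    "Smith v. Jones, 123 F.3d 456",
    "Doe v. Acme Corp., 789 F.2d 101",
}

def triage_citations(cited: list[str]) -> dict:
    """Split a model's citations into verified ones and ones needing human review."""
    verified = [c for c in cited if c in VERIFIED_CITATIONS]
    unverified = [c for c in cited if c not in VERIFIED_CITATIONS]
    return {
        "verified": verified,
        "needs_human_review": unverified,
        # Never auto-accept output whose citations cannot be confirmed.
        "auto_acceptable": not unverified,
    }

# The second citation is invented, so the whole answer is flagged for a person to check.
print(triage_citations(["Smith v. Jones, 123 F.3d 456", "Roe v. Nowhere, 999 F.3d 1"]))
```

The check itself is trivial; the important design choice is that unverified output is never trusted by default and always lands in front of a person.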
2. Can AI creators fix algorithmic bias after the fact?
Again, this is a question more complicated than this space allows, but we can examine a narrower application of it. Consider that many AI algorithms in the real world have been found to exhibit discriminatory behavior. For example, one AI had a much larger error rate depending on the sex or race of its subjects. Another incorrectly classified the recidivism risk of inmates, with the errors falling disproportionately on certain groups.2
So, can those who write these algorithms fix these concerns once the model is live? Yes, engineers can always revisit their code and attempt to adjust after publishing their models. However, the process of evaluating and auditing can be an ongoing endeavor. What AI creators can do instead is to focus on reflecting values in their models’ infancy.
Algorithms’ results are only as strong as the data on which they were trained. If a model is trained on a population of data disproportionate to the population it’s trying to evaluate, those inherent biases will show up once the model is live. However robust a model is, it will still lack the basic human understanding of what is right vs. wrong. And it likely cannot know if a user is leveraging it with nefarious intent in mind.
While creators can certainly make changes after building their models, the best course of action is to focus on engraining the values the AI should exhibit from day one.
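To make the idea of ongoing evaluation concrete, here is a minimal sketch of one routine audit step: computing a model's error rate separately for each demographic group in a labeled test set. The group names and evaluation data are invented for illustration; the point is that disparities like this should be measured continuously rather than discovered after deployment.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy, invented evaluation data: (group, ground truth, model prediction).
test_results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

rates = error_rate_by_group(test_results)
print(rates)  # e.g. {'group_a': 0.25, 'group_b': 0.5} -- a disparity worth investigating
```

Running an audit like this on every retraining cycle, and on fresh real-world data, is one practical way of keeping a model's values aligned after launch.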
3. Who is responsible for an AI’s actions?
A few years ago, an autonomous vehicle struck and killed a pedestrian.3 The question that became the incident’s focus was, “who was responsible for the accident?” Was it Uber, whose car it was? The operator of the car? In this case, the operator of the vehicle, who sat in the car, was charged with endangerment.
But what if the car had been empty and entirely autonomous? What if an autonomous car failed to recognize a jaywalking pedestrian because the traffic signal was in its favor? As AI finds its way into more and more public use cases, the question of responsibility looms large.
Some jurisdictions, such as the EU, are moving forward with legislation governing AI culpability. The rule will strive to establish different “obligations for providers and users depending on the level of risk from” AI.
It’s in everyone’s best interest to be as careful as possible when using AI. The operator in the autonomous car might have paid more attention to the road, for example. People sharing content on social media can do more due diligence to ensure what they’re sharing isn’t a deepfake or other form of AI-generated content.
4. How do we balance AI’s benefits with its security/privacy concerns?
This may just be the most pressing question of all those related to appropriate use of AI. Any algorithm needs vast quantities of training data to develop. In cases where the model will evaluate real-life people for anti-fraud measures, for example, it will likely need to be trained on real-world information. How do organizations ensure the data they use isn’t at risk of being stolen? How do individuals know what information they’re sharing and what purposes it’s being used for?
This large question is clearly a collage of smaller, more specific questions that all attempt to get to the heart of the matter. The biggest challenge related to these questions for individuals is whether they can trust the organizations ostensibly using their data for good or in a secure fashion.
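One small, organization-side piece of the answer is to strip or pseudonymize direct identifiers before records ever reach a training pipeline. The sketch below is an assumed, simplified example: it replaces a user identifier with a keyed hash and keeps only the fields a model actually needs, which reduces (but does not eliminate) the harm if the training data is ever exposed.

```python
import hashlib
import hmac

# Assumed secret; in practice this would come from a secrets manager, not source code.
PSEUDONYM_KEY = b"replace-with-a-real-secret"

def pseudonymize(record: dict, keep_fields: set) -> dict:
    """Replace the user identifier with a keyed hash and drop every field
    except the ones the model actually needs."""
    token = hmac.new(PSEUDONYM_KEY, record["user_id"].encode(), hashlib.sha256).hexdigest()
    cleaned = {field: record[field] for field in keep_fields if field in record}
    cleaned["user_token"] = token  # stable pseudonym, not reversible without the key
    return cleaned

raw = {"user_id": "jane.doe@example.com", "age": 34, "zip": "90210", "purchase_total": 42.5}
print(pseudonymize(raw, keep_fields={"age", "purchase_total"}))
```

Techniques like this don't answer the trust question on their own, but they show the kind of data minimization individuals should expect from organizations that train models on their information.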
5. Individuals must take action to ensure appropriate use of their information
For individuals concerned about whether their information is being used for AI training or is otherwise at risk, there are some steps they can take. The first is to always make a deliberate cookie selection when browsing online. Now that the GDPR and CCPA are in effect, just about every company doing business in the U.S. or EU must display a notice on its website that it collects browsing information. Reviewing and setting those preferences is a good way to keep companies from using your information when you don’t want them to.
The second is to leverage third-party tools like McAfee+, which provides services like VPNs, privacy and identity protection as part of a comprehensive security platform. With full identity-theft protection, you’ll have an added layer of security on top of your cookie choices and other good browsing habits you’ve developed. Don’t just hope that your data will be used appropriately — safeguard it, today.
The post Safer AI: Four Questions Shaping Our Digital Future appeared first on McAfee Blog.