rubygem-puma-6.4.2-1.fc40

FEDORA-2024-c393b8b2fb

Packages in this update:

rubygem-puma-6.4.2-1.fc40

Update description:

Automatic update for rubygem-puma-6.4.2-1.fc40.

Changelog

* Tue Jan 9 2024 Vít Ondruch <vondruch@redhat.com> - 6.4.2-1
- Update to Puma 6.4.2.
Resolves: rhbz#2134670
Resolves: rhbz#2235332
Related: rhbz#2232729
Resolves: rhbz#2257341
Related: rhbz#2257340

On IoT Devices and Software Liability

New law journal article:

Smart Device Manufacturer Liability and Redress for Third-Party Cyberattack Victims

Abstract: Smart devices are used to facilitate cyberattacks against both their users and third parties. While users are generally able to seek redress following a cyberattack via data protection legislation, there is no equivalent pathway available to third-party victims who suffer harm at the hands of a cyberattacker. Given that these cyberattacks are usually conducted by exploiting a publicly known and yet un-remediated bug in the smart device’s code, this lacuna is unreasonable. This paper scrutinises recent judgments from both the Supreme Court of the United Kingdom and the Supreme Court of the Republic of Ireland to ascertain whether these rulings pave the way for third-party victims to pursue negligence claims against the manufacturers of smart devices. From this analysis, a narrow pathway is proposed, outlining how, in a limited set of circumstances, a duty of care can be established between the third-party victim and the manufacturer of the smart device.

AI and privacy – Addressing the issues and challenges

The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Artificial intelligence (AI) has seamlessly woven itself into the fabric of our digital landscape, revolutionizing industries from healthcare to finance. As AI applications proliferate, the shadow of privacy concerns looms large.

The convergence of AI and privacy gives rise to a complex interplay where innovative technologies and individual privacy rights collide. In this exploration, we’ll delve into the nuances of this intersection, dissecting the issues and challenges that accompany the integration of AI and privacy.

The intersection of AI and privacy

At the core of the AI and privacy nexus lie powerful technologies like machine learning (ML), natural language processing (NLP), and computer vision. ML algorithms, for instance, learn from vast datasets to make predictions or decisions without explicit programming.

NLP enables machines to comprehend and respond to human language, while computer vision empowers systems to interpret and make decisions based on visual data. As AI seamlessly integrates into our daily lives, from virtual assistants to facial recognition systems to UX research tools, the collection and processing of personal data become inevitable.
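
To ground the ML point, here is a minimal sketch of a model learning a pattern from examples rather than explicit rules. The tooling (scikit-learn) and the toy spam/ham data are our own illustrative choices, not something from the post:

```python
# Minimal sketch: the model infers a spam/ham pattern from examples
# instead of hand-written rules. scikit-learn and the toy data are
# illustrative choices only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting moved to 3pm",
         "claim your free reward", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # learning from data, no explicit programming

print(model.predict(["free prize waiting"]))  # -> ['spam']
```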

AI’s hunger for data is insatiable, and this appetite raises concerns about how personal information is collected and utilized. From your search history influencing your online shopping recommendations to facial recognition systems tracking your movements, AI has become a silent observer of your digital life.

The challenge lies not only in the sheer volume of data but in the potential for misuse and unintended consequences, raising critical questions about consent, security, and the implications of biased decision-making.

Key issues and challenges

The first issue is informed consent. Obtaining meaningful consent in the age of AI is challenging. Often, complex algorithms and data processing methods make it difficult for individuals to understand the extent of data usage.

In automated decision-making scenarios, such as loan approvals or job recruitment, the lack of transparency in how AI reaches conclusions poses a significant hurdle in obtaining informed consent.

Another is data security and breaches. The vulnerabilities in AI systems, especially when handling sensitive personal data for identity verification, make them potential targets for cyberattacks. A data breach in an AI-driven ecosystem not only jeopardizes personal privacy but also has far-reaching consequences, affecting individuals, businesses, and society at large.

You also need to be watchful for bias and discrimination. Bias in AI algorithms can perpetuate and amplify existing societal prejudices, leading to discriminatory outcomes.

The impact of biased AI goes beyond privacy concerns, raising ethical questions about fairness, equality, and the potential reinforcement of societal stereotypes.
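
Bias of this kind can be measured. As a purely illustrative sketch (the metric choice and the data are ours, not the author's), here is a demographic parity check that compares favorable-outcome rates, such as loan approvals, across groups:

```python
# Illustrative demographic parity check: compare the rate of favorable
# outcomes across groups. The decision data is made up for the example.
from collections import defaultdict

decisions = [  # (group, approved) pairs from a hypothetical model
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# A large gap signals potential disparate impact and warrants an audit.
print("parity gap:", max(rates.values()) - min(rates.values()))  # 0.5
```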

Regulations and frameworks

In response to the escalating concerns surrounding AI and privacy, regulatory frameworks have emerged as beacons of guidance. The General Data Protection Regulation (GDPR) in Europe and the California Privacy Rights Act (CPRA) in the United States set the stage for safeguarding individual privacy rights.

These regulations impose stringent requirements on businesses, mandating transparent data practices, user consent, and mechanisms for individuals to control their data.
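
To make those mandates concrete, here is a hypothetical sketch of what GDPR/CPRA-style consent controls might look like in code. The class and field names are invented for illustration; this is not a compliance implementation:

```python
# Hypothetical sketch of user-controlled consent in the GDPR/CPRA spirit:
# explicit grants per purpose, withdrawal at any time. Names are invented;
# this is an illustration, not a compliance implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purposes: dict = field(default_factory=dict)  # purpose -> grant time or None

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = datetime.now(timezone.utc)

    def withdraw(self, purpose: str) -> None:
        self.purposes[purpose] = None  # keep the key as an audit trail

    def allowed(self, purpose: str) -> bool:
        return self.purposes.get(purpose) is not None

record = ConsentRecord("user-42")
record.grant("personalization")
print(record.allowed("personalization"))   # True
record.withdraw("personalization")
print(record.allowed("personalization"))   # False -> processing must stop
```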

While regulations are essential, ethical AI guidelines play an equally crucial role. Implementing responsible AI practices involves considering the broader societal impact, ensuring fairness, transparency, and accountability in the development and deployment of AI systems, especially when it comes to areas like digital identity.

As an expert navigating this landscape, you must champion both compliance with existing regulations and the integration of ethical considerations into AI development.

Balancing innovation and privacy protection

Striking a delicate balance between innovation and privacy protection is the key to advancing AI responsibly.

As industries push the boundaries of what AI can achieve, the challenge lies in mitigating risks without stifling progress. Incorporating privacy measures into the design phase, known as “privacy by design”, becomes paramount. Transparency in AI systems, allowing individuals to understand how their data is processed and used, is a linchpin in building trust.

Industry initiatives and best practices:

- Embedding privacy considerations into the initial stages of AI development ensures that protection mechanisms are integral to the system.
- Transparency fosters a sense of trust between users and AI systems, providing clarity on data usage and minimizing the risk of unintended consequences.

Future trends and implications

As we peer into the future, the trajectory of AI and privacy holds both promise and trepidation. Emerging AI technologies, like federated learning and homomorphic encryption, aim to enhance privacy preservation by enabling machine learning on decentralized and encrypted data.
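
The core idea of federated learning, training a shared model while raw data stays on each device, can be sketched in a few lines. This toy federated-averaging loop is our own illustration; real systems add client sampling, secure aggregation, and much more:

```python
# Toy federated averaging: each client computes an update on its own data;
# the server only ever sees the updates, never the raw records.
import numpy as np

rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(20, 3)) for i in range(3)]  # private data
global_weights = np.zeros(3)

def local_update(weights, data, lr=0.1):
    # Stand-in for local training: one step toward the local data mean.
    return weights + lr * (data.mean(axis=0) - weights)

for _ in range(10):
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = np.mean(updates, axis=0)  # server averages updates only

print(global_weights.round(2))  # drifts toward the cross-client mean (~1.0)
```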

The landscape of privacy regulations is expected to evolve, with more regions adopting comprehensive frameworks to govern AI applications.

Anticipated challenges and solutions:

- The continual evolution of AI technologies poses challenges in keeping regulatory frameworks abreast of the rapidly changing landscape.
- Collaborative efforts between industry stakeholders, regulators, and technologists are crucial in addressing challenges and devising adaptive solutions.

Ethical considerations in AI development

Ethical considerations form the bedrock of responsible AI development, and as an expert, navigating the ethical landscape is integral to ensuring the harmonious coexistence of AI and privacy.

Ethical frameworks guide the conception, creation, and deployment of AI systems, placing a premium on fairness, transparency, and accountability. The ethical dimensions of AI extend beyond individual privacy concerns to encompass broader societal impacts, reinforcing the need for a conscientious approach.

Ethical frameworks in AI design and deployment:

- Ethical AI frameworks emphasize the need for fairness and impartiality in algorithmic decision-making, reducing the risk of biased outcomes.
- Accountability is a cornerstone of ethical AI, necessitating transparency in how decisions are reached and allocating responsibility for the consequences of AI actions.

Especially important in this equation are the various external and internal stakeholders. Developers, businesses, and policymakers all play pivotal roles in upholding ethical AI practices, and regular ethical assessments and audits should be integrated into the AI development lifecycle to identify and rectify potential ethical issues.

Conclusion

In navigating the intricate terrain of AI and privacy, you, as an expert, are tasked with a delicate dance between technological innovation and safeguarding individual privacy rights. The issues and challenges are formidable, but with a commitment to ethical practices, transparency, and ongoing collaboration, the harmonious integration of AI and privacy becomes an achievable goal.

As the digital landscape evolves, so must our approach, ensuring that the benefits of AI innovation are harnessed responsibly, respecting the sanctity of individual privacy in an ever-changing world.

No, Taylor Swift Won’t Send You a Free Dutch Oven — The New AI Cloning Scam

Taylor Swift wants plenty of good things for her fans — but a free Dutch oven isn’t one of them.  

A new scam has cropped up on social media, where an AI deepfake of Swift targets her loyal Swifties with the lure of free Le Creuset products. Yet no one winds up with a piece of the singer’s much-beloved cookware. Instead, they end up with a case of identity fraud. This latest scam follows a string of celebrity deepfakes on YouTube, including scams targeting Kelly Clarkson.

The story has made its share of headlines. Unsurprisingly so, given the singer’s high profile. Scammers have cooked up a synthetic version of Swift’s voice, using AI voice cloning technology we’ve highlighted in our blogs before.  

With a script for the voice clone and real snippets of video of the star, the scammers (not Swift) encourage fans to jump on the free offer. All it takes is a $9.96 shipping fee, paid by credit or debit card. Once in the hands of the scammers, the cards get charged, and sometimes charged repeatedly. In all, it’s a classic case of identity fraud — this time with an AI voice clone twist.

[Image: footage from the Taylor Swift social media scam.]

Le Creuset quickly pointed out that no such promotion exists and that any certified Le Creuset promotions get posted on their official social channels. So, to put a fine point on it, Tay-Tay will not send you a Le Creuset. 

Swift unfortunately finds herself in plenty of company. As we’ve reported previously, 2023 saw numerous celebrity AI cloning scams that hawked bogus goods, crooked investment scams, and phony cryptocurrency deals. Our 2024 predictions blog called for much more of the same this year, and the Taylor Swift scam has kicked things off in a high-profile way. 

If people hadn’t heard about AI cloning scams already, there’s a good chance they have now.

A new McAfee technology can detect the Taylor Swift scam and other AI scams like it. 

So, what are we to do about it? How are we to tell what’s real and what’s fake online? Our Project Mockingbird points to the answer.  

At the CES tech show in Las Vegas, we just unveiled Project Mockingbird, a new technology that helps detect AI-generated audio in deepfakes. Think of it as a lie detector that spots fake news and other schemes.

See for yourself. We ran video of the Taylor Swift cookware scam through our Project Mockingbird technology. You’ll see red lines spike along a charted timeline as it detects cloned audio, showing to what degree the audio is real or fake.
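
McAfee hasn’t published Mockingbird’s internals, so as a generic illustration only, here is how a per-segment timeline of deepfake scores might be assembled once some classifier has scored each slice of audio. Everything here, including the scores, is hypothetical:

```python
# Generic illustration of a detection timeline: some classifier assigns each
# audio segment a probability of being synthetic, and spikes get flagged.
# The scores below are invented; Mockingbird's real model/API are not public.
def flag_spikes(scores, threshold=0.8):
    """Indices of segments whose 'synthetic' score crosses the threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

scores = [0.05, 0.10, 0.92, 0.95, 0.88, 0.12]  # per-segment detector output
print(flag_spikes(scores))  # -> [2, 3, 4]: the cloned stretch of audio
```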

In addition to spotting celebrity scams, this approach to AI clone detection combats another particularly popular form of deepfake: the AI wrapper scam, where scammers wrap their cloned speech inside an otherwise legitimate video. Check out the example below. Here, scammers used clips of real news presenters to dress up their ChatGPT investment scam video.

Note how the detector registered at the baseline when the news presenters spoke, which indicates authentic audio. Then note how it spiked when the cloned audio kicked in — the part of the video that pitched the ChatGPT investment scam. 

Project Mockingbird marks the first public demonstration of our new AI-detection technologies. In addition to AI audio detection, we’re working on technology for image detection, video detection, and text detection as well.  

With these capabilities, we’ll put the power of knowing what is real or fake directly into your hands. Put another way, it’s like having a lie detector in your back pocket. With it, you’ll know what’s real and what’s fake online, something we’ll all need more and more as AI technologies mature.

Looking ahead, we’ll see more than celebrity scams. We’ll see AI voice clones used to trick family members into sending money as part of phony emergency message scams. We’ll see it used for cyberbullying. And we’ll see bad actors use it to twist political speech across 2024’s major election cycles worldwide.  

Through it all, we aim to give you the power of trust — to trust what you see and hear online. To know what’s real and what’s fake out there. Project Mockingbird represents our first public step toward that goal.  
