pack-0.32.0-1.fc37

FEDORA-2023-5029b92850

Packages in this update:

pack-0.32.0-1.fc37

Update description:

fix for CVE-2023-39325 (HTTP/2 "rapid reset" denial of service in Go's net/http and golang.org/x/net/http2)

Decoupling for Security

This is an excerpt from a longer paper. You can read the whole thing (complete with sidebars and illustrations) here.

Our message is simple: it is possible to get the best of both worlds. We can and should get the benefits of the cloud while taking security back into our own hands. Here we outline a strategy for doing that.

What Is Decoupling?

In the last few years, a slew of ideas old and new have converged to reveal a path out of this morass, but they haven’t been widely recognized, combined, or used. These ideas, which we’ll refer to in the aggregate as “decoupling,” allow us to rethink both security and privacy.

Here’s the gist. The less someone knows, the less they can put you and your data at risk. In security this is called Least Privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.

To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.

Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.

To ensure that cloud services do not learn more than they should, and that a breach of one does not pose a fundamental threat to our data, we need two types of decoupling. The first is organizational decoupling: dividing private information among organizations such that none knows the totality of what is going on. The second is functional decoupling: splitting information among layers of software. Identifiers used to authenticate users, for example, should be kept separate from identifiers used to connect their devices to the network.

In designing decoupled systems, cloud providers should be considered potential threats, whether due to malice, negligence, or greed. To verify that decoupling has been done right, we can learn from how we think about encryption: you’ve encrypted properly if you’re comfortable sending your message with your adversary’s communications system. Similarly, you’ve decoupled properly if you’re comfortable using cloud services that have been split across a noncolluding group of adversaries.
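
To make that adversary test concrete, here is a minimal Python sketch of one decoupling building block (not a design the essay itself prescribes): a two-share XOR split, the simplest form of secret sharing. The provider names in the comments are hypothetical, and real decoupled systems use richer tools such as encryption, multiparty computation, and relays, but the property is the same: either share alone is indistinguishable from random noise, so a single compromised provider learns nothing.

```python
import secrets

def split(data: bytes) -> tuple[bytes, bytes]:
    """Split data into two shares; each share alone looks like random noise."""
    pad = secrets.token_bytes(len(data))              # uniformly random share
    masked = bytes(a ^ b for a, b in zip(data, pad))  # data XOR pad
    return pad, masked

def rejoin(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two shares back together to recover the original data."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

secret = b"draft minutes: acquisition discussion"
share_a, share_b = split(secret)

# Store share_a with provider A and share_b with provider B (hypothetical,
# noncolluding providers). Only a party holding both shares can reconstruct.
assert rejoin(share_a, share_b) == secret
```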

Read the full essay

This essay was written with Barath Raghavan, and previously appeared in IEEE Spectrum.

Mitigating deepfake threats in the corporate world: A forensic approach

The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

In an era where technology advances at breakneck speed, the corporate world finds itself facing an evolving and insidious threat: deepfakes. These synthetic media creations, powered by artificial intelligence (AI) algorithms, can convincingly manipulate audio, video, and even text – posing significant risks to businesses, their reputation, and their security. To safeguard against this emerging menace, a forensic approach is essential.

Understanding deepfakes

“Deepfake” describes synthetic media created or manipulated using AI and deep learning algorithms; the term is a blend of “deep learning” and “fake.” Deep learning is a subset of machine learning in which artificial neural networks are trained to perform specific tasks, such as image or speech recognition.

Deepfake technology is primarily associated with the manipulation of audio and video content, although it can also be applied to text. It allows for the creation of highly convincing and often indistinguishable fake content by superimposing one person’s likeness and voice onto another person’s image or video. Deepfake technology has been used in various real-world scenarios, raising concerns about its potential for misinformation and deception.

For instance, a widely circulated deepfake video showed former President Barack Obama delivering a speech he never gave, built from synthetic audio and video. In the entertainment industry, deepfake technology has been used to recreate deceased actors for film or commercial purposes; a deepfake likeness of actor James Dean, for example, appeared in a Vietnamese commercial. Deepfake content circulated on social media and news platforms has contributed to the spread of fake news and disinformation, including fabricated speeches, interviews, or events involving public figures. The technology has also been exploited to create explicit content featuring individuals without their consent, often for harassment, revenge, or extortion.

These examples illustrate the versatility of deepfake technology and the potential risks associated with its misuse. As a result, there is growing concern about the need for effective detection and countermeasures to address the potential negative consequences of deepfake manipulation in various contexts.

Here are some key aspects of deepfake technology:

Face swapping: Deepfake algorithms can replace the face of a person in a video with the face of another individual, making it appear as though the second person is speaking or acting in the video.

Voice cloning: Deepfake technology can replicate a person’s voice by analyzing their speech patterns and using AI to generate new audio recordings in that person’s voice.

Realistic visuals: Deepfake videos are known for their high degree of realism, with facial expressions, movements, and lip-syncing that closely resemble the original subject.

Manipulated text: While less common, deepfake technology can also be used to generate fake text content that mimics an individual’s writing style or produces fictional narratives.

Misinformation and deception: Deepfakes have the potential to spread misinformation, deceive people, and create convincing fake content for various purposes, both benign and malicious.

Implications for corporations

Reputation damage: Corporations invest years in building their brand and reputation. Deepfake videos or audio recordings featuring corporate leaders making controversial statements can have devastating consequences.

Financial fraud: Deepfakes can be used to impersonate executives, leading to fraudulent requests for funds, confidential information, or financial transactions.

Misleading stakeholders: Shareholders, employees, and customers can be misled by deepfake communications, potentially affecting stock prices and trust in the organization.

Industrial espionage: Competitors or malicious actors may use deepfakes to obtain confidential information or trade secrets.

Mitigating deepfake threats: A forensic approach

Awareness and education: The first line of defense against deepfakes is to educate employees, executives, and stakeholders about the existence and potential risks associated with deepfake technology. Training programs should include guidance on recognizing deepfake content.

Digital forensics expertise: Corporations should invest in digital forensics experts who specialize in deepfake detection and investigation. These professionals can conduct in-depth analyses of suspicious media to identify inconsistencies, artifacts, or signs of manipulation; a minimal example of such an analysis is sketched after this list.

Advanced detection tools: Employ state-of-the-art deepfake detection tools and software. These solutions utilize machine learning algorithms to identify patterns and anomalies indicative of deepfake content.

Metadata analysis: Digital forensics experts can examine metadata and file properties to trace the origin of deepfake content. This can help identify potential sources of threats; the sketch after this list includes a simple metadata pass.

Secure communication channels: Encourage the use of secure communication channels, such as encrypted video conferencing and messaging platforms, to reduce the risk of deepfake attacks during virtual meetings.

Authentication protocols: Implement strong authentication protocols for sensitive financial transactions and communications, ensuring that only authorized personnel can initiate such actions; one simple verification pattern is sketched after this list.

Incident response plan: Develop a comprehensive incident response plan that outlines steps to take in case of a deepfake incident. Timely action can minimize damage.

Legal recourse: Be prepared to pursue legal action against those responsible for creating and disseminating deepfake content with malicious intent. Consult with legal experts experienced in cybercrimes.
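
Neither of the forensic items above names specific tools, so as one concrete illustration, here is a minimal Python sketch using the Pillow imaging library. The input file name is hypothetical. Error level analysis (recompressing a JPEG and diffing it against the original) is a classical image-forensics heuristic that can surface locally edited regions, and an EXIF pass can expose editing software or inconsistent capture data; neither is a deepfake detector on its own, and production detection layers trained models on top of checks like these.

```python
import io

from PIL import Image, ImageChops   # pip install Pillow
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Metadata pass: collect EXIF tags that may hint at origin or editing tools."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Recompress the image and diff it against the original; regions that
    respond very differently to recompression may indicate local manipulation."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    return ImageChops.difference(original, resaved)

if __name__ == "__main__":
    suspect = "suspect_frame.jpg"                      # hypothetical input
    print(exif_summary(suspect))                       # editing software? odd timestamps?
    error_level_analysis(suspect).save("ela_map.png")  # bright regions warrant a closer look
```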
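
The post likewise leaves “strong authentication protocols” unspecified. One pattern that directly counters voice and video impersonation is to bind approval of a sensitive transaction to a code derived from a per-employee shared secret and the transaction details, so that a convincing deepfake call cannot authorize a transfer by itself. Everything in the sketch below (key handling, code length, identifiers) is an illustrative assumption, not a production design.

```python
import hashlib
import hmac
import secrets

# Provisioned out of band when the employee enrolls (hypothetical setup step).
SHARED_KEY = secrets.token_bytes(32)

def approval_code(key: bytes, request_id: str, amount_cents: int) -> str:
    """Derive a short code bound to this exact transaction."""
    message = f"{request_id}:{amount_cents}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()[:8]

def verify(key: bytes, request_id: str, amount_cents: int, presented: str) -> bool:
    """Constant-time check of the code read back by the approver."""
    return hmac.compare_digest(approval_code(key, request_id, amount_cents), presented)

# A familiar voice on a call (which can be deepfaked) is never sufficient on
# its own: the approver must also produce the code from their enrolled device.
code = approval_code(SHARED_KEY, "wire-2024-0042", 2_500_000)
assert verify(SHARED_KEY, "wire-2024-0042", 2_500_000, code)
```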

Conclusion

Deepfake threats in the corporate world are a reality that cannot be ignored. As AI technology continues to advance, so too will the sophistication of deepfake attacks. A proactive and forensic approach to mitigating these threats is essential for corporations to protect their reputation, assets, and stakeholders.

By raising awareness, investing in digital forensics expertise, utilizing advanced detection tools, and implementing security measures, corporations can significantly reduce their vulnerability to deepfake attacks. Furthermore, a robust incident response plan and the ability to pursue legal action when necessary can serve as a deterrent to potential threat actors. In this digital age, corporate resilience against deepfake threats is a vital component of modern cybersecurity.
