USN-6656-1: PostgreSQL vulnerability

Read Time:13 Second

It was discovered that PostgreSQL did not correctly drop privileges when
executing REFRESH MATERIALIZED VIEW CONCURRENTLY commands. If a user or
automatic system were tricked into running a specially crafted command, a
remote attacker could possibly use this issue to execute arbitrary SQL
functions.
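
As a quick check of whether a given server is already on a fixed release, the following minimal sketch (not part of the advisory) uses psycopg2 to report the running server's major and minor version so it can be compared against the versions listed in USN-6656-1. The PG_DSN environment variable is an assumption of the example.

```python
# Report the running PostgreSQL release so it can be compared against the
# fixed versions listed in the advisory. Assumes psycopg2 is installed and a
# connection string is provided in the PG_DSN environment variable.
import os
import psycopg2

def report_server_version(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    try:
        # For PostgreSQL 10 and later, server_version is major*10000 + minor,
        # e.g. 150005 means release 15.5.
        major, minor = divmod(conn.server_version, 10000)
        print(f"PostgreSQL server release: {major}.{minor}")
        print("Compare this against the fixed releases in USN-6656-1.")
    finally:
        conn.close()

if __name__ == "__main__":
    report_server_version(os.environ["PG_DSN"])
```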

Read More

Apple Announces Post-Quantum Encryption Algorithms for iMessage

Read Time:52 Second

Apple announced PQ3, its post-quantum encryption standard based on Kyber, a key-encapsulation mechanism that is one of the post-quantum algorithms selected by NIST in 2022.

There’s a lot of detail in the Apple blog post, and more in Douglas Stebila’s security analysis.

I am of two minds about this. On the one hand, it’s probably premature to switch to any particular post-quantum algorithms. The mathematics of cryptanalysis for these lattice and other systems is still rapidly evolving, and we’re likely to break more of them—and learn a lot in the process—over the coming few years. But if you’re going to make the switch, this is an excellent choice. And Apple’s ability to do this so efficiently speaks well about its algorithmic agility, which is probably more important than its particular cryptographic design. And it is probably about the right time to worry about, and defend against, attackers who are storing encrypted messages in hopes of breaking them later on future quantum computers.
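
Designs in this space typically take a hybrid approach: a classical elliptic-curve exchange is combined with a Kyber-style KEM secret so the derived key stays safe as long as either component holds up. The sketch below shows only that general combining step in Python with the cryptography library; the pq_kem_shared_secret() placeholder is hypothetical, and none of this reflects Apple's actual PQ3 protocol.

```python
# Illustrative hybrid key derivation: combine an X25519 shared secret with a
# (placeholder) post-quantum KEM secret via HKDF. Not Apple's PQ3 protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def pq_kem_shared_secret() -> bytes:
    # Hypothetical stand-in for a real ML-KEM/Kyber encapsulation step.
    return os.urandom(32)

# Classical part: X25519 Diffie-Hellman between two parties
# (both private keys live in one script here purely for demonstration).
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
ecdh_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum part: shared secret from a KEM (placeholder here).
pq_secret = pq_kem_shared_secret()

# Derive one session key from both secrets; security holds if either input
# remains unbroken.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid key agreement demo",
).derive(ecdh_secret + pq_secret)

print(session_key.hex())
```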

Read More

Building cyber resilience against AI-powered social engineering

Read Time:5 Minute, 45 Second

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Exploring advanced AI tactics in social engineering and effective strategies for cyber defense

Social engineering has long been a significant threat in the business world and accounts for a major portion of global cyberattacks, with the average business facing hundreds of such attacks every year. These attacks take many forms, from intricate phishing emails to elaborate interactions designed to deceive employees, and they often lead to grave outcomes. The scale of the problem is underscored by the following statistics:

· Social engineering is implicated in 98% of all cyberattacks

· Approximately 90% of malicious data breaches occur due to social engineering

· The typical organization faces over 700 social engineering attacks each year

· The average cost incurred from a social engineering attack is about $130,000

· Phishing plays a role in 36% of all data breaches

· In 86% of companies, at least one employee has clicked on a phishing link

· About 12% of external malicious actors gain access through phishing

· CEOs are targeted by phishing attacks, on average, 57 times a year

How has the rise of AI reshaped the landscape of social engineering in cybersecurity? With AI’s introduction, these tactics have become more intricate and harder to detect, as attackers leverage AI to automate and enhance their methods. This development has inadvertently expanded the attack surface for many organizations. So, what exactly are the specific challenges posed by AI in social engineering as a cyberthreat, and what actions can organizations take to address this evolving issue?

New challenges in defending against AI-enhanced social engineering

AI’s increasing role in social engineering attacks presents evolving challenges. These challenges arise from AI’s ability, already exploited by state-sponsored groups, to craft malware and morph it into zero-day exploits that evade detection for prolonged periods.

One significant area of concern is the use of AI in creating more effective phishing campaigns. By analyzing public data, AI can personalize attacks to an unprecedented degree. This not only increases the likelihood of successful breaches but also makes it harder for traditional defense mechanisms to detect and mitigate these threats.

AI’s role in amplifying social engineering efforts is multi-dimensional:

· Personalization of phishing attacks: AI’s analysis of public data, including social media, enables the creation of highly personalized phishing campaigns. This leads to a higher success rate in breaching defenses.

· Evolution of social engineering methods: AI has transformed various social engineering techniques. For instance:

  - Hyper-personalized phishing: AI mines social media to tailor spear phishing emails with familiar elements for each target.

  - Natural language generation: AI generates convincing, human-like text, making social engineering content more persuasive.

  - Emotional manipulation: By analyzing targets’ digital footprints, AI fine-tunes its approach to exploit emotional triggers and communication styles.

  - Evasion tactics: AI constantly tests and refines its strategies to avoid detection by security tools.

  - Automated reconnaissance: AI efficiently gathers intelligence from sources like social media, enhancing the effectiveness of social engineering attacks.

· Diversification in attack methods: Beyond phishing, AI enhances other social engineering tactics like baiting, pretexting, and tailgating, making them more deceptive and harder to counter.

The evolution of AI tools in crafting context-specific social engineering strategies has made malicious operations easier, faster, and more cost-effective. As a result, organizations and individuals face increasing challenges in maintaining effective defenses against these advanced threats.

AI techniques that advance social engineering tactics

With the escalation of social engineering threats due to AI, the attack surface for businesses is expanding significantly. For organizations already facing a spectrum of cyberthreats such as data breaches, DDoS attacks, and malware, the integration of AI poses further complications, enlarging the scope and scale of potential vulnerabilities and attack scenarios. AI advances these tactics in several ways:

1. Streamlined profiling of targets: AI enhances target identification and profiling through advanced behavioral analysis.

2. Rapid data collection: AI’s data mining capabilities enable efficient gathering of key information.

3. Customized deceptive tactics: AI personalizes attacks for individual targets, improving the deception’s effectiveness.

4. Replicated insider acumen: AI’s capacity to simulate organizational knowledge adds a layer of complexity to cyberattack tactics, making them more intricate and challenging to counter.

5. Comprehensive attack methods: AI enables launching multifaceted cyber strategies, targeting different system vulnerabilities.

6. Dynamic strategy shifts: AI rapidly modifies tactics in response to real-time cyber environment changes.

7. Advanced linguistic phishing: AI tools enable the crafting of phishing emails with refined language and grammar, making them appear more authentic.

8. Realistic deepfake creation: AI assists in generating highly convincing deepfakes and virtual identities for deceptive interactions.

9. Sophisticated voice impersonation: AI technology is used to clone human speech for advanced voice phishing (vishing) attacks, as cautioned by authorities like the Federal Trade Commission.

10. Automated social engineering at scale: Threat actors utilize autonomous agents and scripting tools for large-scale, targeted social engineering, automating the entire process from target selection to engaging in seemingly human interactions.

11. Self-evolving phishing strategies: AI adapts and improves its phishing tactics based on its learning, distinguishing effective methods from less successful ones to optimize its approach.

Strategies for cybersecurity with an emphasis on critical infrastructure protection

To enhance cybersecurity, especially for critical infrastructure, against AI-powered social engineering, consider these strategies:

1. Enhanced user awareness training: This strategy involves in-depth training programs for employees, focusing on recognizing the subtleties of AI-powered social engineering. It includes understanding AI’s capabilities in mimicking human communication and identifying signs of AI-driven phishing attempts.

2. Simulation exercises for attack preparedness: Regularly conducted simulation exercises mimic real-world social engineering scenarios, providing employees with hands-on experience in detecting and responding to sophisticated AI-driven attacks. These exercises are crucial in building resilience and improving reaction times to actual threats.

3. Deployment of AI-enhanced security measures: Integrating AI into cybersecurity defenses allows for real-time monitoring and analysis of potential threats. These systems can detect anomalies and patterns indicative of AI-driven social engineering, providing a proactive approach to cybersecurity.

4. Robust authentication protocols: Strengthening authentication involves implementing multi-factor authentication and continuous verification processes. These protocols are vital in protecting against breaches, as they add a further layer of security that makes it more difficult for AI-enhanced attacks to gain unauthorized access (a minimal multi-factor verification sketch follows this list).
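
As an illustration of the multi-factor step in item 4, the following minimal sketch uses the pyotp library to enrol a user in time-based one-time passwords (TOTP) and verify a submitted code. The account name, issuer, and in-memory secret handling are assumptions for the example, not a production flow.

```python
# Minimal TOTP sketch using pyotp; the account name, issuer, and in-memory
# secret storage below are illustrative assumptions, not a production design.
import pyotp

# Enrolment: generate a per-user secret (store it encrypted server-side in practice)
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

# URI that an authenticator app can import (e.g. via a QR code)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

def second_factor_ok(secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)

# At login, after the password check succeeds, also require a current TOTP code
print(second_factor_ok(user_secret, totp.now()))  # True for a freshly generated code
```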

Harnessing AI for cyber-resilience

Embracing AI’s potential in cybersecurity, rather than fearing it, equips organizations to better anticipate and thwart AI-driven threats. This proactive stance is crucial in an era where traditional security measures might not suffice against the evolving nature of AI-generated malware. Utilizing AI not only for its analytical strengths but also as a cornerstone of defense strategies can provide a decisive edge in neutralizing these advanced threats. This approach marks a pivotal shift in cybersecurity dynamics, where understanding and leveraging AI’s capabilities becomes integral to protecting critical assets.
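
To make the defensive use of AI described above concrete, here is a minimal sketch of a text classifier that flags phishing-style wording in inbound email. The tiny training set and labels are invented purely for illustration; a real deployment would be trained on curated corpora and combined with header, URL, and sender-reputation signals.

```python
# Toy phishing-text classifier; the training examples and labels below are
# invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account is locked, verify your password immediately via this link",
    "Urgent: wire transfer needed today, reply with the payment details",
    "Attached is the agenda for Thursday's project review meeting",
    "Reminder: the quarterly report draft is due next Friday",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing-style, 0 = benign

# TF-IDF features over word unigrams/bigrams feeding a logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

suspect = ["Please confirm your credentials now or your mailbox will be deleted"]
print("phishing probability:", model.predict_proba(suspect)[0][1])
```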

Read More

USN-6655-1: GNU binutils vulnerabilities

Read Time:38 Second

It was discovered that GNU binutils was not properly handling the logic
behind certain memory management related operations, which could lead to
an invalid memory access. An attacker could possibly use this issue to
cause a denial of service. (CVE-2022-47695)

It was discovered that GNU binutils was not properly performing bounds
checks when dealing with memory allocation operations, which could lead
to excessive memory consumption. An attacker could possibly use this issue
to cause a denial of service. (CVE-2022-48063)

It was discovered that GNU binutils incorrectly handled memory management
operations in several of its functions, which could lead to excessive
memory consumption due to memory leaks. An attacker could possibly use
these issues to cause a denial of service. (CVE-2022-48065)
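
For Ubuntu systems, a small sketch like the following (not part of the advisory) prints the installed binutils package version so it can be compared against the fixed versions listed in USN-6655-1; the use of dpkg-query assumes a Debian/Ubuntu host.

```python
# Print the installed binutils package version on a Debian/Ubuntu system so it
# can be compared against the fixed versions in USN-6655-1.
import subprocess

def installed_version(package: str) -> str | None:
    result = subprocess.run(
        ["dpkg-query", "-W", "-f=${Version}", package],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

version = installed_version("binutils")
if version is None:
    print("binutils does not appear to be installed")
else:
    print(f"Installed binutils version: {version} "
          "(compare against the fixed versions in the advisory)")
```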

Read More