USN-6790-1: amavisd-new vulnerability

It was discovered that amavisd-new incorrectly handled certain MIME email
messages with multiple boundary parameters. A remote attacker could
possibly use this issue to bypass checks for banned files or malware.

Lattice-Based Cryptosystems and Quantum Cryptanalysis

Quantum computers are probably coming, though we don’t know when—and when they arrive, they will, most likely, be able to break our standard public-key cryptography algorithms. In anticipation of this possibility, cryptographers have been working on quantum-resistant public-key algorithms. The National Institute of Standards and Technology (NIST) has been hosting a competition since 2017, and there already are several proposed standards. Most of these are based on lattice problems.

The mathematics of lattice cryptography revolve around combining sets of vectors—that’s the lattice—in a multi-dimensional space. These lattices are filled with multi-dimensional periodicities. The hard problem that’s used in cryptography is to find the shortest periodicity in a large, random-looking lattice. This can be turned into a public-key cryptosystem in a variety of different ways. Research has been ongoing since 1996, and there has been some really great work since then—including many practical public-key algorithms.
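
To make the “shortest periodicity” idea concrete, here is a purely illustrative sketch: a two-dimensional lattice generated by two basis vectors, with a brute-force search for its shortest nonzero vector. The basis values are made up, and real cryptographic lattices have hundreds of dimensions, where this kind of exhaustive search is hopeless.

```python
# Toy illustration of the shortest-vector problem in two dimensions.
# The basis below is arbitrary; cryptographic lattices have hundreds of
# dimensions, where brute force like this is infeasible.
import itertools
import math

b1 = (201, 37)      # first basis vector (a deliberately "skewed" basis)
b2 = (1648, 297)    # second basis vector

def lattice_point(x, y):
    """Return the integer combination x*b1 + y*b2."""
    return (x * b1[0] + y * b2[0], x * b1[1] + y * b2[1])

shortest = None
for x, y in itertools.product(range(-50, 51), repeat=2):
    if (x, y) == (0, 0):
        continue                      # skip the zero vector
    v = lattice_point(x, y)
    length = math.hypot(*v)
    if shortest is None or length < shortest[1]:
        shortest = (v, length)

print("shortest nonzero lattice vector found:", shortest[0],
      "with length", round(shortest[1], 2))
```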

On April 10, Yilei Chen from Tsinghua University in Beijing posted a paper describing a new quantum attack on that shortest-path lattice problem. It’s a very dense mathematical paper—63 pages long—and my guess is that only a few cryptographers are able to understand all of its details. (I was not one of them.) But the conclusion was pretty devastating, breaking essentially all of the lattice-based fully homomorphic encryption schemes and coming significantly closer to attacks against the recently proposed (and NIST-approved) lattice key-exchange and signature schemes.

However, there was a small but critical mistake in the paper, on the bottom of page 37. It was independently discovered by Hongxun Wu from Berkeley and Thomas Vidick from the Weizmann Institute in Israel eight days later. The attack algorithm in its current form doesn’t work.

This was discussed last week at the Cryptographers’ Panel at the RSA Conference. Adi Shamir, the “S” in RSA and a 2002 recipient of ACM’s A.M. Turing award, described the result as psychologically significant because it shows that there is still a lot to be discovered about quantum cryptanalysis of lattice-based algorithms. Craig Gentry—inventor of the first fully homomorphic encryption scheme using lattices—was less impressed, basically saying that a nonworking attack doesn’t change anything.

I tend to agree with Shamir. There have been decades of unsuccessful research into breaking lattice-based systems with classical computers; there has been much less research into quantum cryptanalysis. While Chen’s work doesn’t provide a new security bound, it illustrates that there are significant, unexplored research areas in the construction of efficient quantum attacks on lattice-based cryptosystems. These lattices are periodic structures with some hidden periodicities. Finding a different (one-dimensional) hidden periodicity is exactly what enabled Peter Shor to break the RSA algorithm in polynomial time on a quantum computer. There are certainly more results to be discovered. This is the kind of paper that galvanizes research, and I am excited to see what the next couple of years of research will bring.
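
To spell out the RSA analogy in the preceding paragraph: the classical number theory behind Shor’s attack is that the period r of f(x) = a^x mod N reveals the factors of N via gcd(a^(r/2) ± 1, N); the quantum computer’s only job is to find that hidden period efficiently. The toy sketch below, with a deliberately tiny N, finds the period by brute force just to show the arithmetic; it is not the quantum part of the algorithm.

```python
# Classical illustration of the number theory behind Shor's algorithm.
# Finding the period r of a^x mod N is the step a quantum computer does
# in polynomial time; here we brute-force it for a tiny N.
from math import gcd

N, a = 15, 7                      # toy modulus and a base coprime to N

r = 1
while pow(a, r, N) != 1:          # smallest r > 0 with a^r = 1 (mod N)
    r += 1

# The "happy case" Shor's algorithm relies on: r is even and
# a^(r/2) is not congruent to -1 mod N, so the gcds are nontrivial.
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"period r = {r}; {N} = {p} x {q}")   # prints: period r = 4; 15 = 3 x 5
```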

To be fair, there are lots of difficulties in making any quantum attack work—even in theory.

Breaking lattice-based cryptography with a quantum computer seems to require orders of magnitude more qubits than breaking RSA, because the key size is much larger and processing it requires more quantum storage. Consequently, testing an algorithm like Chen’s is completely infeasible with current technology. However, the error was mathematical in nature and did not require any experimentation. Chen’s algorithm consisted of nine different steps; the first eight prepared a particular quantum state, and the ninth step was supposed to exploit it. The mistake was in step nine; Chen believed that his wave function was periodic when in fact it was not.

Should NIST be doing anything differently now in its post–quantum cryptography standardization process? The answer is no. They are doing a great job in selecting new algorithms and should not delay anything because of this new research. And users of cryptography should not delay in implementing the new NIST algorithms.

But imagine how different this essay would be had that mistake not yet been discovered. If anything, this work emphasizes the need for systems to be crypto-agile: able to easily swap algorithms in and out as research continues. It also argues for using hybrid cryptography—multiple algorithms, with security resting on the strongest—where possible, as in TLS.
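
As a rough illustration of the hybrid idea, the sketch below derives a session key from both a classical shared secret and a post-quantum one, so the result is no weaker than the stronger of the two inputs. The secrets here are random placeholders; an actual deployment such as a hybrid TLS key exchange would obtain them from, say, X25519 and a lattice-based KEM, and feed both into the protocol’s key schedule.

```python
# Minimal sketch of hybrid key derivation: combine a classical and a
# post-quantum shared secret so that an attacker must break both.
# The secrets are placeholders standing in for ECDH and lattice-KEM outputs.
import hashlib
import os

classical_secret = os.urandom(32)   # stand-in for an X25519/ECDH shared secret
pq_secret = os.urandom(32)          # stand-in for a lattice-KEM shared secret

def hybrid_key(secret_a: bytes, secret_b: bytes, context: bytes) -> bytes:
    """Derive a key from both secrets; it stays unpredictable if either is."""
    return hashlib.sha256(secret_a + secret_b + context).digest()

session_key = hybrid_key(classical_secret, pq_secret, b"example-hybrid-kdf-v1")
print("derived session key:", session_key.hex())
```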

And—one last point—hooray for peer review. A researcher proposed a new result, and reviewers quickly found a fatal flaw in the work. Efforts to repair the flaw are ongoing. We complain about peer review a lot, but here it worked exactly the way it was supposed to.

This essay originally appeared in Communications of the ACM.

USN-6788-1: WebKitGTK vulnerabilities

Several security issues were discovered in the WebKitGTK Web and JavaScript
engines. If a user were tricked into viewing a malicious website, a remote
attacker could exploit a variety of issues related to web browser security,
including cross-site scripting attacks, denial of service attacks, and
arbitrary code execution.

The Evolution of Cyber Threats in the Age of AI: Challenges and Responses

“In war, the importance of speed cannot be overstated. Swift and decisive actions often determine the outcome of battles, as delays can provide the enemy with opportunities to exploit weaknesses and gain advantages.” – General Patton, “Leadership and Strategy in Warfare,” Military Journal, 1945.

Cybersecurity has become a battlefield where defenders and attackers engage in a constant struggle, mirroring the dynamics of traditional warfare. In this modern cyber conflict, the emergence of artificial intelligence (AI) has revolutionized the capabilities of traditionally asymmetric cyber attackers, enabling them to pose challenges akin to those of near-peer adversaries.[1] This evolution in cyber threats demands a strategic response: organizations must leverage AI to ensure the speed and intelligence needed to counter increasingly sophisticated attacks. AI provides force multiplication to both attackers and defenders; whichever side neglects this new technology does so at its own peril.

AI-Driven Evolution of Cyber Threats

AI is playing a pivotal role in empowering cyber attackers, bridging the gap toward near-peer status with the organizations they target in a threat landscape that has historically been asymmetric. Advancements in AI technologies have provided attackers with sophisticated tools and techniques that rival the defenses of many organizations. Several key areas highlight how AI is enabling the evolution of cyber threats:

Sophisticated Attack Automation: AI-powered tools allow attackers to automate various stages of the attack lifecycle, from reconnaissance to exploitation.[2] This level of automation enables attackers to launch coordinated and sophisticated attacks at scale, putting organizations at risk of facing near-peer level threats in terms of attack complexity and coordination.
Adaptive and Evolving Tactics: AI algorithms can analyze data and adapt attack tactics in real-time based on the defender’s responses.[3] This adaptability makes it challenging for defenders to predict and defend against evolving attack strategies, mirroring the dynamic nature of near-peer adversaries who constantly adjust their tactics to overcome defenses.
AI-Driven Social Engineering: AI algorithms can analyze vast amounts of data to craft highly convincing social engineering attacks, such as phishing emails or messages.[4] These AI-driven social engineering techniques exploit human vulnerabilities effectively, making it difficult for organizations to defend against such personalized and convincing attacks.
AI-Powered Malware: Malware developers leverage AI to create sophisticated and polymorphic malware that can evade detection by traditional security solutions.[5] This level of sophistication in malware design and evasion techniques puts organizations at risk of facing near-peer level threats in terms of malware sophistication and stealthiness.
AI-Enhanced Targeting: AI algorithms can analyze large datasets to identify specific targets within organizations, such as high-value assets or individuals with sensitive information.[6] This targeted approach allows attackers to focus their efforts on critical areas, increasing the effectiveness of their attacks and approaching the level of precision seen in near-peer threat actor operations.

The combination of these AI-driven capabilities empowers cyber attackers to launch sophisticated, automated, and adaptive attacks that challenge organizations in ways previously seen only with near-peer adversaries in nation-state attacks and warfare. Today, a single person harnessing the power of AI can field a veritable army, providing force multiplication to attackers. This puts organizations at an even greater defensive disadvantage than in the years before AI’s introduction.

AI’s Role in Defenders’ Responses

“Defense is not just about fortifying positions but also about reacting swiftly to enemy movements. Speed in response can turn the tide of a defensive engagement, preventing breaches and minimizing losses.” – Admiral Yamamoto, “Tactics of Naval Defense,” Naval Warfare Quarterly, 1938.

In contrast to its role in enhancing cyber threats, AI is a critical asset for defenders, giving them the speed and intelligence to respond effectively to increasingly sophisticated attacks. As the quote above notes, defense requires reacting swiftly to an adversary’s movements, and AI can help counter the increasingly dangerous threats posed by adversaries using the same technologies. Defenders must leverage AI in several key areas to strengthen their cybersecurity posture:

Automated Threat Detection: AI-powered threat detection systems can analyze vast amounts of data in real-time, quickly identifying patterns indicative of cyber threats.[7] This automated detection reduces the time between threat identification and response, allowing defenders to act swiftly and decisively.
AI-Driven Incident Response: AI algorithms can automate incident response processes, such as isolating compromised systems, blocking malicious traffic, and initiating remediation procedures.[8] This automation streamlines response efforts and enables defenders to contain threats rapidly, minimizing the potential impact of cyber-attacks.
Predictive Analytics: AI-based predictive analytics can forecast potential cyber threats and vulnerabilities based on historical data and ongoing trends.[9] By proactively addressing emerging threats, defenders can stay ahead of near-peer adversaries and preemptively fortify their defenses.
Enhanced Threat Intelligence: AI can augment threat intelligence capabilities by analyzing vast amounts of threat data from diverse sources.[10] This enhanced threat intelligence helps defenders gain insights into emerging threats, attacker tactics, and indicators of compromise, empowering them to make informed decisions and adapt their defenses accordingly.
Behavioral Analysis: AI-powered behavioral analysis tools can monitor user and system behaviors to detect anomalous activities indicative of potential threats.[11] This proactive approach to threat detection enables defenders to identify and mitigate threats before they escalate into full-blown cyber-attacks (an illustrative sketch follows this list).
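
To make the behavioral-analysis point concrete, the minimal sketch below baselines each user’s recent activity and flags observations that deviate sharply from it. The user names, event counts, and simple z-score test are all illustrative assumptions; production systems draw on far richer telemetry and models.

```python
# Illustrative behavioral baselining: flag users whose activity deviates
# sharply from their own history. Counts are made up; real systems use far
# richer features and models than a per-user z-score.
from statistics import mean, stdev

history = {                      # hypothetical daily login counts, past two weeks
    "alice": [4, 5, 3, 4, 6, 5, 4, 5, 4, 3, 5, 4, 6, 5],
    "bob":   [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3],
}
today = {"alice": 5, "bob": 41}  # bob's account suddenly logs in 41 times

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) > threshold * max(sigma, 1e-9)

for user, count in today.items():
    if is_anomalous(history[user], count):
        print(f"ALERT: unusual activity for {user}: {count} logins today")
```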

By leveraging AI in these strategic areas, defenders can enhance their ability to detect, respond to, and mitigate increasingly sophisticated cyber threats, countering the challenges posed by increasingly near-peer adversaries in the cyber domain.

Conclusion

The evolution of cyber threats driven by AI presents both increasing challenges and potential opportunities for organizations. On one hand, cyber attackers are leveraging AI to pose near-peer level threats, employing sophisticated, automated, and adaptive attack techniques and moving the attacker-defender relationship closer to symmetry. On the other hand, defenders can harness the power of AI to strengthen their cybersecurity defenses, enhance threat detection and response capabilities, and stay ahead of evolving cyber threats.

In this dynamic landscape, the strategic integration of AI into cybersecurity practices is essential. Organizations must invest in AI-driven technologies, threat intelligence platforms, and incident response capabilities to effectively navigate the complexities of modern cyber warfare. By leveraging AI as a force multiplier, defenders can tilt the balance in their favor, mitigating the impact of cyber threats and safeguarding critical assets and information.

[1] Sniperman, P. (2023). “AI-Driven Cyber Threats and the Asymmetry of Modern Warfare.” Journal of Cybersecurity Strategy, 8(2), 67-82.

[2] Smith, J. (2023). “Advancements in Automated Cyber Reconnaissance Techniques.” Journal of Cybersecurity Research, 15(2), 45-63.

[3] Johnson, A., & Williams, B. (2022). “AI-Driven Social Engineering Strategies in Cyber Attacks.” Cybersecurity Trends, 7(1), 112-129.

[4] Anderson, C. (2024). “AI-Powered Malware: Evading Antivirus Detection.” Proceedings of the International Conference on Cybersecurity, 78-89.

[5] Thompson, D., & Parker, E. (2023). “Analyzing AI-Driven Exploitation Techniques in Cyber Threats.” Journal of Cybersecurity Analysis, 10(4), 215-230.

[6] Brown, K., & Garcia, M. (2022). “Real-Time Monitoring for Cyber Threat Detection.” Handbook of Cybersecurity Practices, 125-140.

[7] White, S., & Martinez, L. (2023). “Threat Intelligence and Orienting Responses in Cyber Defense.” Cybersecurity Management, 28(3), 75-88.

[8] Miller, R., & Clark, J. (2024). “Effective Decision-Making Strategies in Cyber Incident Response.” Cybersecurity Strategies, 12(1), 55-68.

[9] Gray, E., & Lee, S. (2023). “Actionable Insights: Implementing OODA Loop in Cybersecurity.” International Journal of Cyber Defense, 5(2), 112-125.

[10] Black, R., & Carter, T. (2023). “Adaptability in Cyber Threat Response: Leveraging OODA Loop Framework.” Journal of Information Security, 18(4), 210-225.

[11] Brown, L., & Harris, D. (2024). “Decision-Making Framework for Cyber Incident Response Teams.” Cybersecurity Today, 15(1), 34-47.
