Quantum computers are probably coming, though we don’t know when—and when they arrive, they will, most likely, be able to break our standard public-key cryptography algorithms. In anticipation of this possibility, cryptographers have been working on quantum-resistant public-key algorithms. The National Institute of Standards and Technology (NIST) has been hosting a competition since 2017, and there are already several proposed standards. Most of these are based on lattice problems.
The mathematics of lattice cryptography revolves around combining integer multiples of a set of basis vectors—the resulting set of points is the lattice—in a multi-dimensional space. These lattices are filled with multi-dimensional periodicities. The hard problem used in cryptography is to find the shortest periodicity in a large, random-looking lattice. This can be turned into a public-key cryptosystem in a variety of ways. Research has been ongoing since 1996, and there has been some really great work since then—including many practical public-key algorithms.
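For readers who want to see the shape of such a system, here is a deliberately tiny and insecure sketch in the style of Regev-type LWE encryption. The parameters and names are illustrative choices of mine; none of the NIST candidates looks exactly like this, but the pattern of a secret vector hidden behind small noise is the same.

```python
# A toy, insecure illustration of Regev-style LWE encryption.
# Parameters are far too small for real security; this only shows the structure.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 3329            # lattice dimension, samples, modulus (toy sizes)

# Key generation: secret vector s, public matrix A, noisy products b = A*s + e (mod q)
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)  # the small noise is what hides the secret
b = (A @ s + e) % q

def encrypt(bit):
    """Encrypt one bit by summing a random subset of the noisy samples."""
    r = rng.integers(0, 2, size=m)       # random 0/1 selection vector
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """Recover the bit: v - u*s is near 0 for a 0 bit and near q/2 for a 1 bit."""
    d = (v - u @ s) % q
    return 1 if min(d, q - d) > q // 4 else 0

for bit in (0, 1, 1, 0):
    assert decrypt(*encrypt(bit)) == bit
print("toy LWE scheme round-trips correctly")
```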
On April 10, Yilei Chen from Tsinghua University in Beijing posted a paper describing a new quantum attack on that shortest-vector lattice problem. It’s a very dense mathematical paper—63 pages long—and my guess is that only a few cryptographers are able to understand all of its details. (I was not one of them.) But the conclusion was pretty devastating, breaking essentially all of the lattice-based fully homomorphic encryption schemes and coming significantly closer to attacks against the recently proposed (and NIST-approved) lattice key-exchange and signature schemes.
However, there was a small but critical mistake in the paper, at the bottom of page 37. It was independently discovered by Hongxun Wu from Berkeley and Thomas Vidick from the Weizmann Institute in Israel eight days later. The attack algorithm in its current form doesn’t work.
This was discussed last week at the Cryptographers’ Panel at the RSA Conference. Adi Shamir, the “S” in RSA and a 2002 recipient of ACM’s A.M. Turing Award, described the result as psychologically significant because it shows that there is still a lot to be discovered about quantum cryptanalysis of lattice-based algorithms. Craig Gentry—inventor of the first fully homomorphic encryption scheme using lattices—was less impressed, basically saying that a nonworking attack doesn’t change anything.
I tend to agree with Shamir. There have been decades of unsuccessful research into breaking lattice-based systems with classical computers; there has been much less research into quantum cryptanalysis. While Chen’s work doesn’t provide a new security bound, it illustrates that there are significant, unexplored research areas in the construction of efficient quantum attacks on lattice-based cryptosystems. These lattices are periodic structures with some hidden periodicities. Finding a different (one-dimensional) hidden periodicity is exactly what enabled Peter Shor to break the RSA algorithm in polynomial time on a quantum computer. There are certainly more results to be discovered. This is the kind of paper that galvanizes research, and I am excited to see what the next couple of years of research will bring.
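To make that period-finding connection concrete, here is a purely classical sketch of my own (not Shor’s algorithm itself): once the period r of f(x) = a^x mod N is known, the factors of N usually fall out of a gcd computation. Shor’s quantum contribution is finding r in polynomial time; the brute-force search below stands in for that step and is exponential.

```python
# A classical illustration of why period-finding breaks RSA: given the period r of
# f(x) = a^x mod N, the factors of N usually follow from gcd(a^(r/2) +/- 1, N).
# The brute-force order computation below is the part Shor's quantum algorithm
# replaces with an efficient period-finding circuit.
from math import gcd

def find_period(a, N):
    """Brute-force the multiplicative order of a modulo N (the 'quantum' step)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_period(a, N):
    r = find_period(a, N)
    if r % 2:                       # need an even period
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:                  # trivial square root of 1: retry with another a
        return None
    return gcd(y - 1, N), gcd(y + 1, N)

N = 3233                            # 61 * 53, a tiny RSA-style modulus
print(factor_from_period(3, N))     # prints (61, 53)
```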
To be fair, there are lots of difficulties in making any quantum attack work—even in theory.
Breaking lattice-based cryptography with a quantum computer seems to require orders of magnitude more qubits than breaking RSA, because the key size is much larger and processing it requires more quantum storage. Consequently, testing an algorithm like Chen’s is completely infeasible with current technology. However, the error was mathematical in nature, so finding it did not require any experimentation. Chen’s algorithm consisted of nine different steps; the first eight prepared a particular quantum state, and the ninth step was supposed to exploit it. The mistake was in step nine: Chen believed that his wave function was periodic when in fact it was not.
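For a sense of why that periodicity matters (this toy illustration is mine and has nothing to do with the internals of Chen’s nine-step construction): the Fourier-transform step at the heart of quantum period finding concentrates all of the weight of a genuinely periodic signal on multiples of N/period, which is what the final measurement relies on; an aperiodic signal offers no such structure to exploit.

```python
# A toy numerical sketch of why periodicity matters for Fourier-based extraction:
# the spectrum of a genuinely periodic signal sits entirely on the expected
# harmonics, while an aperiodic signal spreads its weight everywhere.
import numpy as np

N, period = 1024, 16
harmonics = np.arange(0, N, N // period)   # the only frequencies a period-16 signal can occupy

periodic = np.zeros(N)
periodic[::period] = 1.0                   # a spike every 16 positions

aperiodic = np.zeros(N)                    # spikes at irregular, non-periodic positions
rng = np.random.default_rng(1)
aperiodic[np.cumsum(rng.integers(10, 23, size=40))] = 1.0

for name, signal in (("periodic", periodic), ("aperiodic", aperiodic)):
    spectrum = np.abs(np.fft.fft(signal)) ** 2
    frac = spectrum[harmonics].sum() / spectrum.sum()
    print(f"{name:9s}: {frac:.0%} of the spectral weight sits on the expected harmonics")
```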
Should NIST be doing anything differently now in its post-quantum cryptography standardization process? The answer is no. They are doing a great job in selecting new algorithms and should not delay anything because of this new research. And users of cryptography should not delay in implementing the new NIST algorithms.
But imagine how different this essay would be had that mistake not yet been discovered. If anything, this work emphasizes the need for systems to be crypto-agile: able to easily swap algorithms in and out as research continues. It also argues for using hybrid cryptography—multiple algorithms, with the security resting on the strongest of them—where possible, as in TLS.
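As a rough sketch of the hybrid idea (this is a generic illustration, not the actual TLS construction, and the names are mine): derive the session key from two independently established shared secrets, so that it stays secret as long as either underlying algorithm holds up.

```python
# A minimal sketch of hybrid key combination: hash two independently negotiated
# shared secrets (plus handshake context) into one session key, so the key is
# safe as long as either of the underlying key exchanges remains unbroken.
import hashlib
import os

def combine_secrets(classical_secret: bytes, pq_secret: bytes, context: bytes) -> bytes:
    """Derive a session key from both shared secrets and the handshake transcript."""
    return hashlib.sha256(classical_secret + pq_secret + context).digest()

# Stand-ins for, e.g., an ECDH shared secret and a lattice-KEM shared secret.
classical_secret = os.urandom(32)
pq_secret = os.urandom(32)

session_key = combine_secrets(classical_secret, pq_secret, b"handshake transcript")
print(session_key.hex())
```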
And—one last point—hooray for peer review. A researcher proposed a new result, and reviewers quickly found a fatal flaw in the work. Efforts to repair the flaw are ongoing. We complain about peer review a lot, but here it worked exactly the way it was supposed to.
This essay originally appeared in Communications of the ACM.