Dr. Mark Zhandry: Post-Quantum Cryptography and More

Mark Zhandry is a senior scientist in the NTT Research Cryptography & Information Security (CIS) Lab and an assistant professor at Princeton University. After earning his B.A. at the University of California, Berkeley, where he majored in electrical engineering, computer science, and physics, he earned his Ph.D. in computer science from Stanford University and completed postdoctoral studies at MIT. In 2018, Zhandry received a National Science Foundation CAREER award, a prestigious source of support for early-career researchers. In 2021, he was awarded one of 128 Sloan Research Fellowships, which recognize the most promising scientific researchers in the U.S. and Canada. Dr. Zhandry studies various aspects of theoretical cryptography, including constructions and applications of software obfuscation and the impact of quantum computing. Please read the following Q&A to learn more about his background and most recent research.

How did you decide to go into computer science, and cryptography, more specifically? With your triple major, it seems like you could have gone in several directions. And what drew you to do graduate work at Stanford?

For the longest time, I couldn’t make up my mind about what I wanted to study, except that I was drawn to technical subjects. I put off the decision as long as possible, only settling on computer science broadly a couple of months before PhD applications were due. It was a tough decision, but I ultimately chose CS because of its close connection to mathematics, which I had been fond of since I was a young child.

Within CS, I still had a hard time choosing what to focus on. A large part of the appeal of Stanford, besides the excellent faculty, was its formal rotation system, which let first-year PhD students try out several different areas. I even spent one of my PhD rotations attempting to do graphics research. I owe my decision to finally pursue cryptography to Dan Boneh, who ultimately became my fantastic advisor. While I was still trying to decide what to do, he gave me a project in post-quantum cryptography to sink my teeth into. He selected the project based on my physics background, though it turned out that the background I had was totally unrelated to the mathematical model of quantum computing used in cryptography. Nevertheless, the project went very well, and the rest is history.

Interestingly, when I first started my PhD, quantum was rather niche within cryptography. My very first paper appeared at ASIACRYPT 2011, where it was the only paper on the topic. Since then, the field has exploded: fast-forward 10 years to ASIACRYPT 2021, held this past December, and I count at least 14 papers either directly or indirectly related to quantum. So, my advisor’s choice of project for me was quite fortuitous.

Belated congratulations on your selection as a Sloan Fellow. The Alfred P. Sloan Foundation supports early career researchers with these two-year fellowships. How would you say it has impacted you and your research so far?

The fellowship is helping to fund my excellent graduate students, who have been an integral part of my research career.

Also, congratulations on your recent paper, “Post-Quantum Succinct Arguments: Breaking the Quantum Rewinding Barrier,” which was selected for last month’s IEEE Symposium on Foundations of Computer Science (FOCS). If you don’t mind, we’d like to ask a few questions about that paper. But first, about the event: Did you or one of your co-authors present it? How was it received? Any comments on the event itself?

My coauthor and former student, Fermi Ma, presented the paper. The paper has been receiving a lot of interest from the community, and it is becoming clear that the techniques can be applied more broadly throughout post-quantum cryptography.

Regarding your paper for FOCS, it’s clear we need quantum-secure computational assumptions, such as structured lattice schemes. Why do we also need security reductions, or ones that are compatible with quantum attackers?

History is littered with cryptosystems that turned out to be insecure because the designer of the system did not anticipate some clever attack. For this reason, in cryptography, you always want to prove your scheme is secure. This is the only way to be confident that you didn’t miss something.

But there’s always a big elephant in the room: in the vast majority of cases, we cannot actually prove unconditionally that a scheme is secure. You may have heard of the P vs NP problem, one of the most important and famous unresolved problems in computer science. Well, in order to prove a cryptosystem secure, you would actually first need to prove P ≠ NP, and even that is usually not enough.

The solution is to start from a well-studied computational problem that has resisted efficient algorithms for years. Factoring integers is a famous example; certain problems on lattices are another popular one. We then conjecture that there is no efficient algorithm for this problem. As before, we have little hope of actually proving this conjecture, since doing so would in particular resolve the P vs NP question. But given that many people have tried and failed to find a good algorithm, it seems reasonable to assume that none exists.

Now we design our cryptosystem based on the hard problem. The issue is that, when designing the cryptosystem, we may have unintentionally introduced vulnerabilities that have nothing to do with solving the hard problem. So, we give a proof that says: if our hard problem is indeed as hard as we conjectured, then our cryptosystem is secure. This is proven by something called a reduction, and it works as follows: given any attack against the cryptosystem, we show that it can be transformed into an algorithm for the original hard problem. If our assumption about the hardness of our problem is true, then there is no such algorithm. This means that any adversary for the cryptosystem leads to a contradiction, showing that such an adversary cannot exist.
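To make the reduction template concrete, here is a minimal Python sketch of a textbook example (due to Rabin): any adversary that computes square roots modulo N = p·q can be transformed into an algorithm that factors N. The “adversary” below is a stand-in that cheats by secretly knowing the factors, purely so the script runs end to end; a real reduction makes no such assumption.

```python
import math
import random

def make_cheating_adversary(p, q):
    """A stand-in 'adversary' that returns square roots modulo n = p*q.
    (It can only do this because it secretly knows the factors.)"""
    n = p * q
    def sqrt_mod_n(y):
        # For primes p, q ≡ 3 (mod 4), a square root of a quadratic
        # residue y mod p is y^((p+1)/4) mod p; combine roots via CRT.
        rp = pow(y, (p + 1) // 4, p)
        rq = pow(y, (q + 1) // 4, q)
        return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n
    return sqrt_mod_n

def factor_via_adversary(n, adversary, tries=64):
    """The reduction: any square-root adversary yields a factoring algorithm."""
    for _ in range(tries):
        r = random.randrange(2, n)
        if math.gcd(r, n) != 1:
            continue
        s = adversary(r * r % n)   # adversary returns SOME square root of r^2
        # With probability 1/2, s is neither r nor -r mod n, and then
        # gcd(r - s, n) is a nontrivial factor of n.
        if s != r and s != n - r:
            g = math.gcd(r - s, n)
            if 1 < g < n:
                return g, n // g
    return None

p, q = 10007, 10039   # toy primes, both ≡ 3 (mod 4); real moduli are far larger
print(factor_via_adversary(p * q, make_cheating_adversary(p, q)))
```

If factoring is hard, no efficient square-root adversary can exist; that is exactly the contradiction the proof needs.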

Reductions are proofs, specifically, that a primitive or protocol is secure or difficult to break. And compatibility means being able to interact. So, if a reduction is incompatible with quantum attackers, that means it’s unable to determine something about the attacker’s security? Is it that the reduction can’t “rewind” the interactive adversary to figure out how it responds to challenges?

The fundamental approach to justifying security against quantum attacks remains basically the same, but we need to update the approach in two important ways. The first is the obvious one: with quantum computers, certain hard problems such as factoring become easy. So now these are not good assumptions to start from. We instead need to make sure we have post-quantum assumptions, such as those based on lattices. The second is more subtle. Our reduction proving security is a transformation on algorithms, taking as input an adversary for the system and turning it into an efficient algorithm for the hard problem. But many traditional reductions can only take as input classical algorithms. If you try to input a quantum adversary, the reduction will fail to produce a working algorithm on the other end. What happens then is that, even if you had a good post-quantum assumption, your quantum adversary cannot be converted into a quantum algorithm for the problem, and you fail to reach the needed contradiction.

One of the main places this comes up is in rewinding. This is a classical proof technique where the reduction runs the adversary to a certain point in time, then backtracks to a previous step, changes the input in some way, and runs the adversary again. Sometimes multiple rewinds happen. By looking at the collection of program transcripts from the various runs, the reduction is able to solve its problem, whereas it may have been unable to do so given just a single execution of the adversary.
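The canonical example is the extractor for Schnorr’s identification protocol: rewind the prover so that one commitment is answered with two different challenges, and the two transcripts together reveal the secret. A minimal Python sketch, with toy (insecure) parameters:

```python
import random

# Toy group parameters (insecure sizes): q | p - 1, and g generates
# the subgroup of order q.
p, q = 1019, 509
g = pow(2, (p - 1) // q, p)

x = random.randrange(1, q)     # the prover's secret
y = pow(g, x, p)               # public key y = g^x mod p

def honest_prover():
    """Returns (commitment, respond); respond(c) answers challenge c.
    Classically we may call respond twice on one commitment -- that
    reuse of saved state is exactly what 'rewinding' means."""
    k = random.randrange(1, q)
    a = pow(g, k, p)                   # commitment a = g^k mod p
    def respond(c):
        return (k + c * x) % q         # response z = k + c*x mod q
    return a, respond

# The extractor rewinds: same commitment, two different challenges.
a, respond = honest_prover()
c1, c2 = 3, 7
z1, z2 = respond(c1), respond(c2)

# Two accepting transcripts (a, c1, z1) and (a, c2, z2) reveal the
# secret: x = (z1 - z2) / (c1 - c2) mod q.
x_extracted = (z1 - z2) * pow(c1 - c2, -1, q) % q
assert x_extracted == x
print("extracted secret:", x_extracted)
```

Classically, the prover’s state is just ordinary data that can be saved, copied, and rerun at will.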

This all makes sense for classical algorithms. But the program state of a quantum algorithm is very delicate, and in general is irreversibly destroyed when running the adversary. Therefore, there is not necessarily a state to rewind back to.
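A toy state-vector simulation shows the obstacle: once a superposition is measured, the original amplitudes are gone, so there is no saved state to return to.

```python
import math
import random

# Toy single-qubit simulation: measuring collapses the state.
state = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # (|0> + |1>)/sqrt(2)

def measure(state):
    """Sample an outcome and collapse the state in place."""
    outcome = 0 if random.random() < abs(state[0]) ** 2 else 1
    state[0], state[1] = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
    return outcome

print("outcome:", measure(state))
print("post-measurement state:", state)   # the superposition is destroyed
```

And unlike classical data, the no-cloning theorem rules out simply copying the quantum state before running the adversary.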

The LWE (learning with errors) problem, which figures in your collapsing hash function, is behind a lot of advances in cryptography, such as post-quantum hardness, fully homomorphic encryption, etc. Is that just a given in this paper? Is the collapsing nature of the hash what’s novel here?

Yes, LWE has become one of the pillars of cryptography, both for its post-quantum security and wide range of novel applications.

Collapsing hashes based on LWE were known before our work. What was novel about our result is that we give a very powerful technique to facilitate rewinding in the quantum setting. Concretely, we show, in many cases, that it is possible to repair the delicate program state of the quantum adversary in order to rewind to a previous state, as you would classically.
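For a sense of what these lattice hashes look like, here is a minimal sketch of Ajtai’s SIS-style construction H_A(x) = A·x mod q; a collision is a short vector z with A·z ≡ 0 (mod q), which is the SIS problem. The collapsing property is a strictly stronger, inherently quantum notion than collision resistance, and the parameters below are toy values, far too small for real security.

```python
import random

random.seed(0)
q, n, m = 257, 4, 64    # modulus, output dimension, input dimension (toy sizes)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]  # public matrix

def hash_sis(x):
    """H_A(x) = A·x mod q for a binary vector x of length m.
    Compressing: 64 input bits -> 4 entries mod 257 (about 32 bits)."""
    assert len(x) == m and all(b in (0, 1) for b in x)
    return tuple(sum(A[i][j] * x[j] for j in range(m)) % q for i in range(n))

x = [random.randrange(2) for _ in range(m)]
print(hash_sis(x))
```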

Is there anything you could add to indicate how this paper contributes to the post-quantum cryptography landscape?

Rewinding is one of the main areas where classical proof techniques are known to have difficulty quantumly. Prior work had shown how to rewind just a constant number of times, and achieving more rewinds was a well-known open problem. Our work shows that you can rewind as much as you like. We anticipate it having many applications.

The concrete application in the paper is something quite interesting. Suppose you have a proof of some mathematical statement, and you want to convince me that the statement is true. You could just send me the proof. But maybe the proof is extremely long, and I’m lazy and don’t want to read it. Almost 30 years ago, Kilian showed that you could do something amazing: you and I can engage in a protocol whereby you convince me that the statement is true, but the total communication is tiny and my running time is very fast, both essentially independent of the length of the proof.

However, the classical security reduction (showing that you cannot cheat and convince me of a false statement) requires polynomially many rewinds, essentially as many as the number of bits in the original long proof. Prior quantum techniques could not handle this reduction. Without a compatible reduction, maybe you could actually convince me of a false statement. Our result shows that there is nothing to worry about, provided you use a collapsing hash function inside the protocol.
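The mechanism at the heart of Kilian’s protocol is a Merkle-tree commitment: the prover commits to the long proof with one short hash, and then opens only the few positions the verifier asks about, each with a logarithmic-size authentication path. Here is a minimal Python sketch, with SHA-256 standing in for the collapsing hash the security proof requires.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_tree(leaves):
    """Build all levels of the tree; len(leaves) must be a power of two."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                      # levels[-1][0] is the root commitment

def open_position(levels, i):
    """Authentication path for leaf i: one sibling hash per level."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])      # sibling of the current node
        i //= 2
    return path

def verify(root, i, leaf, path):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if i % 2 == 0 else h(sib + node)
        i //= 2
    return node == root

proof = [bytes([b]) for b in range(16)]     # a toy 16-symbol "proof"
levels = merkle_tree(proof)
root = levels[-1][0]                        # the short commitment sent first
i = 5                                       # a verifier query position
assert verify(root, i, proof[i], open_position(levels, i))
print("opened position", i, "against a", len(root), "byte commitment")
```

In the actual protocol, the leaves are symbols of a probabilistically checkable encoding of the long proof; soundness against quantum provers is exactly where the collapsing property of the hash comes in.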

Are there any other forthcoming papers you’d like to mention? Or ongoing research or interests, whether related to post-quantum cryptography or obfuscation or some other area?

There is clearly a lot of work left to be done in understanding the threats and also the opportunities of quantum computers for cryptography. Much of my ongoing work seeks to further this goal.

For something not quantum, consider an eavesdropper Eve listening in on communication between Alice and Bob. The communication is encrypted, so Eve cannot learn the contents right now. However, at some point in the future, Alice or Bob’s key may be compromised. As security breaches happen all the time, this is not an unrealistic scenario. At this point, Eve learns the secret decryption key. Is all hope lost for the security of Alice and Bob’s prior messages?

In a work that was just accepted to EUROCRYPT 2022, along with Jiaxin Guan (one of my students) and Daniel Wichs (another NTT Research scientist), we argue that all hope is not lost: Alice and Bob’s messages could have been so long that it would be inconvenient or even infeasible for Eve to store them for long periods of time. We construct new public-key encryption schemes where messages remain completely hidden even if Eve eventually learns the secret key, so long as Eve could not store the complete ciphertext until the key is leaked. Even if Eve is only interested in learning a tiny fraction of the message – even a single bit – she would need to store the ciphertext in its entirety. Such an encryption scheme therefore provides some resilience to future security breaches. Some of my ongoing work will be improving the protocols we’ve developed in various ways.
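The construction in the paper is public-key and more involved, but the much older symmetric-key idea behind the model (Maurer’s bounded-storage model) can be sketched in a few lines: the ciphertext carries a long random pad, and the key just says which few positions of the pad mask the message. An eavesdropper who cannot store the whole pad at transmission time learns nothing later, even if handed the key.

```python
import random

PAD_LEN = 1_000_000      # toy "huge" pad; the point is Eve can't store it all

def encrypt(key, message_bits):
    """Ciphertext = (long random pad R, message masked at secret positions)."""
    R = [random.randrange(2) for _ in range(PAD_LEN)]
    body = [m ^ R[i] for m, i in zip(message_bits, key)]
    return R, body

def decrypt(key, ciphertext):
    R, body = ciphertext
    return [c ^ R[i] for c, i in zip(body, key)]

msg = [1, 0, 1, 1]
key = random.sample(range(PAD_LEN), len(msg))   # secret pad positions
assert decrypt(key, encrypt(key, msg)) == msg
```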
