Arnab Roy

Research Manager, Fujitsu Labs of America

Transcript of the presentation A Brief History of Quasi-Adaptive NIZKs, given at the NTT Upgrade 2020 Research Summit, September 30, 2020

Hello everyone, this is Arnab from Fujitsu Labs of America. Today, I’m going to talk about Quasi-Adaptive NIZKs. The motivation for Zero-Knowledge goes back to the heart of why we need cryptography: identity, ownership, community and control. Much of cryptography exists today to support controlled communication among individuals. In the modern world, we also consider devices as extensions of individuals, and corporations as communities. Here is how Zero-Knowledge fits in this picture. What defines the boundary of an individual is the ability to hold a secret, which may itself be attached to the ownership of some asset. We want the ability to use the secret to prove ownership of this asset.


However, giving out the secret itself essentially announces ownership, since anybody else can then do the same. Zero-Knowledge gives us tools to prove ownership without revealing the secret. The notion of proving ownership of a digital object without revealing it sounds very paradoxical outside the world of cryptographers, so it came as a surprise when this notion was formalized and constructed by Goldwasser, Micali and Rackoff in the 1980s. We will focus on the non-interactive version of Zero-Knowledge, or NIZK, in this talk, which was first developed by Blum, Feldman and Micali. While Zero-Knowledge can span multiple rounds of communication, a NIZK only allows a single message to be transmitted.


Now let’s get into some technical details on NIZKs. The objective of a NIZK is to show that an object x, which you can think of as the public footprint of an asset, belongs to an NP language, without revealing its witness w, which you can think of as the secret. A NIZK scheme consists of three algorithms: KeyGen, Prove and Verify. The key generation process is executed by a trusted third party at the very outset, resulting in a Common Random String, or CRS, which is made public. The Prove algorithm produces a proof pi based on the CRS, x and w. The Verify algorithm checks the proof against x and accepts or rejects.
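
To make the shape of these three algorithms concrete, here is a minimal interface sketch in Python. It is purely illustrative: the function and type names are hypothetical and do not come from the talk or any particular library.

```python
# Illustrative NIZK interface; names, types and structure are hypothetical.
from dataclasses import dataclass
from typing import Any

@dataclass
class CRS:
    params: Any  # the Common Random String, published once by a trusted party

def keygen() -> CRS:
    """KeyGen: run once by a trusted third party; the resulting CRS is made public."""
    raise NotImplementedError

def prove(crs: CRS, x: Any, w: Any) -> Any:
    """Prove: produce a proof pi for the object x using the secret witness w."""
    raise NotImplementedError

def verify(crs: CRS, x: Any, pi: Any) -> bool:
    """Verify: check the proof pi against x alone (no witness) and accept or reject."""
    raise NotImplementedError
```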


A NIZK of course has to satisfy some properties. We need it to be correct, which basically says that when everyone follows the protocol correctly and honestly we get the expected outcome. We need it to be sound, which says that a false statement cannot be proven correct. Zero-Knowledge is a trickier property to formalize. How do we capture the intuition that the proof carries no knowledge of the witness? One way is to imagine there are two worlds. There’s the real world, where the proof is calculated using the witness, and there’s a simulation world, where the proof is calculated without a witness. To make this possible the simulator may have some extra information about the CRS, which is independent of the object itself. The property then requires that it is not possible to effectively distinguish these two worlds.


It is especially challenging to construct NIZKs compared to encryption and signature schemes. In particular, in signature schemes the analog of the prover can use a secret key, and in public-key encryption the analog of the verifier can use a secret key. But in NIZKs, neither the prover nor the verifier can hold a secret key. In this talk, I’m going to focus on Linear Subspace Languages. This class underlies hardness assumptions like DDH and DLIN and has proved extremely useful in crypto constructions. This is how we express DDH and DLIN as Linear Subspace Languages.


We will use additive notation and express the discrete logs as linear group actions on group elements. Using this syntax, we can write down DDH and DLIN tuples very naturally, as a witness vector times a constant matrix. So we can view the language as being parametrized by a constant language matrix. We will use hard bilinear groups in our constructions. What does that mean? While a standard group allows additions in the exponent, a bilinear group also allows one multiplication. In such groups, we can state various linear subspace assumptions. DDH is the simplest one: it assumes that sampling from a one-dimensional subspace is indistinguishable from sampling from the full two-dimensional space.
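
As a quick sanity check of the "witness vector times constant language matrix" view, here is a tiny Python sketch that works directly with the discrete logs modulo a small prime. This is my own illustration, not from the slides: a real instance lives inside the group, where these exponents stay hidden.

```python
# Toy illustration: a DDH tuple as a witness times a constant language matrix,
# working with exponents mod a prime (stand-in for the hidden discrete logs).
import secrets

q = 101  # small prime standing in for the group order (illustrative only)

# Language matrix in the exponent: for generators g and f = g^a, the matrix is
# A = [1, a], and language members are w * A = (w, a*w), i.e. (g^w, f^w).
a = secrets.randbelow(q)
A = [1, a]

w = secrets.randbelow(q)                 # the witness
member = [(w * col) % q for col in A]    # exponents of a language member (g^w, f^w)

# A tuple is in the language exactly when its second exponent is a times the
# first; under DDH this is hard to tell once the exponents are hidden in the group.
assert member[1] == (a * member[0]) % q
```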


The decisional linear assumption assumes the same for two- versus three-dimensional spaces. Generalizing this sequence of assumptions, the k-Linear assumption asks to distinguish samples from a k-dimensional subspace from fully random samples in a (k+1)-dimensional space. Groth and Sahai came up with a breakthrough NIZK construction in Eurocrypt 2008. In particular, their NIZK for linear subspaces was the first efficient construction based on DDH and DLIN. Structurally it consisted of two parts: a commitment to the witness, and an equation-proof part encoding how the witness actually corresponds to the object. The number of group elements in the proof is linear in the number of witnesses and the number of group elements in the object.


The quest remained to build even shorter NIZKs. The CRS itself seemed to provide some scope. Groth-Sahai proofs use a fixed CRS that works for an entire class of languages. Maybe there is a way to improve proof efficiency at the cost of having a tailored CRS for each given language. This is what motivates Quasi-Adaptive NIZKs, where we let the CRS depend on the language itself. In particular, we did not require the discrete logs of the language constants to generate the CRS, but we did require these constants to be generated from witness-samplable distributions. This still turns out to be sufficient for many applications. The construction achieved perfect Zero-Knowledge, which was universal in the sense that the simulator was independent of the language.


However, soundness is computational. Here is how the construction differed from Groth-Sahai at a very high level. The language constants are embedded into the CRS in such a way that the object functions as its own commitment, so we end up not needing any separate commitment in the proof itself. Our particular construction also needed fewer elements in the equation proof. On the flip side, the CRS blows up to quadratic size instead of constant.


Let’s get into the detailed construction, which is actually pretty simple to describe. Let the language be parametrized by a G1 matrix A, with the witness ranging over [unintelligible]. We sample random matrices D and B with appropriate dimensions. Then we construct the public CRS in two parts. CRSp is meant to be used by the prover, and it is constructed by multiplying the language matrix with D and B inverse. CRSv is the part that is meant to be used by the verifier, and it is constructed using D times B and B, embedded in G2.


Now let’s say you’re asked to compute a proof for a candidate x with witness w. We compute it simply as a product of the witness with CRSp. The verification of the proof simply checks whether the pairing of the candidate and the proof with the CRSv matrix is equal to zero. If you look carefully, CRSv essentially embeds in G2 the kernel of the matrix formed by the language matrix A and CRSp. This is what is responsible for the correctness of the scheme. The Zero-Knowledge property is also straightforward given the trapdoor matrices D and B.
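
To see the kernel idea at work, here is a small exponent-level sketch in Python. It is my own simplification of the structure just described: it collapses D and B into a single secret K so as to show only the correctness mechanism, with dot products modulo a prime standing in for the pairings between G1 and G2.

```python
# Exponent-level sketch of why verification accepts honest proofs (my own
# simplification; the real scheme keeps these exponents hidden in G1/G2 and
# replaces the dot products below with pairings).
import secrets

q = 101  # small prime standing in for the group order

def randv(n):
    return [secrets.randbelow(q) for _ in range(n)]

# Language matrix A (1 x 2): members are x = w * A, as in the DDH example.
a = secrets.randbelow(q)
A = [1, a]

# CRS generation: pick a secret column K and publish
#   CRSp = A . K        (used by the prover, "in G1")
#   CRSv = (K, -1)      (used by the verifier, "in G2"); note [A | CRSp] . CRSv = 0.
K = randv(2)
CRSp = sum(Ai * Ki for Ai, Ki in zip(A, K)) % q
CRSv = K + [q - 1]

# Prove: pi = w * CRSp.  Verify: check that (x, pi) is annihilated by CRSv.
w = secrets.randbelow(q)
x = [(w * Ai) % q for Ai in A]
pi = (w * CRSp) % q

check = (sum(xi * vi for xi, vi in zip(x, CRSv[:2])) + pi * CRSv[2]) % q
assert check == 0  # honest proofs always pass: this is the kernel identity
```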


While correctness and Zero-Knowledge are relatively simple to prove, proving soundness is tricky. The central observation is that, given CRSp, there is still enough entropy in D and B to randomize CRSv. In particular, CRSv can be expanded to have an additional component with a random sample from the kernel of the language. This transformation is purely statistical. Now we essentially embed the DDH or k-Linear challenge in the additional kernel part. In this transformed setting we then show that an alleged proof on a bad candidate can be used to distinguish whether a subspace sample or a full-space sample was used as the challenge. The need to have the kernel of the language in this setting is the technical reason why we need the language to come from a witness-samplable distribution.


Let’s give a simple illustration of the proof system on a standard Diffie-Hellman language in G1, with the hardness assumption being DDH in G2. The language is defined by G1 elements g and f, with tuples of the form (g to the w, f to the w). The CRS is generated as follows. We sample d and b at random and compute CRSp as g to the d times f to the b inverse, and CRSv as (g2, g2 to the b, g2 to the bd). The proof of the tuple (g to the w, f to the w) is computed using w as CRSp raised to the power w.


Note that this is just a single element in the group. The verification is done by pairing the tuple and the proof with the CRSv elements and then checking equality. The simulator can easily compute the proof using the trapdoors d and b without knowing w. Following our Asiacrypt paper, Libert, Peters, Joye and Yung reduced the proof size to a constant under DLIN, independent of the number of witnesses and object dimensions. Finally, at Crypto 2014, we optimized the proof to one group element under DDH. In both works, the CRS was reduced to linear size. The number of pairings needed for verification was also reduced to linear.
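
Here is the toy Diffie-Hellman example worked end to end in Python, again at the level of exponents modulo a prime. The exact placement of d and b is my reconstruction from the description above (CRSp as a single element g^d * f^(1/b), CRSv as (g2, g2^b, g2^(bd))), so take it as a consistent illustration rather than the paper's exact scheme; a real implementation would keep everything in G1 and G2 and do the final check with pairings.

```python
# Toy DDH-language QA-NIZK, worked in the exponent (my reconstruction of the
# illustration above; a real implementation keeps these values in G1/G2 and
# checks the final equation with pairings, never seeing the exponents).
import secrets

q = 2**61 - 1  # a prime, standing in for the group order

def inv(x):
    return pow(x, q - 2, q)  # modular inverse via Fermat's little theorem

# Language parameters: generators g and f = g^a, i.e. language matrix (1, a).
a = secrets.randbelow(q - 1) + 1

# KeyGen: sample trapdoors d, b and publish
#   CRSp  ~ g^d * f^(1/b)            -> exponent d + a/b
#   CRSv  ~ (g2^(b*d), g2, g2^b)     -> exponents (b*d, 1, b)
d = secrets.randbelow(q - 1) + 1
b = secrets.randbelow(q - 1) + 1
crs_p = (d + a * inv(b)) % q
crs_v = ((b * d) % q, 1, b)

# Prove: pi = CRSp^w, a single group element.
w = secrets.randbelow(q)
x = (w % q, (a * w) % q)             # exponents of the tuple (g^w, f^w)
pi = (w * crs_p) % q

# Verify: e(g^w, g2^(bd)) * e(f^w, g2) == e(pi, g2^b), i.e. in the exponent:
assert (x[0] * crs_v[0] + x[1] * crs_v[1]) % q == (pi * crs_v[2]) % q

# Simulate (zero-knowledge): with trapdoors d and b, the same proof can be
# produced directly from the candidate, without the witness w.
pi_sim = (x[0] * d + x[1] * inv(b)) % q
assert pi_sim == pi
```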


This is the Crypto ’14 construction in a nutshell. The construction skeleton remains more or less the same as in ’13, but the core observation was that many of the CRS elements could be randomly combined while still maintaining soundness. These extra randomizers are depicted in red on this slide. This random combination of the CRS elements resulted in a reduction of both the proof size and the number of pairings required for verification. In Eurocrypt 2015, Kiltz and Wee came up with a beautiful interpretation of QA-NIZKs based on the concept of Smooth Projective Hash Functions. This slide is oversimplified but illustrative of the construction.


This system has four connecting puzzle pieces: the witness w, the language matrix M, a key K and a key hider A. Only the hidden version of the key is given publicly in the CRS. Now, when we have a good object, the pieces fit together nicely in a detectable way. However, when we have a bad object, the pieces no longer fit, and it becomes infeasible to come up with a convincing proof. Zero-Knowledge is demonstrable by giving the key to the simulator and observing that the key is independent of the language matrix.
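
In the same exponent-level style, here is a rough Python sketch of how those four pieces can fit together, following the hash-proof-system reading: a secret key K, its projection given to the prover, and a hidden version of K (blinded by the key hider) given to the verifier. The concrete shapes here are my simplification, not the exact Kiltz-Wee scheme.

```python
# Exponent-level sketch of the hash-proof-style view (my simplification):
# a secret key K, its projection P = M^T K for the prover, and a hidden
# version C = K * hider for the verifier.
import secrets

q = 2**61 - 1  # a prime, standing in for the group order

def randv(n):
    return [secrets.randbelow(q) for _ in range(n)]

# Language: x = M w for a column matrix M (2 x 1), i.e. the DDH language again.
m = randv(2)                              # M as a column vector
K = randv(2)                              # the secret hash key
hider = secrets.randbelow(q - 1) + 1      # the key hider ("in G2")

P = sum(mi * ki for mi, ki in zip(m, K)) % q   # projection M^T K, given to the prover
C = [(ki * hider) % q for ki in K]             # hidden key K*hider, given to the verifier

# Prove with witness w: pi = w * P (equivalently x^T K for an honest x).
w = secrets.randbelow(q)
x = [(w * mi) % q for mi in m]
pi = (w * P) % q

# Verify: e(x^T, C) == e(pi, g2^hider), i.e. x^T K * hider == pi * hider in the exponent.
assert sum(xi * ci for xi, ci in zip(x, C)) % q == (pi * hider) % q
```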


Through the years we have extended, enhanced and applied the basic system, especially with our collaborators Masayuki Abe, Miyako Ohkubo, Jiaxin Pan and Yuyu Wang. Based on QA-NIZKs we were able to construct very efficient Identity-Based Encryption, Structure-Preserving Signatures, publicly verifiable CCA-secure encryption, Blind Signatures, Group Signatures, Password-Based Key Exchange and so on. It has also been gratifying to see the community make leaps and bounds using these ideas and also use QA-NIZKs in practical deployments. Before finishing off, I wanted to tell you a little bit about some exciting activities going on in Hyperledger that are relevant for cryptographers. Hyperledger is an open-source community for enterprise-grade blockchain. It is hosted by the Linux Foundation and enjoys participation from numerous industry groups. Fujitsu co-founded two efforts in Hyperledger.

We have Ursa, which is poised to be the crypto hub for all blockchain needs, and Cactus, a platform for cross-blockchain transactions. I have put up the links on the slide here. We would love participation from NTT and friends.


So that was a brief history of QA-NIZKs. Thanks for giving me the opportunity, and thanks for listening.
