Upgrade 2021: CIS LAB Speakers


Towards Accountability in CRS Generation

Vipul Goyal, Senior Scientist, NTT Research Cryptography & Information Security

Transcript of the presentation Towards Accountability in CRS Generation, given at the NTT Upgrade 2021 Research Summit, September 21, 2021.

Vipul Goyal: So this talk will be about a sort of a new direction towards accountability in CRS generation. And I hope that you can see my laser pointer as well. So we are all familiar with the common reference string model, where we have a trusted authority and this trusted authority publishes the CRS, so to speak. And different parties can trust that the CRS was generated correctly, and using that we can perform all sorts of cryptography. And the motivation is that there are several cryptographic primitives which cannot be realized in the plain model, which we can realize in the CRS model. The most well-known example is non-interactive zero knowledge (NIZK). We also have maliciously secure two-round 2PC and MPC.


In the plain model, if you want to do maliciously secure computation, you need at least four rounds. Whereas if you go to the CRS model, you can do with as low as two rounds. We also have universally composable secure computation and so on. So, the CRS model has played an important role in cryptography. It has allowed us to realize important primitives. But of course, the downside of this model is that we do need this trusted authority, which everybody has to trust unconditionally. So this brings me to the question in the real world, “Do there really exist trusted parties?” And I would try to argue that in real life, there’s no such thing as an unconditionally trusted party, unless it is backed up by some sort of accountability.


And if we look around, we can see examples everywhere. So who can we trust? What about a democratically elected leader of one of the largest, oldest and most stable democracies? Can we trust him? Well, you can decide for yourself. What about large companies? Can we trust large companies? So, there’s no shortage of, sort of, systematic scandals. Like for example, there was this Volkswagen scandal where a cheat device was installed on the cars systematically to pass emission standards. There was the Facebook and Cambridge Analytica data scandal.


Wells Fargo, opening customers’ accounts without their knowledge, and so on. So essentially, if you look at large companies, unless there’s some sort of accountability, unless, you know, they have some sort of fear that something will be done against them, then it’s not clear if they can really serve as trusted parties. So now, what happens if this trust goes wrong? What if the CRS authority is malicious and publishes the CRS in a corrupt way? In that case, all bets are off. For example, if you are trying to use the CRS for MPC, then this authority can even recover your input, given the protocol transcript, and sell it to anyone. Coming to NIZK, this authority can recover, for example, your witness and sell your witness, and so on.


And even if you find out that something fishy is going on, it’s not clear that you can prove anything to anyone. Like, you can blame the authority publicly, but then anybody can blame the authority. There’s no way to check if it is correct or wrong. So what can we do? One option is to just move away from the CRS model, but this option is clearly unsatisfactory. The CRS model has led to a very successful line of research. So for example, can we really live without NIZKs? Or can we live without two-round MPC? And so on. So previous works have recognized these types of problems and there are mitigation approaches, for example, super-polynomial simulation security. Or another approach is you can run an MPC to generate the CRS.


And this MPC can run in a large number of rounds. This is the approach which was taken by Zcash. There’s the multi-string model, where there are multiple CRSs and then maybe a majority of them come from honest parties, and so on. So we do continue on this line of research, but our question is a little bit different. The question is, “Can we move towards some notion of accountability?” So we do place some trust in this CRS generation authority, but if the trust goes wrong, if the authority becomes corrupt, then we want some notion of accountability. And I would argue that this is closer to how the real world works. In the real world, we do place trust in others. So for example, we place a lot of trust in elected officials, even in large companies and so on, but in real life, this trust is almost always backed by some sort of accountability.


So we want to try to create a situation where cheating will have an expected negative return for the authority. So can we eliminate trust entirely? Well, this may be hard. If you eliminate trust entirely, then you are back to the plain model, where we already have several impossibility results, for NIZKs, for 2PC, and so on. So in this work, we are concerned with a very specific scenario. So suppose you participate in some MPC protocol. And then, you find out that the authority took the transcript, recovered your input, and revealed it to another party. So in such an attack against you, can you obtain a cryptographic proof?


And can you even take this proof to a judge and convince the judge that the party was indeed dishonest? So in other words, we have this authority, malicious authority, who produces the CRS. Then, Alice and Bob, they have private inputs, X1 and X2, they run some 2PC. The transcript is ‘T,’ and then there’s some authority, sorry, not authority, there’s some party which has this transcript. It could be one of these two parties, or it could be anybody else. And this party asks the authority to run sort of a backdoor service. It pays the authority, it gives the transcript to the authority. The authority, using some kind of trapdoor associated with the CRS, is able to recover the input of one of the parties from this transcript T, and send it back, in exchange for some payment.


So if this is happening, can you prove this to a judge? And of course there are natural limits even here on what can be achieved. For example, if the party sells your input, but if you never find out, then it’s unclear if anything can be done or who will be the person who will do anything. But if the sold input indeed falls into your hand, for example, this party right here, maybe this party was your agent or something like that. Then you do know that the authority is dishonest and distributing your input. Can you obtain a cryptographic proof of this fact and convince a judge? So we will get to the more detailed final notion in a minute, but our notion is inspired by other notions, such as Traitor Tracing, and there’s this accountable authority IBE and ABE [A-IBE and A-ABE]. So, connections to these might be clear at a later point.


Okay. So let’s look at the specific scenario again. So suppose we have this malicious authority who is running this backdoor service. You give the transcript and you get back the input. Then our goal is to design an extractor, which can also use this backdoor service to generate a proof that the authority was dishonest. So an extractor queries with another transcript, T’, gets back X1′, which is supposedly the input inside this transcript, and now the extractor is able to output a cryptographic proof of dishonesty of this authority. So a couple of things to keep in mind: that if the authority recognizes that it’s talking to an extractor, rather than somebody in the real world, then it may not open the input.


So thus, the queries that this extractor makes to this authority should look like real queries. Another thing to keep in mind is that the extractor doesn’t have the code of the authority or anything like that; the extractor cannot rewind the authority. The extractor can make a single query to this authority, just as this party would, and the extractor is supposed to output a proof. So when we try to think about this notion and try to achieve it, there are a couple of problems immediately. So what if the input X1 can be easily guessed? So in that case, the malicious authority might not even need the CRS trapdoor, and can just output X1 given the transcript. So in that case, it’s unclear if the extractor can output any cryptographic proof of dishonesty, because the trapdoor associated with the CRS has never been used.


So that’s why we assume that the inputs are sampled from a known distribution, D, just for this part of the definition. And this distribution will be uniform for our constructions. So we assume that the honest party is using a uniform input, which is hard to guess for anybody. And also, this definition is plausible only for functionalities where the output, in some sense, doesn’t leak the input. Because the output might be contained in the transcript, it should not allow anyone to compute the input; otherwise such an extractor cannot exist. So let’s go to the definition a little more formally. So the first criterion for this 2PC is that, first of all, it should satisfy the standard ideal/real 2PC definition.


No input distribution, nothing like that. We want to obtain a strict strengthening of 2PC. Then second is the property which is called accountability, which should hold for inputs coming from the uniform distribution, or any other fixed distribution. So this is formalized using two PPT algorithms: an Extractor and a Judge. So, suppose the authority is running this backdoor service, and decrypts the input with some non-negligible probability f. Then, there should exist an extractor which outputs a proof, P, with another non-negligible probability g, such that the judge accepts this proof. So the judge algorithm, on being given as input this proof P, outputs 1.


1 means that the authority is indeed corrupted. So the authority is implicated. And on the flip side, to protect an honest authority, we have the defamation-free condition. So if the authority is honest, then the probability that any PPT adversary can output a proof, P, which the judge accepts, should be negligible.

So let’s look at the accountability definition in a little more detail. So we have two experiments. First is the accountability real experiment, and then there’ll be an extraction experiment. So in the real experiment, we have this malicious authority. The malicious authority can generate the CRS, CRS*, in any arbitrary way. And this authority also chooses one of the inputs. Let’s call it X*. And then, a 2PC protocol is run where the honest party’s input is chosen from the uniform distribution, and the corrupted party’s input is X*. And then, for now, let’s say this 2PC protocol is run honestly.


And the transcript, T, is given back to the authority to avail the backdoor service. And lastly, this authority responds back with an input X’. And the output in this experiment is 1 if, and only if, X equals X’, which means that the authority is indeed giving back the correct input. Now, there’s the accountability extraction experiment, where the same thing happens. The authority chooses a corrupted CRS and an input X*. The extractor prepares this 2PC transcript in some way, where the input of the honest party is chosen at random. The transcript is sent to the malicious authority. The malicious authority might abort or might respond.


Let’s say the malicious authority sends X’. And then the extractor outputs the proof, P, based on this interaction. The output of this experiment is 1 if the judge is convinced that the proof, P, is correct, which means the authority is corrupted. So what we want is that if in the real experiment the output is 1, the authority is indeed decrypting inputs, then in this extraction experiment the output must also be 1, which means that the judge is convinced. So for every PPT authority, A, if there exists a non-negligible function f such that the output in the real experiment is 1 with probability f(Lambda), where Lambda is the security parameter, then there exists a PPT extractor, E, and another non-negligible function g, such that the output in this extraction experiment is also 1 with probability g(Lambda).
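The two experiments just described can be sketched schematically in Python. This is purely an illustrative model, not part of the paper: the authority, extractor, and judge are stand-in callables, and the 2PC run is collapsed into a single function.

```python
# Schematic sketch of the two accountability experiments. All interfaces
# here (setup, backdoor, prepare, implicate) are illustrative assumptions.
import secrets

def real_experiment(authority, run_2pc, n_bits=32):
    """Accountability 'real' experiment: output 1 iff the authority
    recovers the honest party's uniformly random input from a transcript."""
    crs, x_star = authority.setup()            # corrupted CRS + corrupt party's input
    x = secrets.randbits(n_bits)               # honest input, sampled uniformly
    transcript = run_2pc(crs, x, x_star)       # protocol run honestly
    x_prime = authority.backdoor(transcript)   # one query to the backdoor service
    return 1 if x_prime == x else 0

def extraction_experiment(authority, extractor, judge, n_bits=32):
    """Extraction experiment: output 1 iff the judge accepts the proof
    the extractor produces from a single backdoor query."""
    crs, x_star = authority.setup()
    transcript, state = extractor.prepare(crs, x_star, n_bits)
    x_prime = authority.backdoor(transcript)   # the authority may also abort
    proof = extractor.implicate(state, x_prime)
    return 1 if judge(crs, proof) else 0
```

The definition then asks: whenever the real experiment outputs 1 with non-negligible probability f, there is an extractor making the extraction experiment output 1 with some non-negligible probability g.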


So f and g, ideally, should be very close to each other. Maybe they should just be equal, but in our constructions, f and g might be slightly different. They are polynomially related; g might be smaller than f, but still non-negligible. So let me pause here to see if there are any questions. Hopefully we have some mechanism to ask questions.


Okay. So let me continue. So here are our main results. First, we have an impossibility result for some functionalities. In particular, if the functionality fully reveals the inputs given the output, then as we already discussed, there cannot exist a construction in this model. So there exists a two-party functionality such that a 2PC for it in the CRS model satisfying both the accountability and defamation-free properties is impossible. And then we have a positive result for a large class of functionalities. Assuming SXDH on bilinear maps, there exists a two-round maliciously secure 2PC for all partial input hiding functionalities, which satisfies the definition.


What are partial input hiding functionalities? Very briefly, this class of functionalities includes, for instance, oblivious transfer, PIR, subset sum, and so on. And this condition is saying the output must information-theoretically hide some bits of the input. So the output f(X1, X2) is such that for every X2, the output only depends on some strict subset of the bits of X1. So X2, in some sense, selects some subset of the bits in X1, and the output is computed from that. So we also consider NIZK in this model: NIZK with accountable CRS. And the way to define it is, we have four algorithms: GenCRS, Prove, Verify, and Judge.
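The partial-input-hiding condition can be illustrated with 1-out-of-2 oblivious transfer, which the talk lists as a member of the class. This toy sketch (not from the paper) views the sender's input as two halves and shows that, for every fixed receiver choice, the output touches only a strict subset of the sender's bits.

```python
# Toy illustration of partial input hiding via 1-out-of-2 OT:
# f(x1, x2) = m_{x2}, where x1 = (m0, m1). For every fixed x2, the
# output depends on a strict subset of x1's bits; the rest stay
# information-theoretically hidden.
def ot_functionality(x1_bits, choice):
    """x1_bits: sender input, viewed as two equal halves (m0, m1)."""
    half = len(x1_bits) // 2
    m0, m1 = x1_bits[:half], x1_bits[half:]
    return m1 if choice else m0

def touched_indices(choice, n):
    """Indices of x1 the output depends on, for a fixed choice bit."""
    half = n // 2
    return set(range(half, n)) if choice else set(range(half))
```

For every choice, `touched_indices` returns a strict subset of all the sender's bit positions, which is exactly the condition the positive result requires.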


And this is again defined with respect to a distribution D. So, first of all, GenCRS, Prove and Verify should be a standard NIZK system for all of NP. And then, the whole system should satisfy accountability with respect to the distribution, and the defamation-free property. So in other words, let’s say the malicious authority is running this backdoor service. So here, the authority accepts the statement and the proof pi. So pi is the NIZK proof. And then using the trapdoor, the authority recovers the witness inside pi and sends it back to this party. If this happens, we require the existence of an extractor, which again can make a single query to this backdoor service, get some witness, w’, and then output a cryptographic proof of dishonesty.


And based on the same assumption, SXDH on bilinear maps, we prove that there exists a NIZK for all of NP in this model. So now, let me go to some of the ideas behind the NIZK construction. So, here’s a list of the tools that we will use. We will use a re-randomizable bit commitment scheme, which is perfectly binding and computationally hiding. Re-randomizable just means that given a commitment to a value X under randomness r, we are able to compute a fresh commitment to the same value X under fresh randomness r’. This can be instantiated based on the construction from GOS, and also this recent paper of Ananth et al.
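A perfectly binding, re-randomizable bit commitment can be sketched in the style of ElGamal commitments. This is not the GOS instantiation from the talk, just an illustrative stand-in, and the tiny group (p = 23) is of course insecure; it only demonstrates the re-randomization property.

```python
# Toy ElGamal-style re-randomizable bit commitment (illustrative only).
p, q = 23, 11          # p = 2q + 1; we work in the order-q subgroup of Z_p*
g = 4                  # generator of the order-q subgroup
h = pow(g, 5, p)       # h = g^a for a secret exponent a (a trapdoor no one should know)

def commit(x, r):
    """Perfectly binding commitment to bit x with randomness r in Z_q."""
    return (pow(g, r, p), pow(h, r, p) * pow(g, x, p) % p)

def rerandomize(com, r2):
    """Fresh-looking commitment to the same bit, under randomness r + r2."""
    c1, c2 = com
    return (c1 * pow(g, r2, p) % p, c2 * pow(h, r2, p) % p)

def open_commitment(com, x, r):
    """Check that com opens to bit x under randomness r."""
    return com == commit(x, r)
```

The key fact used later is that `rerandomize(commit(x, r), r2)` equals `commit(x, r + r2)`, so anyone who learns an opening of the re-randomized commitment, and knows r2, learns an opening of the original.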


We will use a non-interactive WI (NIWI) proof system, and we will also use a pseudorandom generator. So now, let me define the distribution, which is defined by an NP relation. So a statement here consists of a circuit C, commitments c1 to cm to a bunch of input bits, and a bit b. So the commitments hold the input to the circuit: c1 contains the bit X1, c2 contains X2, and so on. And b is the claimed output of the circuit. And then formally, in this NP relation, we are trying to prove that the circuit C indeed outputs the bit b on the committed input bits in c1 to cm.


So what is the witness here? The witness is basically just the openings of all of these commitments. So c1 to cm can be opened to X1 to Xm with the associated decommitment values r1 to rm, such that the committed input is indeed these bits X1 to Xm, and the circuit C outputs b on this input X1 to Xm. This input will also be denoted as X. And what is the distribution here? The distribution just says that this randomness, r1 to rm, is uniformly sampled.
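The relation can be sketched as a simple membership check. Here a hash commitment stands in for the real scheme (an assumption for brevity; the actual construction needs the perfectly binding, re-randomizable scheme from the tools list), and the circuit is modeled as a Python function on a bit list.

```python
# Sketch of the NP relation: statement (C, c1..cm, b), witness (X1..Xm, r1..rm).
import hashlib

def commit(bit, r):
    """Stand-in commitment: hash of the bit and the decommitment value."""
    return hashlib.sha256(bytes([bit]) + r).hexdigest()

def in_relation(statement, witness):
    """Accept iff every ci opens to Xi under ri, and C(X1..Xm) = b."""
    C, coms, b = statement          # circuit (as a function), commitments, claimed output
    bits, rands = witness           # committed bits and decommitment values
    opens_ok = all(commit(x, r) == c for x, r, c in zip(bits, rands, coms))
    return opens_ok and C(bits) == b
```

So handing over a valid witness for this relation necessarily hands over the openings of all of c1 to cm, which is what the extractor will exploit.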


So any valid witness for this NP relation consists of the values committed inside, as well as the openings. If the authority is running a backdoor service, this is what it has to respond with. So here is a starting proof system. The authority publishes a string, y, and the string can just be a uniform string for now. Then we have the prover and the verifier. The prover is trying to prove that the circuit C outputs b on this committed input. And the base proof system, the starting idea, will be using FLS. The prover sends a NIWI proof, proving that either X is indeed in R, or in other words, the circuit indeed outputs the bit b on this committed input, or there’s a trapdoor branch, which is that the CRS y is the output of a PRG.


And it’s easy to see soundness here: if the CRS is honestly generated, then with a very high probability, such a PRG seed does not exist. On the other hand, for zero knowledge, the simulator can generate an indistinguishable CRS, which is indeed a PRG output. So y is G(s) for some seed, s, and then using this branch, the simulator can prove. So this is just the starting proof system. What about accountability? Now we will try to modify this proof system slowly to add accountability. So the very high-level idea is that the authority now, in addition to y, the uniform string, will publish two commitments, com0 and com1. And these two commitments will be to 0 and 1, just two bits. So com0 is a commitment to zero, com1 is a commitment to one. And now, looking ahead, the very high-level intuition for the extractor will be: the extractor will take these commitments in the CRS, and will maul these commitments to produce the statement. And now, if the authority gives the witness corresponding to the statement, the authority ends up giving the opening of all of these commitments, which would allow the extractor to obtain an opening of this com0 and com1.


Which means that now you have, in some sense, obtained a trapdoor corresponding to the CRS, which is a witness, which is a proof, that the authority was dishonest. So, let’s go through it in more detail. So these commitments, com0 and com1, can just be ignored by the honest provers. The honest provers can proceed as usual. And the extractor will use these to create the statement plus the proof. So if the authority opens the witness, the extractor can learn the openings of these commitments, and send them to the judge as the proof. So here, let’s have a first attempt at making the extractor work. So maybe the extractor has to sample the statement. Maybe each of c1 to cm is either com0 or com1.


Depending upon what X1 to Xm are. If you want to commit to zero, you pick com0; otherwise, you pick com1. And the statement is indeed true: the extractor does pick an X where C(X) evaluates to b. So maybe the extractor can somehow create the proof that C(X) equals b. And now, if A ends up opening this, if A ends up giving a witness, it would consist of the opening of either com0 or com1. So this is sort of the high-level intuition behind our attempt, but there are several problems with it. The first problem is, if the commitments are coming from the commitments in the CRS, then the authority, A, can just distinguish and abort.

Right? If in the statement you put com0 and com1 from the CRS, the authority knows that. It’s looking a little bit different from a real proof. So the fix to this is simple: we will just re-randomize the commitments. So we will take com0 and com1, and we re-randomize them using fresh randomness each time to prepare c1 to cm. And now we say that c1 to cm are constructed by mauling the CRS.

The second problem is, if the extractor creates c1 to cm by mauling the CRS, the extractor might not have their openings. But in the NIWI, the prover must prove that C(X) equals b, where X is committed inside these commitments. And if you don’t have the openings, it’s not clear how you can prove it. So the fix to this problem is, you add a third branch to the NIWI. The extractor proves that even though it doesn’t have the openings, the value X inside is indeed correct. Which means that ci was derived by re-randomizing com0 if Xi is 0, and com1 if Xi is 1. So there are two ways to create these commitments. Either you prepare them from scratch, fresh, or you maul the CRS. But in both cases, you have to prove how you constructed these commitments.
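The mauling step, and why an opening from the authority implicates it, can be sketched as follows. The toy ElGamal-style group here (p = 23) is purely illustrative, not the scheme from the paper, and the function names are mine.

```python
# Sketch of the extractor's mauling step: each ci is a re-randomization of
# com0 or com1 from the CRS, so any opening the authority returns for ci
# yields an opening of a CRS commitment: the proof of dishonesty.
import secrets

p, q = 23, 11          # toy group, p = 2q + 1; illustrative only
g = 4                  # generator of the order-q subgroup
h = pow(g, 5, p)

def commit(x, r):
    return (pow(g, r, p), pow(h, r, p) * pow(g, x, p) % p)

def rerandomize(com, delta):
    c1, c2 = com
    return (c1 * pow(g, delta, p) % p, c2 * pow(h, delta, p) % p)

def maul_statement(com0, com1, x_bits):
    """Build c1..cm for the extractor's chosen bits by re-randomizing
    the CRS commitments with fresh deltas; the deltas are remembered."""
    deltas = [secrets.randbelow(q) for _ in x_bits]
    cs = [rerandomize(com1 if x else com0, d) for x, d in zip(x_bits, deltas)]
    return cs, deltas

def implicate(opened_r, delta):
    """From the opening the authority returns for ci, recover an opening
    of the underlying CRS commitment."""
    return (opened_r - delta) % q
```

If the authority decrypts the witness and hands back an opening of some ci, subtracting the extractor's delta gives an opening of com0 or com1, which the judge can check directly.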


And the last problem is, maybe the authority, A, cheats and publishes com0 and com1 incorrectly. Maybe both of them are commitments to 0. And you can’t ask the authority to use a NIZK to prove the correctness of the CRS, because the authority can cheat; the authority is the one picking the CRS. So the fix to this is, the authority will publish two pairs of commitments. And the authority will give a NIWI that one of the pairs is correctly generated, and the extractor will just choose one of the pairs at random to maul. Well, if the extractor ended up choosing the bad pair, then there are no guarantees. If the extractor ended up choosing the good pair, then we’re back to the previous scenario.


So the probability of the success of the extractor goes down by at most a factor of half, which is fine with us; the extractor just has to succeed with some noticeable probability. So here’s the summary: accountability follows from perfect re-randomization and witness indistinguishability, and the defamation-free property follows from the hiding of the commitment. And I am kind of running out of time. So let me skip the ideas behind the 2PC construction, just a couple of slides. And let me go to the open problems. So we only considered the 2PC setting. Can we extend that to MPC? It’s a natural problem. Can we expand the class of functionalities beyond partial input hiding?


Another interesting direction is, what if the authority, A, only gives partial information about the input or about the witness? For example, one of the bits of the input. Can you still implicate the authority? And then finally, I believe that this whole idea of bringing accountability to cryptography is sort of understudied in some sense. So are there other avenues to bring accountability into cryptography? And towards that end, let me just briefly mention another recent work, which is known as traceable secret sharing. So here we have Alice. Alice has some secret data, s. So let’s say Alice shares this data across n servers, as shares Sh1 to Shn, using traceable secret sharing.


Now we have an adversary who’s interested in learning the secret, s. This adversary can approach some subset of the servers to obtain enough shares to recover s, by potentially bribing the servers and so on. So if that happens, and Alice finds out, Alice would like to trace the corrupted servers and potentially even prove it in a court of law. So this is known as traceable secret sharing, which is also something which we recently studied. And both of these papers are available online. And I’ll stop here. Yeah?


Attendee: Can I ask a question? Sorry, I had to come from somewhere else, so I missed a little bit of it, but could you go over the scenario for the NIZK? What is the bad authority trying to do? Is it trying to prove something false, or does someone else give a proof and it wants to, like, let everyone else know what its witness was? Can you specify?


Vipul Goyal: Yeah. The latter, yeah. The authority is corrupt in the sense that the authority can take your NIZK, recover your witness, and sell it. So we are not concerned with soundness here.


Attendee: Okay. So when it finds your witness, so if the authority is in the habit of doing this, that means some other party could set them up, you know, run a sting operation, for lack of a better term, that will demonstrate that it was behaving badly. So that’s the goal, pretty much.

Vipul Goyal: Exactly. And we show an extractor, which makes a single query to the authority. And if the authority is indeed running this backdoor service, and if it answers even a single query like that, with some noticeable probability, you can generate a cryptographic proof that the authority was indeed decrypting witnesses.


Attendee: Okay. Okay. So, if the authority does this bad thing one time, you can’t necessarily catch it. He has to be making a habit of it and then you get it during times it’s behaving badly.


Vipul Goyal: No, just one time might be enough. At least in our construction, the extractor will make a single query.


Attendee: Oh, okay, but during that one time, you have to be running the extractor, right? And not like a real thing.


Vipul Goyal: Yeah. Yes.




Vipul Goyal

Senior Scientist, NTT Research Cryptography & Information Security

Vipul Goyal is a senior scientist at NTT Research and an Associate Professor in the Computer Science Department at CMU. Previously, he was a researcher in the Cryptography and Complexity group at Microsoft Research, India. He received his PhD in Computer Science from the University of California, Los Angeles in December 2009. He received his B.Tech. in Computer Science from the Indian Institute of Technology (BHU), Varanasi. He is broadly interested in all areas of cryptography (and in theoretical computer science in general). Currently his research topics include secure multi-party computation, non-malleable cryptography, and foundations of blockchains. Dr. Goyal is a winner of several honors, including a 2016 ACM CCS test of time award, a Microsoft Research graduate fellowship, and a Google outstanding graduate student award. He was named to the Forbes magazine “30 under 30” list of people changing science and healthcare in 2013.
