

RZ 2974 (# 93020) 11/17/97 Computer Science/Mathematics

15 pages

Research Report
Securing Threshold Cryptosystems against Chosen Ciphertext Attack
V. Shoup IBM Research Division Zurich Research Laboratory 8803 Ruschlikon Switzerland R. Gennaro IBM Research Division T. J. Watson Research Center P. O. Box 704 Yorktown Heights, NY 10598 U. S. A.

LIMITED DISTRIBUTION NOTICE This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents and will be distributed outside of IBM up to one year after the date indicated at the top of this page. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).

IBM

Research Division Almaden Austin Beijing Haifa T.J. Watson Tokyo Zurich

Securing Threshold Cryptosystems against Chosen Ciphertext Attack
V. Shoup
IBM Research Division, Zurich Research Laboratory, 8803 Ruschlikon, Switzerland

R. Gennaro
IBM Research Division, T. J. Watson Research Center, PO Box 704, Yorktown Heights, NY 10598, U. S. A.

Abstract

For the most compelling applications of threshold cryptosystems, security against chosen ciphertext attack seems to be a requirement. However, there appear to be no practical threshold cryptosystems in the literature that are provably chosen-ciphertext secure, even in the idealized random hash function model. The contribution of this paper is to present two very practical threshold cryptosystems, and to prove that they are secure against chosen ciphertext attack in the random hash function model.

1 Introduction
In this paper, we consider the problem of designing threshold cryptosystems that are secure against chosen ciphertext attack. Our goal is to design a practical scheme, and provide strong evidence that it cannot be broken. Known standard (i.e., non-threshold) cryptosystems that are provably secure against chosen ciphertext attack (under standard intractability assumptions) are unfortunately only of theoretical interest and not at all practical. We cannot expect to do any better for threshold cryptosystems. However, practical non-threshold schemes have been devised in the so-called random hash function, or random oracle, model. This approach, first used by Fiat and Shamir [15], and later given a more rigorous treatment by Bellare and Rogaway [2], works as follows. Suppose that a cryptographic scheme that makes use of hash functions can be proven secure under standard intractability assumptions, but in an idealized model of computation where the hash functions are replaced by "black boxes" that output random strings. Then the basic tenet of the random hash function approach is to view this as "strong evidence" that the scheme is secure. The random hash function model has been extensively used, both as a tool for analyzing existing cryptographic schemes, and as a principle in designing new cryptographic schemes. While not as satisfying as a true security proof, proving security in the random hash function model seems far superior to the alternative approach of ad hoc design and wishful thinking. To date, no serious weakness has been found in any cryptographic scheme proven secure in the random hash function model when the random hash function is instantiated with "good" and efficient constructions based on functions like SHA-1.

Even though the most compelling applications of threshold cryptosystems seem to require chosen-ciphertext security, there appear to be no practical threshold cryptosystems in the literature that are provably secure, even in the random hash function model. Our main contribution is to present and analyze two such schemes. The first scheme, which we call TDH1 (for Threshold Diffie-Hellman), is secure assuming the hardness of the Diffie-Hellman problem [13]. The second scheme, TDH2, is secure under the stronger assumption of the hardness of the Diffie-Hellman decision problem, but is more efficient than TDH1.

1.1 Threshold cryptosystem and chosen ciphertext attack

In a threshold cryptosystem there is a single public encryption key, but the corresponding private decryption key is shared among a set of, say, n decryption servers in such a way that a threshold, say, k of them must cooperate to decrypt a message. The servers output decryption shares that can be combined to obtain the cleartext. Security against chosen ciphertext attack intuitively means the following. A message m is encrypted, yielding a ciphertext c, and an adversary is allowed to feed the decryption system arbitrary ciphertexts c' (different from, but perhaps related to, c), obtaining the corresponding decryptions. Security in this scenario means that the adversary should gain no useful information about the message m. In a threshold cryptosystem, the adversary has more power. Firstly, the adversary may corrupt a subset (of size at most k - 1) of the decryption servers; in particular, the adversary knows their shares of the secret key. Secondly, and perhaps more importantly, the adversary can obtain not only the decryption of a given ciphertext, but also the decryption shares generated by all of the servers. There is clearly no way to prevent this, since even if it is the responsibility of one of the servers to combine shares, this server may be corrupt and collaborate with the adversary.

1.2 Motivation

One of the main motivations for a threshold cryptosystem is that it allows one to construct a third-party decryption service in a distributed, secure, fault-tolerant fashion, without a significant increase in the size of, or the cost of creating, a ciphertext vis-a-vis a standard cryptosystem. To be at all useful, the third party should not decrypt everything that comes its way and give it to just anybody, but should implement some kind of useful policy. To implement such a policy securely, in addition to chosen-ciphertext security, one needs an additional facility: the ability to attach a label to the ciphertext during the encryption process. Such a label is a bit string that contains information that can be used by the third party to determine if the decryption request is authorized, according to its policy. Formally, one should think of the label as being a part of the ciphertext, so that changing the label changes the ciphertext; security against chosen ciphertext attack would then imply, in particular, that one cannot subvert the third party's policy by simply swapping labels.

Perhaps the most obvious example of this is key recovery. Here, two parties who wish to communicate generate a session key, and encrypt the session key under a third party's public key. The party that creates the encryption attaches a label containing the identities of the two parties and the current time. This labeled ciphertext is sent along the wire, along with the encrypted conversation. A law enforcement agency may be authorized via a court order to tap the line, and request that the third party decrypt the ciphertext containing the session key. To protect individual privacy, the court order specifies to whom the wiretap applies and a time interval. To enforce this policy, the third party only decrypts a ciphertext if the information in its label is consistent with the given court order.

Another example is the recent work of [1] on fair exchange. Here, an "off-line" trusted third party is used, meaning that the third party is only needed if one of the two trading partners tries to cheat. As above, ciphertexts are labeled to allow the third party to enforce a fairness policy. A label might also contain the identity or public key of the intended recipient, allowing the decryption service to direct the cleartext to that recipient only. The usefulness of labeled ciphertexts was already observed by Lim and Lee [18] (who called such a label an indicator).

In a non-threshold cryptosystem, labeled ciphertexts can be implemented by simply embedding a hash of the label in the cleartext before encrypting. The decryption service is given a ciphertext and a label, computes the cleartext, and compares the value of the embedded hash with the hash of the given label. If these match, and the decryption policy authorizes the given label, then the cleartext is released. If the underlying cryptosystem is secure against chosen ciphertext attack, then so too is the cryptosystem with labeled ciphertexts. As will become evident, this implementation is not suitable for threshold cryptosystems.
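For reference, the following minimal sketch (our own, not the paper's; enc/dec stand for the encryption and decryption of any chosen-ciphertext-secure non-threshold scheme) illustrates the label-embedding construction just described.

```python
import hashlib
from typing import Callable, Optional

def encrypt_with_label(enc: Callable, pk, m: bytes, label: bytes) -> bytes:
    # Embed a hash of the label in the cleartext before encrypting.
    return enc(pk, hashlib.sha256(label).digest() + m)

def decrypt_with_label(dec: Callable, sk, c: bytes, label: bytes,
                       policy_allows: Callable) -> Optional[bytes]:
    # The decryption service first checks its policy against the claimed label.
    if not policy_allows(label):
        return None
    plaintext = dec(sk, c)
    tag, m = plaintext[:32], plaintext[32:]
    # Release the cleartext only if the embedded hash matches the given label.
    return m if tag == hashlib.sha256(label).digest() else None
```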

1.3 Outline of the paper

In §2, we review relevant background and related work. In that section, we also sketch a simpler threshold cryptosystem, which we call TDH0, which may indeed be secure, but for which it seems difficult to obtain a security proof under standard cryptographic assumptions, even in the random hash function model. In §3, we define what we believe is a simple yet adequate formal security model. In §4, we briefly discuss the basic technical tools we use in the design and analysis of our cryptosystems, TDH1 and TDH2, which we describe and analyze in §5 and §6, respectively. In §7 we discuss some implementation issues, and we conclude with some final remarks and open questions in §8.

2 Background and Related Work
2.1 Chosen ciphertext attack

Impractical but provable schemes. In the context of standard (i.e., non-threshold) cryptosystems, cryptosystems provably secure against chosen ciphertext attack were given by Naor and Yung [20], Rackoff and Simon [23], Dolev, Dwork, and Naor [14], and De Santis and Persiano [10]. Actually, the system in [20] was proven secure only against a restricted type of chosen-ciphertext attack, the so-called "lunch-time attack," wherein the target ciphertext is not revealed to the adversary until after it gets its chosen ciphertexts decrypted. All known provably secure schemes rely on theoretical constructions of non-interactive zero-knowledge proofs [4], and as such are quite impractical.

Practical schemes. Again in the context of non-threshold cryptosystems, practical cryptosystems intended to be secure against chosen ciphertext attack were proposed by Damgård [8], Zheng and Seberry [28], and Bellare and Rogaway [2, 3]. The schemes in [28, 2, 3] are all known to be chosen-ciphertext secure in the random hash function model. The scheme in [8] is known to be insecure against chosen ciphertext attack, and its security against lunch-time attacks is not well understood.

2.2 Threshold cryptosystems

Threshold cryptosystems are part of a general approach known as threshold cryptography, introduced by Boyd [5], Desmedt [11], and Desmedt and Frankel [12]. In particular, in [12], a threshold cryptosystem based on the Diffie-Hellman problem is presented. The techniques developed later by De Santis et al. [9] yield a corresponding system based on RSA [24]. These schemes can be shown to withstand chosen plaintext attack, but they are not known to withstand chosen ciphertext attack, and it seems doubtful that this can be done with current proof techniques. Very little work has been done on securing threshold cryptosystems against chosen ciphertext attack, and in fact, there does not even appear to be a formal definition in the literature.

2.3 Why isn't it trivial to secure a threshold cryptosystem against chosen ciphertext attack?

Our first observation is that none of the practical schemes mentioned above can be readily transformed into threshold schemes that are chosen-ciphertext secure. To see why, consider the scheme in [2], which is representative. This scheme uses a trapdoor permutation f and hash functions G and H; to encrypt a message m, a random r in the domain of f is chosen, and the ciphertext is (f(r), m ⊕ G(r), H(r, m)). The output length of G is equal to that of m, and the output length of H is large enough to make it difficult to find collisions. Given a ciphertext (s, c, v), the decryption algorithm computes r = f^{-1}(s), m = G(r) ⊕ c, and v' = H(r, m). If v' = v, it outputs m, and otherwise "?". The proof of security in the random hash function model relies in a critical way on the fact that the decryption algorithm makes the "validity test" v = v' before generating an output.
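For concreteness, here is a sketch of that encrypt/decrypt structure (our own code; the trapdoor permutation, its inverse, and the hash functions are placeholder callables, not a real instantiation).

```python
from typing import Callable, Optional

def br_encrypt(m: bytes, r: bytes, f: Callable, G: Callable, H: Callable):
    # Ciphertext is (f(r), m XOR G(r), H(r, m)); G(r) must be as long as m.
    masked = bytes(a ^ b for a, b in zip(m, G(r)))
    return f(r), masked, H(r, m)

def br_decrypt(ct, f_inv: Callable, G: Callable, H: Callable) -> Optional[bytes]:
    s, c, v = ct
    r = f_inv(s)                               # invert the trapdoor permutation
    m = bytes(a ^ b for a, b in zip(c, G(r)))
    # Validity test: output m only if the tag checks out; None plays the role of "?".
    return m if H(r, m) == v else None
```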

Now consider turning this into a threshold scheme, and assume we can effectively share the trapdoor. The problem is that the above validity test cannot be performed until after the individual shares of f^{-1}(s) are generated and then combined. As mentioned above, we must assume that the adversary can see these shares, making the validity test pointless, and giving the adversary the ability to invert f at chosen points. This destroys any hope of proving security using current techniques.

The above difficulty was noted by Lim and Lee [18], who observed that a publicly checkable validity test would be useful in this regard. Lim and Lee proposed two practical systems based on this observation; however, both schemes were subsequently broken by Frankel and Yung [16].

Interestingly, one can readily convert all of the impractical schemes mentioned above into secure (but impractical) threshold schemes. It is instructive to see why this is so. All of these schemes use a publicly checkable validity test, which is essentially a non-interactive zero-knowledge proof of knowledge of the plaintext. The key to the proof of security is that one can simulate the adversary's view with a simulator that has a trapdoor that allows it to extract the plaintext from the given proof of knowledge in a decryption request, thus allowing the simulator to respond correctly to the request. Assuming the underlying decryption function can be effectively shared, such a scheme can then be transformed into a threshold scheme where each decryption server performs the validity test before generating a decryption share. What makes the above approach impractical is the proof of knowledge. An obvious thing to try is to design an appropriate zero-knowledge proof of knowledge of the plaintext in the random hash function model that is more practical.

2.4 The TDH0 Cryptosystem

One obvious "solution" is the following threshold cryptosystem, based on the Diffie-Hellman problem, which we call TDH0. Say we have a group G of prime order q with generator g, a hash function H, and a public key h = g^x. To encrypt a message m, we choose r ∈ Z_q at random, and compute u = g^r and c = H(h^r) ⊕ m. The ciphertext consists of u, c, and a non-interactive proof of knowledge of log_g u. It is straightforward to share the secret key, and a decryption server only generates a decryption share if the proof of knowledge is valid. For the non-interactive proof of knowledge, we could use Schnorr's [25] signature scheme with "public key" u and "private key" r.

Intuitively, this strategy makes sense, and TDH0 may very well be secure, although it does not seem possible to prove that it is secure using known techniques, under standard cryptographic assumptions, even in the random hash function model. The problem is a bit subtle. In the random hash function model, Schnorr's signature scheme is indeed a proof of knowledge, but the corresponding knowledge extractor does not operate "on line": it must rewind the adversary many times, each time running it forward with different random outputs for the hash function used in the signature scheme. This type of "rewinding" extractor is adequate to prove that the corresponding signature scheme is secure in the random hash function model, but appears to be inadequate to prove the security of TDH0. The reason is that a simulator for TDH0 must respond to many decryption requests on-line, and all of this rewinding (possibly "undoing" previous decryption requests, which recursively leads to more rewinding) results in a blow-up in the running time of the simulator that is exponential in the number of decryption requests. A similar phenomenon was observed by Pointcheval and Stern [22] in the context of blind digital signatures.

One could circumvent all this by straightaway assuming an on-line knowledge extractor; that is, we simply assume that any algorithm that can create a valid proof of knowledge can be transformed into an algorithm that simultaneously outputs a corresponding witness. A similar type of assumption is made by Damgård [8] and Zheng and Seberry [28]. This type of assumption, however, is not very acceptable: it is completely nonstandard, and it is not at all clear how it is related to any kind of intractability assumption.

3 A Formal Security Model
A k out of n threshold cryptosystem consists of the following components:

- A key generation algorithm that takes as input a security parameter, the number n of decryption servers, and the threshold parameter k; it outputs a public key PK and a list SK_1, ..., SK_n of private keys.
- An encryption algorithm that takes as input the public key PK, a cleartext m, and a label L, and outputs a ciphertext c.
- A decryption algorithm that takes as input the public key PK, an index 1 ≤ i ≤ n, the private key SK_i, a ciphertext c, and a label L, and outputs a decryption share σ_i.
- A recovery algorithm that takes as input the public key PK, a ciphertext c, a label L, and a list σ_1, ..., σ_n of decryption shares, and outputs a cleartext m.

Operation of the cryptosystem runs as follows. There is a trusted dealer and a set P_1, ..., P_n of decryption servers. Under certain conditions the presence of a trusted dealer can be eliminated [21]. In an initialization phase, the dealer is run, creating the public and private keys. For 1 ≤ i ≤ n, the public key PK and private key SK_i are given to server P_i. A user who wants to encrypt a message can run the encryption algorithm, using the public key. A user who wants to decrypt a ciphertext gives the ciphertext to each server P_i, and each server generates a decryption share σ_i. These shares are then combined using the recovery algorithm to obtain the cleartext.

Any practical k out of n cryptosystem should be robust, i.e., it should be able to tolerate the presence of an adversary that tries to hinder the recovery process. When a ciphertext c is decrypted, a list of corresponding decryption shares σ_1, ..., σ_n is produced. The scheme is robust if the recovery algorithm recovers the plaintext m (at least with overwhelming probability) even if at most k - 1 of the decryption shares are incorrect. Robust threshold cryptosystems are presented in [21, 17], and we follow their general approach in order to make our schemes robust.

To define security against chosen ciphertext attack, we consider the following game played against an adversary.

Game A

A1 The adversary chooses to corrupt a fixed set of k - 1 servers.

A2 The key generation algorithm is run. The private keys of the corrupted servers are given

to the adversary, while the other private keys are given to the uncorrupted servers, and kept secret from the adversary. The adversary of course receives the public key as well.

A3 The adversary interacts with the uncorrupted decryption servers in an arbitrary fashion, feeding them ciphertext/label pairs, and obtaining decryption shares.

A4 The adversary chooses two cleartexts m_0 and m_1 and a label L. These are given to an "encryption oracle" that chooses b ∈ {0, 1} at random, encrypts m_b using the given label L, and gives the ciphertext c to the adversary.

A5 The adversary continues to interact with the uncorrupted servers, feeding them ciphertext/label pairs (c', L') ≠ (c, L).

A6 At the end of the game, the adversary outputs b' ∈ {0, 1}.

Security against chosen ciphertext attack means that, for any polynomial-time bounded adversary, b' = b with probability only negligibly greater than 1/2.
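The following sketch (ours; the class and method names are illustrative, not taken from the paper) summarizes the interface that such a k out of n threshold cryptosystem with labeled ciphertexts exposes.

```python
from abc import ABC, abstractmethod

class ThresholdCryptosystem(ABC):
    """Illustrative k-out-of-n interface with labeled ciphertexts."""

    @abstractmethod
    def keygen(self, security_param: int, n: int, k: int):
        """Return (PK, [SK_1, ..., SK_n]); run once by the trusted dealer."""

    @abstractmethod
    def encrypt(self, PK, m: bytes, label: bytes):
        """Return a ciphertext c; the label is bound to c."""

    @abstractmethod
    def decrypt_share(self, PK, i: int, SK_i, c, label: bytes):
        """Return decryption share sigma_i, or '?' if (c, label) is invalid."""

    @abstractmethod
    def recover(self, PK, c, label: bytes, shares):
        """Combine the decryption shares into the cleartext m."""
```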

4 Basic Tools

4.1 Threshold secret sharing

Let q be a prime, and 1 ≤ k ≤ n < q. Shamir's [26] k out of n secret sharing scheme over Z_q works as follows. We have a random secret x ∈ Z_q. We choose random points f_1, ..., f_{k-1} ∈ Z_q, set f_0 = x, and define the polynomial F(X) = ∑_{j=0}^{k-1} f_j X^j. For 1 ≤ i ≤ n, x_i = F(i) ∈ Z_q is the ith share of x. Purely for notational purposes, we will denote x as its own 0th share, x = x_0. If any subset of k - 1 shares is revealed, then no information about x is obtained, whereas if k shares are revealed, x is completely determined, and can be computed by interpolation. Actually, the following property holds: for any ordered subset S = {i_1, ..., i_k} ⊆ {0, ..., n} and any i ∈ {0, ..., n} \ S, there exists an easy-to-compute sequence of (Lagrange) coefficients λ^S_{i,i_1}, ..., λ^S_{i,i_k} ∈ Z_q such that

x_i = ∑_{j=1}^{k} λ^S_{i,i_j} x_{i_j}.
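As an illustration, the following Python sketch (ours, not the paper's; function names are our own) implements this k out of n sharing and the interpolation above over Z_q, with lagrange_coeff(S, i, j, q) playing the role of λ^S_{i,j}.

```python
import secrets

def share(x, k, n, q):
    """Split secret x in Z_q into n Shamir shares with threshold k."""
    coeffs = [x] + [secrets.randbelow(q) for _ in range(k - 1)]    # f_0 = x
    # x_i = F(i) = sum_j f_j * i^j mod q, for i = 1..n
    return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)}

def lagrange_coeff(S, i, j, q):
    """lambda^S_{i,j}: coefficient of share x_j when interpolating F at point i."""
    num, den = 1, 1
    for m in S:
        if m != j:
            num = num * (i - m) % q
            den = den * (j - m) % q
    return num * pow(den, q - 2, q) % q    # q prime, so den^{-1} = den^{q-2} mod q

def recover(shares, q, at=0):
    """Recover F(at) (the secret when at = 0) from k shares {index: value}."""
    S = list(shares)
    return sum(shares[j] * lagrange_coeff(S, at, j, q) for j in S) % q
```

For example, with a prime q, share(x, 3, 5, q) produces five shares, and recover applied to any three of them returns x, while any two reveal nothing about x.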

4.2 Zero-knowledge proof of discrete logarithm identities

Let G be a group of prime order q with generators g, ḡ. Let L be the language of pairs (u, ū) ∈ G^2 such that log_g u = log_ḡ ū. Our cryptosystems will heavily rely on a zero-knowledge proof of membership for the language L. It is important to notice that our proof techniques do not require a proof of knowledge (which would create the problems encountered with the TDH0 cryptosystem). The following is a well-known zero-knowledge proof system for L, due to Chaum and Pedersen [7]. Although it happens to also be a proof of knowledge, we will not use that property in our schemes.

Let (u, ū) ∈ L be given, so there exists r ∈ Z_q such that u = g^r and ū = ḡ^r. The prover chooses s ∈ Z_q at random, computes w = g^s and w̄ = ḡ^s, and sends w, w̄ to the verifier. The verifier chooses e ∈ Z_q at random, sending this to the prover. The prover sends f = s + re to the verifier. The verifier checks that g^f = w u^e and ḡ^f = w̄ ū^e.
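As a concrete illustration, here is a sketch (our own; it assumes a prime-order subgroup of Z_p^* and a hypothetical hash-to-Z_q helper) of the non-interactive variant of this proof obtained by hashing, as discussed in the next paragraph.

```python
import hashlib
import secrets

def hash_to_zq(q, *elems):
    """Hypothetical helper: derive a challenge in Z_q from the given values."""
    data = b"|".join(repr(x).encode() for x in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_eq_dlog(p, q, g, gbar, u, ubar, r):
    """Non-interactive proof that log_g u = log_gbar ubar (both equal r), mod p."""
    s = secrets.randbelow(q)
    w, wbar = pow(g, s, p), pow(gbar, s, p)
    e = hash_to_zq(q, u, w, ubar, wbar)        # Fiat-Shamir challenge
    f = (s + r * e) % q
    return e, f

def verify_eq_dlog(p, q, g, gbar, u, ubar, e, f):
    """Recompute w, wbar from (e, f) and re-derive the challenge."""
    w = pow(g, f, p) * pow(pow(u, e, p), p - 2, p) % p          # g^f / u^e
    wbar = pow(gbar, f, p) * pow(pow(ubar, e, p), p - 2, p) % p  # gbar^f / ubar^e
    return e == hash_to_zq(q, u, w, ubar, wbar)
```

The same prove/verify structure, with the challenge hash additionally taking the ciphertext and label as input, is what TDH1 and TDH2 embed in their ciphertexts and decryption shares.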

It is well known that this proof system is sound: the verifier can be cheated into accepting a pair not in the language with probability at most 1/q. It is also well known that this proof system can be simulated in zero-knowledge against an honest verifier. By making the challenge e a hash of (u, w, ū, w̄), then in the random hash function model, this becomes a non-interactive zero-knowledge proof. Actually, a stronger soundness condition holds that we will need in the sequel: the argument that one uses to show soundness in the above non-interactive proof system can be used to show that when the verifier accepts, not only do we have (with overwhelming probability) log_g u = log_ḡ ū, but also log_g w = log_ḡ w̄.

4.3 Intractability assumptions

Let G be a group with generator g. The Diffie-Hellman problem is this: given g^x and g^y, compute g^{xy}, where x and y are random exponents. The Diffie-Hellman decision problem is this: given a triple that is either of the form (g^x, g^y, g^{xy}) or (g^x, g^y, g^z), where x, y, and z are random exponents, determine which is the case. Clearly, the second problem is no harder than the first, but it is not known if they are equivalent. The only known method for solving either problem is to solve the discrete logarithm problem: given g^x, compute x. For suitable groups, such as a large prime-order subgroup of the multiplicative group modulo a large prime, all of these problems are widely conjectured to be intractable.

5 The TDH1 Cryptosystem
We now describe the threshold cryptosystem TDH1. TDH1 works over an arbitrary group G of prime order q, with generator g; for simplicity, assume that messages and labels are l-bit strings. It uses four hash functions:

H1 : G → {0,1}^l;  H2 : {0,1}^l × {0,1}^l × G × G → G;  H3, H4 : G^3 → Z_q.

Key Generation. For a k out of n scheme, the key generation algorithm runs as follows (we assume q > n). Random points f_0, ..., f_{k-1} ∈ Z_q are chosen, defining a polynomial F(X) = ∑_{j=0}^{k-1} f_j X^j ∈ Z_q[X]. For 0 ≤ i ≤ n, set x_i = F(i) ∈ Z_q and h_i = g^{x_i}. For notational convenience, we set x = F(0) and h = h_0 = g^x. The public key is (h, h_1, ..., h_n), and the ith private key is x_i, for 1 ≤ i ≤ n. Note that the components h_1, ..., h_n of the public key are only needed to verify decryption shares in the recovery algorithm, and are not needed by the encryption algorithm. In other words, the sender needs only to know h, while the h_i should be made available to all the decryption servers. The key generation protocol may be thought of as being performed by a trusted dealer that self-destructs after generating the key. However, using techniques from [21], we may also assume that there is no need for a trusted dealer and that the players jointly generate the key pair.

Encryption. The algorithm to encrypt a message m ∈ {0,1}^l with label L ∈ {0,1}^l runs as follows. We choose r, s ∈ Z_q at random, and compute

c = H1(h^r) ⊕ m;  u = g^r;  w = g^s;  ḡ = H2(c, L, u, w);  ū = ḡ^r;  w̄ = ḡ^s;  e = H3(ḡ, ū, w̄);  f = s + re.

The ciphertext is (c, u, ū, e, f). What is happening here is that the encryption includes a non-interactive proof that log_g u = log_ḡ ū.

Decryption. Decryption server i does the following, given ciphertext (c, u, ū, e, f) and label L. It checks whether e = H3(ḡ, ū, w̄), where

w = g^f / u^e;  ḡ = H2(c, L, u, w);  w̄ = ḡ^f / ū^e.

If this condition does not hold, it outputs "?". Otherwise, it proceeds as follows. It then chooses s_i ∈ Z_q at random, and computes

u_i = u^{x_i};  u'_i = u^{s_i};  h'_i = g^{s_i};  e_i = H4(u_i, u'_i, h'_i);  f_i = s_i + x_i e_i.

Its output is (u_i, e_i, f_i). What is happening here is that the decryption share includes a non-interactive proof that log_u u_i = log_g h_i.

Recovery. The recovery algorithm takes as input a ciphertext (c, u, ū, e, f) and a list of decryption shares. A share (u_i, e_i, f_i) is "good" if e_i = H4(u_i, u'_i, h'_i), where u'_i = u^{f_i} / u_i^{e_i} and h'_i = g^{f_i} / h_i^{e_i}. Now assume we have a subset S = {i_1, ..., i_k} of good shares. Then, using the notation defined in §4.1, the recovery algorithm outputs

m = H1( ∏_{j=1}^{k} u_{i_j}^{λ^S_{0,i_j}} ) ⊕ c.
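To make the combination step concrete, here is a sketch of the recovery computation (our own code; it reuses the hypothetical lagrange_coeff helper from §4.1 and assumes H1 maps group elements to l-bit byte strings of the same length as c).

```python
def combine_shares(good_shares, c, p, q, H1):
    """good_shares: {i: u_i} for a set S of k verified ("good") shares.
    Computes prod_{i in S} u_i^{lambda^S_{0,i}} = h^r and unmasks c."""
    S = list(good_shares)
    hr = 1
    for i in S:
        lam = lagrange_coeff(S, 0, i, q)            # lambda^S_{0,i}
        hr = hr * pow(good_shares[i], lam, p) % p   # Lagrange interpolation in the exponent
    return bytes(a ^ b for a, b in zip(H1(hr), c))  # m = H1(h^r) XOR c
```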

Theorem 1 In the random hash function model, the TDH1 cryptosystem is robust provided n ≥ 2k - 1.

Since the proof that log_u u_i = log_g h_i is sound, we can assume that u_i = u^{x_i} for all good shares. Since n ≥ 2k - 1, there must be a set S = {i_1, ..., i_k} of good shares, and for these, we have

∏_{j=1}^{k} u_{i_j}^{λ^S_{0,i_j}} = u^x = h^r,

and so the original cleartext is recovered. That proves Theorem 1.

Theorem 2 In the random hash function model, the TDH1 cryptosystem is secure against chosen ciphertext attack, assuming the Diffie-Hellman problem in G is hard.

The proof is in Appendix A.

6 The TDH2 Cryptosystem
Cryptosystem TDH2 is very similar to TDH1. The main difference is that the group element ḡ, instead of changing with each encryption, is chosen at key-generation time. We now give the details. As before, we have a group G of prime order q with generator g. We need three hash functions:

H1 : G → {0,1}^l;  H2 : {0,1}^l × {0,1}^l × G^4 → Z_q;  H4 : G^3 → Z_q.

Key Generation. Same as for TDH1, except that a random element ḡ ∈ G is chosen, which is also part of the public key.

Encryption. The algorithm to encrypt a message m ∈ {0,1}^l with label L ∈ {0,1}^l runs as follows. We choose r, s ∈ Z_q at random, and compute

c = H1(h^r) ⊕ m;  u = g^r;  w = g^s;  ū = ḡ^r;  w̄ = ḡ^s;  e = H2(c, L, u, w, ū, w̄);  f = s + re.

The ciphertext is (c, u, ū, e, f). As in TDH1, the encryption includes a non-interactive proof that log_g u = log_ḡ ū.

Decryption. Decryption server i does the following, given ciphertext (c, u, ū, e, f) and label L. It checks that e = H2(c, L, u, w, ū, w̄), where w = g^f / u^e and w̄ = ḡ^f / ū^e. If this condition does not hold, it outputs "?". Otherwise, it proceeds exactly as in TDH1, creating an output (u_i, e_i, f_i).

Recovery. Same as for TDH1.
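The following sketch (ours, not the paper's; it reuses the hypothetical hash_to_zq helper from §4.2 as a stand-in for H2, assumes an H1 mapping group elements to l-bit byte strings, and assumes the public key is the tuple (g, ḡ, h)) shows TDH2 encryption and the public validity check that every decryption server performs before releasing a share.

```python
import secrets

def tdh2_encrypt(m, label, pk, p, q, H1):
    """pk = (g, gbar, h). Returns (c, u, ubar, e, f); m is an l-bit byte string."""
    g, gbar, h = pk
    r, s = secrets.randbelow(q), secrets.randbelow(q)
    c = bytes(a ^ b for a, b in zip(H1(pow(h, r, p)), m))    # c = H1(h^r) XOR m
    u, w = pow(g, r, p), pow(g, s, p)
    ubar, wbar = pow(gbar, r, p), pow(gbar, s, p)
    e = hash_to_zq(q, c, label, u, w, ubar, wbar)            # challenge over (c, L, u, w, ubar, wbar)
    f = (s + r * e) % q
    return c, u, ubar, e, f

def tdh2_check(ct, label, pk, p, q):
    """Publicly checkable validity test: verify the proof that log_g u = log_gbar ubar."""
    g, gbar, _ = pk
    c, u, ubar, e, f = ct
    w = pow(g, f, p) * pow(pow(u, e, p), p - 2, p) % p            # g^f / u^e
    wbar = pow(gbar, f, p) * pow(pow(ubar, e, p), p - 2, p) % p    # gbar^f / ubar^e
    return e == hash_to_zq(q, c, label, u, w, ubar, wbar)
```

Because this test uses only public values, every server can reject an invalid ciphertext before computing its decryption share, which is exactly what the security proof needs.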

Theorem 3 In the random hash function model, the TDH2 cryptosystem is robust provided n ≥ 2k - 1.
The proof is exactly the same as for TDH1.

Theorem 4 In the random hash function model, the TDH2 cryptosystem is secure against
chosen ciphertext attack, assuming the Diffie-Hellman decision problem in G is hard.

The proof is in Appendix B.

7 Implementation Issues
To implement these schemes, one has to choose concrete hash functions. This is relatively straightforward, but see [2] for a detailed discussion. One technicality that we have to deal with here, though, is the hash function H2 in TDH1, whose output is supposed to be an element of the group G. For example, consider the case where p is a prime, p - 1 = mq, gcd(m, q) = 1, and G is the subgroup of order q in Z_p^*. We could implement this hash by raising the output of a standard hash function (viewed as a number) to the power m modulo p. This gives us an element in G. Note that the decryption and recovery algorithms must also check that the given group elements lie in G. It is straightforward to modify the proof of security to deal with this. Unfortunately, this implementation is quite costly, as it requires extra exponentiations, some to the power m, which is typically much larger than q. The TDH2 scheme does not suffer from this problem. Moreover, in TDH2, the group element ḡ is fixed (per public key). In practice, this makes quite a difference, as one can pre-compute a table that makes exponentiation to the base ḡ far more efficient than when it is constantly changing [6, 19]. This speeds up the encryption algorithm significantly. Of course, the same can be done for g already in TDH1.
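For instance, the hash-into-G construction just described might look like the following sketch (ours; SHA-256 stands in for the "standard hash function").

```python
import hashlib

def hash_to_group(data: bytes, p: int, q: int) -> int:
    """Map a byte string into the order-q subgroup of Z_p^*, where p - 1 = m*q.
    Raising to the power m kills the cofactor, so the result has order dividing q."""
    m = (p - 1) // q
    x = int.from_bytes(hashlib.sha256(data).digest(), "big") % p
    return pow(x, m, p)   # costly: an extra exponentiation to the (typically large) power m
```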

8 Conclusion
We have proposed two new threshold cryptosystems, TDH1 and TDH2, that are provably secure in the random hash function model assuming, respectively, that the Diffie-Hellman and Diffie-Hellman decision problems are hard.

TDH2 requires a stronger intractability assumption than TDH1, but is much more efficient. Moreover, TDH2 is not much less efficient than the very simple TDH0 scheme in §2.4, which is not known to be secure in the random hash function model. Indeed, the encryption algorithm in TDH2 is no more than 1.67 times slower than TDH0, and the ciphertexts are less than twice as large.

We close with two open problems. As we mentioned earlier, the exact status of the security of TDH0 is unclear.

Open Problem 1. Prove, or give compelling evidence to the contrary, that TDH0 is secure in the random hash function model.

Our techniques exploit special properties of the discrete logarithm problem that are not shared by, for example, the RSA problem.

Open Problem 2. Devise a practical threshold cryptosystem secure against chosen ciphertext attack in the random hash function model, based on the hardness of the RSA problem.

Appendix A: Proof of Theorem 2
The proof is by reduction: we assume we have an adversary that can guess the bit b in game A, and show how to use this adversary to solve a random instance of the Diffie-Hellman problem in G.

First, we observe that hash function H1 is directly queried only by the adversary, except in step A4, where it is queried once by the "encryption oracle." Since H1 is a random function, it follows that if the adversary can guess the bit b with probability nonnegligibly bounded away from 1/2, then he must, with nonnegligible probability, at some point in game A query H1 at exactly the same point that the encryption oracle queried H1. Given an instance of the Diffie-Hellman problem, we simulate the adversary's view in such a way that the simulation remains perfect up until the point in time that the adversary queries H1 at a point that is a solution to the Diffie-Hellman problem. When this happens, the simulation is no longer perfect, but it does not matter: we have already solved the Diffie-Hellman problem. Actually, the output of our algorithm is simply a list of all points at which H1 was queried. With nonnegligible probability, one of the group elements in this list is a solution, but we cannot tell which, as the Diffie-Hellman decision problem is itself apparently hard. If we want, we can apply techniques in [27] to efficiently transform our algorithm into one that outputs a single, correct solution to the Diffie-Hellman problem with overwhelming probability.

We now give the details of the simulation. Let α, β ∈ G be random elements in G for which we want to solve the Diffie-Hellman problem to the base g; that is, we want to compute γ = β^{log_g α}.

At any point in the simulation, the adversary may query one of the random hash functions. The simulator responds by first checking if the value of the hash function has already been defined at the given point; if so, it responds with the defined value; otherwise, it chooses a random value, defines the value of the hash function at the given point to be this value, and responds with this value. The simulator itself may at some point choose to define the value of a hash function at a chosen point. Such "backpatching" is allowable so long as the hash function has not already been defined at the chosen point.

Now suppose the adversary in step A1 chooses to corrupt a subset of k - 1 servers. Without loss of generality, we can assume these are servers P_1, ..., P_{k-1}. Let S = {0, ..., k - 1}, and we will write λ_{ij} instead of λ^S_{i,j}.

Now in step A2, we proceed as follows. We choose x_1, ..., x_{k-1} ∈ Z_q at random, and we set h = α. Then for k ≤ i ≤ n, we compute

h_i = h^{λ_{i0}} ∏_{j=1}^{k-1} g^{x_j λ_{ij}}.

Next, we have to describe how to simulate the "encryption oracle" in step A4, and how to simulate each query to one of the noncorrupt decryption servers. We deal first with the encryption oracle. The adversary gives a label L and two messages, m_0 and m_1, to the encryption oracle. We ignore the messages completely. Instead, we simply choose c ∈ {0,1}^l and t, e, f ∈ Z_q at random. We then set

u = β;  ḡ = g^t;  ū = u^t;  w = g^f / u^e;  w̄ = ḡ^f / ū^e.

We then backpatch, defining H2(c, L, u, w) = ḡ and H3(ḡ, ū, w̄) = e. The output of the encryption oracle is (c, u, ū, e, f). It is easily verified that this backpatching is allowable. Also, one sees that u, ḡ, and ū have the right distribution; namely, they are random, subject to the condition log_g u = log_ḡ ū. The rest is just a standard zero-knowledge simulation. Thus, the simulation is perfect, and will remain perfect, as long as the adversary does not query H1 at the point γ = u^{log_g h}. Note that the simulator never directly queries or backpatches H1 itself; it only does this upon request of the adversary.

We next deal with the simulation of the uncorrupted decryption servers. First of all, whenever the adversary queries H2 at a point other than (c, L, u, w), we arrange that the simulator defines the value ḡ' at that point by first choosing t' ∈ Z_q at random and then computing ḡ' = h^{t'}, so that the simulator knows log_h ḡ' (but the adversary is oblivious to this). Now suppose P_i is given a valid ciphertext/label pair ((c', u', ū', e', f'), L') ≠ ((c, u, ū, e, f), L). Via the validity conditions, ((c', u', ū', e', f'), L') determines corresponding values ḡ', w', w̄'. We first argue that we can assume that (c', L', u', w') ≠ (c, L, u, w). On the contrary, suppose that (c', L', u', w') = (c, L, u, w). Then of course ḡ' = ḡ. But then with overwhelming probability, we must also have (ū', w̄') = (ū, w̄). This easily follows from the strong soundness condition discussed in §4.2. It then follows that e' = e and f' = f, which contradicts our assumption that ((c', u', ū', e', f'), L') ≠ ((c, u, ū, e, f), L).

So assume (c', L', u', w') ≠ (c, L, u, w). We can assume that the adversary has already queried H2 at the point (c', L', u', w'), so that we have ḡ' = H2(c', L', u', w') = h^{t'}, where t' is known to the simulator, as discussed above. Now suppose u' = g^r, where r is not known to the simulator. We want to compute h^r. But by the soundness of the proof that log_g u' = log_{ḡ'} ū', we can assume that ū' = (ḡ')^r. But then (ū')^{1/t'} = (ḡ')^{r/t'} = h^r. So the simulator can compute h^r, but we are not quite done. We want to simulate the output of server P_i, who is supposed to output u_i = h_i^r, along with a proof that log_{u'} u_i = log_g h_i. But u_i can be computed by the simulator as

u_i = (ū')^{λ_{i0}/t'} ∏_{j=1}^{k-1} (u')^{x_j λ_{ij}}.

Once we have u_i, we can readily produce a zero-knowledge simulation of the proof that log_{u'} u_i = log_g h_i, backpatching H4 as necessary. That completes the proof of Theorem 2.

Appendix B: Proof of Theorem 4
Again, the proof is by reduction, and we assume the adversary queries, with nonnegligible probability, the same point in game A that was queried by the encryption oracle in step A4.

Let (α, β, γ) be a random instance of the Diffie-Hellman decision problem. This triple is drawn from one of two distributions: that of Diffie-Hellman triples, where α = g^x, β = g^y, and γ = g^{xy}, for random x, y ∈ Z_q, or that of random triples, where α = g^x, β = g^y, and γ = g^z, for random x, y, z ∈ Z_q. The job of the simulator is to distinguish between these two distributions. It outputs a 1 or a 0, and to be an effective test, the expected value of its output on the two distributions should differ by a nonnegligible amount.

We simulate the view of the adversary in game A as follows. As in the proof of Theorem 2, we assume the adversary corrupts players P_1, ..., P_{k-1} in step A1. In step A2, we set h = α (= g^x), generate x_1, ..., x_{k-1} ∈ Z_q at random, and solve for h_k, ..., h_n as in the proof of Theorem 2. We also choose t ∈ Z_q at random and set ḡ = h^t (= g^{xt}).

Now we discuss how to simulate the adversary's view of the encryption oracle in step A4, given a label L. We choose c ∈ {0,1}^l at random. We set

u = β (= g^y) and ū = γ^t,
which is either g^{xyt} or g^{zt}, depending on the distribution from which (α, β, γ) was drawn. We then choose e, f ∈ Z_q at random, and compute w = g^f / u^e and w̄ = ḡ^f / ū^e. We then backpatch, setting H2(c, L, u, w, ū, w̄) = e. The output of the encryption oracle is (c, u, ū, e, f).

The simulation of the uncorrupted servers is essentially just as it was in the proof of Theorem 2: the key is that the simulator knows t with ḡ = h^t, and so given a valid ciphertext/label pair ((c', u', ū', e', f'), L') ≠ ((c, u, ū, e, f), L), it is easy to argue that with overwhelming probability log_g u' = log_ḡ ū', which implies we can compute (u')^x as (ū')^{1/t}, and simulate the rest of the server's output just as before.

The simulator itself never directly queries or backpatches H1, except on behalf of the adversary. If the adversary ever queries H1 at γ, we stop and output 1; otherwise, if the adversary terminates without querying H1 at γ, we output 0. That completes the description of the simulator.

Consider the joint distribution of (h, ḡ, u, ū). In the case where (α, β, γ) is drawn from the Diffie-Hellman triple distribution, (h, ḡ, u, ū) is (statistically indistinguishable from) a random element of G^4, subject to the condition log_g u = log_ḡ ū. In the case where (α, β, γ) is a random triple, (h, ḡ, u, ū) is simply (statistically indistinguishable from) a random element of G^4. In either case, γ is determined by γ = ū^{log_ḡ h};

moreover, if (α, β, γ) is a Diffie-Hellman triple, then the relation γ = u^{log_g h} also holds.

We now argue as follows. In the case where (α, β, γ) is drawn from the Diffie-Hellman triple distribution, the simulation of game A is perfect until the adversary queries H1 at γ = u^{log_g h}, at which point we stop and output a 1. By our assumption about the behavior of the adversary, and the fact that the simulation is perfect up to this point, this happens with nonnegligible probability. Now, if in the case where (α, β, γ) is a random triple the simulator outputs a 1 with negligible probability, we are done: the simulator is an effective test for distinguishing Diffie-Hellman triples from random triples.

Otherwise, suppose that in the case where (α, β, γ) is a random triple the simulator outputs 1 with nonnegligible probability. As mentioned above, (h, ḡ, u, ū) is just a random element in G^4. The other random variables e, f, w, and w̄ are also just random, subject to relations that make the "proof" that log_g u = log_ḡ ū look legitimate; in fact, the relation log_g u = log_ḡ ū does not in general hold, and the "proof" is entirely bogus, but that is irrelevant. The point is that if the adversary makes the simulator output a 1, it can essentially compute ū^{log_ḡ h} given random (h, ḡ, u, ū) ∈ G^4. As we show below, we can use this adversary to solve the following variant of the Diffie-Hellman problem with nonnegligible probability: given random α' = g^t and β' = g^v, compute γ' = g^{v/t}. It is easy to see that this problem is equivalent (under polynomial-time reduction) to the Diffie-Hellman problem, and is certainly at least as hard as the Diffie-Hellman decision problem.

Now the details. The new simulation proceeds as follows. The input to the simulator is α', β' as above. First choose x ∈ Z_q at random, set

ḡ = (α')^x (= g^{xt}), and run the actual key generation algorithm for the cryptosystem, in particular setting h = g^x. Since this new simulator knows the private decryption key, it can without any trouble respond to arbitrary decryption requests.

Now consider the encryption oracle in step A4, given label L. We choose c ∈ {0,1}^l at random, e, f ∈ Z_q at random, and u ∈ G at random. We then set ū = β' (= g^v). We compute w = g^f / u^e and w̄ = ḡ^f / ū^e. We then backpatch, setting H2(c, L, u, w, ū, w̄) = e. The output of the encryption oracle is (c, u, ū, e, f).

This new simulator halts when the adversary halts, outputting the list of all queries made to H1. It is straightforward to verify that the view of the adversary relative to this new simulator is identical to the view of the adversary relative to the original simulator on a random triple, at least up until the point that it queries H1 at

γ = ū^{log_ḡ h} = (g^v)^{x/(xt)} = g^{v/t} = γ'.

So, if the adversary causes the first simulator on a random triple to output 1 with nonnegligible probability, then this same adversary causes this new simulator to output a list containing the desired solution γ'. That completes the proof of Theorem 4.

References
[1] N. Asokan, V. Shoup, and M. Waidner. Optimistic fair exchange of digital signatures. Preprint, 1997.
[2] M. Bellare and P. Rogaway. Random oracles are practical: a paradigm for designing efficient protocols. In First ACM Conference on Computer and Communications Security, 1993.
[3] M. Bellare and P. Rogaway. Optimal asymmetric encryption. In Advances in Cryptology-Crypto '94, pages 92-111, 1994.
[4] M. Blum, A. De Santis, S. Micali, and G. Persiano. Non-interactive zero knowledge. SIAM J. Comput., 6(4):1084-1118, 1991.
[5] C. Boyd. Digital multisignatures. In H. Baker and F. Piper, editors, Cryptography and Coding, pages 241-246. Clarendon Press, 1986.
[6] E. F. Brickell, D. M. Gordon, K. S. McCurley, and D. B. Wilson. Fast exponentiation with precomputation. In Advances in Cryptology-Eurocrypt '92, pages 200-207, 1992.
[7] D. Chaum and T. Pedersen. Wallet databases with observers. In Advances in Cryptology-Crypto '92, pages 89-105, 1992.
[8] I. Damgård. Towards practical public key cryptosystems secure against chosen ciphertext attacks. In Advances in Cryptology-Crypto '91, pages 445-456, 1991.
[9] A. De Santis, Y. Desmedt, Y. Frankel, and M. Yung. How to share a function securely. In 26th Annual ACM Symposium on Theory of Computing, pages 522-533, 1994.
[10] A. De Santis and G. Persiano. Zero-knowledge proofs of knowledge without interaction. In 33rd Annual Symposium on Foundations of Computer Science, 1992.
[11] Y. Desmedt. Society and group oriented cryptography: a new concept. In Advances in Cryptology-Crypto '87, pages 120-127, 1987.
[12] Y. Desmedt and Y. Frankel. Threshold cryptosystems. In Advances in Cryptology-Crypto '89, pages 307-315, 1989.
[13] W. Diffie and M. E. Hellman. New directions in cryptography. IEEE Trans. Info. Theory, 22:644-654, 1976.
[14] D. Dolev, C. Dwork, and M. Naor. Non-malleable cryptography. In 23rd Annual ACM Symposium on Theory of Computing, pages 542-552, 1991.
[15] A. Fiat and A. Shamir. How to prove yourself: practical solutions to identification and signature problems. In Advances in Cryptology-Crypto '86, pages 186-194, 1986.
[16] Y. Frankel and M. Yung. Cryptanalysis of immunized LL public key systems. In Advances in Cryptology-Crypto '95, pages 287-296, 1995.
[17] R. Gennaro, S. Jarecki, H. Krawczyk, and T. Rabin. Robust and efficient sharing of RSA functions. In Advances in Cryptology-Crypto '96, pages 157-172, 1996.

[18] C. H. Lim and P. J. Lee. Another method for attaining security against adaptively chosen ciphertext attacks. In Advances in Cryptology-Crypto '93, pages 420-434, 1993.
[19] C. H. Lim and P. J. Lee. More flexible exponentiation with precomputation. In Advances in Cryptology-Crypto '94, pages 95-107, 1994.
[20] M. Naor and M. Yung. Public-key cryptosystems provably secure against chosen ciphertext attacks. In 22nd Annual ACM Symposium on Theory of Computing, pages 427-437, 1990.
[21] T. Pedersen. A threshold cryptosystem without a trusted party. In Advances in Cryptology-Eurocrypt '91, pages 522-526, 1991.
[22] D. Pointcheval and J. Stern. Provably secure blind signature schemes. In Advances in Cryptology-Asiacrypt '96, pages 252-265, 1996.
[23] C. Rackoff and D. Simon. Noninteractive zero-knowledge proof of knowledge and chosen ciphertext attack. In Advances in Cryptology-Crypto '91, pages 433-444, 1991.
[24] R. L. Rivest, A. Shamir, and L. M. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, pages 120-126, 1978.
[25] C. Schnorr. Efficient signature generation by smart cards. J. Cryptology, 4:161-174, 1991.
[26] A. Shamir. How to share a secret. Communications of the ACM, 22:612-613, 1979.
[27] V. Shoup. Lower bounds for discrete logarithms and related problems. In Advances in Cryptology-Eurocrypt '97, 1997.
[28] Y. Zheng and J. Seberry. Practical approaches to attaining security against adaptively chosen ciphertext attacks. In Advances in Cryptology-Crypto '92, pages 292-304, 1992.


