Memory-Tight Reductions

Thu, 04/12/2018 - 12:25
Cryptographic reductions typically aim to be tight by transforming an adversary A into an algorithm that uses essentially the same resources as A. In this work we initiate the study of memory efficiency in reductions. We argue that the amount of working memory used (relative to the initial adversary) is a relevant parameter in reductions, and that reductions that are inefficient with memory will sometimes yield less meaningful security guarantees. We then point to several common techniques in reductions that are memory-inefficient and give a toolbox for reducing memory usage. We review common cryptographic assumptions and their sensitivity to memory usage. Finally, we prove an impossibility result showing that reductions between some assumptions must unavoidably be either memory- or time-inefficient. This last result follows from a connection to data streaming algorithms for which unconditional memory lower bounds are known.
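
To make the memory concern concrete, here is a minimal Python sketch (our illustration, not the paper's construction) contrasting two ways a reduction can simulate a random oracle: lazy sampling, whose table grows with the number of adversary queries, and a PRF-based simulation that answers consistently using only a single key, assuming HMAC-SHA256 behaves as a PRF.

    import hashlib, hmac, secrets

    class TableRO:
        """Lazy sampling: memory grows with the number of distinct queries."""
        def __init__(self):
            self.table = {}
        def query(self, x: bytes) -> bytes:
            if x not in self.table:
                self.table[x] = secrets.token_bytes(32)
            return self.table[x]

    class PRFRO:
        """PRF-based simulation: constant memory (one key), yet answers
        repeated queries consistently, just like the table-based simulator."""
        def __init__(self):
            self.key = secrets.token_bytes(32)
        def query(self, x: bytes) -> bytes:
            return hmac.new(self.key, x, hashlib.sha256).digest()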

Rigorous Time-Memory Trade-offs for Parallel Collision Search

Thu, 04/12/2018 - 11:57
Parallel versions of collision search algorithms require a significant amount of memory to store a proportion of the points computed by the pseudo-random walks. Implementations available in the literature use a hash table to store these points and allow fast memory access. We provide rigorous theoretical evidence that memory is an important factor in determining the runtime of this method. We propose to replace the traditional hash table by a simple structure, inspired by radix trees, which saves space and provides fast look-up and insertion. In the case of many-collision search algorithms, our variant has a constant-factor improved runtime. We give benchmarks that evaluate the linear parallel performance of the attack on ECDLP.
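
For context, a minimal single-threaded Python sketch of collision search with distinguished points in the style of van Oorschot and Wiener; a plain dict stands in for the hash table (or the radix-tree-like structure proposed here), and the walk function and parameters are toy choices of ours.

    import hashlib, secrets

    N_BITS, D = 32, 8                   # 2^32 search space, ~2^8-step trails

    def f(x: int) -> int:
        # Pseudo-random walk function; we want x != y with f(x) == f(y).
        h = hashlib.sha256(x.to_bytes(8, "big")).digest()
        return int.from_bytes(h[:4], "big")

    def distinguished(x: int) -> bool:
        return x % (1 << D) == 0        # low D bits zero: store this point

    def search():
        store = {}                      # trail end -> (trail start, length)
        while True:
            start = secrets.randbelow(1 << N_BITS)
            x, n = start, 0
            while not distinguished(x):
                x, n = f(x), n + 1
            if x not in store:
                store[x] = (start, n)   # only distinguished points are kept
                continue
            s2, n2 = store[x]
            a, na, b, nb = start, n, s2, n2
            while na > nb:              # align the longer trail
                a, na = f(a), na - 1
            while nb > na:
                b, nb = f(b), nb - 1
            if a == b:
                continue                # one trail is a suffix of the other
            while f(a) != f(b):         # step together to the merge point
                a, b = f(a), f(b)
            return a, b                 # a != b and f(a) == f(b)

    a, b = search()
    assert a != b and f(a) == f(b)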

Quantum FHE (Almost) As Secure as Classical

Wed, 04/11/2018 - 17:30
Fully homomorphic encryption (FHE) schemes make it possible to apply arbitrary efficient computation to encrypted data without decrypting it first. In Quantum FHE (QFHE) we may want to apply an arbitrary efficient quantum computation to (classical or quantum) encrypted data. We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with properties comparable to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst-case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. As in the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded-depth multi-key classical FHE. Technically, we rely on the outline of Mahadev (arXiv 2017), which achieves this functionality by relying on a super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.

Invisible Sanitizable Signatures and Public-Key Encryption are Equivalent

Wed, 04/11/2018 - 17:27
Sanitizable signature schemes are signature schemes which support the delegation of modification rights. The signer can allow a sanitizer to perform a set of admissible operations on the original message and then to update the signature, in such a way that basic security properties like unforgeability or accountability are preserved. Recently, Camenisch et al. (PKC 2017) devised new schemes with the previously unattained invisibility property. This property says that the set of admissible operations for the sanitizer remains hidden from outsiders. Subsequently, Beck et al. (ACISP 2017) gave an even stronger version of this notion and constructions achieving it. Here we characterize the invisibility property in both forms by showing that invisible sanitizable signatures are equivalent to IND-CPA-secure encryption schemes, and strongly invisible signatures are equivalent to IND-CCA2-secure encryption schemes. The equivalence is established by proving that invisible (resp. strongly invisible) sanitizable signature schemes yield IND-CPA-secure (resp. IND-CCA2-secure) public-key encryption schemes and that, vice versa, we can build (strongly) invisible sanitizable signatures given a corresponding public-key encryption scheme.

SoK: The Problem Landscape of SIDH

Wed, 04/11/2018 - 17:27
The Supersingular Isogeny Diffie-Hellman protocol (SIDH) has recently been the subject of increased attention in the cryptography community. Conjecturally quantum-resistant, SIDH has the feature that it shares the same data flow as ordinary Diffie-Hellman: two parties exchange a pair of public keys, each generated from a private key, and combine them to form a shared secret. To create a potentially quantum-resistant scheme, SIDH depends on a new family of computational assumptions involving isogenies between supersingular elliptic curves which replace both the discrete logarithm problem and the computational and decisional Diffie-Hellman problems. As in the case of ordinary Diffie-Hellman, one is interested in knowing whether these problems are related. In fact, more is true: there is a rich network of reductions between the isogeny problems securing the private keys of the participants in the SIDH protocol, the computational and decisional SIDH problems, and the problem of validating SIDH public keys. In this article we explain these relationships, which do not appear elsewhere in the literature, in the hope of providing a clearer picture of the SIDH problem landscape to the cryptography community at large.
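
The shared data flow is easy to see in code. Below is a toy finite-field Diffie-Hellman in Python, included only to show the flow that SIDH mirrors (keygen from a private key, exchange of public keys, combination into a common secret); the 127-bit Mersenne prime is an illustrative choice of ours, not a secure parameter.

    import secrets

    P = 2**127 - 1                     # toy prime, demonstration only
    G = 3

    def keygen():
        sk = secrets.randbelow(P - 2) + 1       # private key
        return sk, pow(G, sk, P)                # public key derived from it

    sk_a, pk_a = keygen()                       # Alice
    sk_b, pk_b = keygen()                       # Bob
    # The parties exchange pk_a and pk_b, then each combines the other's
    # public key with its own private key:
    assert pow(pk_b, sk_a, P) == pow(pk_a, sk_b, P)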

Fast modular squaring with AVX512IFMA

Wed, 04/11/2018 - 17:16
Modular exponentiation represents a significant workload for public key cryptosystems. Examples include not only the classical RSA, DSA, and DH algorithms, but also the partially homomorphic Paillier encryption. As a result, efficient software implementations of modular exponentiation are an important target for optimization. This paper studies methods for using Intel's forthcoming AVX512 Integer Fused Multiply Accumulate (AVX512IFMA) instructions in order to speed up modular (Montgomery) squaring, which dominates the cost of the exponentiation. We further show how a minor tweak in the architectural definition of AVX512IFMA has the potential to further speed up modular squaring.
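
As background, a word-free Python sketch of Montgomery reduction and squaring; the paper's contribution concerns vectorizing the word-level (52-bit limb) arithmetic with AVX512IFMA, which this high-level sketch does not capture.

    def montgomery_setup(n: int, bits: int):
        r = 1 << bits                            # R > n, gcd(R, n) = 1
        n_prime = -pow(n, -1, r) % r             # n * n_prime == -1 (mod R)
        return r, n_prime

    def redc(t: int, n: int, r: int, n_prime: int) -> int:
        # Given t < n*R, returns t * R^-1 mod n without dividing by n.
        m = (t * n_prime) % r
        u = (t + m * n) // r                     # exact division by R
        return u - n if u >= n else u

    def mont_sqr(x_bar: int, n: int, r: int, n_prime: int) -> int:
        # Squaring in Montgomery form: input x*R mod n, output x^2*R mod n.
        return redc(x_bar * x_bar, n, r, n_prime)

    # Usage: square 7 modulo 101.
    n = 101
    r, n_prime = montgomery_setup(n, 8)          # R = 2^8 = 256 > n
    x_bar = (7 * r) % n                          # to Montgomery form
    assert redc(mont_sqr(x_bar, n, r, n_prime), n, r, n_prime) == (7 * 7) % n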

Impossible Differential Attack on QARMA Family of Block Ciphers

Wed, 04/11/2018 - 17:16
QARMA is a family of lightweight tweakable block ciphers, which is used to support a software protection feature in the ARMv8 architecture. In this paper, we study the security of the QARMA family against the impossible differential attack. First, we generalize the concept of truncated differences. Then, based on the generalized truncated difference, we construct the first 6-round impossible differential distinguisher of QARMA. Using the 6-round distinguisher and the time-and-memory trade-off technique, we present a 10-round impossible differential attack on QARMA. This attack requires $2^{119.3}$ (resp. $2^{237.3}$) encryption units, $2^{61}$ (resp. $2^{122}$) chosen plaintexts and $2^{72}$ 72-bit (resp. $2^{144}$ 144-bit) words of memory for QARMA-64 (resp. QARMA-128). Further, if a higher memory complexity is allowed (about $2^{116}$ 120-bit and $2^{232}$ 240-bit words for QARMA-64 and QARMA-128, respectively), our attack can break up to 11 rounds of QARMA. To the best of our knowledge, these are currently the best results on QARMA with respect to the number of attacked rounds.

Breaking the Circuit-Size Barrier in Secret Sharing

Wed, 04/11/2018 - 17:11
We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for $n$ parties is associated with a monotone function $\mathsf F:\{0,1\}^n\to\{0,1\}$. In such a scheme, a dealer distributes shares of a secret $s$ among $n$ parties. Any subset of parties $T \subseteq [n]$ should be able to put together their shares and reconstruct the secret $s$ if $\mathsf F(T)=1$, and should have no information about $s$ if $\mathsf F(T)=0$. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions $\mathsf F$. There is a large gap between the lower and upper bounds for secret sharing: the best known scheme for general $\mathsf F$ has shares of size $2^{n-o(n)}$, while the best lower bound is $\Omega(n^2/\log n)$. Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for $\mathsf F$. This has led several researchers to suggest the existence of a {\em representation size barrier}, which implies that the right answer is closer to the upper bound, namely $2^{n-o(n)}$. In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size $2^{0.994n}$ and a linear secret sharing scheme for any access structure with shares of size $2^{0.999n}$. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size $2^{\tilde{O}(\sqrt{n})}$ for $2^{{n\choose n/2}}$ monotone access structures, out of a total of $2^{{n\choose n/2}\cdot (1+O(\log n/n))}$ of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
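
For intuition, here is the folklore scheme whose share size grows with the representation of $\mathsf F$ (here, the list of minimal authorized sets): one independent additive sharing of the secret per minimal set. This is exactly the kind of construction the representation size barrier refers to; the paper's scheme is entirely different. Python sketch, with parameters of our choosing:

    import secrets

    Q = 2**61 - 1                      # prime modulus; shares live in Z_Q

    def share(secret, minimal_sets, n):
        # One additive sharing of the secret per minimal authorized set.
        shares = {i: {} for i in range(n)}
        for B in map(tuple, map(sorted, minimal_sets)):
            parts = [secrets.randbelow(Q) for _ in range(len(B) - 1)]
            parts.append((secret - sum(parts)) % Q)
            for party, part in zip(B, parts):
                shares[party][B] = part
        return shares

    def reconstruct(T, shares, minimal_sets):
        for B in map(tuple, map(sorted, minimal_sets)):
            if set(B) <= set(T):       # T contains a minimal set: F(T) = 1
                return sum(shares[p][B] for p in B) % Q
        return None                    # F(T) = 0: these shares reveal nothing

    # Access structure on 3 parties: {0,1} or {1,2} can reconstruct.
    sh = share(42, [{0, 1}, {1, 2}], 3)
    assert reconstruct([0, 1], sh, [{0, 1}, {1, 2}]) == 42
    assert reconstruct([0, 2], sh, [{0, 1}, {1, 2}]) is None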

The Discrete-Logarithm Problem with Preprocessing

Tue, 04/10/2018 - 17:00
This paper studies discrete-log algorithms that use preprocessing. In our model, an adversary may use a very large amount of precomputation to produce an "advice" string about a specific group (e.g., NIST P-256). In a subsequent online phase, the adversary's task is to use the preprocessed advice to quickly compute discrete logarithms in the group. Motivated by surprising recent preprocessing attacks on the discrete-log problem, we study the power and limits of such algorithms. In particular, we focus on generic algorithms -- these are algorithms that operate in every cyclic group. We show that any generic discrete-log algorithm with preprocessing that uses an $S$-bit advice string, runs in online time $T$, and succeeds with probability $\epsilon$, in a group of prime order $N$, must satisfy $ST^2 = \tilde{\Omega}(\epsilon N)$. Our lower bound, which is tight up to logarithmic factors, uses a synthesis of incompressibility techniques and classic methods for generic-group lower bounds. We apply our techniques to prove related lower bounds for the CDH, DDH, and multiple-discrete-log problems. Finally, we demonstrate two new generic preprocessing attacks: one for the multiple-discrete-log problem and one for certain decisional-type problems in groups. This latter result demonstrates that, for generic algorithms with preprocessing, distinguishing tuples of the form $(g, g^x, g^{(x^2)})$ from random is much easier than the discrete-log problem.
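
A toy Python sketch of the random-walk preprocessing attack that (roughly) meets this trade-off: offline, store the discrete logs of $S$ distinguished walk endpoints; online, walk from a re-randomized target until hitting a stored endpoint. Stored trails cover about $ST$ points, so an online walk of length $T$ succeeds with probability on the order of $ST^2/N$. The group and all parameters below are toys of our choosing.

    import hashlib, secrets

    P = 100003                 # small prime, demo only
    G = 2
    N = P - 1                  # exponent arithmetic mod N, since G^N = 1
    D = 6                      # distinguished: low D bits of the point are 0

    def step_exp(y: int) -> int:
        # Pseudo-random exponent driving the walk y -> y * G^step_exp(y).
        h = hashlib.sha256(y.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:4], "big") % N

    def walk(y: int, e: int, limit=1 << (D + 4)):
        # Walk to a distinguished point, tracking the accumulated exponent e.
        for _ in range(limit):
            if y % (1 << D) == 0:
                return y, e
            a = step_exp(y)
            y, e = (y * pow(G, a, P)) % P, (e + a) % N
        return None                    # abort rare overlong trails

    # Offline phase: the S-entry advice string (endpoint -> known dlog).
    advice = {}
    while len(advice) < 64:
        e0 = secrets.randbelow(N)
        end = walk(pow(G, e0, P), e0)  # invariant: point = G^e
        if end:
            advice[end[0]] = end[1]

    # Online phase: recover x from h = G^x using only the advice table.
    x = secrets.randbelow(N)
    h = pow(G, x, P)
    while True:
        r = secrets.randbelow(N)
        end = walk((h * pow(G, r, P)) % P, r)   # invariant: point = h * G^e
        if end and end[0] in advice:
            x_rec = (advice[end[0]] - end[1]) % N   # h * G^e = G^s
            if pow(G, x_rec, P) == h:
                break
    assert x_rec == x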

Differential Cryptanalysis of Round-Reduced Sparx-64/128

Tue, 04/10/2018 - 12:35
Sparx is a family of ARX-based block ciphers designed according to the long-trail strategy (LTS), both of which were introduced by Dinu et al. at ASIACRYPT'16. Similar to the wide-trail strategy, the LTS allows provable upper bounds on the length of differential characteristics and linear paths, which makes the cipher a highly interesting target for third-party cryptanalysis. However, the only third-party cryptanalysis of Sparx-64/128 to date was given by Abdelkhalek et al. at AFRICACRYPT'17, who proposed impossible-differential attacks on 15 and 16 (out of 24) rounds. In this paper, we present chosen-ciphertext differential attacks on 16 rounds of Sparx-64/128. First, we show a truncated-differential analysis that requires $2^{32}$ chosen ciphertexts and approximately $2^{93}$ encryptions. Second, we illustrate the effectiveness of boomerangs on Sparx with a rectangle attack that requires approximately $2^{59.6}$ chosen ciphertexts and about $2^{122.2}$ encryption equivalents. Finally, we also consider a yoyo attack on 16 rounds that, however, requires the full codebook and approximately $2^{126}$ encryption equivalents.
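
For readers unfamiliar with ARX, a Speck-32-style round in Python; Sparx-64 likewise works on 16-bit words, but the rotation constants and structure below are Speck's, shown only to illustrate the Addition-Rotation-XOR pattern, not Sparx's exact ARX-box.

    MASK = 0xFFFF                           # 16-bit words

    def speck_like_round(x, y, k):
        x = ((x >> 7) | (x << 9)) & MASK    # Rotation: rotate x right by 7
        x = (x + y) & MASK                  # Addition modulo 2^16
        x ^= k                              # XOR with the round key
        y = ((y << 2) | (y >> 14)) & MASK   # Rotation: rotate y left by 2
        y ^= x                              # XOR to mix the halves
        return x, y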

Post-Quantum Zero-Knowledge Proofs for Accumulators with Applications to Ring Signatures from Symmetric-Key Primitives

Tue, 04/10/2018 - 12:10
In this paper we address the construction of privacy-friendly cryptographic primitives for the post-quantum era, in particular accumulators with zero-knowledge membership proofs and ring signatures. This is an important topic, as it helps to protect the privacy of users in online authentication or in emerging technologies such as cryptocurrencies. Recently, we have seen the first such constructions, mostly based on assumptions related to codes and lattices. We, however, ask whether it is possible to construct such primitives without relying on structured hardness assumptions, but solely on symmetric-key primitives such as hash functions or block ciphers. This is interesting because the resistance of the latter primitives to quantum attacks is quite well understood. In doing so, we choose a modular approach and first construct an accumulator (with one-way domain) that allows one to efficiently prove knowledge of (a pre-image of) an accumulated value in zero-knowledge. We take care that our construction can be instantiated solely from symmetric-key primitives and that our proofs are of sublinear size. The latter is non-trivial to achieve in the symmetric setting due to the absence of the algebraic structures that are typically used in other settings to obtain such efficiency gains. For efficient instantiations of our proof system, we rely on recent results for constructing efficient non-interactive zero-knowledge proofs for general circuits. Based on this building block, we then show how to construct logarithmic-size ring signatures solely from symmetric-key primitives. As constructing more advanced primitives only from symmetric-key primitives is a very recent field, we discuss some interesting open problems and future research directions. Finally, we stress that our work also indirectly impacts other fields: for the first time it raises the requirement for collision-resistant hash functions with particularly low AND count.
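
To fix ideas, a Python sketch of a hash-based (Merkle-tree) accumulator with membership witnesses, the kind of symmetric-key building block such constructions start from; the zero-knowledge layer on top, which is the paper's actual contribution, is omitted here.

    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def accumulate(leaves):
        # Build a Merkle tree; the root is the accumulator value.
        level = [H(x) for x in leaves]
        tree = [level]
        while len(level) > 1:
            level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
            tree.append(level)
        return tree

    def witness(tree, index):
        # Membership witness: one sibling hash per level (authentication path).
        path = []
        for level in tree[:-1]:
            path.append((level[index ^ 1], index & 1))
            index //= 2
        return path

    def verify(root, leaf, path):
        node = H(leaf)
        for sibling, node_is_right in path:
            node = H(sibling, node) if node_is_right else H(node, sibling)
        return node == root

    leaves = [bytes([i]) * 4 for i in range(8)]     # 8 accumulated values
    tree = accumulate(leaves)
    root = tree[-1][0]
    assert verify(root, leaves[3], witness(tree, 3))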

More Efficient Commitments from Structured Lattice Assumptions

Tue, 04/10/2018 - 02:51
We present a practical construction of an additively homomorphic commitment scheme based on structured lattice assumptions, together with a zero-knowledge proof of opening knowledge. Our scheme is a design improvement over the previous work of Benhamouda et al. in that it is not restricted to being statistically binding. While it is still possible to instantiate our scheme to be either statistically binding or statistically hiding, it is most efficient when both hiding and binding properties are only computational. This results in approximately a factor of 4 reduction in the size of the proof and a factor of 6 reduction in the size of the commitment over the aforementioned scheme.
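
For intuition, a toy numpy sketch of an additively homomorphic commitment of the shape Com(m; r) = A1*r + A2*m mod q over unstructured integer matrices; the paper's scheme uses structured (ring) lattices and carefully chosen parameters, while the numbers and names below are illustrative only and not a secure instantiation.

    import numpy as np

    rng = np.random.default_rng(0)
    Q, N, K, MSG = 3329, 64, 64, 16     # toy parameters, not a secure choice

    A1 = rng.integers(0, Q, (N, K))     # public matrices (the commitment key)
    A2 = rng.integers(0, Q, (N, MSG))

    def commit(m, r):
        # Com(m; r) = A1*r + A2*m mod Q -- additively homomorphic by linearity.
        return (A1 @ r + A2 @ m) % Q

    m1, m2 = rng.integers(0, Q, MSG), rng.integers(0, Q, MSG)
    r1, r2 = rng.integers(0, 2, K), rng.integers(0, 2, K)   # small randomness

    lhs = (commit(m1, r1) + commit(m2, r2)) % Q
    rhs = commit((m1 + m2) % Q, r1 + r2)    # opens with the summed randomness
    assert np.array_equal(lhs, rhs)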

Estimate all the {LWE, NTRU} schemes!

Mon, 04/09/2018 - 23:02
We consider all LWE- and NTRU-based encryption, key encapsulation, and digital signature schemes proposed for standardisation as part of the Post-Quantum Cryptography process run by the US National Institute of Standards and Technology (NIST). In particular, we investigate the impact that different estimates for the asymptotic runtime of (block-wise) lattice reduction have on the predicted security of these schemes. Relying on the ``LWE estimator'' of Albrecht et al., we estimate the cost of running primal and dual lattice attacks against every LWE-based scheme, using every cost model proposed as part of a submission. Furthermore, we estimate the security of the proposed NTRU-based schemes against the primal attack under all cost models for lattice reduction.
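
To illustrate why the choice of cost model matters, a small Python comparison of three standard lattice-reduction cost models at a given BKZ block size beta: the "core-SVP" sieve exponents 0.292*beta (classical) and 0.265*beta (quantum), and one widely used enumeration fit. The submissions surveyed here propose many more variants; the block size below is an arbitrary example of ours.

    from math import log2

    COST_MODELS = {
        "classical sieve (0.292 beta)": lambda b: 0.292 * b,
        "quantum sieve (0.265 beta)":   lambda b: 0.265 * b,
        "enumeration fit":              lambda b: 0.187 * b * log2(b)
                                                  - 1.019 * b + 16.1,
    }

    beta = 400                          # block size required by some attack
    for name, cost in COST_MODELS.items():
        print(f"{name:30s} -> 2^{cost(beta):.1f} operations")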

Time-Based Direct Revocable Ciphertext-Policy Attribute-Based Encryption with Short Revocation List

Mon, 04/09/2018 - 23:02
In this paper, we propose an efficient revocable Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme. We build on the direct revocation approach, embedding the revocation list into the ciphertext. However, since the revocation list grows over time, we address this by proposing a secret-key time-validation technique: user keys expire on a set date, so the revocation list only needs to include keys revoked before their intended expiry date (e.g. keys that were stolen before expiry). These keys can be removed from the revocation list after their expiry date in order to keep the revocation list short, as they can no longer be used to decrypt ciphertexts generated after their expiry time. This technique is derived from the Hierarchical Identity-Based Encryption (HIBE) mechanism, and thus time periods form a hierarchy: year, month, day. A user whose key is valid for a whole year can decrypt any ciphertext associated with the time period of any month or any day within that year. By using this technique, the size of the public parameters and of the user secret key can be greatly reduced. A bonus advantage of this technique is the support of discontinuous user validity (e.g. for unpaid leave).
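
A minimal Python sketch of the hierarchical time-validity idea (our illustration of the logic; the actual scheme realizes it with HIBE-style keys): a key for a prefix of the time hierarchy covers every ciphertext tagged with an extension of that prefix, and direct revocation filters out listed keys.

    def covers(key_period, ct_period):
        """Periods are tuples like (2018,), (2018, 4), (2018, 4, 9)."""
        return ct_period[:len(key_period)] == key_period

    assert covers((2018,), (2018, 4, 9))         # year key covers a day's ct
    assert not covers((2018, 4), (2018, 5, 1))   # April key fails in May

    def can_decrypt(user_key, ciphertext, revocation_list):
        # Direct revocation: the ciphertext embeds the revocation list; a key
        # works only if it is unrevoked AND its validity covers the time tag.
        return (user_key["id"] not in revocation_list
                and covers(user_key["period"], ciphertext["period"]))

    alice = {"id": "alice", "period": (2018,)}
    ct = {"period": (2018, 4, 9)}
    assert can_decrypt(alice, ct, revocation_list=set())
    assert not can_decrypt(alice, ct, revocation_list={"alice"})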

Symbolic Side-Channel Analysis for Probabilistic Programs

Mon, 04/09/2018 - 22:32
In this paper we describe symbolic side-channel analysis techniques for detecting and quantifying information leakage, measured in terms of Shannon entropy and min-entropy. Measuring the precise leakage is challenging due to the randomness and noise often present in program executions and side-channel observations. We account for this noise by introducing additional (symbolic) program inputs which are interpreted probabilistically, using symbolic execution with parameterized model counting. We also explore an approximate sampling approach for increased scalability. In contrast to typical Monte Carlo techniques, our approach works by sampling symbolic paths, each representing multiple concrete paths, and uses pruning to accelerate computation and guarantee convergence to the optimal results. The key novelty of our approach is to provide bounds on the leakage that provably under- and over-approximate the real leakage. We implemented the techniques in the Symbolic PathFinder tool and demonstrate them on Java programs.
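
As a primer on the two measures: for a deterministic side channel and a uniform secret, the Shannon leakage equals the entropy of the observation, and the min-entropy leakage equals the log of the number of distinct observations (Smith, 2009). A small Python illustration with a Hamming-weight channel of our choosing:

    from collections import Counter
    from math import log2

    def shannon_entropy(values):
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in Counter(values).values())

    # Toy deterministic side channel: the observation is the Hamming weight
    # of a uniform 4-bit secret (e.g. a loop iterating once per set bit).
    secret_values = range(16)
    obs = [bin(s).count("1") for s in secret_values]

    print(f"Shannon leakage:     {shannon_entropy(obs):.3f} bits")
    print(f"min-entropy leakage: {log2(len(set(obs))):.3f} bits")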

Fuzzy Password Authenticated Key Exchange

Mon, 04/09/2018 - 09:53
Consider key agreement by two parties who start out knowing a common secret (which we refer to as “pass-string”, a generalization of “password”), but face two complications: (1) the pass-string may come from a low-entropy distribution, and (2) the two parties’ copies of the pass-string may have some noise, and thus not match exactly. We provide the first efficient and general solutions to this problem that enable, for example, key agreement based on commonly used biometrics such as iris scans. The problem of key agreement with each of these complications individually has been well studied in the literature. Key agreement from low-entropy shared pass-strings is achieved by password-authenticated key exchange (PAKE), and key agreement from noisy but high-entropy shared pass-strings is achieved by information-reconciliation protocols, as long as the two secrets are “close enough.” However, the problem of key agreement from noisy low-entropy pass-strings has never been studied. We introduce (universally composable) fuzzy password-authenticated key exchange (fPAKE), which solves exactly this problem. fPAKE does not have any entropy requirements for the pass-strings, and enables secure key agreement as long as the two pass-strings are “close” for some notion of closeness. We also give two constructions. The first achieves our fPAKE definition for any (efficiently computable) notion of closeness, including those that could not be handled before even in the high-entropy setting. It uses Yao’s garbled circuits in a way that is only about twice as costly as their use against semi-honest adversaries, but that guarantees security against malicious adversaries. The second construction is more efficient, but achieves our fPAKE definition only for pass-strings with low Hamming distance. It builds on very simple primitives: robust secret sharing and PAKE.

Improved High-Order Conversion From Boolean to Arithmetic Masking

Mon, 04/09/2018 - 09:17
Masking is a very common countermeasure against side-channel attacks. When combining Boolean and arithmetic masking, one must be able to convert between the two types of masking, and the conversion algorithm itself must be secure against side-channel attacks. An efficient high-order Boolean-to-arithmetic conversion scheme was recently described at CHES 2017, with complexity independent of the register size. In this paper we describe a simplified variant that requires fewer mask refreshes, still with a proof of security in the ISW probing model. In practical implementations, our variant is roughly 25% faster.
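
As background, the classic first-order Boolean-to-arithmetic conversion of Goubin (CHES 2001), whose high-order generalization the CHES 2017 scheme and this paper's variant build on; Python sketch over 32-bit words.

    import secrets

    K = 32
    MASK = (1 << K) - 1

    def goubin_b2a(x_prime, r):
        """Given a Boolean masking x = x_prime XOR r, return A such that
        x = (A + r) mod 2^K. First-order secure: thanks to the fresh mask
        gamma, every intermediate value is independent of the secret x."""
        gamma = secrets.randbits(K)
        t = x_prime ^ gamma
        t = (t - gamma) & MASK
        t ^= x_prime
        gamma ^= r
        a = x_prime ^ gamma
        a = (a - gamma) & MASK
        a ^= t
        return a

    x = secrets.randbits(K)             # the secret
    r = secrets.randbits(K)             # the mask
    x_prime = x ^ r                     # Boolean masking of x
    A = goubin_b2a(x_prime, r)
    assert (A + r) & MASK == x          # arithmetic masking of x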

A Note On Groth-Ostrovsky-Sahai Non-Interactive Zero-Knowledge Proof System

Mon, 04/09/2018 - 09:17
In 2006, Groth, Ostrovsky and Sahai designed a non-interactive zero-knowledge (NIZK) proof system [updated version: J. ACM, 59(3), 1-35, 2012] for proving that a plaintext is zero or one, using bilinear groups of composite order. Based on this system, they presented the first perfect NIZK argument system for any NP language and the first universally composable secure NIZK argument for any NP language in the presence of a dynamic/adaptive adversary, resolving a central open problem concerning NIZK protocols. In this note, we remark that in their proof system the prover does not have to invoke the trapdoor key to generate witnesses. This mechanism differs dramatically from previous works such as the Blum-Feldman-Micali and the Blum-De Santis-Micali-Persiano proof systems. We stress that the prover can cheat the verifier into accepting a false claim if the trapdoor key is available to him.

Verifier Non-Locality in Interactive Proofs

Mon, 04/09/2018 - 09:15
In multi-prover interactive proofs, the verifier interrogates the provers and attempts to steal their knowledge. Other than that, the verifier's role has not been studied. Augmenting the provers with non-local resources results in classes of languages that may no longer equal NEXP. We have discovered that the verifier plays a much more important role than previously thought: simply put, the verifier has the capability of intrinsically providing non-local resources for the provers. Therefore, standard MIPs may already contain protocols equivalent to one in which the provers are augmented non-locally. Existing MIPs' proofs of soundness implicitly depend on the fact that the verifier is not a non-local resource provider. The verifier's non-locality is thus a new, unused tool and liability for protocol design and analysis, and great care should have been taken when claiming that ZKMIP = MIP and MIP = NEXP. For the former case, we show specific issues with existing protocols and revisit the proof of this statement. For the latter case, we exhibit doubts that we do not fully resolve. To do this, we define a new model of multi-prover interactive proofs which we call ``correlational confinement form'' (CCF-MIP).

Multi-power Post-quantum RSA

Mon, 04/09/2018 - 09:15
Special-purpose factoring algorithms have discouraged the adoption of multi-power RSA, even in a post-quantum setting. We revisit the known attacks and find that a general recommendation against repeated factors is unwarranted. We find that one-terabyte RSA keys of the form $n = p_1^2p_2^3p_3^5p_4^7\cdots p_i^{\pi_i}\cdots p_{20044}^{225287}$ are competitive with one-terabyte RSA keys of the form $n = p_1p_2p_3p_4\cdots p_i\cdots p_{2^{31}}$. Prime generation can be made a factor of 100000 faster, at a loss of at least $1$ but not more than $17$ bits of security against known attacks. The range depends on the relative cost of bit and qubit operations, under the assumption that qubit operations cost $2^c$ bit operations for some constant $c$.
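
A toy Python/SymPy sketch of the proposed modulus shape, reading $\pi_i$ as the $i$-th prime, which is consistent with the exponent pattern $2, 3, 5, 7, \ldots$ above; the sizes here are tiny and purely illustrative.

    from sympy import prime, randprime

    def multipower_modulus(num_primes=4, prime_bits=16):
        # n = p_1^2 * p_2^3 * p_3^5 * p_4^7 * ... with the i-th prime as
        # the i-th exponent; toy version with tiny random primes.
        n = 1
        for i in range(1, num_primes + 1):
            p = randprime(2**(prime_bits - 1), 2**prime_bits)
            n *= p ** prime(i)
        return n

    n = multipower_modulus()
    print(f"modulus size: {n.bit_length()} bits")  # about (2+3+5+7)*16 = 272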
