
Probabilistic Termination and Composability of Cryptographic Protocols

Thu, 01/11/2018 - 12:47
When analyzing the round complexity of multi-party protocols, one often overlooks the fact that underlying resources, such as a broadcast channel, can by themselves be expensive to implement. For example, it is well known that it is impossible to implement a broadcast channel by a (deterministic) protocol in a sub-linear (in the number of corrupted parties) number of rounds. The seminal works of Rabin and Ben-Or from the early 80's demonstrated that limitations such as this can be overcome by using randomization and allowing parties to terminate at different rounds, igniting the study of protocols over point-to-point channels with probabilistic termination and expected constant round complexity. However, absent a rigorous simulation-based definition, the suggested protocols are proven secure in a property-based manner or via ad hoc simulation-based frameworks, therefore guaranteeing limited, if any, composability. In this work, we put forth the first simulation-based treatment of multi-party cryptographic protocols with probabilistic termination. We define secure multi-party computation (MPC) with probabilistic termination in the UC framework and prove a universal composition theorem for probabilistic-termination protocols. Our theorem makes it possible to compile a protocol that uses deterministic-termination hybrids into a protocol that uses expected-constant-round protocols for emulating these hybrids, preserving the expected round complexity of the calling protocol. We showcase our definitions and compiler by providing the first composable protocols (with simulation-based security proofs) for the following primitives, relying on point-to-point channels: (1) expected-constant-round perfect Byzantine agreement, (2) expected-constant-round perfect parallel broadcast, and (3) perfectly secure MPC with round complexity independent of the number of parties.
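
To see why allowing parties to terminate at different rounds buys expected-constant round complexity, consider the toy Python simulation below (our illustration, not from the paper): a protocol terminates as soon as a hypothetical shared coin lands heads, so the round count is geometrically distributed with a constant mean even though no deterministic round bound exists.

```python
import random

def expected_rounds(success_prob=0.5, trials=100_000):
    """Toy model of probabilistic termination: each round, a (hypothetical)
    shared coin lets all parties terminate with probability success_prob.
    The round count is geometric, so its mean is the constant 1/success_prob."""
    total = 0
    for _ in range(trials):
        rounds = 1
        while random.random() >= success_prob:
            rounds += 1
        total += rounds
    return total / trials

print(expected_rounds())  # ~2.0 expected rounds, despite an unbounded worst case
```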

High Order Masking of Look-up Tables with Common Shares

Wed, 01/10/2018 - 13:58
Masking is an effective countermeasure against side-channel attacks. In this paper, we improve the efficiency of the high-order masking of look-up tables countermeasure introduced at Eurocrypt 2014, based on a combination of three techniques, and still with a proof of security in the Ishai-Sahai-Wagner (ISW) probing model. The first technique consists of proving security under the stronger t-SNI definition, which enables the use of n=t+1 shares instead of n=2t+1 against t-th order attacks. The second technique consists of progressively incrementing the number of shares within the countermeasure, from a single share to n, thereby reducing the complexity of the countermeasure. The third technique consists of adapting the common-shares approach introduced by Coron et al. at CHES 2016, so that half of a randomized look-up table can be pre-computed for multiple S-boxes. When combined, our three techniques lead to a factor-10.7 improvement in efficiency, asymptotically for a large number of shares n. For a practical AES implementation with a reasonable number of shares, we get a 4.8x speed-up compared to the initial countermeasure.
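
For readers unfamiliar with masking, the sketch below (a minimal illustration, not the paper's table-recomputation countermeasure) shows the basic n-share Boolean masking the t-SNI analysis is about: a sensitive byte is split into n random XOR shares, any n-1 of which are uniformly distributed and hence leak nothing.

```python
import secrets

def mask(x, n):
    """Split byte x into n Boolean (XOR) shares; any n-1 shares are uniform."""
    shares = [secrets.randbelow(256) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unmask(shares):
    """Recombine: the XOR of all shares is the masked value."""
    x = 0
    for s in shares:
        x ^= s
    return x

assert unmask(mask(0xA7, n=4)) == 0xA7  # n = t+1 shares against t probes (t-SNI)
```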

Scalable, transparent, and post-quantum secure computational integrity

Wed, 01/10/2018 - 11:50
Human dignity demands that personal information, like medical and forensic data, be hidden from the public. But veils of secrecy designed to preserve privacy may also be abused to cover up lies and deceit by parties entrusted with data, unjustly harming citizens and eroding trust in central institutions. Zero-knowledge (ZK) proof systems are an ingenious cryptographic solution to the tension between the ideals of personal privacy and institutional integrity, enforcing the latter in a way that does not compromise the former. Public trust demands transparency from ZK systems, meaning that they be set up with no reliance on any trusted party and have no trapdoors that could be exploited by powerful parties to bear false witness. For ZK systems to be used with Big Data, it is imperative that the public verification process scale sublinearly in data size. Transparent ZK proofs that can be verified exponentially faster than data size were first described in the 1990s, but early constructions were impractical, and no ZK system realized thus far in code (including that used by crypto-currencies such as Zcash) has achieved both transparency and exponential verification speedup, simultaneously, for general computations. Here we report the first realization of a transparent ZK system (ZK-STARK) in which verification scales exponentially faster than database size; moreover, this exponential speedup in verification is observed concretely for meaningful and sequential computations, described next. Our system uses several recent advances on interactive oracle proofs (IOPs), such as a "fast" (linear-time) IOP system for error-correcting codes. Our proof-of-concept system allows the Police to prove to the public that the DNA profile of a Presidential Candidate does not appear in the forensic DNA profile database maintained by the Police. The proof, which is generated by the Police, relies on no external trusted party and reveals no further information about the contents of the database or about the candidate's profile; in particular, no DNA information is disclosed to any party outside the Police. The proof is shorter than the size of the DNA database, and it is verified faster than the time needed to examine that database naively.
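
One concrete ingredient behind such transparency is that IOP-based systems commit to proof oracles with plain hashing rather than a trusted setup; the minimal Merkle-root sketch below (our illustration, not the paper's code) shows a commitment anyone can recompute and that holds no trapdoor at all.

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash-based commitment to a list of byte strings: publicly recomputable,
    no trusted party, no trapdoor -- the 'transparent' setup in miniature."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"proof oracle symbol %d" % i for i in range(8)]).hex())
```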

Efficient Batch Zero-Knowledge Arguments for Low Degree Polynomials

Wed, 01/10/2018 - 11:48
The work of Bootle et al. (EUROCRYPT 2016) constructs an extremely efficient zero-knowledge argument for arithmetic circuit satisfiability in the discrete logarithm setting. However, the argument does not treat relations involving commitments, and furthermore, for simple polynomial relations, the complex machinery employed is unnecessary. In this work, we give a framework for expressing simple relations between commitments and field elements, and present a zero-knowledge argument that is considerably more efficient than that of Bootle et al. in the case where the polynomials in the relation have low degree. Our method also directly yields a batch protocol, which allows many copies of the same relation to be proved and verified more efficiently in a single argument. We instantiate our protocol with concrete polynomial relations to construct zero-knowledge arguments for membership proofs, polynomial evaluation proofs, and range proofs. Our work can be seen as a unified explanation of the underlying ideas of these protocols. In some of these instantiations we also achieve better efficiency than the state of the art.
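
As a hedged illustration of the batching idea (with toy parameters we chose; this is not the paper's protocol), Pedersen commitments are additively homomorphic, so a single group equation can check a random linear combination of many claimed openings at once:

```python
import secrets

# Toy Schnorr group: g and h generate the order-11 subgroup of Z_23^*; a real
# system uses ~256-bit group elements with log_g(h) unknown to the prover.
p, q, g, h = 23, 11, 2, 3

def commit(m, r):
    """Pedersen commitment g^m * h^r mod p."""
    return pow(g, m, p) * pow(h, r, p) % p

def batch_check(coms, msgs, rands, weights):
    """One equation checks all openings:
    prod C_i^{w_i} == g^{sum w_i m_i} * h^{sum w_i r_i}."""
    lhs = 1
    for c, w in zip(coms, weights):
        lhs = lhs * pow(c, w, p) % p
    return lhs == commit(sum(w * m for w, m in zip(weights, msgs)) % q,
                         sum(w * r for w, r in zip(weights, rands)) % q)

msgs = [3, 7, 5]
rands = [secrets.randbelow(q) for _ in msgs]
coms = [commit(m, r) for m, r in zip(msgs, rands)]
assert batch_check(coms, msgs, rands, [secrets.randbelow(q) for _ in msgs])
```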

Fast Lattice Basis Reduction Suitable for Massive Parallelization and Its Application to the Shortest Vector Problem

Wed, 01/10/2018 - 11:42
The hardness of the shortest vector problem (SVP) for lattices is a fundamental assumption underpinning the security of many lattice-based cryptosystems, and it is therefore important to evaluate its difficulty. Recent advances in studying the hardness of problems in large-scale lattice computing have pointed to the need to study designs and methodologies that exploit the performance of massively parallel computing environments. In this paper, we propose a lattice basis reduction algorithm suitable for massive parallelization. Our parallelization strategy is an extension of the Fukase-Kashiwabara algorithm (J. Information Processing, Vol. 23, No. 1, 2015). In our algorithm, given a lattice basis as input, variants of the lattice basis are generated, and each process then reduces its own basis; the processes cooperate and share auxiliary information with one another to accelerate lattice basis reduction. In addition, we propose a new strategy, based on our evaluation function of a lattice basis, for decreasing the sum of the squared lengths of the orthogonalized basis vectors. We applied our algorithm to problem instances from the SVP Challenge. We solved a 150-dimensional problem instance in about 394 days using large clusters, and we also solved problem instances of dimensions 134, 138, 140, 142, 144, 146, and 148. Since the previous world record was dimension 132, these results demonstrate the effectiveness of our proposal.
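
The evaluation function mentioned above is built on the Gram-Schmidt profile of the basis; the short sketch below (our illustration of this standard quantity, assuming row-vector bases) computes the sum of squared lengths of the orthogonalized vectors that the new strategy tries to decrease.

```python
import numpy as np

def gso_squared_norms(B):
    """Squared lengths ||b_i*||^2 of the Gram-Schmidt orthogonalization
    of the rows of the basis matrix B."""
    B = np.asarray(B, dtype=float)
    ortho, norms = [], []
    for b in B:
        v = b.copy()
        for u in ortho:
            v -= (np.dot(b, u) / np.dot(u, u)) * u
        ortho.append(v)
        norms.append(float(np.dot(v, v)))
    return norms

def basis_score(B):
    """Sum of squared GSO lengths: smaller generally means a better-reduced basis."""
    return sum(gso_squared_norms(B))

print(basis_score([[4, 1, 0], [1, 3, 1], [0, 1, 5]]))
```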

Efficient Adaptively Secure Zero-knowledge from Garbled Circuits

Wed, 01/10/2018 - 11:41
Zero-knowledge (ZK) protocols are undoubtedly among the central primitives in cryptography, lending their power to numerous applications such as secure computation, voting, auctions, and anonymous credentials, to name a few. The study of efficient ZK protocols for non-algebraic statements has seen rapid progress in recent times, relying on techniques from secure computation. The primary contribution of this work lies in constructing efficient UC-secure constant-round ZK protocols from garbled circuits that are secure against adaptive corruptions, with communication linear in the size of the statement. We begin by showing that the practically efficient ZK protocol of Jawurek et al. (CCS 2013) is adaptively secure when the underlying oblivious transfer (OT) satisfies a mild adaptive security guarantee. We gain adaptive security with little to no overhead over the static case. A conditional verification technique is then used to obtain a three-round adaptively secure zero-knowledge argument in the non-programmable random oracle model (NPROM). Drawing motivation from state-of-the-art non-interactive secure computation protocols and leveraging specifics of the ZK functionality, we then show a two-round protocol that achieves static security. It is a proof, whereas most known efficient ZK protocols, including our three-round protocol, are only arguments.

Improved (Almost) Tightly-Secure Structure-Preserving Signatures

Wed, 01/10/2018 - 11:40
Structure-preserving signatures (SPS) allow the signatures and the messages signed to be further encrypted while retaining the ability to be proven valid under zero-knowledge. In particular, SPS are tailored to have structure suitable for Groth-Sahai NIZK proofs. More precisely, the messages, signatures, and verification keys are required to be elements of groups that support efficient bilinear pairings (bilinear groups), and signature verification consists of evaluating one or more pairing-product equations. Since Groth-Sahai NIZK proofs can prove, in zero-knowledge, the validity of such pairing-product equations, this leads to interesting applications such as blind signatures, group signatures, traceable signatures, group encryption, and delegatable credential systems. In this paper, we further improve on the SPS scheme of Abe, Hofheinz, Nishimaki, Ohkubo and Pan (CRYPTO 2017) while maintaining only an $O(\lambda)$-factor security-reduction loss to the SXDH assumption. In particular, we compress the size of the signatures by almost 40% and reduce the number of pairing-product equations in the verifier from fifteen to seven. Recall that structure-preserving signatures are used in applications by encrypting the messages and/or the signatures; hence these optimizations are further amplified, as proving pairing-product equations in the Groth-Sahai NIZK system is costly. While our scheme uses an important novel technique introduced by Hofheinz (EuroCrypt 2017), namely structure-preserving adaptive partitioning, our approach to building the signature scheme is different, and this leads to the optimizations mentioned. We thus make progress on an open problem stated by Abe et al. (CRYPTO 2017): to design more compact SPS schemes with a smaller number of group elements.

New Approaches for Distinguishers and Attacks on round-reduced AES

Wed, 01/10/2018 - 10:35
At Eurocrypt 2017, the first secret-key distinguisher for 5-round AES was presented. Although it allows distinguishing a random permutation from an AES-like one, it seems (rather) hard to exploit such a distinguisher to mount a key-recovery attack better than brute force. In this paper, we propose new secret-key distinguishers for 4 and 5 rounds of AES that exploit properties which are independent of the secret key and of the details of the S-box. While the 4-round distinguisher exploits, in a different way, the same property presented at Eurocrypt 2017, the new 5-round ones are obtained by combining our new 4-round distinguisher with a modified version of a truncated-differential distinguisher. Whereas a "classical" truncated-differential distinguisher exploits the probability that a pair of texts satisfies a given differential trail or not, independently of the other pairs, our distinguishers work with sets of N >> 1 (related) pairs of texts. In particular, our new 5-round AES distinguishers exploit the fact that such sets of texts satisfy certain properties with a probability different from that of a random permutation. Even though these 5-round distinguishers have higher complexity than the one in the literature, one of them can be used as a starting point to set up the first key-recovery attack on 6-round AES that directly exploits a 5-round secret-key distinguisher. The goal of this paper is indeed to present and explore new approaches, showing that even a distinguisher like the one presented at Eurocrypt, believed to be hard to exploit, can be used to set up a key-recovery attack.
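
The skeleton below (our sketch; the input structure and byte positions are placeholders, not the paper's exact sets) shows the kind of statistic such distinguishers threshold: rather than testing one pair at a time, it counts, over a whole set of related texts, the ciphertext pairs that collide on chosen byte positions, to be compared against the random-permutation baseline.

```python
import secrets
from itertools import combinations

def zero_diff_pairs(encrypt, texts, positions):
    """Count ciphertext pairs equal on the given byte positions; the paper's
    distinguishers compare this count's distribution against random."""
    cts = [encrypt(t) for t in texts]
    return sum(1 for a, b in combinations(cts, 2)
               if all(a[i] == b[i] for i in positions))

# Baseline with a lazily-sampled random-permutation stand-in:
table = {}
def random_cipher(block):
    if block not in table:
        table[block] = secrets.token_bytes(16)
    return table[block]

texts = [secrets.token_bytes(16) for _ in range(256)]
print(zero_diff_pairs(random_cipher, texts, positions=(0, 5, 10, 15)))
```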

Equational Security Proofs of Oblivious Transfer Protocols

Tue, 01/09/2018 - 20:01
We exemplify and evaluate the use of the equational framework of Micciancio and Tessaro (ITCS 2013) by analyzing a number of concrete oblivious transfer (OT) protocols: a classic OT transformation to increase the message size, and the recent (so-called "simplest") OT protocol in the random oracle model of Chou and Orlandi (Latincrypt 2015), together with some simple variants. Our analysis uncovers subtle timing bugs or shortcomings in both protocols, or in the OT definition typically employed when using them. In the case of the OT length-extension transformation, we show that the protocol can be formally proved secure using a revised OT definition and a simple protocol modification. In the case of the "simplest" OT protocol, we show that it cannot be proved secure according to either the original or the revised OT definition, in the sense that for any candidate simulator (expressible in the equational framework) there is an environment that distinguishes the real from the ideal system.
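
For concreteness, here is a hedged sketch (ours) of the classic OT length-extension transformation discussed above: the base OT transfers one of two short keys, and the long messages travel as PRG-pad encryptions. The timing question the paper raises is precisely when the ciphertexts may be sent relative to the base OT, which this sketch leaves unspecified.

```python
import hashlib
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def stretch(key, n):
    """Toy PRG: SHA-256 in counter mode (illustrative, not a vetted PRG)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def sender(m0, m1):
    k0, k1 = secrets.token_bytes(16), secrets.token_bytes(16)
    # (k0, k1) are fed into a base OT on short strings; the ciphertexts
    # below extend it to messages of arbitrary length.
    return (k0, k1), (xor(m0, stretch(k0, len(m0))),
                      xor(m1, stretch(k1, len(m1))))

def receiver(k_b, b, c0, c1):
    c = (c0, c1)[b]
    return xor(c, stretch(k_b, len(c)))

(k0, k1), (c0, c1) = sender(b"long message zero", b"long message one!")
assert receiver(k1, 1, c0, c1) == b"long message one!"
```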

Improving Stateless Hash-Based Signatures

Tue, 01/09/2018 - 17:56
We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed up signing and reduce signature size; batch signing, to amortize signing time and reduce signature size when signing multiple messages at once; mask-less constructions, to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset-resilience problem, we show that SPHINCS' parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of $\approx1$ KB), shorter signatures ($\approx30$ KB instead of 41 KB), and faster signing and verification for the same security level as SPHINCS.
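
To make the HORS/PORS layer concrete, here is a toy few-time signature in the HORS style (our simplification with placeholder parameters, not PORS itself): the message hash selects k secret values out of t, and revealing them constitutes the signature.

```python
import hashlib
import secrets

T, K = 256, 16  # toy parameters: t secret values, k revealed per signature

def keygen():
    sk = [secrets.token_bytes(32) for _ in range(T)]
    pk = [hashlib.sha256(s).digest() for s in sk]
    return sk, pk

def indices(msg):
    """Derive k indices from the message hash (toy extraction: one byte each)."""
    h = hashlib.sha256(msg).digest()
    return [h[i] % T for i in range(K)]

def sign(sk, msg):
    return [(i, sk[i]) for i in indices(msg)]

def verify(pk, msg, sig):
    return all(i == j and hashlib.sha256(s).digest() == pk[i]
               for i, (j, s) in zip(indices(msg), sig))

sk, pk = keygen()
assert verify(pk, b"message", sign(sk, b"message"))
```

Each signature reveals k of the t secrets, which is why such schemes are only few-time and why the subset-resilience analysis above governs the parameter choices.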

State of the Art in Lightweight Symmetric Cryptography

Tue, 01/09/2018 - 11:26
Lightweight cryptography has been one of the "hot topics" in symmetric cryptography in recent years. A huge number of lightweight algorithms have been published, standardized, and/or used in commercial products. In this paper, we discuss the different implementation constraints that a "lightweight" algorithm is usually designed to satisfy in both the software and the hardware case. We also present an extensive survey of all lightweight symmetric primitives we are aware of, covering designs from the academic community, designs from government agencies, and proprietary algorithms which were reverse-engineered or leaked. Relevant national (NIST...) and international (ISO/IEC...) standards are listed. We identify several trends in the design of lightweight algorithms, such as designers' preference for ARX-based and bitsliced-S-box-based designs and for simpler key schedules. We also discuss more general trade-offs facing the authors of such algorithms and suggest a clearer distinction between two subsets of lightweight cryptography. The first, ultra-lightweight cryptography, deals with primitives fulfilling a unique purpose while satisfying specific and narrow constraints. The second, ubiquitous cryptography, encompasses more versatile algorithms, both in terms of functionality and in terms of implementation trade-offs.
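
As one example of the ARX style the survey highlights, the ChaCha quarter-round uses only 32-bit Addition, Rotation, and XOR, with no S-box tables at all:

```python
MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    """ChaCha's quarter-round: additions, rotations, XORs -- the ARX recipe."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 7539, section 2.1.1:
print([hex(x) for x in
       quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)])
```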

Practical Witness Encryption for Algebraic Languages Or How to Encrypt Under Groth-Sahai Proofs

Tue, 01/09/2018 - 10:59
Witness encryption (WE) is a recent powerful encryption paradigm in which a message is encrypted using the description of a hard problem (a word in an NP language), and anyone who knows a solution to this problem (a witness) can efficiently decrypt the ciphertext. Recent work has focused on constructing WE for NP-complete languages (and thus for all of NP). While this rich expressiveness allows flexibility with respect to applications, it makes existing instantiations impractical. It is therefore interesting to study practical variants of WE schemes for subsets of NP that are still expressive enough for many cryptographic applications. We show that such WE schemes can be generically constructed from smooth projective hash functions (SPHFs). In terms of concrete instantiations of SPHFs (and thus WE), we target languages of statements proven in the popular Groth-Sahai (GS) non-interactive witness-indistinguishable/zero-knowledge proof framework. This gives a novel way to encrypt: encryption is with respect to a GS proof, and efficient decryption can only be done by the respective prover. The resulting constructions are entirely practical. To illustrate our techniques, we apply them in the context of privacy-preserving exchange of information.
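
The SPHF mechanism admits a compact illustration for the simplest algebraic language, DDH tuples (our toy sketch with tiny parameters we chose, not the Groth-Sahai instantiation): the encryptor hashes the word under a secret hashing key and publishes only the projection key, while whoever knows the witness r recomputes the same hash from the projection key alone.

```python
import hashlib
import secrets

# Toy group: g1 = 2 and g2 = 3 both have prime order 11 in Z_23^*.
p, q, g1, g2 = 23, 11, 2, 3

def sample_word():
    r = secrets.randbelow(q)                     # the witness
    return (pow(g1, r, p), pow(g2, r, p)), r     # word in the DDH language

def pad(h, n):
    return hashlib.sha256(str(h).encode()).digest()[:n]

def encrypt(word, msg):
    k1, k2 = secrets.randbelow(q), secrets.randbelow(q)   # hashing key
    hp = pow(g1, k1, p) * pow(g2, k2, p) % p              # projection key
    u1, u2 = word
    h = pow(u1, k1, p) * pow(u2, k2, p) % p               # SPHF hash of the word
    return hp, bytes(a ^ b for a, b in zip(msg, pad(h, len(msg))))

def decrypt(hp, ct, r):
    h = pow(hp, r, p)        # projected hash: hp^r equals u1^k1 * u2^k2
    return bytes(a ^ b for a, b in zip(ct, pad(h, len(ct))))

word, r = sample_word()
hp, ct = encrypt(word, b"for the prover")
assert decrypt(hp, ct, r) == b"for the prover"
```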

Related Randomness Security for Public Key Encryption, Revisited

Tue, 01/09/2018 - 09:58
Motivated by the history of randomness failures in practical systems, Paterson, Schuldt, and Sibborn (PKC 2014) introduced the notion of related-randomness security for public-key encryption. In this paper, we first show an inherent limitation of this notion: if the family of related-randomness functions is sufficiently rich to express the encryption function of the considered scheme, then security cannot be achieved. This suggests that achieving security for function families capable of expressing more complex operations, such as those used in random number generation, might be difficult. The current constructions of related-randomness-secure encryption in the standard model furthermore reflect this; full security is only achieved for function families with a convenient algebraic structure. We additionally revisit the seemingly optimal random oracle model construction by Paterson et al. and highlight its limitations. To overcome this difficulty, we propose a new notion, which we denote related refreshable randomness security. This notion captures a scenario in which an adversary has limited time to attack a system before new entropy is added. More specifically, the number of encryption queries with related randomness the adversary can make before the randomness is refreshed is bounded, but the adversary is allowed to make an unbounded total number of queries. Furthermore, the adversary is allowed to influence how entropy is added to the system. In this setting, we construct an encryption scheme which remains secure in the standard model for arbitrary function families of size $2^p$ (where $p$ is polynomial in the security parameter) that satisfy certain collision-resistance and output-unpredictability properties. This captures a rich class of functions, which includes, as a special case, circuits of polynomial size. Our scheme makes use of a new construction of a (bounded) related-key-attack secure pseudorandom function, which in turn is based on a new flavor of the leftover hash lemma. These technical results might be of independent interest.

A note on the implementation of the Number Theoretic Transform

Tue, 01/09/2018 - 09:28
The Number Theoretic Transform (NTT) is the time-critical function required by cryptographic protocols based on the Ring Learning With Errors (RLWE) problem, a popular choice for post-quantum cryptography. Here we apply a simple methodology to convert the NTT and its inverse from a mathematically correct (but side-channel vulnerable) description to an efficient, constant-time, side-channel-resistant version.
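
For reference, the forward transform looks as follows (our sketch of the standard iterative radix-2 NTT, with toy parameters q = 17, n = 8, omega = 2 that we chose; the note's point is that the arithmetic inside the butterfly, not the loop structure, is where secret-dependent timing can creep in):

```python
def ntt(a, q, omega):
    """Iterative radix-2 NTT; len(a) must be a power of two and omega a
    primitive len(a)-th root of unity mod q.  All loop bounds and indices
    depend only on public parameters -- the timing hazards live in the
    modular reductions, which Python's % operator hides here."""
    a, n = list(a), len(a)
    j = 0                                   # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                      # butterfly passes
        w_step = pow(omega, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_step % q
        length <<= 1
    return a

print(ntt([1, 2, 3, 4, 0, 0, 0, 0], q=17, omega=2))  # 2 has order 8 mod 17
```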

An Analysis of Acceptance Policies For Blockchain Transactions

Tue, 01/09/2018 - 08:35
The standard acceptance policy for a cryptocurrency transaction at most exchanges is to wait until the transaction is placed in the blockchain and followed by a certain number of blocks. However, as noted by Sompolinsky and Zohar, the amount of time blocks take to arrive should also be taken into account, as it affects the probability of double spending. Specifically, they propose a dynamic policy for transaction acceptance that depends on both the number of confirmations and the amount of time since transaction broadcast. In this work we study the implications of using such a policy compared with the standard option that ignores block timing information. Using an exact expression for the probability of a double spend, we numerically analyze the time to transaction acceptance (performance) as well as the time and cost to perform a double-spend attack (security). We show that while the expected time required for transaction acceptance is improved using a dynamic policy, the time and cost to perform a double-spend attack on a particular transaction are reduced.
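
As a baseline for comparison (our sketch; the paper itself works with an exact expression and adds block-timing information), Nakamoto's whitepaper estimate of the double-spend probability after z confirmations can be computed directly:

```python
import math

def nakamoto_double_spend(q, z):
    """Probability that an attacker with hash-power share q ever overtakes an
    honest chain that is z confirmations ahead (Bitcoin whitepaper, sec. 11).
    Timing-aware policies like the one studied above refine this baseline."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

print(nakamoto_double_spend(0.10, 6))  # ~0.00024 with 10% attacker hash power
```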

Faster AVX2 optimized NTT multiplication for Ring-LWE lattice cryptography

Tue, 01/09/2018 - 08:29
Constant-time polynomial multiplication is one of the most time-consuming operations in many lattice-based cryptographic constructions. For schemes based on the hardness of Ring-LWE in power-of-two cyclotomic fields with completely splitting primes, the AVX2-optimized implementation of the Number-Theoretic Transform (NTT) from the NewHope key-exchange scheme is the state of the art for fast multiplication; it uses floating-point vector instructions. We show that by using a modification of the Montgomery reduction algorithm that enables a fast approach with integer instructions, we can improve on the polynomial multiplication speeds of NewHope and Kyber by factors of $4.2$ and $6.3$ on Skylake, respectively.
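
For intuition, here is a plain-integer Montgomery reduction in Python (our sketch with the NewHope modulus q = 12289 and a radix R = 2^18 we chose for illustration; the paper's contribution is a variant tuned for AVX2 integer vector instructions, which this does not capture):

```python
Q = 12289                      # NewHope modulus
RLOG = 18
R = 1 << RLOG                  # Montgomery radix: a power of two with R > Q
NEG_QINV = R - pow(Q, -1, R)   # -Q^{-1} mod R (Python 3.8+ modular inverse)

def montgomery_reduce(a):
    """Return a * R^{-1} mod Q for 0 <= a < Q*R, using only multiplies,
    adds, shifts, and masks -- the integer-only pattern that vectorizes."""
    m = (a * NEG_QINV) & (R - 1)   # m = -a * Q^{-1} mod R
    t = (a + m * Q) >> RLOG        # a + m*Q is divisible by R by construction
    return t - Q if t >= Q else t

# Multiplying a Montgomery-form operand by a plain one yields a plain result:
x, y = 1234, 5678
x_mont = x * R % Q
print(montgomery_reduce(x_mont * y) == x * y % Q)   # True
```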

Scaling ORAM for Secure Computation

Tue, 01/09/2018 - 02:07
We design and implement a Distributed Oblivious Random Access Memory (ORAM) data structure that is optimized for use in two-party secure computation protocols. We improve upon the access time of previous constructions by a factor of up to ten, their memory overhead by a factor of one hundred or more, and their initialization time by a factor of thousands. We are able to instantiate ORAMs that hold $2^{34}$ bytes, and perform operations on them in seconds, which was not previously feasible with any implemented scheme. Unlike prior ORAM constructions based on hierarchical hashing, permutation, or trees, our Distributed ORAM is derived from the new Function Secret Sharing scheme introduced by Boyle, Gilboa and Ishai. This significantly reduces the amount of secure computation required to implement an ORAM access, albeit at the cost of $O(n)$ efficient local memory operations. We implement our construction and find that, despite its poor $O(n)$ asymptotic complexity, it still outperforms the fastest previously known constructions, Circuit ORAM and Square-root ORAM, for datasets that are 32 KiB or larger, and outperforms prior work on applications such as stable matching or binary search by factors of two to ten.
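
The following toy (ours, with a linear-size key where real FSS keys are logarithmic) shows the access pattern that drives the design: each server scans all of memory, which is the O(n) local work, while learning nothing about which index is read, and the two returned shares XOR to the requested cell.

```python
import secrets

def share_point_function(n, alpha):
    """Naive two-party 'FSS' for the point function e_alpha: two bit-vectors
    whose XOR is 1 exactly at position alpha.  (Boyle-Gilboa-Ishai keys
    achieve this with O(log n)-size keys; this toy only shows evaluation.)"""
    k0 = [secrets.randbits(1) for _ in range(n)]
    k1 = [bit ^ (i == alpha) for i, bit in enumerate(k0)]
    return k0, k1

def server_eval(key, memory):
    """Each server touches every cell -- the O(n) local scan -- and returns
    an XOR-share of memory[alpha] without learning alpha."""
    acc = 0
    for bit, cell in zip(key, memory):
        if bit:
            acc ^= cell
    return acc

memory = [secrets.randbits(32) for _ in range(16)]
k0, k1 = share_point_function(len(memory), alpha=5)
assert server_eval(k0, memory) ^ server_eval(k1, memory) == memory[5]
```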

Fast and Secure Linear Regression and Biometric Authentication with Security Update

Mon, 01/08/2018 - 22:46
We explicitly present a homomorphic encryption scheme with a flexible encoding of plaintexts. We prove its security under the LWE assumption and show how the scheme can be used to handle computations over both binary strings and real numbers. In addition, using the scheme and its features, we build fast and secure systems for:
- linear regression using gradient descent, i.e., finding a reasonable linear relation between data items that remain encrypted. Over a simulated dataset of $10^8$ records, each with 20 features, our system dramatically reduces the server running time from about 8.75 hours (for the best previous work) to only about 10 minutes;
- biometric authentication, in which we show how to halve ciphertext sizes and to perform the server-side computation very fast, compared with the state of the art.
Moreover, as key rotation is a vital task in practice and is recommended by many authorized organizations for key management, we show how to do key rotation over encrypted data, without any decryption involved, while leaving the homomorphic properties of ciphertexts unchanged. In addition, our method of key rotation handles keys of different security levels (e.g., 80- and 128-bit security), so that the security of ciphertexts and keys in our scheme can be "updated", i.e., changed to a higher security level.

Two-Factor Authentication with End-to-End Password Security

Mon, 01/08/2018 - 14:22
We present a secure two-factor authentication (TFA) scheme based on the possession by the user of a password and a crypto-capable device. Security is "end-to-end" in the sense that the attacker can attack all parts of the system, including all communication links and any subset of parties (servers, devices, client terminals), can learn users' passwords, and can perform active and passive attacks, online and offline. In all cases the scheme provides the highest attainable security bounds given the set of compromised components. Our solution builds a TFA scheme using any Device-Enhanced PAKE, defined by Jarecki et al., and any Short Authenticated String (SAS) Message Authentication, defined by Vaudenay. We show an efficient instantiation of this modular construction which utilizes any password-based client-server authentication method, with or without reliance on public-key infrastructure. The security of the proposed scheme is proven in a formal model that we formulate as an extension of the traditional PAKE model. We also report on a prototype implementation of our schemes, including TLS-based and PKI-free variants, as well as several instantiations of the SAS mechanism, all demonstrating the practicality of our approach.

On the Message Complexity of Secure Multiparty Computation

Mon, 01/08/2018 - 14:19
We study the minimal number of point-to-point messages required for general secure multiparty computation (MPC) in the setting of computational security against semi-honest, static adversaries who may corrupt an arbitrary number of parties. We show that for functionalities that take inputs from $n$ parties and deliver outputs to $k$ parties, $2n+k-3$ messages are necessary and sufficient (for instance, with $n=5$ input parties and $k=1$ output party, exactly $8$ messages). The negative result holds even when given access to an arbitrary correlated randomness setup. The positive result can be based on any 2-round MPC protocol (which can in turn be based on 2-message oblivious transfer), or on a one-way function given a correlated randomness setup.
