
Long-Term Secure Time-Stamping using Preimage-Aware Hash Functions

Tue, 08/08/2017 - 07:08
Commonly used digital signature schemes have a limited lifetime because their security is based on computational assumptions that may break in the future when more powerful computers become available. In 1993, Bayer et al. proposed to renew a digital signature by time-stamping the signature together with the signed document. Based on their idea, long-term timestamp schemes have been proposed and standardized that make it possible to protect data integrity over long periods of time. To minimize the risk of a design failure affecting the security of these schemes, it is important to analyze their security formally. However, many of the proposed schemes have not yet been subjected to a formal security analysis. In this paper, we address this issue by formally analyzing the security of a hash-based long-term timestamp scheme based on the ideas of Bayer et al. Our analysis shows that the security level of this scheme degrades cubically over time, a security loss that needs to be taken into account when the scheme is used in practice.
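
A minimal sketch of the renewal idea of Bayer et al. described above, assuming a black-box timestamping service; the `timestamp` function and the use of SHA-256 are illustrative placeholders, not the scheme analyzed in the paper:

```python
import hashlib
import time

def h(data: bytes) -> bytes:
    # Placeholder hash; the paper's analysis assumes a preimage-aware hash function.
    return hashlib.sha256(data).digest()

def timestamp(digest: bytes) -> dict:
    # Stand-in for a trusted timestamping service (e.g., an RFC 3161 authority).
    return {"digest": digest, "time": time.time()}

def renew(document: bytes, signature: bytes, previous: list) -> list:
    # Renewal idea: before the old primitives expire, timestamp the document
    # together with the signature and all earlier timestamps.
    material = document + signature + b"".join(ts["digest"] for ts in previous)
    return previous + [timestamp(h(material))]

# Renew periodically; each renewal extends the chain of evidence.
chain = renew(b"contract", b"sig", [])
chain = renew(b"contract", b"sig", chain)
```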

On the Security of Practical and Complete Homomorphic Encrypted Computation

Tue, 08/08/2017 - 06:00
Security with respect to the operator as an adversary is considered for processors supporting unbounded general-purpose homomorphic encrypted computation. An efficient machine code architecture is defined for those platforms, and it is proved that user programs expressed in it are cryptographically obfuscated, guaranteeing privacy even though the programs, their traces, and their (encrypted) data are visible to the operator. It is proved that encrypted user data cannot be deciphered by the operator, nor can programs be altered to give a result intended by the operator. A compiler is defined, and it is proved that any recompilation produces uniformly distributed random variations in runtime data, supporting cryptographic obfuscation.

GIFT: A Small Present (Full version)

Mon, 08/07/2017 - 21:11
In this article, we revisit the design strategy of PRESENT, leveraging all the advances provided by the research community in construction and cryptanalysis since its publication, to push the design to its limits. We obtain an improved version, named GIFT, that provides much-increased efficiency in all domains (smaller and faster), while correcting the well-known weakness of PRESENT with regard to linear hulls. GIFT is a very simple and clean design that outperforms even SIMON and SKINNY for round-based implementations, making it one of the most energy-efficient ciphers to date. It reaches a point where almost the entire implementation area is taken up by the storage and the Sboxes, and where any cheaper choice of Sbox would lead to a very weak proposal. In essence, GIFT is composed of only Sboxes and bit-wiring, but its natural bitslice data flow ensures excellent performance in all scenarios, from area-optimised hardware implementations to very fast software implementations on high-end platforms. We conducted a thorough analysis of our design against state-of-the-art cryptanalysis, and we provide strong bounds with regard to differential and linear attacks.
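
A toy sketch of the Sbox-plus-bit-wiring round structure described above, on a 16-bit state; the S-box here is the PRESENT S-box used only as a stand-in, and the bit permutation is an arbitrary placeholder, so neither reflects the actual GIFT specification:

```python
# Toy 16-bit SPN round: substitute 4-bit nibbles, then permute bits, add key.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box, as stand-in
PERM = [(4 * i) % 15 if i < 15 else 15 for i in range(16)]  # placeholder wiring

def round_fn(state: int, round_key: int) -> int:
    # SubCells: apply the 4-bit S-box to each nibble of the state.
    subbed = 0
    for i in range(4):
        nibble = (state >> (4 * i)) & 0xF
        subbed |= SBOX[nibble] << (4 * i)
    # PermBits: move bit i of the state to position PERM[i].
    permuted = 0
    for i in range(16):
        permuted |= ((subbed >> i) & 1) << PERM[i]
    # AddRoundKey: XOR in the round key.
    return permuted ^ round_key
```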

Simple Amortized Proofs of Shortness for Linear Relations over Polynomial Rings

Mon, 08/07/2017 - 13:44
For a public value $y$ and a linear function $f$, giving a zero-knowledge proof of knowledge of a secret value $x$ that satisfies $f(x)=y$ is a key ingredient in many cryptographic protocols. Lattice-based constructions, in addition, require proofs of ``shortness'' of $x$. Of particular interest are constructions where $f$ is a function over polynomial rings, since these are the ones that result in efficient schemes with short keys and outputs. All known approaches for such lattice-based zero-knowledge proofs are not very practical because they involve a basic protocol that needs to be repeated many times in order to achieve negligible soundness error. In the amortized setting, where one needs to give zero-knowledge proofs for many equations involving the same function $f$, the situation is more promising, though still not fully satisfactory. Current techniques either result in proofs of knowledge of $x$'s that are exponentially larger than the $x$'s actually used for the proof (i.e. the \emph{slack} is exponential), or they have polynomial slack but require the number of proofs to be in the several thousands before the amortization advantages ``kick in''. In this work, we give a new approach for constructing amortized zero-knowledge proofs of knowledge of short solutions over polynomial rings. Our proof has small polynomial slack and is practical even when the number of relations is as small as the security parameter.
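
To make the ``slack'' notion above concrete (my formalization, which may differ in detail from the paper's): a proof of knowledge with slack $s$ for the relation $f(x)=y$ with $\|x\| \le \beta$ only guarantees extraction of a witness $x'$ satisfying
$$f(x') = y \quad\text{and}\quad \|x'\| \le s \cdot \beta.$$
With exponential $s$ the extracted bound is exponentially weaker than the bound the honest prover's witness actually satisfies, forcing correspondingly larger parameters; the constructions in this work keep $s$ polynomially small.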

On Improving Integer Factorization and Discrete Logarithm Computation using Partial Triangulation

Mon, 08/07/2017 - 13:41
The number field sieve is the best-known algorithm for factoring integers and solving the discrete logarithm problem in prime fields. In this paper, we present several new improvements to various steps of the number field sieve. Applying these improvements to the current 768-bit discrete logarithm record, we show that the overall computation could have been carried out in about 1260 core$\cdot$years instead of the 2350 core$\cdot$years required with the best previously known parameters for this problem. Moreover, we show that the pre-computation phase for a 768-bit discrete logarithm problem, which would allow, for example, the construction of a massive decryption tool for IPsec traffic protected by the Oakley group~1, was feasible in reasonable time using technologies available before the year 2000.

CAKE: Code-based Algorithm for Key Encapsulation

Mon, 08/07/2017 - 13:40
Current widely used key-exchange (KE) mechanisms will be vulnerable to quantum attacks when sufficiently strong quantum computers become available. Devising quantum-resistant replacements that combine efficiency with solid security guarantees is therefore an important and challenging task. This paper makes several contributions towards this goal. First, we introduce ``{\em CAKE}'', a key-encapsulation algorithm based on the QC-MDPC McEliece encryption scheme, with two major improvements: a) the use of ephemeral keys, which defeats a recent reaction attack against MDPC decoding in the corresponding encryption scheme, and b) a highly efficient key-generation procedure for QC-MDPC-based cryptosystems. We then present an authenticated key-exchange protocol based on CAKE, which is suitable for the Internet Key Exchange (IKE) standard. We prove that CAKE is IND-CCA secure and that the protocol is SK-secure, and we suggest practical parameters. Compared to other post-quantum schemes, we believe that CAKE is a promising candidate for post-quantum key-exchange standardization.
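
A minimal sketch of the generic key-encapsulation interface that CAKE instantiates, with placeholder internals; the actual scheme builds on QC-MDPC codes, which are not modeled here:

```python
import os, hashlib

def keygen():
    # Placeholder: CAKE derives its keypair from a QC-MDPC code; here we
    # only model the (pk, sk) interface shape.
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def encapsulate(pk: bytes):
    # Sender: derive a shared secret and a ciphertext encapsulating it.
    m = os.urandom(32)
    shared = hashlib.sha256(m + pk).digest()
    ciphertext = m  # placeholder only; a real KEM encrypts m under pk
    return ciphertext, shared

def decapsulate(ciphertext: bytes, pk: bytes, sk: bytes):
    # Receiver: recover the shared secret. A real KEM needs sk to decode
    # the ciphertext; this toy omits that step entirely.
    return hashlib.sha256(ciphertext + pk).digest()

# Ephemeral use, as in CAKE: fresh keys per exchange defeat reaction attacks
# that accumulate decoding-failure information across many ciphertexts.
pk, sk = keygen()
ct, k_sender = encapsulate(pk)
assert decapsulate(ct, pk, sk) == k_sender
```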

Verifiable Private Polynomial Evaluation

Mon, 08/07/2017 - 13:38
Delegating the computation of a polynomial to a server in a verifiable way is challenging. An even more challenging problem is ensuring that this polynomial remains hidden from clients who are able to query such a server. In this paper, we formally define the notion of \emph{Private Polynomial Evaluation} (PPE). Our main contribution is a rigorous security model along with relations between the different security properties. We define \emph{polynomial protection} (PP), \emph{proof unforgeability} (UNF), and \emph{indistinguishability against chosen function attack} (IND-CFA), which formalizes the resistance of a PPE scheme against attackers trying to guess which of two polynomials of their choice is being used. As a second contribution, we give a cryptanalysis of two PPE schemes from the literature. Finally, we design a PPE scheme called PIPE and prove that it is PP-, UNF-, and IND-CFA-secure under the decisional Diffie-Hellman assumption in the random oracle model.
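
A rough sketch of the shape of the IND-CFA game described above, using a hypothetical scheme interface; names such as `setup` and `evaluate` are placeholders of mine, not the paper's API:

```python
import secrets

def ind_cfa_game(scheme, adversary) -> bool:
    # The adversary picks two polynomials of its choice.
    p0, p1 = adversary.choose_polynomials()
    # The challenger secretly selects one and sets up the PPE server with it.
    b = secrets.randbits(1)
    server = scheme.setup(p1 if b else p0)
    # The adversary may query the server on points of its choice (receiving
    # evaluations together with verification proofs) and must guess b.
    guess = adversary.distinguish(server.evaluate)
    return guess == b  # a secure scheme keeps this near a coin flip
```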

Efficient, Reusable Fuzzy Extractors from LWE

Mon, 08/07/2017 - 13:37
A fuzzy extractor (FE), proposed for deriving cryptographic keys from biometric data, enables reproducible generation of high-quality randomness from noisy inputs having sufficient min-entropy. In their operation, FEs rely on a public ``helper string'' that is guaranteed not to leak too much information about the original input. Unfortunately, this guarantee may not hold when multiple independent helper strings are generated from correlated inputs, as would occur if a user registers their biometric data with multiple servers; reusable FEs are needed in that case. Although the notion of reusable FEs was introduced in 2004, it has received relatively little attention since then. We first analyze an FE proposed by Fuller et al. (Asiacrypt 2013) based on the learning-with-errors (LWE) assumption, and show that it is not reusable. We then show how to adapt their construction to obtain a weakly reusable FE. We also show a generic technique for turning any weakly reusable FE into a strongly reusable one, in the random-oracle model. Finally, we give a direct construction of a strongly reusable FE based on the LWE assumption that does not rely on random oracles.
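
A schematic of the standard fuzzy-extractor interface discussed above, as a minimal Python sketch; `gen` and `rep` are placeholder names and the error-correction machinery is elided:

```python
import os, hashlib

def gen(w: bytes):
    # Gen: from a noisy reading w, derive a key and a public helper string.
    # Placeholder: a real FE embeds an error-correcting sketch in the helper
    # so that readings w' close to w reproduce the same key.
    helper = os.urandom(16)
    key = hashlib.sha256(helper + w).digest()
    return key, helper

def rep(w_prime: bytes, helper: bytes) -> bytes:
    # Rep: reproduce the key from a nearby reading w', using the helper.
    # Placeholder: here w' must equal w exactly; a real FE tolerates noise.
    return hashlib.sha256(helper + w_prime).digest()

# The reusability concern: registering the *same* biometric with several
# servers yields several helpers computed from correlated inputs, and a
# reusable FE must guarantee the helpers jointly leak nothing useful about w.
key1, helper1 = gen(b"biometric-reading")
key2, helper2 = gen(b"biometric-reading")
```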

CryptHOL: Game-based Proofs in Higher-order Logic

Mon, 08/07/2017 - 13:36
Game-based proofs are a well-established paradigm for structuring security arguments and simplifying their understanding. We present a novel framework, CryptHOL, for rigorous game-based proofs that is supported by mechanical theorem proving. CryptHOL is based on a new semantic domain with an associated functional programming language for expressing games. We embed our framework in the Isabelle/HOL theorem prover and, using the theory of relational parametricity, we tailor Isabelle’s existing proof automation to game-based proofs. Because our framework is a conservative extension of higher-order logic and provides sufficient automation support, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable. We evaluate our framework by formalizing different game-based proofs from the literature and comparing the results with existing formal-methods tools.

Attribute-Based Group Homomorphic Encryption and Additively Homomorphic IBE

Mon, 08/07/2017 - 13:34
Group Homomorphic Encryption (GHE), formally defined by Armknecht, Katzenbeisser and Peter, is a public-key encryption primitive where the decryption algorithm is a group homomorphism. Hence it supports homomorphic evaluation of a single algebraic operation, such as modular addition or modular multiplication. Most classical homomorphic encryption schemes, such as Goldwasser-Micali and Paillier, are instances of GHE. In this work, we extend GHE to the attribute-based setting. We introduce and formally define the notion of Attribute-Based GHE (ABGHE) and explore its properties. Our main result is the construction of an Identity-Based Encryption (IBE) scheme supporting homomorphic addition modulo a poly-sized prime $e$, which is an instance of ABGHE. Our construction builds upon the IBE scheme of Boneh, LaVigne and Sabin (BLS). BLS relies on a hash function that maps identities to $e^{\text{th}}$ residues, but there is no known way to securely instantiate such a function. Our construction extends BLS so that it can use a hash function that can be securely instantiated. We prove our scheme IND-ID-CPA secure under the (slightly modified) $e^{\text{th}}$ residuosity assumption in the random oracle model and show that it supports a (modular) additive homomorphism. By using multiple instances of the scheme with distinct primes and leveraging the Chinese Remainder Theorem, we can support homomorphic addition modulo a ``large'' (i.e. superpolynomial) integer, yielding the first such IBE scheme. We also show that our scheme for $e > 2$ is anonymous, assuming the hardness of deciding solvability of a special system of multivariate polynomial equations. Finally, we define a primitive for attribute-based group homomorphisms in the multi-key setting, introduce an important security property, and present a generic construction of the primitive meeting this security property.
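
A small illustration, with hypothetical moduli of my choosing rather than the scheme itself, of how additive homomorphisms modulo distinct primes combine via the Chinese Remainder Theorem into addition modulo their product:

```python
from math import prod

def crt_combine(residues, moduli):
    # Reconstruct x mod prod(moduli) from x mod each pairwise-coprime modulus.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # standard CRT interpolation term
    return x % M

# Two hypothetical scheme instances add homomorphically mod e1 and mod e2;
# CRT lifts the two mod-e sums to a single sum mod e1*e2.
e1, e2 = 101, 103
a, b = 5000, 7000
sum_mod_e1 = (a + b) % e1   # what instance 1 computes under encryption
sum_mod_e2 = (a + b) % e2   # what instance 2 computes under encryption
assert crt_combine([sum_mod_e1, sum_mod_e2], [e1, e2]) == (a + b) % (e1 * e2)
```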

Twisting Lattice and Graph Techniques to Compress Transactional Ledgers

Mon, 08/07/2017 - 13:34
Keeping track of financial transactions (e.g., in banks and blockchains) means keeping track of an ever-increasing list of exchanges between accounts. In fact, many of these transactions can be safely “forgotten”, in the sense that purging a set of them that compensate each other does not impact the network’s semantic meaning (e.g., the accounts’ balances). We call a collection of transactions having no effect on a network’s semantics a nilcatenation. Such exchanges may be archived and removed, yielding a smaller but equivalent ledger. Motivated by the computational and analytic benefits obtained from more compact representations of numerical data, we formalize the problem of finding nilcatenations and propose detection methods based on graph and lattice-reduction techniques. Beyond the interesting applications of this work (e.g., decoupling of centralized and distributed databases), we also discuss the original idea of a “community-serving proof of work”: finding nilcatenations constitutes a proof of useful work, as the periodic removal of nilcatenations reduces the transactional graph’s size.
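
A toy illustration of the nilcatenation property defined above: a subset of transactions can be purged when every account's net flow within the subset is zero. (The detection methods in the paper, based on graph and lattice-reduction techniques, are far more involved than this membership check.)

```python
from collections import defaultdict

def is_nilcatenation(transactions) -> bool:
    # A set of (sender, receiver, amount) transfers is a nilcatenation
    # iff removing it leaves every account's balance unchanged.
    net = defaultdict(int)
    for sender, receiver, amount in transactions:
        net[sender] -= amount
        net[receiver] += amount
    return all(v == 0 for v in net.values())

# A cycle of equal transfers compensates itself and can be archived.
assert is_nilcatenation([("a", "b", 10), ("b", "c", 10), ("c", "a", 10)])
assert not is_nilcatenation([("a", "b", 10), ("b", "a", 5)])
```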

Adaptive-Secure VRFs with Shorter Keys from Static Assumptions

Mon, 08/07/2017 - 13:33
Verifiable random functions are pseudorandom functions that produce publicly verifiable proofs for their outputs, allowing efficient checks of the correctness of their computation. In this work, we introduce a new computational hypothesis, the n-Eigen-Value assumption, which can be seen as a relaxation of the $U_n$-MDDH assumption, and prove its equivalence with the n-Rank assumption. Based on this newly introduced hypothesis, we build the core of a verifiable random function with an exponentially large input space that achieves adaptive security under a static assumption. The final construction achieves shorter public and secret keys than existing schemes with the same properties.
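
The generic VRF interface referred to above, as a heavily hedged Python sketch: the internals are placeholders, and the point of the comments is precisely what a plain keyed hash cannot provide and the algebraic constructions do.

```python
import hashlib, hmac, os

def keygen():
    # Placeholder keypair; a real VRF pk supports public proof verification.
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def evaluate(sk: bytes, x: bytes):
    # The VRF output must look random to anyone without sk...
    y = hmac.new(sk, x, hashlib.sha256).digest()
    # ...yet come with a proof that y is the unique correct output on x.
    proof = None  # placeholder: this is where the algebraic machinery lives
    return y, proof

def verify(pk: bytes, x: bytes, y: bytes, proof) -> bool:
    # Anyone holding only pk can check the correctness of y; an HMAC cannot
    # offer this, which is exactly what VRF constructions supply.
    raise NotImplementedError("requires the scheme's algebraic structure")
```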

Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency

Mon, 08/07/2017 - 13:32
We propose the first linear-space searchable encryption scheme with constant locality and sublogarithmic read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from $\Theta(\log N\log \log N)$ to $O(\log N/\log \log N)$, where $N$ is the size of the dataset. Our scheme is size-sensitive, meaning our bound is tight only for keyword lists whose sizes lie within the specific range $(N^{1-1/\log\log N},N/\log^2 N]$---outside this range the read efficiency improves to $O(\log^{2/3} N)$. For our construction we develop two techniques that can be of independent interest: New probability bounds for the offline two-choice allocation problem and a new I/O-efficient oblivious RAM with $o(\sqrt {n})$ bandwidth overhead and zero failure probability.

Efficient reductions in cyclotomic rings - Application to R-LWE based FHE schemes

Mon, 08/07/2017 - 13:32
With Fully Homomorphic Encryption (FHE), it is possible to process encrypted data without access to the private key. This has a wide range of applications, most notably the offloading of sensitive data processing. Most research on FHE has focused on improving its efficiency, namely by introducing schemes based on the Ring-Learning With Errors (R-LWE) problem, and techniques such as batching, which allows for the encryption of multiple messages in the same ciphertext. Much of the related research has focused on R-LWE over power-of-two cyclotomic polynomials. While it is possible to achieve efficient arithmetic with such polynomials, one cannot exploit batching. Herein, the efficiency of ring arithmetic underpinned by non-power-of-two cyclotomic polynomials is analysed and improved. Two methods for polynomial reduction are proposed, one based on the Barrett reduction and the other on a Montgomery representation. Speed-ups of up to 2.66 are obtained for the reduction operation on an i7-5960X processor when compared with a straightforward implementation of the Barrett reduction. Moreover, the proposed methods are exploited to enhance the homomorphic multiplication of the FV and BGV encryption schemes, producing experimental speed-ups of up to 1.37.
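
For contrast with the non-power-of-two case the paper targets, here is the easy power-of-two baseline: reduction modulo $X^n + 1$ is just a negacyclic wrap of the coefficients. A minimal Python sketch of that baseline, not of the paper's Barrett or Montgomery methods:

```python
def reduce_mod_xn_plus_1(coeffs, n, q):
    # Reduce a polynomial (coefficient list, degree < 2n) modulo X^n + 1
    # over Z_q: since X^n = -1, coefficient i >= n folds in negated at i - n.
    out = [0] * n
    for i, c in enumerate(coeffs):
        if i < n:
            out[i] = (out[i] + c) % q
        else:
            out[i - n] = (out[i - n] - c) % q
    return out

# (1 + X^3) mod (X^2 + 1) over Z_17: X^3 = -X, so the result is 1 + 16X.
assert reduce_mod_xn_plus_1([1, 0, 0, 1], 2, 17) == [1, 16]
```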

sLiSCP: Simeck-based Permutations for Lightweight Sponge Cryptographic Primitives

Mon, 08/07/2017 - 13:31
In this paper, we propose a family of lightweight cryptographic permutations called sLiSCP, with the sole aim of providing a realistic minimal design that suits a variety of lightweight device applications. More precisely, we argue that for such devices the chip area dedicated to security should not only be consumed by an encryption or hashing algorithm, but should also provide as many cryptographic functionalities as possible. Our main contribution is the design of a lightweight permutation employing a 4-subblock Type-2 Generalized Feistel-like Structure (GFS) and round-reduced unkeyed Simeck with either a 48- or 64-bit block length as the two round functions, resulting in two lightweight instances of the permutation, sLiSCP-192 and sLiSCP-256. We leverage the extensive security analysis of both Simeck (Simon-like functions) and Type-2 GFSs and present bounds against differential and linear cryptanalysis. In particular, we provide an estimate of the maximum differential probability of round-reduced Simeck and use it to bound the maximum expected differential/linear characteristic probability for our permutation. Because the iterated nature of the Simeck round function and the simple XOR and cyclic-shift mixing layer of the GFS foster the propagation of long trails, the long trail strategy is adopted to provide tighter bounds on both characteristics. Moreover, we analyze sLiSCP against a wide range of distinguishing attacks and accordingly claim that there exist no structural distinguishers for sLiSCP with complexity below $2^{b/2}$, where $b$ is the state size. We demonstrate how sLiSCP can be used as a unified round function in the duplex sponge construction to build (authenticated) encryption and hashing functionalities. The parallel hardware implementation area of the unified duplex mode of sLiSCP-192 (resp. sLiSCP-256) in CMOS $65\,nm$ ASIC is 2289 (resp. 3039) GEs with a throughput of 29.62 (resp. 44.44) kbps, and their areas in CMOS $130\,nm$ are 2498 (resp. 3319) GEs.
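
A schematic of one Type-2 GFS step on four subblocks, as used above, with a placeholder round function standing in for round-reduced Simeck:

```python
MASK = (1 << 48) - 1  # 48-bit subblocks, as in sLiSCP-192

def rotl(x: int, r: int) -> int:
    # Rotate a 48-bit word left by r bits.
    return ((x << r) | (x >> (48 - r))) & MASK

def f(x: int) -> int:
    # Placeholder nonlinear function with a Simon-like AND/rotate/XOR shape;
    # sLiSCP actually applies round-reduced unkeyed Simeck here.
    return (x & rotl(x, 5)) ^ rotl(x, 1)

def gfs2_step(x0: int, x1: int, x2: int, x3: int):
    # One Type-2 GFS step: apply F to two subblocks, XOR the results into
    # their neighbours, then rotate the subblocks.
    return x1 ^ f(x0), x2, x3 ^ f(x2), x0
```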

On the Tightness of Forward-Secure Signature Reductions

Mon, 08/07/2017 - 13:30
In this paper, we revisit the security of factoring-based signature schemes built via the Fiat-Shamir transform and show that they can admit tighter reductions to certain decisional complexity assumptions such as the quadratic-residuosity, the high-residuosity, and the $\phi$-hiding assumptions. We do so by proving that the underlying identification schemes used in these schemes are a particular case of the lossy identification notion recently introduced by Abdalla et al. at Eurocrypt 2012. Next, we show how to extend these results to the forward-security setting based on ideas from the Itkis-Reyzin forward-secure signature scheme. Unlike the original Itkis-Reyzin scheme, our construction can be instantiated under different decisional complexity assumptions and has a much tighter security reduction. Moreover, we also show that the tighter security reductions provided by our proof methodology can result in concrete efficiency gains in practice, both in the standard and forward-security setting, as long as the use of stronger security assumptions is deemed acceptable. Finally, we investigate the design of forward-secure signature schemes whose security reductions are fully tight.

A Quantum ``Magic Box'' for the Discrete Logarithm Problem

Mon, 08/07/2017 - 13:20
In their foundational paper on pseudorandom bit generation, Blum and Micali showed that the discrete logarithm problem could be solved efficiently given a ``magic box'' oracle that computes the most significant bit of the discrete logarithm with a slight advantage over guessing. This magic box can be realized on a quantum computer with a new, simplified variant of Shor's algorithm. The resulting combination of Blum and Micali's reduction and this new quantum magic box offers an intriguing hybrid approach to solving the discrete logarithm problem with a quantum computer. Because the only requirement on the quantum portion of the algorithm is that it provide an approximate estimate of a single bit of the discrete logarithm, the new algorithm may be easier to implement, more resilient to errors, and more amenable to optimization than previous approaches. Further analysis is needed to quantify the extent of these benefits in practice. The result applies to the discrete logarithm problem over both finite fields and elliptic curves. (The views expressed are my own and do not necessarily reflect those of my employer.)
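
Schematically, in my notation rather than the paper's: the magic box is an oracle $\mathcal{B}$ that, for a generator $g$ of the group, satisfies
$$\Pr\big[\mathcal{B}(g, g^x) = \mathrm{msb}(x)\big] \ge \tfrac{1}{2} + \epsilon$$
for some non-negligible advantage $\epsilon$; Blum and Micali's reduction then recovers all of $x$ from polynomially many such approximate single-bit estimates, which is why only this one bit needs to be computed quantumly.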

Binary Hash Tree based Certificate Access Management

Mon, 08/07/2017 - 13:19
We present a certificate access management system to support the USDOT's proposed rule on Vehicle-to-Vehicle (V2V) communications, Federal Motor Vehicle Safety Standard (FMVSS) No. 150. Our proposal, which we call Binary Hash Tree based Certificate Access Management (BCAM), eliminates the need for vehicles to have bidirectional connectivity with the Security Credential Management System (SCMS) for certificate updates. BCAM significantly improves the ability of the SCMS to manage large-scale software and/or hardware compromise events. Vehicles are provisioned at the start of their lifetime with all the certificates they will need. However, the certificates and the corresponding private-key reconstruction values are provided to the vehicle in encrypted form, and the keys to decrypt them are made available to vehicles only shortly before the start of the validity periods of those certificates. Compromised vehicles can be effectively removed from the V2V system by preventing them from decrypting the certificates. We demonstrate that the system is feasible with a broadcast channel for decryption keys and other revocation information, even if that channel has relatively low capacity.
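
A minimal sketch of the binary-hash-tree key-derivation idea suggested by the title, under my own assumptions about the construction (the actual BCAM design may differ): each node key is derived from its parent, so broadcasting one interior node key unlocks exactly the leaves below it, and a compromised vehicle's leaf can be excluded by broadcasting only the keys covering everyone else.

```python
import hashlib

def node_key(parent_key: bytes, path: str) -> bytes:
    # Derive the key of the node reached by a left/right path ('0'/'1'),
    # hashing parent_key together with each branch direction in turn.
    key = parent_key
    for direction in path:
        key = hashlib.sha256(key + direction.encode()).digest()
    return key

# Broadcasting the key for subtree '0' lets every leaf under it decrypt,
# while leaves under subtree '1' (e.g., revoked devices) learn nothing.
root = b"\x00" * 32
broadcast = node_key(root, "0")
leaf_00 = node_key(broadcast, "0")      # derivable from the broadcast alone
assert leaf_00 == node_key(root, "00")  # matches derivation from the root
```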

Cryptanalysis of 22 1/2 rounds of Gimli

Mon, 08/07/2017 - 13:18
Bernstein et al. have proposed a new permutation, Gimli, which aims to provide simple and performant implementations on a wide variety of platforms. One of the tricks used to make Gimli performant is that it processes data mostly in 96-bit columns, only occasionally swapping 32-bit words between them. Here we show that this trick is dangerous by presenting a distinguisher for reduced-round Gimli. Our distinguisher takes the form of an attack on a simple and practical PRF that should be nearly 192-bit secure. Gimli has 24 rounds. Against $15\frac12$ of those rounds, our distinguisher uses two known plaintexts, takes about $2^{64}$ time and uses enough memory for a set with $2^{64}$ elements. Against $19\frac12$ rounds, the same attack uses three non-adaptively chosen plaintexts, and uses twice as much memory and about $2^{128}$ time. Against $22\frac12$ rounds, it requires about $2^{138.5}$ work, $2^{129}$ bits of memory and $2^{10.5}$ non-adaptively chosen plaintexts. The same attack would apply to $23\frac12$ rounds if Gimli had more rounds. Our attack does not use the structure of the SP-box at all, other than that it is invertible, so there may be room for improvement. On the bright side, our toy PRF puts keys and data in different positions than a typical sponge mode would, so the attack might not work against sponge constructions.

Cryptanalysis of Compact-LWE

Mon, 08/07/2017 - 13:14
As an invited speaker of the ACISP 2017 conference, Dongxi Liu recently introduced a new lattice-based encryption scheme (joint work with Li, Kim and Nepal) designed for lightweight IoT applications, and announced plans to submit it to the NIST postquantum competition. The new scheme is based on a variant of standard LWE called Compact-LWE, but is claimed to achieve high security levels in considerably smaller dimensions than usual lattice-based schemes. In fact, the proposed parameters, allegedly suitable for 138-bit security, involve the Compact-LWE assumption in dimension only 13. In this note, we show that this particularly aggressive choice of parameters fails to achieve the stated security level. More precisely, we show that ciphertexts in the new encryption scheme can be decrypted using the public key alone with >99.9% probability in a fraction of a second on a standard PC, which is not quite as fast as legitimate decryption, but not too far off.
