
Round and Communication Efficient Unconditionally-secure MPC with $t < n/3$ in Partially Synchronous Network

Fri, 10/13/2017 - 15:11
In this work, we study unconditionally-secure multi-party computation (MPC) tolerating $t < n/3$ corruptions, where $n$ is the total number of parties involved. In this setting, it is well known that if the underlying network is completely asynchronous, then one can achieve only statistical security; moreover, it is impossible to ensure input provision, i.e., to guarantee that the inputs of all honest parties are considered. The best known statistically-secure asynchronous MPC (AMPC) with $t<n/3$ requires a communication of $O(n^5)$ field elements per multiplication. We consider a partially synchronous setting, where the parties are assumed to be globally synchronized initially for a few rounds, after which the network becomes completely asynchronous. In such a setting, we present an MPC protocol that requires communication of only $O(n^2)$ field elements per multiplication while ensuring input provision. Our MPC protocol relies on a new four-round, communication-efficient statistical verifiable secret-sharing (VSS) protocol whose broadcast communication complexity is independent of the number of secret-shared values.
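Both the MPC protocol and the underlying VSS build on threshold secret sharing with $t < n/3$. As a point of reference, here is a minimal sketch of plain Shamir sharing over a prime field — not the paper's VSS, which additionally lets parties verify the consistency of their shares:

import secrets

P = 2**61 - 1  # a Mersenne prime; any sufficiently large prime field works

def share(secret, t, n):
    # Random degree-t polynomial f with f(0) = secret; party i gets f(i).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0; any t+1 correct shares suffice.
    xs = [x for x, _ in points]
    total = 0
    for x_i, y_i in points:
        num = den = 1
        for x_j in xs:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total

n, t = 7, 2                      # t < n/3
shares = share(1234, t, n)
assert reconstruct(shares[:t + 1]) == 1234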

A New Digital Rights Management Solution Based on White-Box Cryptography

Fri, 10/13/2017 - 14:31
Digital rights management is an important technique for protecting digital content from abuse. It typically faces severe challenges because of the untrusted environments in which its applications execute. This setting is formally captured by the white-box attack model. White-box cryptography aims at protecting software implementations of cryptographic algorithms from white-box attacks, and hence can be employed to provide security guarantees for digital rights management. Key extraction, code lifting, and illegal distribution are three major threats to digital rights management applications; they severely compromise the interests of content producers. In this paper, we propose the first white-box-cryptography-based solution that counters all three threats simultaneously, by adding traceability to a white-box scheme that provides unbreakability and incompressibility. Specifically, we constructively confirm that there exists a secure white-box compiler that can generate traceable white-box programs, by hiding slight perturbations in the lookup-table-based white-box implementation. Security and performance analyses show that our solution is effective in practice.
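To convey the perturbation idea in miniature, here is a deliberately naive sketch — one unencoded table, one secret tracing point, hypothetical copy IDs; a real white-box compiler hides such perturbations inside encoded lookup tables:

import secrets

def make_traced_table(base_table, copy_id, trace_point):
    # Per-copy table: identical to the base everywhere except at one
    # secret input, where the entry encodes the copy's identity.
    t = list(base_table)
    t[trace_point] = copy_id
    return t

base = [((x * 7) ^ 0x3A) & 0xFF for x in range(256)]  # stand-in for an S-box table
trace_point = secrets.randbelow(256)                  # known only to the tracer
copies = {cid: make_traced_table(base, cid, trace_point) for cid in (1, 2, 3)}

def trace(leaked_table):
    # The tracer queries the leaked program at the secret point; the
    # perturbed entry betrays which licensed copy was redistributed.
    return leaked_table[trace_point]

assert trace(copies[2]) == 2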

Architecture level Optimizations for Kummer based HECC on FPGAs

Fri, 10/13/2017 - 14:30
On the basis of a software implementation of Kummer-based HECC over $\mathbb{F}_p$ presented in 2016, we propose new hardware architectures. Our main objectives are: defining the architecture parameters (type, size, and number of units for arithmetic operations, memory, and internal communications), and optimizing the architecture style to exploit internal parallelism. Several architectures have been designed and implemented on FPGAs for scalar multiplication acceleration in embedded systems. Our results show a significant area reduction, at similar computation time, compared to the best state-of-the-art hardware implementations of curve-based solutions.

Automatic Characterization of Exploitable Faults: A Machine Learning Approach

Fri, 10/13/2017 - 14:30
Characterizing the fault space of a cipher to filter out the set of faults potentially exploitable for fault attacks (FA) is a problem of immense practical value. A quantitative knowledge of the exploitable fault space is desirable in several applications, such as security evaluation, cipher construction and implementation, and the design and testing of countermeasures. In this work, we investigate this problem in the context of block ciphers. The formidable size of the fault space of a block cipher calls for automation, which should be able to characterize each individual fault instance quickly. On the other hand, the automation is expected to be applicable to most block cipher constructions. Existing techniques for automated fault attacks do not satisfy both of these goals simultaneously and hence are not directly applicable to exploitable fault characterization. In this paper, we present a supervised machine learning (ML) assisted automated framework that successfully addresses both criteria. The key idea is to extrapolate the knowledge of some existing FAs on a cipher to rapidly figure out new attack instances on the same cipher. Experimental validation of the proposed framework on two state-of-the-art block ciphers, PRESENT and LED, establishes that our approach provides fairly good accuracy in identifying exploitable fault instances at a reasonable cost. Finally, the effect of different S-boxes on the fault space of a cipher is evaluated using the framework.
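The supervised workflow is the standard one: featurize each fault instance, train on instances whose exploitability is already known, and predict the rest. A minimal sketch with scikit-learn (assumed installed), using synthetic placeholder features and labels rather than the paper's actual feature set:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-fault features (e.g., round, byte position, fault
# width, multiplicity); label 1 = exploitable, 0 = not.
X = rng.integers(0, 32, size=(5000, 4))
y = (X[:, 0] >= 28).astype(int)   # synthetic rule standing in for known FAs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy on held-out fault instances:", clf.score(X_te, y_te))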

Malware encryption schemes - rerandomizable ciphertexts encrypted using environmental keys

Fri, 10/13/2017 - 14:29
Protecting malware using encryption prevents an analyst, defending some computer(s) in the network, from analyzing the malicious code and identifying the intentions of the malware author. We discuss malware encryption schemes that use environmental encryption keys, generated from the computer(s) the malware author intends to attack, and that can rerandomize ciphertexts, making the malware samples in the network mutually indistinguishable. We are interested in hiding the intentions and identity of the malware author, not in hiding the existence of malware.
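Both ingredients have textbook instantiations. A minimal sketch, using toy parameters, hypothetical environment attributes, and schoolbook ElGamal (whose ciphertexts are rerandomizable) — not the paper's construction:

import hashlib, secrets

p = 2**127 - 1   # toy prime modulus; a real scheme would use a vetted group
g = 3

def env_secret(env_bytes):
    # Environmental key: derived from attributes of the target machine,
    # so only that machine can recompute the decryption key.
    return int.from_bytes(hashlib.sha256(env_bytes).digest(), "big") % p

x = env_secret(b"hostname=target01;mac=aa:bb:cc")  # hypothetical attributes
h = pow(g, x, p)                                   # public encryption key

def encrypt(m):
    r = secrets.randbelow(p)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(c1, c2):
    # Fresh randomness makes two copies of the same payload unlinkable.
    s = secrets.randbelow(p)
    return (c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p)

def decrypt(c1, c2):
    return c2 * pow(pow(c1, x, p), p - 2, p) % p

ct = encrypt(42)
assert decrypt(*rerandomize(*ct)) == 42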

Tightly-Secure Key-Encapsulation Mechanism in the Quantum Random Oracle Model

Fri, 10/13/2017 - 14:27
We give the first tight security reduction for a conversion from a weakly secure public-key encryption scheme to an IND-CCA-secure key-encapsulation mechanism in the quantum random oracle model. To the best of our knowledge, previous reductions are non-tight, as the security level of the obtained scheme degrades to at most half or a quarter of the original security level (Boneh, Dagdelen, Fischlin, Lehmann, Schaffner, and Zhandry (CRYPTO 2012), Targhi and Unruh (TCC 2016-B), and Hofheinz, Hövelmanns, and Kiltz (TCC 2017)).
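Conversions in this family share a common shape: derandomize the encryption with a hash of the message, derive the session key from another hash, and re-encrypt during decapsulation to catch malformed ciphertexts. A structural sketch of that shape, with a deliberately insecure placeholder standing in for the weakly secure PKE — this is the generic Fujisaki-Okamoto-style outline, not this paper's specific tightly-secure variant:

import hashlib, os

def G(m):      return hashlib.sha256(b"coins" + m).digest()
def KDF(m, c): return hashlib.sha256(b"key" + m + c).digest()

# --- placeholder "weakly secure PKE" so the sketch runs; a real
# instantiation would be, e.g., an IND-CPA lattice-based scheme ---
PK = os.urandom(32)
def pke_enc(m, coins):
    pad = hashlib.sha256(PK + coins).digest()
    return coins + bytes(a ^ b for a, b in zip(m, pad))
def pke_dec(c):
    coins, body = c[:32], c[32:]
    pad = hashlib.sha256(PK + coins).digest()
    return bytes(a ^ b for a, b in zip(body, pad))
# ----------------------------------------------------------------

def encaps():
    m = os.urandom(32)
    c = pke_enc(m, G(m))          # derandomized: coins = G(m)
    return c, KDF(m, c)

def decaps(c):
    m = pke_dec(c)
    if pke_enc(m, G(m)) != c:     # re-encryption check
        return KDF(b"reject", c)  # implicit rejection
    return KDF(m, c)

c, k = encaps()
assert decaps(c) == k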

Garbled Protocols and Two-Round MPC from Bilinear Maps

Fri, 10/13/2017 - 14:26
In this paper, we initiate the study of \emph{garbled protocols} --- a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties, and outputs the entire transcript of the distributed protocol. We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random/reference string model). Next, using garbled protocols, we obtain a general compiler that compresses any multiparty secure computation protocol, with an arbitrary number of rounds, into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were known only assuming witness encryption or learning with errors. Benefiting from our generic approach, we also obtain two-round protocols (i) for the setting of random access machines (RAM programs) while keeping the (amortized) communication and computational costs proportional to running times, (ii) making only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations, and (iii) satisfying semi-honest security in the plain model. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
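For readers new to the primitive being generalized: in Yao's construction, every wire carries a pair of random labels (one per bit value), and each gate is a tiny table of ciphertexts decryptable only with the right label pair. A self-contained toy for a single AND gate — standard textbook garbling with a zero-tag to mark the valid row, not this paper's bilinear-map machinery:

import os, random, hashlib

ZERO = bytes(16)  # recognizable tag appended to each output label

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    A = [os.urandom(32) for _ in range(2)]  # labels for input wire a
    B = [os.urandom(32) for _ in range(2)]  # labels for input wire b
    C = [os.urandom(32) for _ in range(2)]  # labels for the output wire
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            pad = hashlib.shake_256(A[a] + B[b]).digest(48)
            rows.append(xor(C[a & b] + ZERO, pad))
    random.shuffle(rows)                    # hide which row is which
    return rows, A, B, C

def evaluate(rows, la, lb):
    # With one label per input wire, exactly one row decrypts validly.
    for row in rows:
        plain = xor(row, hashlib.shake_256(la + lb).digest(48))
        if plain[32:] == ZERO:
            return plain[:32]
    raise ValueError("no row decrypted")

rows, A, B, C = garble_and()
assert evaluate(rows, A[1], B[1]) == C[1]   # 1 AND 1 -> output label for 1
assert evaluate(rows, A[1], B[0]) == C[0]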

Secure Multi-Party Computation in Large Networks

Fri, 10/13/2017 - 14:12
We describe scalable protocols for solving the secure multi-party computation (MPC) problem among a significant number of parties. We consider both the synchronous and the asynchronous communication models. In the synchronous setting, our protocol is secure against a static malicious adversary corrupting less than a $1/3$ fraction of the parties. In the asynchronous environment, we allow the adversary to corrupt less than a $1/8$ fraction of parties. For any deterministic function that can be computed by an arithmetic circuit with $m$ gates, both of our protocols require each party to send a number of messages and perform an amount of computation that is $\tilde{O}(m/n + \sqrt n)$. We also show that our protocols provide statistical and universally-composable security. To achieve our asynchronous MPC result, we define the threshold counting problem and present a distributed protocol to solve it in the asynchronous setting. This protocol is load balanced, with computation, communication and latency complexity of $O(\log{n})$, and can also be used for designing other load-balanced applications in the asynchronous communication model.
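The threshold counting problem asks the parties to collectively detect when at least $\tau$ of them have received their inputs. A toy simulation of tree-based aggregation — this shows only the topology such a protocol could use; the paper's protocol additionally batches messages to achieve $O(\log n)$ load per party, which this sketch omits:

class Node:
    def __init__(self):
        self.parent = None
        self.count = 0

def build_tree(n):
    # Parties arranged as a complete binary tree (heap layout).
    nodes = [Node() for _ in range(n)]
    for i in range(1, n):
        nodes[i].parent = nodes[(i - 1) // 2]
    return nodes

def threshold_reached(reporting_parties, n, tau):
    nodes = build_tree(n)
    for i in reporting_parties:
        node = nodes[i]
        while node is not None:   # each report is forwarded to the root
            node.count += 1
            node = node.parent
    return nodes[0].count >= tau

assert threshold_reached(range(9), n=15, tau=8)
assert not threshold_reached(range(5), n=15, tau=8)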

On the Closest Vector Problem for Lattices Constructed from Polynomials and Their Cryptographic Applications

Fri, 10/13/2017 - 14:12
In this paper, we propose new classes of trapdoor functions to solve the closest vector problem (CVP) in lattices. Specifically, we construct lattices based on properties of polynomials for which the closest vector problem is hard to solve unless some trapdoor information is revealed. We thoroughly analyze the security of our proposed functions using state-of-the-art attacks and results on lattice reduction. Finally, we describe how our functions can be used to design quantum-safe encryption schemes with reasonable public-key sizes. In particular, our scheme can offer around $106$ bits of security with a public-key size of around $6.4$ KB. Our encryption schemes are efficient with respect to key generation, encryption, and decryption.
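The trapdoor principle behind CVP-based encryption can be seen already in two dimensions: with a short, nearly orthogonal basis, Babai's rounding decodes a perturbed lattice point correctly, while an equivalent "bad" basis of the same lattice fails. A generic GGH-flavored illustration (assuming numpy; not the paper's polynomial construction):

import numpy as np

B_good = np.array([[17, 1], [2, 19]])   # short, nearly orthogonal (trapdoor)
U = np.array([[3, 4], [2, 3]])          # unimodular, det = 1
B_bad = U @ B_good                      # long basis of the SAME lattice

def babai_round(B, t):
    # Babai's rounding: coordinates of t in basis B, rounded, mapped back.
    coords = np.rint(np.linalg.solve(B.T, t))
    return coords @ B

target = np.array([100.4, 57.3])
v_good = babai_round(B_good, target)
v_bad = babai_round(B_bad, target)
assert np.linalg.norm(v_good - target) < np.linalg.norm(v_bad - target)
print("good basis ->", v_good, "  bad basis ->", v_bad)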

Impossibility of Order-Revealing Encryption in Idealized Models

Fri, 10/13/2017 - 14:11
An Order-Revealing Encryption (ORE) scheme gives a public procedure by which two ciphertexts can be compared to reveal the order of their underlying plaintexts. The ideal security notion for ORE is that only the order is revealed --- anything else, such as the distance between plaintexts, is hidden. The only known constructions of ORE achieving such ideal security are based on cryptographic multilinear maps and are currently too impractical for real-world applications. In this work, we give evidence that building ORE from weaker tools may be hard. Indeed, we show black-box separations between ORE and most symmetric-key primitives, as well as public-key encryption and anything else implied by generic groups in a black-box way. Thus, any construction of ORE must either (1) achieve weaker notions of security, (2) be based on more complicated cryptographic tools, or (3) require non-black-box techniques. This suggests that any ORE achieving ideal security will likely be somewhat inefficient. Central to our result is an impossibility proof for something we call information-theoretic ORE, which has connections to tournament graphs and a theorem of Erdős. This impossibility proof will be useful for proving other black-box separations for ORE.
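The trade-off the separations point to is visible in the practical ORE of Chenette et al. (FSE 2016), which is built from a PRF alone but deliberately settles for weaker-than-ideal security: besides the order, it leaks the position of the first differing bit. A compact sketch, using HMAC as the PRF:

import hmac, hashlib

K, BITS = b"secret key", 8

def F(prefix):
    # PRF mapping each bit-prefix to a symbol in {0, 1, 2}
    return hmac.new(K, prefix.encode(), hashlib.sha256).digest()[0] % 3

def encrypt(x):
    bits = format(x, f"0{BITS}b")
    # one symbol per prefix: F(prefix) + next bit (mod 3)
    return [(F(bits[:i]) + int(bits[i])) % 3 for i in range(BITS)]

def compare(ct_a, ct_b):
    # Public comparison: the first differing symbol reveals the order
    # (and, as leakage beyond the ideal notion, *where* they differ).
    for u, v in zip(ct_a, ct_b):
        if u != v:
            return -1 if (u + 1) % 3 == v else 1
    return 0

assert compare(encrypt(37), encrypt(99)) == -1
assert compare(encrypt(200), encrypt(123)) == 1
assert compare(encrypt(55), encrypt(55)) == 0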

Resource-efficient OT combiners with active security

Thu, 10/12/2017 - 11:23
An OT-combiner takes $n$ candidate implementations of the oblivious transfer (OT) functionality, some of which may be faulty, and produces a secure instance of oblivious transfer as long as a large enough number of the candidates are secure. We see an OT-combiner as a two-party protocol that can make several black-box calls to each of the $n$ OT candidates, and we want to protect against an adversary that can corrupt one of the parties and a certain number of the OT candidates, obtaining their inputs and (in the active case) full control of their outputs. In this work we consider perfectly (unconditionally, zero-error) secure OT-combiners and we focus on \emph{minimizing the number of calls} to the candidate OTs. First, we construct a single-use (one call per OT candidate) OT-combiner which is perfectly secure against active adversaries corrupting one party and a constant fraction of the OT candidates. This extends a previous result by Ishai et al. (ISIT 2014) that proves the same fact for passive adversaries. Second, we consider a more general asymmetric corruption model where an adversary can corrupt different sets of OT candidates depending on whether it is Alice or Bob who is corrupted. We give sufficient and necessary conditions for the existence of an OT-combiner with a given number of calls to the candidate OTs, in terms of the existence of secret-sharing schemes with certain access structures and share lengths. In some cases, this allows us to determine the optimal number of calls to the OT candidates needed to construct an OT-combiner secure against a given adversary.
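The connection to secret sharing is easy to see in the simplest single-use combiner: the sender XOR-shares each message across the candidates, and the receiver uses the same choice bit in every call. A toy version with an idealized stub standing in for each candidate — note that this naive variant only protects the sender (as long as one candidate keeps its share hidden); protecting the receiver's choice bit requires sharing it too, which is where the general access-structure view comes in:

import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share(m, k):
    # XOR secret sharing: all k shares are needed to recover m.
    parts = [secrets.token_bytes(len(m)) for _ in range(k - 1)]
    last = m
    for p in parts:
        last = xor(last, p)
    return parts + [last]

def candidate_ot(m0, m1, b):
    # idealized stand-in for one candidate OT implementation
    return m1 if b else m0

def combined_ot(m0, m1, b, k=3):
    s0, s1 = share(m0, k), share(m1, k)
    received = [candidate_ot(s0[i], s1[i], b) for i in range(k)]
    out = received[0]
    for r in received[1:]:
        out = xor(out, r)
    return out

m0, m1 = b"left message 00!", b"right message 1!"
assert combined_ot(m0, m1, 0) == m0
assert combined_ot(m0, m1, 1) == m1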

Catching MPC Cheaters: Identification and Openability

Thu, 10/12/2017 - 08:43
Secure multi-party computation (MPC) protocols do not completely prevent malicious parties from cheating or disrupting the computation. We augment MPC with three new properties to discourage cheating. The first is a strengthening of identifiable abort, called completely identifiable abort, where all parties who do not follow the protocol will be identified as cheaters by each honest party. The second is completely identifiable auditability, which means that a third party can determine whether the computation was performed correctly (and who cheated if it was not). The third is openability, which means that a distinguished coalition of parties can recover the MPC inputs. We construct the first (efficient) MPC protocol achieving these properties. Our scheme is built on top of the SPDZ protocol (Damgård et al., Crypto 2012), which leverages an offline (computation-independent) pre-processing phase to speed up the online computation. Our protocol is optimistic, retaining online SPDZ efficiency when no one cheats. If cheating does occur, each honest party performs only local computation to identify the cheaters. Our main technical tool is a new locally identifiable secret sharing scheme (as defined by Ishai, Ostrovsky, and Zikas (TCC 2012)), which we call commitment-enhanced secret sharing, or CESS. The work of Baum, Damgård, and Orlandi (SCN 2014) introduced the concept of auditability, which allows a third party to verify that the computation was executed correctly, but not to identify the cheaters if it was not. We enable the third party to identify the cheaters by augmenting the scheme with CESS. We add openability through the use of verifiable encryption and specialized zero-knowledge proofs.
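The core idea of binding shares to commitments so that a lie is locally attributable can be conveyed in a few lines. A toy flavor of it — additive sharing plus hash commitments; the paper's CESS works inside SPDZ with homomorphic commitments, which this sketch does not attempt:

import hashlib, secrets

P = 2**61 - 1

def commit(value, nonce):
    return hashlib.sha256(nonce + value.to_bytes(8, "big")).digest()

def deal(secret, n):
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    nonces = [secrets.token_bytes(16) for _ in range(n)]
    public_commitments = [commit(s, r) for s, r in zip(shares, nonces)]
    return list(zip(shares, nonces)), public_commitments

def reconstruct(opened, commitments):
    # Anyone can check each opened share against its public commitment,
    # so a party presenting a false share is identified by index.
    cheaters = [i for i, (s, r) in enumerate(opened)
                if commit(s, r) != commitments[i]]
    if cheaters:
        raise ValueError(f"cheaters identified: {cheaters}")
    return sum(s for s, _ in opened) % P

opened, comms = deal(42, 5)
assert reconstruct(opened, comms) == 42
opened[3] = ((opened[3][0] + 1) % P, opened[3][1])   # party 3 lies
try:
    reconstruct(opened, comms)
except ValueError as err:
    print(err)                   # -> cheaters identified: [3]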

Indistinguishability Obfuscation for All Circuits from Secret-Key Functional Encryption

Thu, 10/12/2017 - 03:02
We show that indistinguishability obfuscation (IO) for all circuits can be constructed solely from secret-key functional encryption (SKFE). In the construction, the SKFE scheme needs to be able to issue an a-priori unbounded number of functional keys, that is, it must be collusion-resistant. Our strategy is to replace the public-key functional encryption (PKFE) in the IO construction proposed by Bitansky and Vaikuntanathan (FOCS 2015) with puncturable SKFE. Bitansky and Vaikuntanathan introduced the notion of puncturable SKFE and observed that this strategy works. However, it has not been clear whether puncturable SKFE can be constructed without assuming PKFE. In particular, it has not been known whether puncturable SKFE can be constructed from ordinary SKFE. In this work, we show that a relaxed variant of puncturable SKFE can be constructed from collusion-resistant SKFE. Moreover, we show that this relaxed variant of puncturable SKFE is sufficient for constructing IO.

Simple Generic Constructions of Succinct Functional Encryption

Thu, 10/12/2017 - 02:23
We propose simple generic constructions of succinct functional encryption. Our key tool is an exponentially-efficient indistinguishability obfuscator (XIO), which is the same as an indistinguishability obfuscator (IO) except that the size of an obfuscated circuit (or the running time of the obfuscator) is only required to be slightly smaller than that of a brute-force canonicalizer that outputs the entire truth table of the circuit to be obfuscated. The ``compression factor'' of an XIO indicates how much it compresses relative to the brute-force canonicalizer. In this study, we propose a remarkably simple framework for constructing succinct functional encryption via XIO, and show that XIO is powerful enough to achieve cutting-edge cryptography. In particular, we propose the following constructions: single-key weakly succinct secret-key functional encryption (SKFE) from XIO (even with a bad compression factor) and one-way functions; single-key weakly succinct public-key functional encryption (PKFE) from XIO with a good compression factor and public-key encryption; and single-key weakly succinct PKFE from XIO (even with a bad compression factor) and identity-based encryption. Our new framework has side benefits. Our constructions do not rely on any number-theoretic or lattice assumptions such as the decisional Diffie-Hellman and learning-with-errors assumptions. Moreover, all security reductions incur only a polynomial security loss. Known constructions of weakly succinct SKFE or PKFE from XIO with polynomial security loss rely on number-theoretic or lattice assumptions.
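To make the compression-factor terminology concrete (a paraphrase of the standard definition from this line of work, not necessarily this paper's exact formalization): an obfuscator $\mathsf{xiO}$ has compression factor $\gamma < 1$ if, for every circuit $C$ with $n$ input bits,

$$|\mathsf{xiO}(C)| \leq 2^{\gamma n} \cdot \mathrm{poly}(\lambda, |C|),$$

whereas the trivial canonicalizer outputs all $2^n$ truth-table entries; loosely, a ``good'' compression factor is a $\gamma$ bounded well below $1$.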

No right to remain silent: Isolating Malicious Mixes

Wed, 10/11/2017 - 12:43
Mix networks are a key technology for providing network anonymity, used for messaging, voting, and private lookups. However, simple mix networks are insecure against malicious mixes, which can drop or delay packets to facilitate traffic analysis attacks. Mix networks with provable robustness address this by using complex and expensive proofs of correct shuffling, which come at a cost and make assumptions that are unnatural in many settings in which mix networks are deployed. We present Miranda, a synchronous mix network mechanism that is provably secure against malicious mixes while retaining the simplicity, efficiency, and practicality of mix network designs. Miranda uses the first-hand experience of unreliability gathered by mixes and clients to derive a mix `reputation', and ensures that each active attack -- including the dropping of packets -- results in a reduction in the connectivity of the malicious mixes, thus reducing their ability to attack. Besides the practical importance of Miranda itself, our results are applicable to other mix network designs and anonymous communication, and even to unrelated settings in which reputation could provide an effective defense against malicious participants.

Shortest Vector from Lattice Sieving: a Few Dimensions for Free

Wed, 10/11/2017 - 12:42
Asymptotically, the best known algorithms for solving the Shortest Vector Problem (SVP) in a lattice of dimension $n$ are sieve algorithms, whose heuristic complexity estimates range from $(4/3)^{n+o(n)}$ down to $(3/2)^{n/2+o(n)}$ when Locality Sensitive Hashing techniques are used. In practice, however, sieve algorithms are outperformed by pruned enumeration algorithms by several orders of magnitude, despite the latter's larger super-exponential asymptotic complexity of $2^{\Theta(n \log n)}$. In this work, we show a concrete improvement of sieve-type algorithms. Precisely, we show that a few calls to the sieve algorithm in lattices of dimension less than $n-d$ suffice to solve SVP in dimension $n$, where $d = \Theta(n/\log n)$. Although our improvement is only sub-exponential, its practical effect in relevant dimensions is quite significant. We implemented it on top of a simple sieve algorithm with $(4/3)^{n+o(n)}$ complexity, and it outperforms the best sieve algorithms from the literature by a factor of 10 in dimensions 70--80. It performs less than an order of magnitude slower than pruned enumeration in the same range. By design, this improvement can also be applied to most other variants of sieve algorithms, including LSH sieve algorithms and tuple-sieve algorithms. In this light, we may expect sieve techniques to outperform pruned enumeration in practice in the near future.
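A back-of-envelope estimate shows why shaving $d = \Theta(n/\log n)$ dimensions matters even though it is only sub-exponential. The constants hidden in the $\Theta$ and $o(n)$ terms are set to 1 here, so the numbers are indicative only; the paper's measured gain is about a factor of 10 in dimensions 70--80:

import math

def sieve_cost(n):
    return (4 / 3) ** n            # heuristic cost model, constants ignored

for n in (70, 80, 90, 100):
    d = round(n / math.log(n))     # "dimensions for free"
    speedup = sieve_cost(n) / sieve_cost(n - d)
    print(f"n={n:3d}  d={d:2d}  naive speed-up ~ {speedup:,.0f}x")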

A Comparative Investigation of Approximate Attacks on Logic Encryptions

Wed, 10/11/2017 - 12:42
Logic encryption is an important hardware protection technique that adds extra key inputs to lock a given circuit. With the recent discovery of the effective SAT-based attack, new enhancement methods such as SARLock and Anti-SAT have been proposed to thwart the SAT-based and similar exact attacks. Since these new techniques all have a very low error rate, approximate attacks such as Double DIP and AppSAT have been proposed to find an almost-correct key with a low error rate. However, measuring the performance of an approximate attack is extremely challenging, since exact computation of the error rate is very expensive, while estimation based on random sampling has low confidence. In this paper, we develop a suite of scientific encryption benchmarks in which a wide range of error rates is possible and the error rate can be determined by simple inspection. Then, we conduct a thorough comparative study of different approximate attacks, including AppSAT and Double DIP. The results show that approximate attacks are far from closing the gap, and more investigation is needed in this area.
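The sampling-confidence problem is quantifiable. By the standard Hoeffding bound, estimating an error rate to within an additive $\epsilon$ with confidence $1-\delta$ needs roughly $\ln(2/\delta)/(2\epsilon^2)$ uniformly random input patterns, which becomes infeasible exactly when error rates are tiny, as with SARLock-style schemes (illustrative arithmetic only, not the paper's benchmark methodology):

import math

def samples_needed(eps, delta):
    # Hoeffding bound: additive error eps with confidence 1 - delta
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

# A nearly correct key that errs on ~2^-20 of inputs needs eps well
# below 2^-20 before the estimate says anything useful.
for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps={eps:g}: {samples_needed(eps, 0.01):,} random patterns")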

Hash Proof Systems over Lattices Revisited

Wed, 10/11/2017 - 12:41
Hash Proof Systems, or Smooth Projective Hash Functions (SPHFs), are a form of implicit argument introduced by Cramer and Shoup at Eurocrypt'02. They have found many applications since then, in particular for authenticated key exchange and honest-verifier zero-knowledge proofs. While they are relatively well understood in group settings, they seem painful to construct directly in the lattice setting. Only one construction of an SPHF over lattices has been proposed in the standard model, by Katz and Vaikuntanathan at Asiacrypt'09. But this construction has an important drawback: it only works for an ad-hoc language of ciphertexts. Concretely, the corresponding decryption procedure needs to be tweaked and now requires $q$ trapdoor inversion attempts, where $q$ is the modulus of the underlying Learning With Errors (LWE) problem. Using harmonic analysis, we explain the source of this limitation and propose a way around it. We show how to construct SPHFs for standard languages of LWE ciphertexts, and make our construction explicit for a tag-IND-CCA2 encryption scheme à la Micciancio-Peikert (Eurocrypt'12). We then improve our construction and our analysis in the case where the tag is known in advance or fixed (in the latter case, the scheme is only IND-CPA) with a super-polynomial modulus, to get a stronger type of SPHF, which was never achieved before for any language over lattices. Finally, we conclude with applications of these SPHFs: password-based authenticated key exchange, honest-verifier zero-knowledge proofs, and a relaxed version of witness encryption.
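For reference, the standard SPHF interface that the paper instantiates over lattices (a paraphrase of the usual definition): a hashing key $\mathsf{hk}$ comes with a projection key $\mathsf{hp}$, and for every word $x$ in the language $L$ with witness $w$,

$$\mathsf{Hash}(\mathsf{hk}, x) = \mathsf{ProjHash}(\mathsf{hp}, x, w),$$

while smoothness requires that for $x \notin L$, the value $\mathsf{Hash}(\mathsf{hk}, x)$ is statistically close to uniform even given $\mathsf{hp}$.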

Large FHE gates from Tensored Homomorphic Accumulator

Wed, 10/11/2017 - 12:41
The main bottleneck of all known Fully Homomorphic Encryption schemes lies in the bootstrapping procedure invented by Gentry (STOC'09). The cost of this procedure can be mitigated either by using homomorphic SIMD techniques or by performing more computation per bootstrapping procedure. In this work, we propose new techniques that allow more operations to be performed per bootstrapping in FHEW-type schemes (Eurocrypt'15). While maintaining the quasi-quadratic $\tilde O(n^2)$ complexity of the whole cycle, our new scheme can evaluate gates with $\Omega(\log n)$ input bits, which constitutes a quasi-linear speed-up. Our scheme is also very well adapted to large threshold gates, natively admitting up to $\Omega(n)$ inputs. This could be helpful for the homomorphic evaluation of neural networks. Our theoretical contribution is backed by a preliminary prototype implementation, which can perform $6$-to-$6$-bit gates in less than $10$ seconds on a single core, as well as threshold gates over $63$ input bits even faster.

A signature scheme from Learning with Truncation

Wed, 10/11/2017 - 12:40
In this paper we revisit the modular lattice signature scheme and its efficient instantiation known as pqNTRUSign. First, we show that a modular lattice signature scheme can be based on a standard lattice problem. Since the fundamental problem that needs to be solved by the signer or a potential forger is recovering a lattice vector with a restricted norm, given the least significant bits, we refer to this general class of problems as the ``learning with truncation'' problem. We then show that by replacing the uniform sampling in pqNTRUSign with bimodal Gaussian sampling, we can further reduce the size of a signature. As an example, we show that the signature size can be as low as 4608 bits for a security level of 128 bits. The most significant new contribution, enabled by this Gaussian-sampling version of pqNTRUSign, is that we can now perform batch verification, which allows the verifier to check approximately 2000 signatures in a single verification process.
