
The Algebraic Group Model and its Applications

Wed, 09/27/2017 - 05:54
One of the most important tools for assessing hardness assumptions in cryptography is the Generic Group Model (GGM). Over the past two decades, numerous assumptions have been analyzed within this model. While a proof in the GGM can certainly provide some measure of confidence in an assumption, its scope is rather limited since it does not capture group-specific algorithms that make use of the representation of the group. To overcome this limitation, we propose the Algebraic Group Model (AGM), a model that lies in between the standard model and the GGM. It is the first restricted model of computation that covers group-specific algorithms yet still allows one to derive simple and meaningful security statements. We show that several important assumptions, among them the Computational Diffie-Hellman, the Strong Diffie-Hellman, and the interactive LRSW assumptions, are equivalent to the Discrete Logarithm (DLog) assumption in the AGM. On the more practical side, we prove tight security reductions in the AGM to DLog or a variant thereof for two important schemes: the BLS signature scheme and Groth's zero-knowledge SNARK (Eurocrypt '16), which is the most efficient SNARK for which only a proof in the GGM was previously known. Moreover, in combination with known lower bounds on the Discrete Logarithm assumption in the GGM, our results can be used to derive lower bounds for all the above-mentioned results in the GGM.
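
For intuition, the defining constraint of the AGM can be rendered as follows (our informal notation, for a group $\mathbb{G}$ of prime order $p$; see the paper for the precise definition):

```latex
% An algorithm \mathcal{A} is "algebraic" if every group element it outputs comes
% with an explicit representation in terms of the group elements it has seen so far.
\[
  \mathcal{A} \text{ outputs } Z \in \mathbb{G}
  \;\Longrightarrow\;
  \mathcal{A} \text{ also outputs } (z_1,\dots,z_k) \in \mathbb{Z}_p^{k}
  \text{ such that } Z = \prod_{i=1}^{k} X_i^{\,z_i},
\]
% where X_1, ..., X_k are all group elements \mathcal{A} has received so far
% (e.g. the generator and the challenge elements). A reduction can read off these
% exponents, which is what makes DLog-equivalence proofs in the AGM go through.
```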

Towards Doubly Efficient Private Information Retrieval

Wed, 09/27/2017 - 02:39
Private Information Retrieval (PIR) allows a client to obtain data from a public database without disclosing the locations accessed. Traditionally, the emphasis is on preserving sublinear work for the client, while the server's work is taken to inevitably be at least linear in the database size. Beimel, Ishai and Malkin (JoC 2004) show PIR schemes where, following a linear-work preprocessing stage, the server's work per query is sublinear in the database size. However, that work only addresses the case of multiple non-colluding servers; the existence of single-server PIR with sublinear server work remained unaddressed. We consider single-server PIR schemes where, following a preprocessing stage in which the server obtains an encoded version of the database and the client obtains a short key, the per-query work of both server and client is polylogarithmic in the database size. We call such schemes doubly efficient. Concentrating on the case where the client's key is secret, we show:
- A scheme, based on one-way functions, that works for a bounded number of queries and where the server storage is linear in the number of queries plus the database size.
- A family of schemes for an unbounded number of queries, whose security follows from a corresponding family of new hardness assumptions that are related to the hardness of solving a system of noisy linear equations.
We also show the insufficiency of a natural approach for obtaining doubly efficient PIR in the setting where the preprocessing is public.
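
Schematically, a secret-key doubly efficient PIR for an $N$-entry database $DB$ has the following shape (the algorithm names below are ours, chosen only to illustrate the syntax described above):

```latex
\[
\begin{aligned}
  (k,\widetilde{DB}) &\leftarrow \mathsf{Preprocess}(1^{\lambda}, DB)
      &&\text{one-time; the server keeps } \widetilde{DB}\text{, the client keeps the short key } k\\
  q &\leftarrow \mathsf{Query}(k, i)
      &&\text{client, } \operatorname{polylog}(N) \text{ work}\\
  a &\leftarrow \mathsf{Answer}(\widetilde{DB}, q)
      &&\text{server, } \operatorname{polylog}(N) \text{ work}\\
  DB[i] &\leftarrow \mathsf{Decode}(k, a)
      &&\text{client, } \operatorname{polylog}(N) \text{ work}
\end{aligned}
\]
```

Security requires that the queries hide the index $i$ from the server, as in standard PIR.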

Faster Unbalanced Private Set Intersection

Tue, 09/26/2017 - 18:12
Protocols for Private Set Intersection (PSI) are important cryptographic primitives that perform joint operations on datasets in a privacy-preserving way. They allow two parties to compute the intersection of their private sets without revealing any additional information beyond the intersection itself. PSI implementations in the literature usually do not use the best possible cryptographic implementation techniques, resulting in protocols with prohibitive computational and communication complexities, particularly when one of the protocol's participants is a low-powered device and there are bandwidth restrictions. This paper builds on modern cryptographic engineering techniques and proposes optimizations for a promising one-way PSI protocol based on public-key cryptography. For the case when one of the parties holds a set much smaller than the other (a realistic assumption in many scenarios), we show that our improvements and optimizations yield a protocol that outperforms previous proposals in communication complexity and run time by up to a factor of one thousand.
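
For background, the following is a minimal sketch of the classic Diffie-Hellman-style one-way PSI flow that protocols of this kind build on. It is illustrative only (a multiplicative group mod a prime and a toy hash-to-group), not the paper's optimized protocol, which relies on modern curve arithmetic and compact set encodings.

```python
import hashlib
import secrets

P = 2**255 - 19          # prime modulus used here purely for illustration
Q = (P - 1) // 2

def hash_to_group(item: bytes) -> int:
    """Map an item to a quadratic residue mod P (a simple hash-to-group trick)."""
    h = int.from_bytes(hashlib.sha256(item).digest(), "big") % P
    return pow(h, 2, P)

def psi_client_round1(client_set, a):
    # Client blinds its (small) set with its secret exponent a.
    return [pow(hash_to_group(x), a, P) for x in client_set]

def psi_server(round1_msgs, server_set, b):
    # Server re-blinds the client's elements and publishes its own blinded set.
    doubly_blinded = [pow(c, b, P) for c in round1_msgs]
    server_blinded = {pow(hash_to_group(y), b, P) for y in server_set}
    return doubly_blinded, server_blinded

def psi_client_round2(client_set, a, doubly_blinded, server_blinded):
    # Client raises the server's blinded elements to a and matches them against
    # its own doubly blinded elements; only the client learns the intersection.
    lookup = {pow(s, a, P) for s in server_blinded}
    return {x for x, c_ab in zip(client_set, doubly_blinded) if c_ab in lookup}

a, b = 1 + secrets.randbelow(Q - 1), 1 + secrets.randbelow(Q - 1)
small = [b"alice@example.com", b"bob@example.com"]                      # client's set
large = [b"bob@example.com", b"carol@example.com", b"dave@example.com"]  # server's set
r1 = psi_client_round1(small, a)
dbl, srv = psi_server(r1, large, b)
print(psi_client_round2(small, a, dbl, srv))   # expected: {b'bob@example.com'}
```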

Multi Collision Resistant Hash Functions and their Applications

Tue, 09/26/2017 - 16:53
Collision resistant hash functions are functions that shrink their input, but for which it is computationally infeasible to find a collision, namely two strings that hash to the same value (although collisions are abundant). In this work we study multi-collision resistant hash functions (MCRH), a natural relaxation of collision resistant hash functions in which it is difficult to find a t-way collision (i.e., t strings that hash to the same value), although finding (t-1)-way collisions could be easy. We show the following:
1. The existence of MCRH follows from the average-case hardness of a variant of the Entropy Approximation problem. The goal in the Entropy Approximation problem (Goldreich, Sahai and Vadhan, CRYPTO '99) is to distinguish circuits whose output distribution has high entropy from those having low entropy.
2. MCRH imply the existence of constant-round statistically hiding (and computationally binding) commitment schemes.
As a corollary, using a result of Haitner et al. (SICOMP, 2015), we obtain a black-box separation of MCRH from any one-way permutation.
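
To make the central notion concrete, here is a toy brute-force search for a t-way collision (hypothetical example; an MCRH is precisely a shrinking function for which this search is infeasible for efficient adversaries, even though (t-1)-way collisions may be easy to find):

```python
from collections import defaultdict

def find_t_way_collision(h, domain, t):
    """Return t distinct inputs that hash to the same value, or None."""
    buckets = defaultdict(list)
    for x in domain:
        buckets[h(x)].append(x)
        if len(buckets[h(x)]) == t:
            return buckets[h(x)]
    return None

# Toy "hash": reduce integers mod 256; the search succeeds immediately here,
# whereas for an MCRH it should be infeasible for any efficient algorithm.
print(find_t_way_collision(lambda x: x % 256, range(10_000), t=3))  # [0, 256, 512]
```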

Back to Massey: Impressively fast, scalable and tight security evaluation tools

Tue, 09/26/2017 - 14:18
None of the existing rank estimation algorithms can scale to large cryptographic keys, such as 4096-bit (512 bytes) RSA keys. In this paper, we present the first solution to estimate the guessing entropy of arbitrarily large keys, based on mathematical bounds, resulting in the fastest and most scalable security evaluation tool to date. Our bounds can be computed within a fraction of a second, with no memory overhead, and provide a margin of only a few bits for a full 128-bit AES key.
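
As background on the entropy-based approach the title alludes to, the sketch below computes the exact guessing entropy of a single small distribution and Massey's classical lower bound from Shannon entropy. The paper's bounds are refinements of this idea, so the snippet is illustrative only, not the authors' estimator.

```python
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def guessing_entropy(probs):
    """Exact expected rank of the correct value for one small distribution."""
    ranked = sorted(probs, reverse=True)
    return sum((i + 1) * p for i, p in enumerate(ranked))

def massey_lower_bound(entropy_bits):
    """Massey (1994): E[guesses] >= 2**(H - 2) + 1, valid whenever H >= 2 bits."""
    return 2 ** (entropy_bits - 2) + 1

# For a key split into independent subkeys (e.g. 16 AES key bytes, each with a
# hypothetical side-channel posterior byte_probs[i]), entropies add up, so a
# full-key bound costs only a handful of floating-point operations:
#   H = sum(shannon_entropy(p) for p in byte_probs)
#   print(math.log2(massey_lower_bound(H)))   # log2 of the guessing-entropy bound
```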

PRF-ODH: Relations, Instantiations, and Impossibility Results

Tue, 09/26/2017 - 10:50
The pseudorandom-function oracle-Diffie–Hellman (PRF-ODH) assumption has been introduced recently to analyze a variety of DH-based key exchange protocols, including TLS 1.2 and the TLS 1.3 candidates, as well as the extended access control (EAC) protocol. Remarkably, the assumption comes in different flavors in these settings and none of them has been scrutinized comprehensively yet. In this paper we therefore present a systematic study of the different PRF-ODH variants in the literature. In particular, we analyze their strengths relative to each other, showing that the variants form a hierarchy. We further investigate the boundaries between instantiating the assumptions in the standard model and the random oracle model. While we show that even the strongest variant is achievable in the random oracle model under the strong Diffie–Hellman assumption, we provide a negative result showing that it is implausible to instantiate even the weaker variants in the standard model via algebraic black-box reductions to common cryptographic problems.
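
Informally, and glossing over the exact oracle-query restrictions, the generic lr-PRF-ODH experiment behind these variants can be sketched as follows (our rendering; consult the paper for the precise games):

```latex
% The adversary sees g^u and g^v, picks a challenge input x*, and must distinguish
\[
  y^{*} = \mathrm{PRF}\!\left(g^{uv}, x^{*}\right)
  \quad\text{from}\quad
  y^{*} \stackrel{\$}{\leftarrow} \{0,1\}^{\kappa},
\]
% given l-many queries to an oracle returning PRF(S^u, x) and r-many queries to an
% oracle returning PRF(T^v, x) for adversarially chosen S, T.  The flavors
% l, r in {n, s, m} (no / a single / many queries on each side) are what form the
% hierarchy of variants studied above.
```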

Improving Stateless Hash-Based Signatures

Tue, 09/26/2017 - 10:02
We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed up signing and reduce signature size; batch signing, to amortize signature time and reduce signature size when signing multiple messages at once; mask-less constructions, to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset resilience problem, we show that SPHINCS' parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of $\approx1$ KB), shorter signatures ($\approx30$ KB instead of 41 KB), and faster signing and verification, for the same security level as SPHINCS.
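
To illustrate the redundancy-elimination idea behind Octopus, the sketch below computes, for a toy Merkle tree, the extra nodes needed to authenticate several leaves at once: siblings that are themselves derivable from the authenticated leaves are never transmitted. This is only a rough illustration of the idea, not the Octopus algorithm from the paper (which also fixes node ordering and integrates with PORS).

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Complete Merkle tree over a power-of-two number of leaves, stored as a
    flat array where node i has children 2i+1 and 2i+2."""
    n = len(leaves)
    nodes = [b""] * (2 * n - 1)
    nodes[n - 1:] = [_h(leaf) for leaf in leaves]
    for i in range(n - 2, -1, -1):
        nodes[i] = _h(nodes[2 * i + 1] + nodes[2 * i + 2])
    return nodes

def multi_auth_path(nodes, n, leaf_indices):
    """Indices and digests of the extra nodes needed to authenticate all the
    given leaves against the root at once; siblings derivable from the
    authenticated leaves themselves are skipped."""
    frontier = {n - 1 + i for i in leaf_indices}   # tree positions of the leaves
    proof = []
    while frontier != {0}:
        parents = set()
        for i in sorted(frontier):
            sibling = i + 1 if i % 2 == 1 else i - 1
            if sibling not in frontier:
                proof.append(sibling)              # must be supplied by the signer
            parents.add((i - 1) // 2)
        frontier = parents
    return [(i, nodes[i]) for i in proof]

leaves = [bytes([i]) * 32 for i in range(8)]
tree = build_tree(leaves)
print(len(multi_auth_path(tree, 8, {0, 1})))       # 2 nodes instead of 2 * 3 = 6
```

For eight leaves, authenticating leaves 0 and 1 together needs only two extra nodes instead of the six contained in two independent authentication paths.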

A Modular Analysis of the Fujisaki-Okamoto Transformation

Tue, 09/26/2017 - 09:27
The Fujisaki-Okamoto (FO) transformation (CRYPTO 1999 and Journal of Cryptology 2013) turns any weakly secure public-key encryption scheme into a strongly (i.e., IND-CCA) secure one in the random oracle model. Unfortunately, the FO analysis suffers from several drawbacks, such as a non-tight security reduction and the need for a perfectly correct scheme. While several alternatives to the FO transformation have been proposed, they have stronger requirements or do not obtain all desired properties. In this work, we provide a fine-grained and modular toolkit of transformations for turning weakly secure public-key encryption schemes into strongly secure ones. All of our transformations are robust against schemes with correctness errors, and their combination leads to several tradeoffs among tightness of the reduction, efficiency, and the required security level of the underlying encryption scheme. For instance, one variant of the FO transformation constructs an IND-CCA secure scheme from an IND-CPA secure one with a tight reduction and very small efficiency overhead. Another variant assumes only an OW-CPA secure scheme, but leads to an IND-CCA secure scheme with larger ciphertexts. We also analyze our transformations in the quantum random oracle model, which yields security guarantees in a post-quantum setting.
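
For orientation, here is a schematic Python rendering of the two modular steps in this style of transformation: a T-like step that derandomizes the base scheme and re-encrypts during decryption, and a U-like step that hashes the encrypted message into a session key. The names, hash choices, and the explicit-rejection behavior are ours; the paper analyzes several variants (including implicit rejection) and their security in the quantum random oracle model.

```python
import hashlib
import os

def G(m): return hashlib.sha256(b"G" + m).digest()          # coins for derandomization
def H(m, c): return hashlib.sha256(b"H" + m + c).digest()   # session-key derivation

def t_encrypt(enc, pk, m):
    """T-like step: derive the base scheme's coins deterministically from m."""
    return enc(pk, m, coins=G(m))

def t_decrypt(enc, dec, pk, sk, c):
    """Decrypt, then re-encrypt to check the ciphertext was honestly formed."""
    m = dec(sk, c)
    if m is None or enc(pk, m, coins=G(m)) != c:
        return None                     # explicit rejection of malformed ciphertexts
    return m

def u_encaps(enc, pk, msg_len=32):
    """U-like step: encrypt a random message and hash it into a session key."""
    m = os.urandom(msg_len)
    c = t_encrypt(enc, pk, m)
    return c, H(m, c)

def u_decaps(enc, dec, pk, sk, c):
    m = t_decrypt(enc, dec, pk, sk, c)
    return None if m is None else H(m, c)

# Usage: plug in any randomized base PKE exposing enc(pk, m, coins) and dec(sk, c).
```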

Stateful Multi-Client Verifiable Computation

Tue, 09/26/2017 - 08:55
This paper develops a cryptographic protocol for outsourcing arbitrary stateful computation among multiple clients to an untrusted server, while guaranteeing integrity of the data. The clients communicate only with the server and store only a short authenticator to ensure that the server does not cheat. Our contribution is two-fold. First, we extend the recent hash&prove scheme of Fiore et al. (CCS 2016) to stateful computations that support arbitrary updates by the untrusted server, in a way that can be verified by the clients. We use this scheme to generically instantiate authenticated data types. Second, we describe a protocol for multi-client verifiable computation based on an authenticated data type, and prove that it achieves a computational version of fork linearizability. This is the strongest guarantee that can be achieved in the setting where clients do not communicate directly; it ensures correctness and consistency of outputs seen by the clients individually.

An Equivalence Between Attribute-Based Signatures and Homomorphic Signatures, and New Constructions for Both

Tue, 09/26/2017 - 07:42
In Attribute-Based Signatures (ABS; first defined by Maji, Prabhakaran and Rosulek, CT-RSA 2011) an authority can generate multiple signing keys, where each key is associated with an attribute $x$. Messages are signed with respect to a constraint $f$, and a key for $x$ can sign messages with respect to $f$ only if $f(x) = 0$. The security requirements are unforgeability and key privacy (signatures should not expose the specific signing key used). In (single-hop) Homomorphic Signatures (HS; first defined by Boneh and Freeman, PKC 2011), given a signature for a data-set $x$, one can evaluate a signature for the pair $(f(x),f)$ for functions $f$. In context-hiding HS, evaluated signatures do not reveal information about the original (pre-evaluated) signatures. In this work we start by showing that these two notions are in fact equivalent. The first implication of this equivalence is a new lattice-based ABS scheme for polynomial-depth circuits, based on the HS construction of Gorbunov, Vaikuntanathan and Wichs (GVW; STOC 2015). We then construct a new ABS candidate from a worst-case lattice assumption (SIS), with different parameters. Using our equivalence again, now in the opposite direction, our new ABS implies a new lattice-based HS scheme with a different parameter trade-off compared to the aforementioned GVW scheme.

Position-Based Cryptography and Multiparty Communication Complexity

Tue, 09/26/2017 - 07:16
\emph{Position based cryptography (PBC)}, proposed in the seminal work of Chandran, Goyal, Moriarty, and Ostrovsky (SIAM J. Computing, 2014), aims at constructing cryptographic schemes in which the identity of the user is his geographic position. Chandran et al.~construct PBC schemes for \emph{secure positioning} and \emph{position-based key agreement} in the \emph{bounded-storage model} (Maurer, J. Cryptology, 1992). Apart from bounded memory, their security proofs need a strong additional restriction on the power of the adversary: he cannot compute \emph{joint} functions of his inputs. Removing this assumption is left as an open problem. We show that an answer to this question would resolve a long-standing open problem in multiparty communication complexity: finding a function that is hard to compute with low communication complexity in the simultaneous message model, but easy to compute in the fully adaptive model. On the positive side, we also show implications in the other direction: we prove that lower bounds on the communication complexity of certain multiparty problems imply the existence of PBC primitives. Using this result we then show two attractive ways to ``bypass'' our hardness result: the first uses the random oracle model, the second weakens the \emph{locality} requirement in the bounded-storage model to \emph{online computability}. The random oracle construction is arguably one of the simplest proposed so far in this area. Our results indicate that constructing improved provably secure protocols for PBC requires a better understanding of multiparty communication complexity. This is yet another example where \emph{negative} results in one area (in our case: lower bounds in multiparty communication complexity) can be used to construct secure cryptographic schemes.

Card-Based Protocols Using Unequal Division Shuffles

Tue, 09/26/2017 - 03:59
Card-based cryptographic protocols can perform secure computation of Boolean functions. In 2013, Cheung et al. presented a protocol that securely produces a hidden AND value using five cards; however, it fails with a probability of 1/2. The protocol uses an unconventional shuffle operation called an unequal division shuffle; after a sequence of five cards is divided into a two-card portion and a three-card portion, these two portions are randomly switched so that nobody knows which is which. In this paper, we first show that the protocol proposed by Cheung et al. securely produces not only a hidden AND value but also a hidden OR value (with a probability of 1/2). We then modify their protocol such that, even when it fails, we can still evaluate the AND value in the clear. Furthermore, we present two five-card copy protocols (which can duplicate a hidden value) using unequal division shuffles. Because the most efficient copy protocol currently known requires six cards, our new protocols improve upon the existing results. We also design a general copy protocol that produces multiple copies using an unequal division shuffle. Finally, we show feasible implementations of unequal division shuffles by the use of card cases.
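
A toy simulation of just the shuffle operation described above (not of the full five-card protocols), assuming cards are represented as an ordered Python list:

```python
import secrets

def unequal_division_shuffle(cards, split=2):
    """Divide the sequence into a front portion of `split` cards and the rest,
    then swap the two portions with probability 1/2, so that afterwards nobody
    knows which portion ended up where (the coin flip models the hidden cut)."""
    assert 0 < split < len(cards)
    front, back = cards[:split], cards[split:]
    return back + front if secrets.randbits(1) else front + back

# For the five-card protocols above: a 2-card/3-card division of five cards.
print(unequal_division_shuffle(["c1", "c2", "c3", "c4", "c5"]))
# -> either ['c1', 'c2', 'c3', 'c4', 'c5'] or ['c3', 'c4', 'c5', 'c1', 'c2'], uniformly
```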

Efficient Algorithms for Broadcast and Consensus Based on Proofs of Work

Tue, 09/26/2017 - 03:55
Inspired by the astonishing success of cryptocurrencies, most notably the Bitcoin system, several recent works have focused on the design of robust blockchain-style protocols that work in a peer-to-peer setting such as the Internet. In contrast to the setting traditionally considered in multiparty computation (MPC), in these systems honesty is measured by computing power instead of requiring that only a certain fraction of parties is controlled by the adversary. This provides a potential countermeasure against the so-called Sybil attack, where an adversary creates fake identities, thereby easily taking over the majority of parties in the system. In this work we design protocols for Broadcast and Byzantine agreement that are secure under the assumption that the majority of computing power is controlled by the honest parties and that, for the first time, have expected constant round complexity. This is in contrast to earlier works (Crypto'15, ePrint'14), which have round complexities that scale linearly with the number n of parties; an undesirable feature in a P2P environment with potentially thousands of users. In addition, our main protocol, which runs in quasi-constant rounds, introduces novel ideas that significantly decrease communication complexity. Concretely, this is achieved by using an appropriate time-locked encryption scheme and by structuring the parties into a network of so-called cliques.

Round Optimal Concurrent MPC via Strong Simulation

Tue, 09/26/2017 - 03:02
In this paper, we study the round complexity of concurrently secure multi-party computation (MPC) with super-polynomial simulation (SPS) in the plain model. In the plain model, there are known explicit attacks showing that concurrently secure MPC with polynomial simulation is impossible to achieve; SPS security is the most widely studied model for concurrently secure MPC in the plain model. We obtain the following results:
– Three-round concurrent MPC with SPS security against Byzantine adversaries, assuming sub-exponentially secure DDH and LWE.
– Two-round concurrent MPC with SPS security against Byzantine adversaries for input-less randomized functionalities, assuming sub-exponentially secure indistinguishability obfuscation and DDH. In particular, this class includes sampling functionalities that allow parties to jointly sample a secure common reference string for cryptographic applications.
Prior to our work, to the best of our knowledge, concurrent MPC with SPS security required roughly 20 rounds, although we are not aware of any work that even gave an approximation of the constant round complexity sufficient for the multi-party setting. We also improve over the previous best round complexity for the two-party setting, where 5 rounds were needed (Garg, Kiyoshima, and Pandey, Eurocrypt 2017). To obtain our results, we compile protocols that already achieve security against “semi-malicious” adversaries into protocols secure against fully malicious adversaries, additionally assuming sub-exponential DDH. Our protocols develop new techniques to use two-round zero-knowledge with super-polynomial strong simulation, defined by Pass (Eurocrypt 2003) and very recently realized by Khurana and Sahai (FOCS 2017). These remain zero-knowledge against adversaries running in time larger than the running time of the simulator.

Near-Optimal Secret Sharing and Error Correcting Codes in AC0

Mon, 09/25/2017 - 21:52
We study the question of minimizing the computational complexity of (robust) secret sharing schemes and error correcting codes. In standard instances of these objects, both encoding and decoding involve linear algebra, and thus cannot be implemented in the class AC0. The feasibility of non-trivial secret sharing schemes in AC0 was recently shown by Bogdanov et al. (Crypto 2016) and that of (locally) decoding errors in AC0 by Goldwasser et al. (STOC 2007). In this paper, we show that by allowing some slight relaxation such as a small error probability, we can construct much better secret sharing schemes and error correcting codes in the class AC0. In some cases, our parameters are close to optimal and would be impossible to achieve without the relaxation. Our results significantly improve previous constructions in various parameters. Our constructions combine several ingredients in pseudorandomness and combinatorics in an innovative way. Specifically, we develop a general technique to simultaneously amplify security threshold and reduce alphabet size, using a two-level concatenation of protocols together with a random permutation. We demonstrate the broader usefulness of this technique by applying it in the context of a variant of secure broadcast.

Overcoming Cryptographic Impossibility Results using Blockchains

Mon, 09/25/2017 - 15:22
Blockchain technology has the potential to disrupt how cryptography is done. In this work, we propose to view blockchains as an "enabler", much like indistinguishability obfuscation (Barak et al., CRYPTO 2001, Garg et al., FOCS 2013, and Sahai and Waters, STOC 2014) or one-way functions, for building a variety of cryptographic systems. Our contributions in this work are as follows:
1. A Framework for Proof-of-Stake based Blockchains: We provide an abstract framework for formally analyzing and defining useful security properties for Proof-of-Stake (POS) based blockchain protocols. Interestingly, for some of our applications, POS based protocols are more suitable. We believe our framework and assumptions will be useful in building applications on top of POS based blockchain protocols in the future.
2. Blockchains as an Alternative to Trusted Setup Assumptions in Cryptography: A trusted setup, such as a common reference string (CRS), has been used to realize numerous systems in cryptography. The paragon example of a primitive requiring trusted setup is a non-interactive zero-knowledge (NIZK) system. We show that existing blockchain systems, including Bitcoin and Ethereum, can be used as a foundation (instead of a CRS) to realize NIZK systems. The novel aspect of our work is that it allows for utilizing an already existing (and widely trusted) setup rather than proposing a new one. Our construction does not require any additional functionality from the miners beyond what already exists, nor do we need to modify the underlying blockchain protocol. If an adversary could violate the security of our NIZK, it could potentially also take over billions of dollars worth of coins in Bitcoin, Ethereum, or any such cryptocurrency! We believe that such a "trusted setup" represents significant progress over using a CRS published by a central trusted party. Indeed, NIZKs could further serve as a foundation for a variety of other cryptographic applications such as round-efficient secure computation (Katz and Ostrovsky, CRYPTO 2004 and Horvitz and Katz, CRYPTO 2007).
3. One-time programs and pay-per-use programs: Goldwasser et al. (CRYPTO 2008) introduced the notion of a one-time program and presented a construction using tamper-proof hardware. As noted by Goldwasser et al., a one-time program clearly cannot be solely software based, as software can always be copied and run again. While there have been a number of follow-up works (Goyal et al., TCC 2010, Bellare et al., ASIACRYPT 2012, and Applebaum et al., SIAM Journal on Computing 2015), there are no known constructions of one-time programs that do not rely on self-destructing tamper-proof hardware (even if one uses trusted setup or random oracles). Somewhat surprisingly, we show that it is possible to base one-time programs on POS based blockchain systems without relying on trusted hardware. Our ideas do not seem to translate to Proof-of-Work (POW) based blockchains. We also introduce the notion of pay-per-use programs, which is simply a contract between two parties: a service provider and a customer. The service provider supplies a program such that if the customer transfers a specific amount of coins to the provider, the customer can evaluate the program on any input of its choice once, even if the provider is offline. This is naturally useful in a subscription-based model where payment is based on usage.

Adaptively Indistinguishable Garbled Circuits

Mon, 09/25/2017 - 14:33
A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. An adaptively secure scheme allows the adversary to specify the input x after seeing the garbled circuit. Applebaum et al. (CRYPTO '13) showed that in any garbling scheme with adaptive simulation-based security, the size of the garbled input must exceed the output size of the circuit. Here we show how to circumvent this lower bound and achieve significantly better efficiency under the minimal assumption that one-way functions exist, by relaxing the security notion from simulation-based to indistinguishability-based. We rely on the recent work of Hemenway et al. (CRYPTO '16), which constructed an adaptive simulation-based garbling scheme under one-way functions. The size of the garbled input in their scheme is as large as the output size of the circuit plus a certain pebble complexity of the circuit, where the latter is (e.g.) bounded by the space complexity of the computation. By building on top of their construction and adapting their proof technique, we show how to remove the output-size dependence in their result when considering indistinguishability-based security. As an application of the above result, we get a symmetric-key functional encryption scheme based on one-way functions, with indistinguishability-based security, where the adversary can obtain an unbounded number of function secret keys and then, adaptively, a single challenge ciphertext. The size of the ciphertext only depends on the maximal pebble complexity of each of the functions, but not on the number of functions or their circuit size.

Rational Trust Modeling

Mon, 09/25/2017 - 14:14
Trust models are widely used in various computer science disciplines. The main purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behaviors. In this article, the novel notion of rational trust modeling is introduced by bridging trust management and game theory. Note that trust models/reputation systems have been used in game theory (e.g., repeated games) for a long time; however, game theory has not been utilized in the process of trust model construction, and this is where the novelty of our approach comes from. In our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational/selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility that they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can then be approached by game-theoretic analyses and solution concepts such as Nash equilibrium. Although rationality might be built into some existing trust models, we intend to formalize the notion of rational trust modeling from the designer's perspective. This approach results in two fascinating outcomes. First of all, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can later be utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Furthermore, using a rational trust model, we can prevent many well-known attacks on trust models. These two prominent properties also help us to predict the behavior of the players in subsequent steps by game-theoretic analyses.

Oblivious Hashing Revisited, and Applications to Asymptotically Efficient ORAM and OPRAM

Mon, 09/25/2017 - 13:54
Oblivious RAM (ORAM) is a powerful cryptographic building block that allows a program to provably hide its access patterns to sensitive data. Since the original proposal of ORAM by Goldreich and Ostrovsky, numerous improvements have been made. To date, the best asymptotic overhead achievable for general block sizes is $O(\log^2 N/\log \log N)$, due to an elegant scheme by Kushilevitz et al., which in turn relies on the oblivious Cuckoo hashing scheme by Goodrich and Mitzenmacher. In this paper, we make the following contributions: we first revisit the prior $O(\log^2 N/\log \log N)$-overhead ORAM result and demonstrate that it is somewhat incomplete, owing to a subtle gap in a core building block, namely, Goodrich and Mitzenmacher's oblivious Cuckoo hashing scheme. Even though we show how to patch the prior result so that Goodrich and Mitzenmacher's elegant blueprint for oblivious Cuckoo hashing can be fully realized, it is clear that the extreme complexity of oblivious Cuckoo hashing has made understanding, implementation, and proofs difficult. We show that there is a conceptually simple $O(\log^2 N/\log \log N)$-overhead ORAM that dispenses with oblivious Cuckoo hashing entirely. We show that such a conceptually simple scheme lends itself to further extensions. Specifically, we obtain the first $O(\log^2 N/\log \log N)$ Oblivious Parallel RAM (OPRAM) scheme, thus not only matching the performance of the best known sequential ORAM, but also achieving super-logarithmic improvements in comparison with known OPRAM schemes.

On differential equivalence of APN functions

Mon, 09/25/2017 - 13:07
For a given vectorial Boolean function $F$ from $\mathbb{F}_2^n$ to itself, C. Carlet, P. Charpin, and V. Zinoviev defined in 1998 an associated Boolean function $\gamma_F(a,b)$ in $2n$ variables that takes the value $1$ iff $a\neq{\bf 0}$ and the equation $F(x)+F(x+a)=b$ has solutions. In this paper we introduce the notion of differentially equivalent functions: vectorial functions that have equal associated Boolean functions. Describing the differential equivalence class of a given APN function is an open problem of great interest. We obtain that each quadratic APN function $G$ in $n$ variables, $n\leq 6$, that is differentially equivalent to a given quadratic APN function $F$ can be represented as $G = F + A$, where $A$ is an affine function. For the APN Gold function $F(x)=x^{2^k+1}$, where $\gcd(k,n)=1$, we completely describe all affine functions $A$ such that $F$ and $F+A$ are differentially equivalent. This result implies that the APN Gold functions $F$ with $k=n/2 - 1$ for $n=4t$ form the first infinite family of functions, up to EA-equivalence, whose differential equivalence class is non-trivial, i.e., contains more than the $2^{2n}$ trivial functions $F_{c,d}(x) = F(x+c)+d$, $c,d\in\mathbb{F}_2^n$.
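
The definition of $\gamma_F$ and of differential equivalence translates directly into a small brute-force check for functions given as lookup tables; the example below uses the Gold function $x\mapsto x^3$ over $\mathbb{F}_{2^3}$ (a toy instance computed for illustration) and one of its trivially equivalent shifts $F_{c,d}$.

```python
def gamma(F, n):
    """The set of pairs (a, b) with gamma_F(a, b) = 1, i.e. a != 0 and
    F(x) + F(x + a) = b solvable, for F given as a lookup table over
    F_2^n (integers 0..2^n - 1, with '+' being XOR)."""
    N = 1 << n
    return {(a, F[x] ^ F[x ^ a]) for a in range(1, N) for x in range(N)}

def differentially_equivalent(F, G, n):
    """F and G are differentially equivalent iff their gamma functions coincide."""
    return gamma(F, n) == gamma(G, n)

# Gold function x -> x^3 over GF(2^3) (modulus x^3 + x + 1), written as a table:
GOLD3 = [0, 1, 3, 4, 5, 6, 7, 2]
c, d = 5, 3
SHIFTED = [GOLD3[x ^ c] ^ d for x in range(8)]        # the trivial shift F_{c,d}
assert differentially_equivalent(GOLD3, SHIFTED, 3)   # same differential equivalence class
```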
