
On the Security Properties of e-Voting Bulletin Boards

Tue, 06/05/2018 - 08:49
In state-of-the-art e-voting systems, a bulletin board (BB) is a critical component for preserving election integrity and availability. Although it is common in the literature to assume that the BB is a trusted, centralized entity, the recent works of Culnane and Schneider [CSF 2014] and Chondros et al. [ICDCS 2016] have extensively discussed the importance of removing the BB as a single point of failure. Motivated by these works, we introduce a framework for the formal security analysis of the BB functionality modeled as a distributed system comprising (i) a subsystem of item collection (IC) peers that receive and store the submitted items, and (ii) a subsystem of audit board (AB) peers, on which the IC subsystem publishes the stored items for verification. Our framework treats a secure BB as a robust public transaction ledger, as defined by Garay et al. [Eurocrypt 2015], that additionally supports the generation of receipts for successful posting. Namely, in our model, a secure BB system achieves Persistence and Liveness that are confirmable, in the sense that any malicious behavior of the BB system can be detected via a verification mechanism. As a case study for our framework, we analyze the BB system of Culnane and Schneider and point out its weaknesses. We demonstrate an attack showing that their system does not achieve Confirmable Liveness, even when the adversary is computationally bounded and covert, i.e., it may deviate from the protocol specification but does not want to be detected. In addition, we show that special care must be taken in the choice of the underlying cryptographic primitives, so that the claimed fault-tolerance threshold of N/3 out of N corrupted IC peers is preserved. Next, based on our analysis, we introduce a new BB protocol that upgrades the [CSF 2014] protocol. We prove that it tolerates any number fewer than N/3 out of N corrupted IC peers, for both Persistence and Confirmable Liveness, against a computationally bounded, general Byzantine adversary. Furthermore, Persistence can also be Confirmable if we distribute the AB (originally a centralized entity in [CSF 2014]) as a replicated service with honest majority.
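As a toy illustration of the confirmable-receipt idea (our own minimal sketch, not the paper's protocol), the following snippet models an IC subsystem that issues a receipt only once strictly more than 2N/3 peers acknowledge an item, the quorum size under which fewer than N/3 Byzantine peers can neither forge nor block a receipt; peer signatures are approximated by HMAC tags over hypothetical keys:

```python
# Minimal sketch (not the paper's protocol): N item-collection (IC)
# peers acknowledge a submitted item; a receipt exists once strictly
# more than 2N/3 peers respond, tolerating fewer than N/3 corruptions.
import hmac, hashlib

N = 10  # number of IC peers (illustrative)
PEER_KEYS = {i: f"peer-key-{i}".encode() for i in range(N)}

def peer_ack(peer_id: int, item: bytes) -> bytes:
    """An honest IC peer acknowledges an item with a MAC tag."""
    return hmac.new(PEER_KEYS[peer_id], item, hashlib.sha256).digest()

def collect_receipt(item: bytes, responding_peers: list[int]):
    """Return a receipt if strictly more than 2N/3 peers acknowledged,
    so any quorum intersects the honest peers even when fewer than
    N/3 peers are Byzantine."""
    acks = {p: peer_ack(p, item) for p in responding_peers}
    if len(acks) * 3 > 2 * N:
        return acks  # receipt: a quorum of per-peer tags
    return None      # not enough acknowledgements; retry or flag

receipt = collect_receipt(b"ballot-42", responding_peers=list(range(7)))
print("posted" if receipt else "no receipt")
```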

The Curse of Small Domains: New Attacks on Format-Preserving Encryption

Tue, 06/05/2018 - 00:23
Format-preserving encryption (FPE) produces ciphertexts that have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS '16; Durak and Vaudenay, CRYPTO '17) have highlighted security deficiencies in the recent NIST SP 800-38G standard. This has left open the question of whether practical schemes with high security exist. In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is a set of new known-plaintext message-recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP 800-38G standard) that improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions about the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to FF3 and its handling of odd-length domains, which leads to a substantial speedup in our attacks. We also show the first attacks against non-Feistel-based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds of the Feistel construction with a pairwise-independent permutation, following the paradigm of Naor and Reingold (JoC '99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity in commercial applications. All of our attacks show that existing constructions fall short of achieving desirable security levels. For the Feistel and FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for the suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
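To see why small domains are dangerous independently of the specific attacks above, here is a hedged toy example (ours, not FF1/FF3): a balanced Feistel FPE on the 100-element domain of two decimal digits, where a known-plaintext adversary can simply tabulate the entire permutation:

```python
# Toy sketch (not FF1/FF3): a balanced Feistel cipher on a two-digit
# decimal domain. On a domain of only 100 points, an attacker with
# enough known plaintext/ciphertext pairs recovers the full codebook.
import hashlib

def round_fn(key: bytes, rnd: int, half: int) -> int:
    h = hashlib.sha256(key + bytes([rnd, half])).digest()
    return h[0] % 10

def fpe_encrypt(key: bytes, x: int, rounds: int = 8) -> int:
    l, r = divmod(x, 10)                               # split 0..99 into two digits
    for i in range(rounds):
        l, r = r, (l + round_fn(key, i, r)) % 10       # invertible Feistel step
    return 10 * l + r

key = b"demo-key"
# Known-plaintext "attack": build the full decryption table.
codebook = {fpe_encrypt(key, x): x for x in range(100)}
assert codebook[fpe_encrypt(key, 42)] == 42
print("domain of 100 points: full decryption table recovered trivially")
```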

Private Circuits: A Modular Approach

Mon, 06/04/2018 - 23:51
We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized Boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p > 0$, the leakage reveals essentially nothing about the input. In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to the previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis $B$ such that leakage-resilient computation with leakage probability $p$ can be realized using circuits over the basis $B$. We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this with a negative result, showing that for every basis $B$ there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general. We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any $t$ (adversarially chosen) wires reveals nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with $O(t^{1+\varepsilon})$ random bits, for every constant $\varepsilon > 0$. This (near-optimal) bound significantly improves upon previous constructions, which required more than $t^{3}$ random bits.
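For intuition about the random-probing model (a sketch under our own simplifications, showing only the encoding side rather than the paper's compiler), the snippet below XOR-shares a bit into $n$ shares, leaks each wire independently with probability $p$, and estimates how rarely all shares leak at once:

```python
# Illustrative sketch of the random-probing leakage model: a bit is
# XOR-shared into n shares, each "wire" leaks independently with
# probability p, and the secret is exposed only if every share leaks.
import functools, random

def xor_share(bit: int, n: int) -> list:
    shares = [random.getrandbits(1) for _ in range(n - 1)]
    last = bit ^ functools.reduce(lambda a, b: a ^ b, shares, 0)
    return shares + [last]           # shares XOR to the secret bit

def leak(shares, p):
    return [s for s in shares if random.random() < p]

# Estimate how often all n shares of one encoding leak at p = 0.25.
p, n, trials = 0.25, 8, 100_000
full = sum(len(leak(xor_share(0, n), p)) == n for _ in range(trials))
print(f"all {n} shares leaked in {full}/{trials} trials (~p^n = {p**n:.2e})")
```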

Homomorphic Encryption for Approximate Matrix Arithmetic

Mon, 06/04/2018 - 23:50
Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN), with its vector packing technique, has proved its potential in cryptographic applications. In this paper, we propose MHEAAN, a generalization of the HEAAN scheme to the multivariate case. Our design inherits the advantage of HEAAN that precision losses during evaluation are bounded by the depth of the circuit, exceeding those of unencrypted approximate arithmetic, such as floating-point operations, by at most one bit. In addition, with a multivariate structure of the plaintext space, we suggest a general method for packing multidimensional structures, such as matrices and tensors, in a single ciphertext. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix transposition, matrix multiplication, and inversion.
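The packing idea can be previewed at the plaintext level (our illustrative sketch; the scheme itself applies the analogous slot permutations homomorphically, which is not shown here): a $d \times d$ matrix is laid out row-major in a slot vector, and transposition becomes a fixed index permutation:

```python
# Plaintext-level sketch of matrix packing: lay a d x d matrix out
# row-major in a vector of "slots", so transposition is just a fixed
# permutation of slot indices. The encrypted analogue permutes slots
# homomorphically; none of this is the MHEAAN scheme itself.
import numpy as np

d = 3
A = np.arange(d * d).reshape(d, d)
slots = A.reshape(-1)                                  # pack: row-major slots
perm = [(i % d) * d + i // d for i in range(d * d)]    # transpose as index map
At = slots[perm].reshape(d, d)                         # unpack after permuting
assert (At == A.T).all()
```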

Impossibility on Tamper-Resilient Cryptography with Uniqueness Properties

Mon, 06/04/2018 - 20:23
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we treat is the plain model, where public parameters and public/secret key pairs are potentially tampered with.

Multi-client Predicate-only Encryption for Conjunctive Equality Tests

Mon, 06/04/2018 - 19:53
We propose the first multi-client predicate-only encryption scheme capable of efficiently testing the equality of two encrypted vectors. Our construction can be used for the privacy-preserving monitoring of relations among multiple clients. Since both the clients’ data and the predicates are encrypted, our system is suitable for situations in which this information is considered sensitive. We prove our construction plaintext-private and predicate-private in the generic bilinear group model using random oracles, and secure under chosen-plaintext attack with unbounded corruptions under the symmetric external Diffie-Hellman assumption. Additionally, we provide a proof-of-concept implementation that is capable of evaluating one thousand predicates defined over the inputs of ten clients in less than a minute on commodity hardware.
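For concreteness, the functionality being computed is a conjunctive equality test, sketched below in the clear (the scheme evaluates it over encrypted vectors in a bilinear group); the wildcard convention and attribute values are ours and purely illustrative:

```python
# Plaintext sketch of the *functionality* only: a conjunctive equality
# predicate over attribute vectors, with None as a wildcard position.
def conj_eq(predicate, inputs):
    return all(p is None or p == x for p, x in zip(predicate, inputs))

inputs = ["alice", "transfer", 500]            # e.g., joined from several clients
print(conj_eq(["alice", None, 500], inputs))   # True
print(conj_eq(["bob", None, 500], inputs))     # False
```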

maskVerif: a formal tool for analyzing software and hardware masked implementations

Mon, 06/04/2018 - 19:53
Masking is a popular countermeasure for protecting both hardware and software implementations against differential power analysis. A main strength of software masking is that its security guarantees can be captured formally through well-established models. The existence of such models, and their relation to the probabilistic information flow studied in formal methods, has been instrumental to the emergence of fully automated methods for analyzing masked implementations. In particular, state-of-the-art tools such as maskVerif (Barthe et al., EUROCRYPT 2015) have been used successfully for analyzing masked implementations at high orders. In contrast, security models for hardware implementations have remained somewhat less developed, and no prior verification tool has accommodated hardware-specific sources of vulnerabilities such as glitches. Recently, Bloem et al. formalized the security of masked hardware implementations against glitches and gave a method based on SAT solvers for verifying security automatically; however, their method works only for small functionalities and low orders. In this paper, we extend the maskVerif tool (Barthe et al., EUROCRYPT 2015) with a unified framework to efficiently and formally verify both software and hardware implementations. In this process, we introduce a simple but expressive intermediate language. Our representation requires that each instruction be instrumented with leakage expressions that may depend on the expressions arising in the instruction and on previous computation. Despite its simplicity, our intermediate representation covers a broad range of models from the literature; moreover, it is also easily amenable to efficient formal verification. Our results significantly improve over prior work, both in coverage and in efficiency. In particular, we demonstrate that our tool analyzes the examples of Bloem et al. (EUROCRYPT 2018) much faster and at higher orders.
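As a miniature of what such tools verify, here is a first-order ISW-style masked AND gadget (our sketch; maskVerif analyzes real implementations, and this snippet demonstrates only the gadget's functional correctness, not its probing security):

```python
# First-order ISW-style masked AND over XOR shares. Security would
# mean no single intermediate value depends on an unmasked secret;
# here we only check that the shares recombine correctly.
import random

def mask(bit: int):
    r = random.getrandbits(1)
    return (r, bit ^ r)

def masked_and(a, b):
    a0, a1 = a
    b0, b1 = b
    r = random.getrandbits(1)                          # fresh randomness
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ ((a0 & b1) ^ (a1 & b0) ^ r)
    return (c0, c1)                                    # c0 ^ c1 == x & y

for x in (0, 1):
    for y in (0, 1):
        c0, c1 = masked_and(mask(x), mask(y))
        assert c0 ^ c1 == x & y
print("masked AND is functionally correct on all inputs")
```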

Blockchain Abstract Data Type

Mon, 06/04/2018 - 19:52
This work continues a recent line of effort in the distributed computing community dedicated to the theoretical aspects of blockchains. This paper is the first to specify blockchains as a composition of abstract data types, together with a hierarchy of consistency criteria that formally characterizes the histories admissible for the distributed programs that use them. Our work is based on an original oracle-based construction that, along with new consistency definitions, captures the eventual convergence process in blockchain systems. The paper also presents results on the implementability of the presented abstractions, as well as a mapping of representative existing blockchains, from both academia and industry, into our framework.
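A minimal sketch of a blockchain viewed as an abstract data type, in the spirit of (though far simpler than) the paper's formal construction, might expose an append/read interface parameterized by a validity oracle; the names and the single-chain simplification below are ours:

```python
# Our toy sketch of a blockchain ADT: append/read, parameterized by an
# oracle that decides which blocks may extend the current structure.
import hashlib

class BlockchainADT:
    def __init__(self, oracle):
        self.blocks = [b"genesis"]
        self.oracle = oracle                 # consistency/validity oracle

    def append(self, payload: bytes) -> bool:
        head = hashlib.sha256(self.blocks[-1]).digest()
        block = head + payload               # link to the current head
        if self.oracle(self.blocks, block):
            self.blocks.append(block)
            return True
        return False

    def read(self):
        return list(self.blocks)             # history exposed to programs

bc = BlockchainADT(oracle=lambda chain, blk: len(blk) < 96)
print(bc.append(b"tx: a->b 5"), len(bc.read()))
```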

Sub-Linear Lattice-Based Zero-Knowledge Arguments for Arithmetic Circuits

Mon, 06/04/2018 - 19:46
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $p$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $N$ gates, the communication complexity of our protocol is $O\left(\sqrt{N\lambda\log^3{N}}\right)$, where $\lambda$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damg{\aa}rd et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.

Proofs of Work from Worst-Case Assumptions

Mon, 06/04/2018 - 19:44
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not. This yields full PoWs that are provably hard from worst-case assumptions (previously, PoWs were based either on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure that enables extra properties. Namely, we show that the PoWs of Ball et al. (STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating in average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. In our context, this implies the required non-amortizability of our PoWs.
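For reference, one of the underlying worst-case problems can be stated in a few lines (the PoW itself is built from an algebraic, low-degree-polynomial form of such problems, which this naive checker does not capture): Orthogonal Vectors asks whether two lists of 0/1 vectors contain a pair with empty coordinate-wise AND:

```python
# Orthogonal Vectors, stated directly. Fine-grained hardness says the
# naive n^2 * d check below cannot be beaten substantially; the PoW is
# derived from an algebraic form of this problem, not shown here.
def orthogonal_pair(A, B) -> bool:
    return any(all(a & b == 0 for a, b in zip(u, v)) for u in A for v in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(0, 1, 0), (1, 1, 0)]
print(orthogonal_pair(A, B))  # True: (1,0,1) is orthogonal to (0,1,0)
```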

Simplifying Game-Based Definitions: Indistinguishability up to Correctness and Its Application to Stateful AE

Mon, 06/04/2018 - 19:43
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or IND|C. Given games $G$ and $H$ and a correctness condition $C$, we define an advantage measure $\mathrm{Adv}^{\mathrm{indc}}_{G,H,C}$ wherein $G$/$H$ distinguishing attacks are effaced to the extent that they are inevitable due to $C$. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions. We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
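As a concrete example of the delivery policies such sAE notions capture (our sketch of a plausible correctness condition $C$, not the paper's definitions), the receiver below accepts messages at most $w$ positions out of order and rejects replays via a sliding window:

```python
# A receiver-side delivery policy: accept sequence numbers at most w
# positions out of order, reject replays, via a sliding window of seen
# numbers. This models a correctness condition, not the paper's games.
class ReplayWindow:
    def __init__(self, w: int):
        self.w, self.top, self.seen = w, -1, set()

    def accept(self, seq: int) -> bool:
        if seq in self.seen or seq <= self.top - self.w:
            return False                       # replayed or too old
        self.seen.add(seq)
        self.top = max(self.top, seq)
        self.seen = {s for s in self.seen if s > self.top - self.w}
        return True

rw = ReplayWindow(w=4)
print([rw.accept(s) for s in (0, 2, 1, 2, 7, 3)])
# [True, True, True, False, True, False]: 2 is a replay, 3 is too old
```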

Non-Interactive Zero-Knowledge Proofs for Composite Statements

Mon, 06/04/2018 - 19:41
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former are highly efficient for proving algebraic statements, while the latter are superior for arithmetic representations. Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that combine algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs, and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. This leads to proofs for statements of the form: knowledge of $x$ such that $SHA(g^x)=y$ for some public $y$, where the prover performs 500 times fewer exponentiations than with a QAP-based SNARK, at the cost of increasing the proof size to 2404 group and field elements. Applied to anonymous credentials, our techniques require 8 times fewer exponentiations for the prover, at the cost of increasing the proof size to 298 elements.
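To pin down the shape of such a composite statement, the snippet below evaluates the relation $SHA(g^x)=y$ in the clear with toy, purely illustrative parameters; a NIZK for this statement would convince a verifier that the prover knows $x$ without revealing $x$ or $g^x$:

```python
# The flavor of composite statement targeted here: an algebraic part
# (the exponentiation) feeding an arithmetic/Boolean part (the hash).
# The toy group parameters below are ours, purely illustrative.
import hashlib

p, g = 2**61 - 1, 3                    # hypothetical toy group
x = 123456789                          # the prover's witness
gx = pow(g, x, p)
y = hashlib.sha256(gx.to_bytes(8, "big")).hexdigest()
# A NIZK proves knowledge of x with SHA256(g^x) == y, revealing neither.
print(y[:16], "...")
```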

Limits on the Power of Garbling Techniques for Public-Key Encryption

Mon, 06/04/2018 - 19:37
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC '89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques to achieve this goal. One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao's garbled circuit technique [FOCS '86]. As for the non-black-box power of this technique, the recent work of Döttling and Garg [CRYPTO '17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption. We prove that garbling of circuits that have OWF (or even random oracle) gates in them is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.

A new class of irreducible pentanomials for polynomial based multipliers in binary fields

Mon, 06/04/2018 - 19:36
We introduce a new class of irreducible pentanomials over $\mathbb{F}_2$ of the form $f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to define the finite field extension of degree $m$. We give the exact number of operations required for computing the reduction modulo $f$. We also provide a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$, combined with our reduction process. We give the total cost of the multiplier and find that the bit-parallel multiplier defined by this new class of polynomials has improved XOR and AND complexity. Our multiplier has a time delay comparable to that of other multipliers based on the Karatsuba algorithm.
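A bit-level sketch of the reduction step (our illustration with small, not necessarily irreducible, parameters; the paper counts the exact hardware cost of this operation) uses the congruence $x^m \equiv x^{b+c} + x^b + x^c + 1 \pmod{f}$, with $m = 2b+c$, to fold high-degree terms down:

```python
# Reduction in GF(2)[x] modulo f(x) = x^(2b+c) + x^(b+c) + x^b + x^c + 1,
# with polynomials as Python ints (bit i = coefficient of x^i). Each
# leading term x^k (k >= m) is replaced by its image under
# x^m = x^(b+c) + x^b + x^c + 1 (mod f); new bits have degree < k,
# so the loop terminates. Parameters b, c below are illustrative only.
def reduce_mod_pentanomial(poly: int, b: int, c: int) -> int:
    m = 2 * b + c
    low_image = (1 << (b + c)) | (1 << b) | (1 << c) | 1   # image of x^m
    while poly >> m:
        k = poly.bit_length() - 1                          # leading term x^k
        poly ^= (1 << k) | (low_image << (k - m))          # replace x^k
    return poly

# Example with b = 3, c = 2 (so m = 8): reduce x^8 + 1.
print(bin(reduce_mod_pentanomial(0b1_0000_0001, 3, 2)))    # x^5 + x^3 + x^2
```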

Optimal Channel Security Against Fine-Grained State Compromise: The Safety of Messaging

Mon, 06/04/2018 - 19:36
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this, we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
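The symmetric intuition behind such key updates (a minimal sketch of a hash ratchet; the paper's updatable public-key primitives are stronger and bidirectional) is to derive a fresh message key and overwrite the chain key after every message, so a later state compromise reveals nothing about earlier keys:

```python
# Hash-ratchet sketch: each step yields a per-message key and a new
# chain key; old keys cannot be recomputed from the current state.
import hashlib

def ratchet(key: bytes) -> tuple:
    msg_key = hashlib.sha256(b"msg" + key).digest()
    next_key = hashlib.sha256(b"chain" + key).digest()
    return msg_key, next_key

k = b"initial-shared-secret"
for i in range(3):
    mk, k = ratchet(k)
    print(i, mk.hex()[:16])   # per-message key; forward-secure w.r.t. k
```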

On the Complexity of Compressing Obfuscation

Mon, 06/04/2018 - 19:35
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.

In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and the plausibility of its existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.

First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.

Second, we show that to construct compressing obfuscation with perfect correctness, one need only assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.

Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.

Structured Encryption and Leakage Suppression

Mon, 06/04/2018 - 19:34
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. One aspect of STE that is still poorly understood is its leakage. In this work, we describe a general framework for designing STE schemes that do not leak the query/search pattern (i.e., if and when a query was previously made). Our framework consists of two compilers. The first can be used to make any dynamic STE scheme rebuildable, in the sense that the encrypted structures it produces can be rebuilt efficiently using only O(1) client storage. The second transforms any rebuildable scheme that leaks the query/search pattern into a new scheme that does not. Our second compiler is a generalization of Goldreich and Ostrovsky's square-root oblivious RAM (ORAM) solution, but does not make use of black-box ORAM simulation. We show that our framework produces STE schemes whose query complexity is asymptotically better than ORAM simulation in certain (natural) settings and comparable to special-purpose oblivious data structures. We use our framework to design a new STE scheme that is ``almost'' zero-leakage in the sense that it reveals, intuitively speaking, only a small amount of information. We also show how the scheme can be used to achieve zero-leakage queries when one can tolerate a probabilistic guarantee of correctness. This construction results from applying our compilers to a new STE scheme we design, called the piggyback scheme. This scheme is a general-purpose STE construction (in the sense that it can encrypt any data structure) that leaks the search/query pattern but hides the response length on non-repeating queries.
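To make the query/search pattern concrete (a toy sketch of the leakage the second compiler suppresses, not of any scheme in the paper): with deterministic query tokens, the server learns when a query repeats even though it never sees the query itself:

```python
# Deterministic query tokens leak the search pattern: equal tokens
# reveal repeated queries to the server, nothing more is needed.
import hashlib, hmac

key = b"client-key"

def token(q: str) -> str:
    return hmac.new(key, q.encode(), hashlib.sha256).hexdigest()[:16]

history = [token(q) for q in ("wages", "bonus", "wages")]
print(history[0] == history[2])   # True: the server learns the repeat
```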

Trapdoor Functions from the Computational Diffie-Hellman Assumption

Mon, 06/04/2018 - 19:31
Trapdoor functions (TDFs) are a fundamental primitive in cryptography. Yet, the current set of assumptions known to imply TDFs is surprisingly limited, when compared to public-key encryption. We present a new general approach for constructing TDFs. Specifically, we give a generic construction of TDFs from any Hash Encryption (Döttling and Garg [CRYPTO '17]) satisfying a novel property which we call recyclability. By showing how to adapt current Computational Diffie-Hellman (CDH) based constructions of hash encryption to yield recyclability, we obtain the first construction of TDFs with security proved under the CDH assumption. While TDFs from the Decisional Diffie-Hellman (DDH) assumption were previously known, the possibility of basing them on CDH had remained open for more than 30 years.

PRank: Fast Analytical Rank Estimation via Pareto Distributions

Mon, 06/04/2018 - 19:30
Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and memory consumption required to brute-force the key given the leakages as probability distributions over $d$ subkeys (usually key bytes). These estimates are particularly useful when the key is beyond the reach of exhaustive search. We propose a new method for rank estimation, called PRank, that is conceptually simple and more time- and memory-efficient than previous proposals. Our main idea is to bound each subkey distribution by a Pareto-like function: since these are analytical functions, we can estimate the rank by a closed formula. We evaluated the performance of PRank through extensive simulations based on two real SCA data corpora and compared it to the currently best histogram-based algorithm. We show that PRank gives a good rank estimate with much improved time and memory efficiency, especially for large ranks: for ranks between $2^{80}$ and $2^{100}$, the PRank estimate is at most 10 bits above the histogram rank, and for ranks beyond $2^{100}$ it is only 4 bits above the histogram rank---yet it runs faster and uses negligible memory. PRank thus gives a new and interesting method for solving the rank estimation problem: reduce it to analytical functions and evaluate one closed formula, using negligible time and space.
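One ingredient of the approach can be sketched in a few lines (our toy version on a synthetic distribution; the full algorithm combines $d$ such bounds into one closed formula, which is not reproduced here): fit a Pareto-like bound $p(r) \approx \alpha r^{-\beta}$ to a subkey's sorted probabilities by log-log least squares, then invert it analytically:

```python
# Toy Pareto fit for one subkey distribution: fit log p = log a - b log r,
# then invert analytically: probabilities drop below q near rank
# (alpha / q)^(1/beta). The synthetic distribution below is ours.
import math

raw = [(r + 1) ** -1.5 for r in range(256)]   # synthetic sorted subkey probs
s = sum(raw)
probs = [p / s for p in raw]

xs = [math.log(r + 1) for r in range(256)]
ys = [math.log(p) for p in probs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
beta = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
       / sum((x - mx) ** 2 for x in xs)
alpha = math.exp(my + beta * mx)

q = 1e-4
rank_at_q = (alpha / q) ** (1 / beta)         # closed-form inversion
print(f"beta~{beta:.2f}; probabilities fall below {q} near rank {rank_at_q:.0f}")
```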

Adaptive Garbled RAM from Laconic Oblivious Transfer

Mon, 06/04/2018 - 19:29
We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a ``large'' persistent database, which is stored on a server. Next, the client can provide multiple adaptively and adversarially chosen garbled RAM programs that execute on and arbitrarily modify the stored database. The garbled database and the garbled programs reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and of a garbled program grow only linearly in the size of the database and the running time of the executed program, respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were known only from indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant-round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting.
