Tight on Budget? Tight Bounds for r-Fold Approximate Differential Privacy

IACR Eprint - Wed, 09/05/2018 - 13:02
Many applications, such as anonymous communication systems, privacy-enhancing database queries, or privacy-enhancing machine-learning methods, require robust guarantees under thousands and sometimes millions of observations. The notion of r-fold approximate differential privacy (ADP) offers a well-established framework with a precise characterization of the degree of privacy after r observations of an attacker. However, existing bounds for r-fold ADP are loose and, if used for estimating the required degree of noise for an application, can lead to over-cautious choices of perturbation randomness and thus to suboptimal utility or overly high costs. We present a numerical and widely applicable method for capturing the privacy loss of differentially private mechanisms under composition, which we call privacy buckets. With privacy buckets we compute provable upper and lower bounds on ADP for a given number of observations. We compare our bounds with state-of-the-art bounds for r-fold ADP, including Kairouz, Oh, and Viswanath's composition theorem (KOV), concentrated differential privacy, and the moments accountant. While KOV proved optimal bounds for heterogeneous adaptive k-fold composition, we show that for concrete sequences of mechanisms tighter bounds can be derived by taking the mechanisms' structure into account. We compare previous bounds for the Laplace mechanism, the Gauss mechanism, a timing leakage reduction mechanism, and stochastic gradient descent, and we significantly improve over their results (except that we match the KOV bound for the Laplace mechanism, for which it appears to be tight). Our lower bounds almost meet our upper bounds, showing that no significantly tighter bounds are possible.
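
To make the approach concrete, the sketch below discretizes the privacy loss distribution of a Gaussian mechanism, composes it for r = 8 observations by convolving losses, and evaluates the tight ADP delta(eps). It is an illustrative simplification under arbitrary example parameters, not the paper's privacy-buckets algorithm: in particular, the plain rounding used here yields an approximation rather than the provable upper and lower bounds that privacy buckets provide.

```python
# Illustrative sketch only (not the paper's privacy-buckets algorithm):
# discretize the privacy loss of one mechanism, compose r observations by
# convolving losses, and read off the tight ADP delta(eps). Parameters
# (sigma, grid, bucket width) are arbitrary example choices.
import math
from collections import defaultdict

def gaussian_loss_dist(sigma, x_step=0.01, tail=8.0, bucket=0.01):
    """Loss L(x) = ln P(x)/Q(x) for P = N(0, sigma^2), Q = N(1, sigma^2),
    with probability mass taken under P and tails beyond +-tail truncated."""
    dist = defaultdict(float)
    steps = int(2 * tail / x_step)
    for i in range(steps + 1):
        x = -tail + i * x_step
        mass = math.exp(-x * x / (2 * sigma**2)) * x_step / (sigma * math.sqrt(2 * math.pi))
        loss = (1 - 2 * x) / (2 * sigma**2)      # ln P(x)/Q(x) in closed form
        dist[round(loss / bucket) * bucket] += mass
    return dist

def compose(d1, d2, bucket=0.01):
    """Under independent composition, losses add and masses multiply."""
    out = defaultdict(float)
    for l1, p1 in d1.items():
        for l2, p2 in d2.items():
            out[round((l1 + l2) / bucket) * bucket] += p1 * p2
    return out

def delta(dist, eps):
    """Tight ADP: delta(eps) = E_P[ max(0, 1 - exp(eps - L)) ]."""
    return sum(p * max(0.0, 1.0 - math.exp(eps - l)) for l, p in dist.items())

if __name__ == "__main__":
    single = gaussian_loss_dist(sigma=2.0)
    two = compose(single, single)
    four = compose(two, two)
    eight = compose(four, four)                  # 8-fold composition
    for eps in (0.5, 1.0, 2.0):
        print(f"r=8, eps={eps}: delta ~ {delta(eight, eps):.3e}")
```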

Secure Outsourcing of Circuit Manufacturing

IACR Eprint - Wed, 09/05/2018 - 09:56
The fabrication process of integrated circuits (ICs) is complex and requires the use of off-shore foundries to lower the costs and to have access to leading-edge manufacturing facilities. Such an outsourcing trend leaves the possibility of inserting malicious circuitry (a.k.a. hardware Trojans) during the fabrication process, causing serious security concerns. Hardware Trojans are very hard and expensive to detect and can disrupt the entire circuit or covertly leak sensitive information via a subliminal channel. In this paper, we propose a formal model for assessing the security of ICs whose fabrication has been outsourced to an untrusted off-shore manufacturer. Our model captures that the IC specification and design are trusted but the fabrication facility(ies) may be malicious. Our objective is to investigate security in an ideal sense; we follow a simulation-based approach that ensures that Trojans cannot release any sensitive information to the outside. It follows that the Trojans' impact on the overall IC operation, in case they exist, will be negligible up to simulation. We then establish that such a level of security is in fact achievable for the case of a single outsourcing facility and of multiple outsourcing facilities. We present two compilers for ICs for the single outsourcing facility case relying on verifiable computation (VC) schemes, and another two compilers for the multiple outsourcing facilities case, one relying on multi-server VC schemes and the other relying on secure multiparty computation (MPC) protocols with certain suitable properties that are attainable by existing schemes.

No right to remain silent: Isolating Malicious Mixes

IACR Eprint - Tue, 09/04/2018 - 19:53
Mix networks are a key technology for achieving network anonymity, private messaging, voting, and database lookups. However, simple mix networks are vulnerable to malicious mixes, which may drop or delay packets to facilitate traffic analysis attacks. Mix networks with provable robustness address this drawback through complex and expensive proofs of correct shuffling, but these come at a great cost and rely on limiting or unrealistic system assumptions. We present {\em Miranda}, a synchronous mix network mechanism, which is {\em provably secure} against malicious mixes attempting active attacks to de-anonymize users, while retaining the simplicity, efficiency, and practicality of mix network designs. Miranda derives a robust mix reputation from first-hand experience of mix node unreliability, reported by clients or other mixes. As a result, each active attack -- including dropping packets -- leads to reduced connectivity for malicious mixes and reduces their ability to attack. We show, through experiments, the effectiveness and practicality of Miranda by demonstrating that attacks are neutralized early and that performance does not suffer.

VeritasDB: High Throughput Key-Value Store with Integrity

IACR Eprint - Tue, 09/04/2018 - 12:35
As businesses shift their databases to the cloud, they continue to depend on them to operate correctly. Alarmingly, cloud services constantly face threats from exploits in the privileged computing layers (e.g. OS, Hypervisor) and attacks from rogue datacenter administrators, which tamper with the database's storage and cause it to produce incorrect results. Although integrity verification of outsourced storage and file systems is a well-studied problem, prior techniques impose prohibitive overheads (up to 30x in throughput) and place additional responsibility on clients. We present VeritasDB, a key-value store that guarantees data integrity to the client in the presence of exploits or implementation bugs in the database server. VeritasDB is implemented as a network proxy that mediates communication between the unmodified client(s) and the unmodified database server, which can be any off-the-shelf database engine (e.g., Redis, RocksDB, Apache Cassandra). The proxy transforms each client request before forwarding it to the server and checks the correctness of the server's response before forwarding it to the client. To ensure the proxy is trusted, we use the protections of modern trusted hardware platforms, such as Intel SGX, to host the proxy's code and trusted state, thus completely eliminating trust in the cloud provider. To maintain high performance in VeritasDB while scaling to large databases, we design an authenticated Merkle B+-tree that leverages features of SGX (modest amount of protected RAM, direct access to large unprotected RAM, and CPU parallelism) to implement several novel optimizations based on caching, concurrency, and compression. On standard YCSB and Visa transaction workloads, we observe an average overhead of 2.8x in throughput and 2.5x in latency, compared to the (insecure) system with no integrity checks --- using CPU parallelism, we bring the throughput overhead down to 1.05x.
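
The general proxy idea, stripped of SGX and of the B+-tree engineering, can be sketched as follows: the trusted component keeps only a Merkle root, the untrusted server returns each value together with an authentication path, and any tampered response fails verification. The Python below is a minimal flat Merkle tree over a static key set, purely illustrative and not VeritasDB's data structure.

```python
# Minimal sketch (not VeritasDB itself): the trusted proxy keeps only a Merkle
# root, the untrusted server stores the data and supplies authentication
# paths, and the proxy rejects any tampered response.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def merkle_root(leaves):
    nodes = leaves[:]
    while len(nodes) > 1:
        if len(nodes) % 2:                        # duplicate last node if odd
            nodes.append(nodes[-1])
        nodes = [h(nodes[j], nodes[j + 1]) for j in range(0, len(nodes), 2)]
    return nodes[0]

class UntrustedServer:
    """Stores values and the Merkle leaves; may misbehave arbitrarily."""
    def __init__(self, items):
        self.items = list(items)                  # list of (key, value)
        self.leaves = [h(k.encode(), v.encode()) for k, v in items]

    def get(self, index):
        key, value = self.items[index]
        # Authentication path: sibling hashes from leaf to root.
        path, nodes, i = [], self.leaves[:], index
        while len(nodes) > 1:
            if len(nodes) % 2:
                nodes.append(nodes[-1])
            sibling = i ^ 1
            path.append((nodes[sibling], sibling < i))
            nodes = [h(nodes[j], nodes[j + 1]) for j in range(0, len(nodes), 2)]
            i //= 2
        return key, value, path

class TrustedProxy:
    """Holds only the root digest (the state that would live inside SGX)."""
    def __init__(self, items):
        self.root = merkle_root([h(k.encode(), v.encode()) for k, v in items])

    def verify_get(self, key, value, path):
        node = h(key.encode(), value.encode())
        for sibling, sibling_is_left in path:
            node = h(sibling, node) if sibling_is_left else h(node, sibling)
        return node == self.root

if __name__ == "__main__":
    data = [("alice", "10"), ("bob", "25"), ("carol", "7"), ("dave", "3")]
    server, proxy = UntrustedServer(data), TrustedProxy(data)
    k, v, proof = server.get(1)
    print("honest response accepted:", proxy.verify_get(k, v, proof))
    print("tampered value rejected:", not proxy.verify_get(k, "9999", proof))
```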

SIFA: Exploiting Ineffective Fault Inductions on Symmetric Cryptography

IACR Eprint - Tue, 09/04/2018 - 08:57
Since the seminal work of Boneh et al., the threat of fault attacks has been widely known and techniques for fault attacks and countermeasures have been studied extensively. The vast majority of the literature on fault attacks focuses on the ability of fault attacks to change an intermediate value to a faulty one, such as differential fault analysis (DFA), collision fault analysis, statistical fault attack (SFA), fault sensitivity analysis, or differential fault intensity analysis (DFIA). The other aspect of faults---that faults can be induced and do not change a value---has been researched far less. In the case of symmetric ciphers, ineffective fault attacks (IFA) exploit this aspect. However, IFA relies on the ability of an attacker to reliably induce reproducible deterministic faults like stuck-at faults on parts of small values (e.g., one bit or byte), which is often considered to be impracticable. As a consequence, most countermeasures against fault attacks do not focus on such attacks, but on attacks exploiting changes of intermediate values and usually try to detect such a change (detection-based), or to destroy the exploitable information if a fault happens (infective countermeasures). Such countermeasures implicitly assume that the release of "fault-free" ciphertexts in the presence of a fault-inducing attacker does not reveal any exploitable information. In this work, we show that this assumption is not valid and we present novel fault attacks that work in the presence of detection-based and infective countermeasures. The attacks exploit the fact that intermediate values leading to "fault-free" ciphertexts show a non-uniform distribution, while they should be distributed uniformly. The presented attacks are entirely practical and are demonstrated to work for software implementations of AES and for a hardware co-processor. These practical attacks rely on fault induction by means of clock glitches and hence, are achieved using only low-cost equipment. This is feasible because our attack is very robust under noisy fault induction attempts and does not require the attacker to model or profile the exact fault effect. We target two types of countermeasures as examples: simple time redundancy with comparison and several infective countermeasures. However, our attacks can be applied to a wider range of countermeasures and are not restricted to these two countermeasures.
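
The statistical core of the attack can be illustrated in isolation: among the ciphertexts that remain fault-free, the targeted intermediate value is biased, and ranking key guesses by how biased the recovered intermediate looks reveals the key. The toy below uses a random 8-bit S-box plus a key XOR as the "cipher" and assumes a stuck-at-0 fault on the intermediate's least significant bit; the real attacks target AES, work deeper in the cipher, and need no such precise fault model.

```python
# Toy illustration (not the paper's attack): a fault that is "ineffective" only
# for certain intermediate values biases that intermediate among the fault-free
# ciphertexts; ranking key guesses by the bias of the recovered intermediate
# reveals the key. The cipher is a random 8-bit S-box plus a key XOR, and the
# assumed fault is a stuck-at-0 on the intermediate's LSB.
import random

random.seed(1)
SBOX = list(range(256)); random.shuffle(SBOX)
INV_SBOX = [0] * 256
for x, y in enumerate(SBOX):
    INV_SBOX[y] = x

SECRET_KEY = 0x3A

def fault_free_ciphertexts(n):
    """Collect ciphertexts of encryptions where the fault changed nothing."""
    out = []
    while len(out) < n:
        v = random.randrange(256)         # uniformly random intermediate value
        if v & 1:                         # stuck-at-0 fault on the LSB:
            continue                      # effective fault -> output suppressed
        out.append(SBOX[v] ^ SECRET_KEY)  # ineffective fault -> ciphertext kept
    return out

def rank_keys(ciphertexts):
    """Score each key guess by the LSB bias of the recovered intermediate."""
    scores = []
    for guess in range(256):
        zeros = sum(1 - (INV_SBOX[c ^ guess] & 1) for c in ciphertexts)
        bias = abs(zeros / len(ciphertexts) - 0.5)
        scores.append((bias, guess))
    return sorted(scores, reverse=True)

if __name__ == "__main__":
    ranking = rank_keys(fault_free_ciphertexts(300))
    print("best guess: 0x%02X (true key 0x%02X)" % (ranking[0][1], SECRET_KEY))
```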

Simple and Efficient Two-Server ORAM

IACR Eprint - Mon, 09/03/2018 - 16:33
We show a protocol for two-server oblivious RAM (ORAM) that is simpler and more efficient than the best prior work. Our construction combines any tree-based ORAM with an extension of a two-server private information retrieval scheme by Boyle et al., and is able to avoid recursion and thus use only one round of interaction. In addition, our scheme has a very cheap initialization phase, making it well suited for RAM-based secure computation. Although our scheme requires the servers to perform a linear scan over the entire data, the cryptographic computation involved consists only of block-cipher evaluations. A practical instantiation of our protocol has excellent concrete parameters: for storing an $N$-element array of arbitrary size data blocks with statistical security parameter $\lambda$, the servers each store $4N$ encrypted blocks, the client stores $\lambda+2\log N$ blocks, and the total communication per logical access is roughly $10 \log N$ encrypted blocks.
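
Plugging example values into the concrete parameters quoted above gives a feel for the costs; N and the statistical security parameter below are arbitrary example choices.

```python
# Example numbers for the concrete parameters quoted in the abstract:
# 4N blocks per server, lambda + 2 log N client blocks, and roughly
# 10 log N blocks of communication per logical access.
import math

def concrete_costs(n_elements, sec_param=40):
    log_n = math.ceil(math.log2(n_elements))
    return {
        "server storage (blocks, each server)": 4 * n_elements,
        "client storage (blocks)": sec_param + 2 * log_n,
        "communication per access (blocks)": 10 * log_n,
    }

for n in (2**20, 2**30):
    print(n, concrete_costs(n))
```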

Key Rotation for Authenticated Encryption

IACR Eprint - Mon, 09/03/2018 - 16:21
A common requirement in practice is to periodically rotate the keys used to encrypt stored data. Systems used by Amazon and Google do so using a hybrid encryption technique which is eminently practical but has questionable security in the face of key compromises and does not provide full key rotation. Meanwhile, symmetric updatable encryption schemes (introduced by Boneh et al. CRYPTO 2013) support full key rotation without performing decryption: ciphertexts created under one key can be rotated to ciphertexts created under a different key with the help of a re-encryption token. By design, the tokens do not leak information about keys or plaintexts and so can be given to storage providers without compromising security. But the prior work of Boneh et al. addresses relatively weak confidentiality goals and does not consider integrity at all. Moreover, as we show, a subtle issue with their concrete scheme obviates a security proof even for confidentiality against passive attacks. This paper presents a systematic study of updatable Authenticated Encryption (AE). We provide a set of security notions that strengthen those in prior work. These notions enable us to tease out real-world security requirements of different strengths and build schemes that satisfy them efficiently. We show that the hybrid approach currently used in industry achieves relatively weak forms of confidentiality and integrity, but can be modified at low cost to meet our stronger confidentiality and integrity goals. This leads to a practical scheme that has negligible overhead beyond conventional AE. We then introduce re-encryption indistinguishability, a security notion that formally captures the idea of fully refreshing keys upon rotation. We show how to repair the scheme of Boneh et al., attaining our stronger confidentiality notion. We also show how to extend the scheme to provide integrity, and we prove that it meets our re-encryption indistinguishability notion. Finally, we discuss how to instantiate our scheme efficiently using off-the-shelf cryptographic components (AE, hashing, elliptic curves). We report on the performance of a prototype implementation, showing that fully secure key rotations can be performed at a throughput of approximately 116 kB/s.
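
The hybrid pattern discussed above is easy to sketch: data is encrypted under a data-encryption key (DEK), the DEK is wrapped under a key-encryption key (KEK), and "rotation" merely re-wraps the DEK, so neither the DEK nor the bulk ciphertext ever changes, which is why this falls short of full key rotation. The toy below uses a throwaway SHA-256 keystream in place of a real AE scheme and is not any provider's actual implementation.

```python
# Toy sketch of hybrid "key rotation": only the wrapped DEK changes on
# rotation; the bulk ciphertext (and the DEK protecting it) stays the same.
# The XOR keystream cipher below is a placeholder for a real AE scheme.
import hashlib, os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_record(kek: bytes, plaintext: bytes):
    dek, n1, n2 = os.urandom(32), os.urandom(16), os.urandom(16)
    wrapped_dek = (n1, keystream_xor(kek, n1, dek))   # header stored alongside
    body = (n2, keystream_xor(dek, n2, plaintext))    # bulk ciphertext
    return wrapped_dek, body

def rotate(old_kek: bytes, new_kek: bytes, wrapped_dek):
    """Re-wrap the DEK under the new KEK; the bulk ciphertext is untouched."""
    n1, ct = wrapped_dek
    dek = keystream_xor(old_kek, n1, ct)
    n_new = os.urandom(16)
    return (n_new, keystream_xor(new_kek, n_new, dek))

def decrypt_record(kek: bytes, wrapped_dek, body) -> bytes:
    n1, ct = wrapped_dek
    dek = keystream_xor(kek, n1, ct)
    n2, bulk = body
    return keystream_xor(dek, n2, bulk)

if __name__ == "__main__":
    kek1, kek2 = os.urandom(32), os.urandom(32)
    header, body = encrypt_record(kek1, b"customer record")
    header = rotate(kek1, kek2, header)               # "rotation" in practice
    print(decrypt_record(kek2, header, body))
```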

Short Digital Signatures and ID-KEMs via Truncation Collision Resistance

IACR Eprint - Mon, 09/03/2018 - 07:30
Truncation collision resistance is a simple non-interactive complexity assumption that seems very plausible for standard cryptographic hash functions like SHA-3. We describe how this assumption can be leveraged to obtain standard-model constructions of public-key cryptosystems that previously seemed to require a programmable random oracle. This includes the first constructions of identity-based key encapsulation mechanisms (ID-KEMs) and digital signatures over bilinear groups with full adaptive security and without random oracles, where a ciphertext or signature consists of only a single element of a prime-order group. We also describe a generic construction of ID-KEMs with full adaptive security from a scheme with very weak security ("selective and non-adaptive chosen-ID security"), and a similar generic construction for digital signatures.

Faster PCA and Linear Regression through Hypercubes in HElib

IACR Eprint - Sun, 09/02/2018 - 10:46
The significant advancements in the field of homomorphic encryption have led to a growing interest in securely outsourcing data and computation for privacy-critical applications. In this paper, we focus on the problem of performing secure predictive analysis, such as principal component analysis (PCA) and linear regression, through exact arithmetic over encrypted data. We improve the plaintext structure of Lu et al.'s protocols (from NDSS 2017) by switching from a linear array arrangement to a two-dimensional hypercube. This enables us to utilize SIMD (Single Instruction, Multiple Data) operations to a larger extent, which improves the space and time complexity by a factor of the matrix dimension. We implement both Lu et al.'s method and ours for PCA and linear regression over HElib, a software library that implements the Brakerski-Gentry-Vaikuntanathan (BGV) homomorphic encryption scheme. In particular, we show how to choose optimal parameters of the BGV scheme for both methods. For example, our experiments show that our method takes 45 seconds to train a linear regression model over a dataset with 32k records and 6 numerical attributes, while Lu et al.'s method takes 206 seconds.
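
For orientation, the plaintext computation being outsourced for linear regression is ordinary least squares via the normal equations; the homomorphic protocols evaluate the same matrix products over encrypted data, and the paper's improvement comes from packing the matrix into a two-dimensional hypercube of ciphertext slots so that more of the work is done by SIMD slot operations. A plaintext reference with synthetic data, sizes borrowed from the experiment quoted above:

```python
# Plaintext reference for the outsourced task: ordinary least squares via the
# normal equations (X^T X) w = X^T y. In the homomorphic protocol the same
# matrix products are evaluated over encrypted data; the paper's gain comes
# from the hypercube slot layout. The data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_records, n_attrs = 32_000, 6                 # sizes quoted in the abstract
X = rng.normal(size=(n_records, n_attrs))
true_w = rng.normal(size=n_attrs)
y = X @ true_w + 0.01 * rng.normal(size=n_records)

XtX = X.T @ X                                  # the products computed under FHE
Xty = X.T @ y
w = np.linalg.solve(XtX, Xty)                  # small d x d system, d = 6
print(np.round(w - true_w, 4))                 # recovered close to true weights
```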

Finding Ordinary Cube Variables for Keccak-MAC with Greedy Algorithm

IACR Eprint - Sat, 09/01/2018 - 12:38
In this paper, we present an alternative method to choose ordinary cube variables for Keccak-MAC. Firstly, we choose some good candidates for ordinary cube variables with key-independent conditions. Then, we construct inequalities over these candidates by considering their relations after one round. In this way, we can greatly reduce the number of inequalities and therefore can solve them without a solver. Moreover, with our method the properties of the constructed inequalities can be analyzed, which was not clear when using a solver as in previous work. As a direct result, we can prove why only 30 ordinary cube variables without key-dependent bit conditions can be found by a solver in Ling Song et al.'s work. In addition, we can find more than 63 ordinary cube variables without key-dependent bit conditions for Keccak-MAC-384. Based on our new way to recover the 128-bit key, the complexity of the key-recovery attack on 7-round Keccak-MAC-128/256/384 is improved to $2^{71}$.

Perun: Virtual Payment Hubs over Cryptocurrencies

IACR Eprint - Sat, 09/01/2018 - 10:21
Payment channels emerged recently as an efficient method for performing cheap \emph{micropayments} in cryptocurrencies. In contrast to traditional on-chain transactions, payment channels have the advantage that they allow for a nearly unlimited number of transactions between parties without involving the blockchain. In this work, we introduce \emph{Perun}, an off-chain channel system that offers a new method for connecting channels that is more efficient than the existing technique of ``routing transactions'' over multiple channels. To this end, Perun introduces a technique called ``virtual payment channels'' that avoids involving the intermediary in each individual payment. In this paper, we formally model and prove the security of this technique in the case of one intermediary, who can be viewed as a ``payment hub'' that has direct channels with several parties. Our scheme works over any cryptocurrency that provides Turing-complete smart contracts. As a proof of concept, we implemented Perun's smart contracts in \emph{Ethereum}.
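
The ledger channels that Perun builds on can be sketched with a toy state machine: the two parties co-sign monotonically versioned balance updates off-chain, and only a correctly signed, non-stale state is ever settled on-chain; virtual channels then let two such channels through a common hub be composed without involving the hub in each payment. The Python below covers only the basic channel, with HMAC tags standing in for the digital signatures a real smart contract would verify.

```python
# Toy payment-channel state machine (the basic channels Perun builds on, not
# Perun's virtual-channel construction). HMAC tags stand in for the digital
# signatures a real smart contract would verify.
import hmac, hashlib

def sign(key: bytes, state: tuple) -> bytes:
    return hmac.new(key, repr(state).encode(), hashlib.sha256).digest()

class Channel:
    def __init__(self, key_a, key_b, deposit_a, deposit_b):
        self.keys = (key_a, key_b)
        self.state = ("chan-1", 0, deposit_a, deposit_b)  # (id, version, balA, balB)
        self.signatures = tuple(sign(k, self.state) for k in self.keys)

    def pay(self, amount_a_to_b):
        """Off-chain update: both parties sign the next version; no blockchain."""
        cid, version, bal_a, bal_b = self.state
        assert 0 <= amount_a_to_b <= bal_a
        new_state = (cid, version + 1, bal_a - amount_a_to_b, bal_b + amount_a_to_b)
        self.state = new_state
        self.signatures = tuple(sign(k, new_state) for k in self.keys)

    def settle_on_chain(self, claimed_state, claimed_sigs):
        """What the contract checks: both signatures valid, version not stale."""
        if any(not hmac.compare_digest(sign(k, claimed_state), s)
               for k, s in zip(self.keys, claimed_sigs)):
            return "rejected: bad signature"
        if claimed_state[1] < self.state[1]:
            return "rejected: stale version"
        return f"payout A={claimed_state[2]}, B={claimed_state[3]}"

if __name__ == "__main__":
    ch = Channel(b"alice-key", b"bob-key", deposit_a=10, deposit_b=5)
    for _ in range(3):
        ch.pay(1)                       # three micropayments, zero on-chain cost
    print(ch.settle_on_chain(ch.state, ch.signatures))
```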

Leakage-Resilient Cryptography from Puncturable Primitives and Obfuscation

IACR Eprint - Sat, 09/01/2018 - 10:04
In this work, we develop a framework for building leakage-resilient cryptosystems in the bounded leakage model from puncturable primitives and indistinguishability obfuscation ($i\mathcal{O}$). The major insight of our work is that various types of puncturable pseudorandom functions (PRFs) can achieve leakage resilience on an obfuscated street. First, we build leakage-resilient weak PRFs from weak puncturable PRFs and $i\mathcal{O}$, which readily imply leakage-resilient secret-key encryption. Second, we build leakage-resilient publicly evaluable PRFs (PEPRFs) from puncturable PEPRFs and $i\mathcal{O}$, which readily imply leakage-resilient key encapsulation mechanisms and thus public-key encryption. As a building block of independent interest, we realize puncturable PEPRFs from either newly introduced puncturable objects such as puncturable trapdoor functions and puncturable extractable hash proof systems or existing puncturable PRFs with $i\mathcal{O}$. Finally, we construct the first leakage-resilient public-coin signature from selective puncturable PRFs, leakage-resilient one-way functions and $i\mathcal{O}$. This settles the open problem posed by Boyle, Segev and Wichs (Eurocrypt 2011). By further assuming the existence of lossy functions, all the above constructions achieve the optimal leakage rate of $1 - o(1)$. Such a leakage rate was not previously known to be achievable for weak PRFs, PEPRFs, and public-coin signatures.

Security of the Blockchain against Long Delay Attack

IACR Eprint - Sat, 09/01/2018 - 09:36
The consensus protocol underlying Bitcoin (the blockchain) works remarkably well in practice. However, proving its security in a formal setting has been an elusive goal. A recent analytical result by Pass, Seeman and shelat indicates that an idealized blockchain is indeed secure against attacks in an asynchronous network where messages are maliciously delayed by at most $\Delta\ll1/np$, with $n$ being the number of miners and $p$ the mining hardness. This paper improves upon that result by showing that, if appropriate inconsistency tolerance is allowed, the blockchain can withstand even more powerful external attacks in the honest miner setting. Specifically, we prove that the blockchain is secure against long delay attacks with $\Delta\geq1/np$ in an asynchronous network.

Recovering Secrets From Prefix-Dependent Leakage

IACR Eprint - Sat, 09/01/2018 - 09:31
We discuss how to recover a secret bitstring given partial information obtained during a computation over that string, assuming the computation is a deterministic algorithm processing the secret bits sequentially. That abstract situation models certain types of side-channel attacks against discrete logarithm and RSA-based cryptosystems, where the adversary obtains information not on the secret exponent directly, but instead on the group or ring element that varies at each step of the exponentiation algorithm. Our main result shows that for a leakage of a single bit per iteration, under suitable statistical independence assumptions, one can recover the whole secret bitstring in polynomial time. We also discuss how to cope with imperfect leakage, extend the model to $k$-bit leaks, and show how our algorithm yields attacks on popular cryptosystems such as (EC)DSA.
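
A toy version of the sequential-recovery idea: suppose a left-to-right square-and-multiply exponentiation leaks one bit of the current group element (here its least significant bit) at every iteration, for several known bases. Extending the recovered prefix one bit at a time and keeping the hypothesis that reproduces the observed leak bits identifies the exponent, since a wrong hypothesis mismatches each trace with probability about one half. The sketch ignores the noisy-leakage and k-bit extensions discussed above.

```python
# Toy sequential recovery from a 1-bit-per-iteration leak on a left-to-right
# square-and-multiply exponentiation. Each trace (one base g) leaks the LSB of
# the running group element after every iteration; a wrong next-bit hypothesis
# mismatches each trace with probability about 1/2, so a handful of traces
# pins down every bit. Noise handling is omitted.
import random

P = 2**127 - 1                      # example prime modulus (group Z_P^*)

def leak_trace(g, secret_bits):
    """Observed leakage: LSB of the intermediate after each iteration."""
    acc, leaks = 1, []
    for b in secret_bits:
        acc = acc * acc % P
        if b:
            acc = acc * g % P
        leaks.append(acc & 1)
    return leaks

def recover(traces, n_bits):
    """traces: list of (g, observed_leaks). Recover the exponent bit by bit."""
    prefix, states = [], {g: 1 for g, _ in traces}
    for i in range(n_bits):
        best = None
        for hyp in (0, 1):
            ok = True
            for g, leaks in traces:
                acc = states[g] * states[g] % P
                if hyp:
                    acc = acc * g % P
                if (acc & 1) != leaks[i]:
                    ok = False
                    break
            if ok:
                best = hyp
        prefix.append(best)
        for g in states:            # advance every trace with the chosen bit
            acc = states[g] * states[g] % P
            if best:
                acc = acc * g % P
            states[g] = acc
    return prefix

if __name__ == "__main__":
    random.seed(0)
    secret = [random.randrange(2) for _ in range(64)]
    bases = [random.randrange(2, P) for _ in range(32)]
    traces = [(g, leak_trace(g, secret)) for g in bases]
    print("recovered correctly:", recover(traces, 64) == secret)
```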

Quantum algorithms for computing general discrete logarithms and orders with tradeoffs

IACR Eprint - Sat, 09/01/2018 - 09:24
In this paper, we generalize and bridge Shor's groundbreaking works on computing orders and general discrete logarithms, our earlier works on computing short discrete logarithms with tradeoffs and Seifert's work on computing orders with tradeoffs. In particular, we demonstrate how the idea of enabling tradeoffs may be extended to the case of computing general discrete logarithms. This yields a reduction by up to a factor of two in the number of group operations that need to be computed in each run of the quantum algorithm at the expense of having to run the algorithm multiple times. Combined with our earlier works this implies that the number of group operations that need to be computed in each run of the quantum algorithm equals the number of bits in the logarithm times a small constant factor that depends on the tradeoff factor. We comprehensively analyze the probability distribution induced by our quantum algorithm, describe how the algorithm may be simulated, and estimate the number of runs required to compute logarithms with a given minimum success probability for different tradeoff factors. Our algorithm does not require the group order to be known. If the order is unknown, it may be computed at no additional quantum cost from the same set of outputs as is used to compute the logarithm.

On relations between CCZ- and EA-equivalences

IACR Eprint - Sat, 09/01/2018 - 09:23
In the present paper we introduce some sufficient conditions and a procedure for checking whether, for a given function, CCZ-equivalence is more general than EA-equivalence together with taking inverses of permutations. It is known from Budaghyan, Carlet and Pott (2006) and Budaghyan, Carlet and Leander (2009) that for quadratic APN functions (both monomial and polynomial cases) CCZ-equivalence is more general. We prove hereby that for non-quadratic APN functions CCZ-equivalence can also be more general (by studying the only known APN function which is CCZ-inequivalent to both power functions and quadratics). On the contrary, we prove that for power non-Gold APN functions, CCZ-equivalence coincides with EA-equivalence together with taking inverses of permutations for $n\le 8$. We conjecture that this is true for any $n$.

Solving ECDLP via List Decoding

IACR Eprint - Sat, 09/01/2018 - 09:21
We provide a new approach to the elliptic curve discrete logarithm problem (ECDLP). First, we construct Elliptic Codes (EC codes) from the ECDLP. Then we propose an algorithm for finding the minimum weight codewords of algebraic geometry codes, especially of elliptic codes, via list decoding. Finally, with the minimum weight codewords, we show how to solve the ECDLP. This work may provide a potential approach to speeding up the computation of the ECDLP.

Blending FHE-NTRU keys - The Excalibur Property

IACR Eprint - Sat, 09/01/2018 - 09:21
Can Bob give Alice his decryption secret and be convinced that she will not give it to someone else? This is achieved by a proxy re-encryption scheme where Alice does not have Bob's secret but instead can transform ciphertexts in order to decrypt them with her own key. In this article, we answer this question from a different perspective, relying on a property that can be found in the well-known modified NTRU encryption scheme. We show how parties can collaborate to one-way-glue their secret keys together, giving Alice's secret key the additional ability to decrypt Bob's ciphertexts. The main advantage is that the protocols we propose can be plugged directly into the modified NTRU scheme with no post-key-generation space or time costs, and without any modification of ciphertexts. In addition, this property carries over to the NTRU-based multikey homomorphic scheme, allowing us to equip a hierarchical chain of users with automatic re-encryption of messages while supporting homomorphic operations on ciphertexts. To achieve this, we propose two-party computation protocols in cyclotomic polynomial rings. We base security in the presence of various types of adversaries on the RLWE and DSPR assumptions, and on two new problems in the modified NTRU ring.

Universal Forgery and Multiple Forgeries of MergeMAC and Generalized Constructions

IACR Eprint - Sat, 09/01/2018 - 09:21
This article presents universal forgery and multiple forgeries against MergeMAC, which has been recently proposed to fit scenarios where bandwidth is limited and where strict time constraints apply. MergeMAC divides an input message into two parts, $m\|\tilde{m}$, and its tag is computed by $\mathcal{F}( \mathcal{P}_1(m) \oplus \mathcal{P}_2(\tilde{m}) )$, where $\mathcal{P}_1$ and $\mathcal{P}_2$ are PRFs and $\mathcal{F}$ is a public function. The tag size is 64 bits. The designers claim $64$-bit security and imply a risk of accepting beyond-birthday-bound queries. This paper first shows that it is inevitable to limit the number of queries to the birthday bound, because a generic universal forgery against CBC-like MACs can be adapted to MergeMAC. Afterwards, another attack is presented that works with very few queries, namely 3 queries and $2^{58.6}$ computations of $\mathcal{F}$, by applying a preimage attack against a weak $\mathcal{F}$, which breaks the claimed security. The analysis is then generalized to a MergeMAC variant where $\mathcal{F}$ is replaced with a one-way function $\mathcal{H}$. Finally, multiple forgeries are discussed, in which the attacker's goal is to improve the ratio of the number of queries to the number of forged tags. It is shown that the attacker obtains tags for $q^2$ messages by making only $2q-1$ queries in the sense of existential forgery, and this is tight when the $q^2$ messages have a particular structure. For universal forgery, tags for $3q$ arbitrarily chosen messages can be obtained by making $5q$ queries.
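
The structure under attack is simple to state concretely; the toy instantiation below makes the split-and-merge shape $\mathcal{F}(\mathcal{P}_1(m) \oplus \mathcal{P}_2(\tilde{m}))$ visible. HMAC-SHA-256 plays both PRFs and truncated SHA-256 plays the public function, purely for illustration; these are not the designers' primitives.

```python
# Toy instantiation of the MergeMAC shape Tag = F(P1(m) xor P2(m~)) with a
# 64-bit tag. HMAC-SHA-256 stands in for the PRFs and truncated SHA-256 for
# the public function F; these are illustrative placeholders only.
import hmac, hashlib

TAG_BYTES = 8                                        # 64-bit tags

def prf(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()[:TAG_BYTES]

def public_f(x: bytes) -> bytes:
    return hashlib.sha256(b"public-F" + x).digest()[:TAG_BYTES]

def merge_mac(k1: bytes, k2: bytes, m: bytes, m_tilde: bytes) -> bytes:
    merged = bytes(a ^ b for a, b in zip(prf(k1, m), prf(k2, m_tilde)))
    return public_f(merged)

if __name__ == "__main__":
    k1, k2 = b"key-one", b"key-two"
    # Only the xor of the two PRF outputs enters F: any distinct (m, m~) pairs
    # whose PRF outputs xor to the same value necessarily share a tag.
    print(merge_mac(k1, k2, b"header", b"payload").hex())
```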

Faster Modular Arithmetic For Isogeny Based Crypto on Embedded Devices

IACR Eprint - Sat, 09/01/2018 - 09:17
We show how to implement the Montgomery reduction algorithm for isogeny-based cryptography such that it can utilize the "unsigned multiply accumulate accumulate long" instruction present on modern ARM architectures. This results in a practical speed-up of a factor 1.34 compared to the approach used by SIKE, the supersingular-isogeny-based submission to the ongoing post-quantum standardization effort. Moreover, motivated by the recent work of Costello and Hisil (ASIACRYPT 2017), which shows that there is only a moderate degradation in performance when evaluating large odd-degree isogenies, we search for more general supersingular-isogeny-friendly moduli. Using graphics processing units to accelerate this search, we find many such moduli that allow for faster implementations on embedded devices. By combining these two approaches we make the modular reduction 1.5 times as fast on a 32-bit ARM platform.
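
The place where the "unsigned multiply accumulate accumulate long" (UMAAL) pattern arises is the inner loop of word-level Montgomery reduction, where each step computes a*b + c + d on machine words and the result is guaranteed to fit in a double word. The Python below is a textbook word-by-word reduction with 32-bit limbs, for illustration only; it is not the paper's ARM implementation and does not use SIKE's moduli.

```python
# Word-by-word Montgomery reduction with 32-bit limbs, to show where the
# "a*b + c + d on machine words" (UMAAL) pattern appears in the inner loop.
# Textbook REDC for illustration, not the paper's ARM code or SIKE's moduli.
W = 32
MASK = (1 << W) - 1

def to_words(x, n):
    return [(x >> (W * i)) & MASK for i in range(n)]

def montgomery_reduce(t, m):
    """Return t * R^{-1} mod m, where R = 2^(32*n) and n = word length of m."""
    n = (m.bit_length() + W - 1) // W
    m_words = to_words(m, n)
    m_prime = (-pow(m, -1, 1 << W)) & MASK         # -m^{-1} mod 2^32
    a = to_words(t, 2 * n + 1)
    for i in range(n):
        u = (a[i] * m_prime) & MASK
        carry = 0
        for j in range(n):
            # one UMAAL-shaped step: word*word + word + word fits in 64 bits
            acc = u * m_words[j] + a[i + j] + carry
            a[i + j] = acc & MASK
            carry = acc >> W
        k = i + n
        while carry:                                # propagate remaining carry
            acc = a[k] + carry
            a[k] = acc & MASK
            carry = acc >> W
            k += 1
    r = sum(w << (W * idx) for idx, w in enumerate(a[n:]))
    return r - m if r >= m else r

if __name__ == "__main__":
    m = (1 << 255) - 19                             # example odd modulus
    n = (m.bit_length() + W - 1) // W
    R = 1 << (W * n)
    t = 1234567890123456789 * 987654321987654321 % (m * R)   # any t < m*R
    assert montgomery_reduce(t, m) == t * pow(R, -1, m) % m
    print("montgomery_reduce agrees with t * R^{-1} mod m")
```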
