
Instant Privacy-Preserving Biometric Authentication for Hamming Distance

Sun, 12/23/2018 - 22:52
In recent years, privacy-preserving biometric authentication, which enables a user to verify himself or herself to a server without disclosing raw biometric information, has attracted enormous research attention. Since biometric data are irrevocable once exposed, protecting their privacy is critical. In IEEE TIFS 2018, Zhou and Ren proposed a privacy-preserving user-centric biometric authentication scheme named PassBio, where the end-users encrypt their own templates and the authentication server never sees the raw templates during the authentication phase. In their approach, it takes about 1 second to encrypt and compare 2000-bit templates based on Hamming distance on a laptop. However, this result is still far from practical because the templates used in commercial products are much larger: according to the NIST IREX IX report of 2018, which analyzed 46 iris recognition algorithms, template sizes vary from 4,632 bits (579 bytes) to 145,832 bits (18,229 bytes). In this paper, we propose a new privacy-preserving user-centric biometric authentication scheme (HDM-PPBA) based on Hamming distance, which significantly improves efficiency over previous works. It is based on our new single-key function-hiding inner product encryption, which encrypts and computes the Hamming distance of 145,832-bit binary templates in about 0.3 seconds on an Intel Core i5 2.9 GHz CPU. We show that it satisfies simulation-based security under the hardness of the Learning with Errors (LWE) problem. The storage, bandwidth, and time complexity of HDM-PPBA depend linearly on the bit-length of the biometric, and it applies with high efficiency to all of the large templates considered in the NIST IREX IX report.
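
The efficiency gain rests on a standard identity: for binary vectors $x, y \in \{0,1\}^n$, the Hamming distance satisfies $HD(x,y) = wt(x) + wt(y) - 2\langle x, y\rangle$, which is what makes an inner product encryption usable for Hamming-distance matching. A minimal plaintext sketch of this reduction (no encryption, illustrative only):

# Plaintext sketch: Hamming distance as an inner product.
# HD(x, y) = wt(x) + wt(y) - 2*<x, y> for binary vectors x, y.
import random

n = 2000                                   # template length in bits
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]

inner = sum(a * b for a, b in zip(x, y))   # <x, y>
hd_via_ip = sum(x) + sum(y) - 2 * inner    # Hamming distance recovered from the inner product
hd_direct = sum(a != b for a, b in zip(x, y))
assert hd_via_ip == hd_direct
print(hd_direct)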

New Hybrid Method for Isogeny-based Cryptosystems using Edwards Curves

Sun, 12/23/2018 - 18:48
Along with its resistance against quantum computers, isogeny-based cryptography offers attractive cryptosystems due to small key sizes and compatibility with current elliptic curve primitives. While state-of-the-art implementations use Montgomery curves, which facilitate efficient elliptic curve arithmetic and isogeny computations, other forms of elliptic curves can also yield efficient results. In this paper, we present a new hybrid method for isogeny-based cryptosystems using Edwards curves. Unlike previous hybrid methods, we use Edwards curves for isogeny computations and Montgomery curves for the other operations. We demonstrate that our hybrid method outperforms the previously proposed hybrid method and is as fast as a Montgomery-only implementation. We present implementation results of the Supersingular Isogeny Diffie--Hellman (SIDH) key exchange using the proposed hybrid method. Our results show that the use of Edwards curves for isogeny-based cryptosystems can be quite practical.
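
The two curve models are linked by a well-known birational equivalence: a Montgomery curve $By^2 = x^3 + Ax^2 + x$ corresponds to a twisted Edwards curve $ax^2 + y^2 = 1 + dx^2y^2$ with $a = (A+2)/B$ and $d = (A-2)/B$. A small Python sketch of switching coefficients over a toy prime field (illustrative parameters, not the paper's SIDH parameters):

# Sketch: switching between Montgomery and twisted Edwards coefficients
# over a toy prime field (illustrative parameters, not SIDH ones).
p = 431                                  # toy prime, p = 2^4 * 3^3 - 1
def inv(x): return pow(x, p - 2, p)      # modular inverse via Fermat

def montgomery_to_edwards(A, B):
    # By^2 = x^3 + Ax^2 + x  <->  a*x^2 + y^2 = 1 + d*x^2*y^2
    a = (A + 2) * inv(B) % p
    d = (A - 2) * inv(B) % p
    return a, d

def edwards_to_montgomery(a, d):
    A = 2 * (a + d) * inv(a - d) % p
    B = 4 * inv(a - d) % p
    return A, B

A, B = 6, 1                              # starting Montgomery curve y^2 = x^3 + 6x^2 + x
a, d = montgomery_to_edwards(A, B)
assert edwards_to_montgomery(a, d) == (A % p, B % p)
print(a, d)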

Deep Learning vs Template Attacks in front of fundamental targets: experimental study

Sun, 12/23/2018 - 18:33
This study experimentally compares Template Attacks (TA) and the Deep Learning (DL) techniques Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN) on classical use cases often encountered in the side-channel analysis of cryptographic devices (restricted to SK). The starting point is their comparative effectiveness against masked encryption, which appears intrinsically vulnerable. Surprisingly, TA, improved with Principal Component Analysis (PCA) and normalization, holds its own against the latest DL methods, which demand more computing power. Another result is that both approaches face great difficulty against static targets such as secret data transfers or the key schedule; the explanation for these observations lies in cross-matching. Beyond masking, the effects of other protections such as jittering, shuffling, and coding size are also tested. Ultimately, the benefit of DL techniques lies in the better resistance of CNNs to misalignment.
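
For context, the TA baseline models each key-dependent class as a multivariate Gaussian learned from profiling traces and classifies attack traces by maximum likelihood. The following Python sketch runs the core of such an attack on synthetic traces with a pooled covariance; the synthetic leakage model and parameters are my own assumptions, not the paper's setup:

# Minimal template-attack sketch on synthetic traces (pooled-covariance Gaussians).
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_profile, n_attack, n_samples = 4, 500, 20, 10
means = rng.normal(0, 1, (n_classes, n_samples))          # true per-class leakage means
noise = 0.5

def traces(cls, n):                                        # synthetic leakage: mean + Gaussian noise
    return means[cls] + rng.normal(0, noise, (n, n_samples))

# Profiling phase: estimate one mean per class and a pooled covariance.
prof = [traces(c, n_profile) for c in range(n_classes)]
mu = np.array([t.mean(axis=0) for t in prof])
pooled = sum(np.cov(t, rowvar=False) for t in prof) / n_classes
prec = np.linalg.inv(pooled)

# Attack phase: maximum-likelihood classification of fresh traces.
def classify(trace):
    diffs = mu - trace                                     # (n_classes, n_samples)
    return int(np.argmin(np.einsum('ij,jk,ik->i', diffs, prec, diffs)))

atk = traces(2, n_attack)
votes = [classify(t) for t in atk]
print("recovered class:", max(set(votes), key=votes.count))  # expect class 2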

Side-Channel Analysis of SM2: A Late-Stage Featurization Case Study

Fri, 12/21/2018 - 09:45
SM2 is a public key cryptography suite originating from Chinese standards, including digital signatures and public key encryption. Ahead of schedule, code for this functionality was recently mainlined in OpenSSL, marked for the upcoming 1.1.1 release. We perform a security review of this implementation, uncovering various deficiencies ranging from traditional software quality issues to side-channel risks. To assess the latter, we carry out a side-channel security evaluation and discover that the implementation hits every pitfall seen for OpenSSL's ECDSA code in the past decade. We carry out remote timings, cache timings, and EM analysis, with accompanying empirical data to demonstrate secret information leakage during execution of both digital signature generation and public key decryption. Finally, we propose, implement, and empirically evaluate countermeasures.
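
A common first step in such an evaluation is a non-specific timing test: collect execution times for two fixed input classes and compare them with Welch's t-test, where a |t| well above roughly 4.5 suggests data-dependent behaviour. The Python sketch below is a generic harness around a hypothetical, deliberately leaky sign() placeholder; it is not the OpenSSL SM2 API or the paper's measurement setup:

# Generic timing-leakage check with Welch's t-test (placeholder sign(), not OpenSSL's API).
import time, statistics, math

def sign(msg):                      # hypothetical target; replace with the operation under test
    time.sleep(0.001 + 0.0002 * (msg[0] & 1))   # deliberately leaky toy example

def sample(msg, n=200):
    out = []
    for _ in range(n):
        t0 = time.perf_counter(); sign(msg); out.append(time.perf_counter() - t0)
    return out

a, b = sample(b"\x00fixed"), sample(b"\x01fixed")    # two fixed input classes
ma, mb = statistics.mean(a), statistics.mean(b)
va, vb = statistics.variance(a), statistics.variance(b)
t = (ma - mb) / math.sqrt(va / len(a) + vb / len(b)) # Welch's t-statistic
print("t =", t)                                      # |t| >> 4.5 suggests timing leakage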

The IITM Model: a Simple and Expressive Model for Universal Composability

Fri, 12/21/2018 - 06:30
The universal composability paradigm allows for the modular design and analysis of cryptographic protocols. It has been widely and successfully used in cryptography. However, devising a coherent yet simple and expressive model for universal composability is, as the history of such models shows, highly non-trivial. For example, several problems, some of them severe, have been pointed out in the literature for the UC model. In this work, we propose a coherent model for universal composability, called the IITM model (``Inexhaustible Interactive Turing Machine''). A main feature of the model is that it is stated without a priori fixing irrelevant details, such as a specific way of addressing machines by session and party identifiers, a specific modeling of corruption, or a specific protocol hierarchy. In addition, we employ a very general notion of runtime. All reasonable protocols and ideal functionalities should be expressible based on this notion in a direct and natural way, without tweaks such as (artificial) padding of messages or (artificially) adding extra messages. Not least because of these features, the model is simple and expressive. The general results that we prove, such as composition theorems, also hold independently of how such details are fixed for concrete applications. Because the IITM model is inspired by other models for universal composability, in particular the UC model, and because of its flexibility and expressivity, results formulated in those models conceptually carry over directly to the IITM model.

Continuous-Source Fuzzy Extractors: Source uncertainty and security

Thu, 12/20/2018 - 13:49
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy source into the same uniformly distributed key. The functionality of a fuzzy extractor outputs the key when provided with a value close to the original reading of the source. A necessary condition for security, called fuzzy min-entropy, is that the probability of every ball of values of the noisy source is small. Many noisy sources are best modeled using continuous metric spaces. To build continuous-source fuzzy extractors, prior work assumes that the system designer has a good model of the distribution (Verbitskiy et al., IEEE TIFS 2010). However, it is impossible to build an accurate model of a high-entropy distribution just by sampling from it, and model inaccuracy may be a serious problem. We demonstrate a family of continuous distributions W that is impossible to secure: no fuzzy extractor designed for W extracts a meaningful key from an average element of W. This impossibility holds even though each element W ∈ W has high fuzzy min-entropy. We show a qualitatively stronger negative result for secure sketches, which are used to construct most fuzzy extractors. Our results are for the Euclidean metric and are information-theoretic in nature; to the best of our knowledge, all continuous-source fuzzy extractors argue information-theoretic security. Fuller, Reyzin, and Smith showed comparable negative results for a discrete metric space equipped with the Hamming metric (Asiacrypt 2016). Continuous Euclidean space necessitates new techniques.
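
Fuzzy min-entropy formalizes the ball condition: $H^{fuzz}_{t,\infty}(W) = -\log \max_w \Pr[W \in B_t(w)]$. The Python sketch below computes it by brute force for a toy discrete distribution on the integers; the paper's setting is continuous Euclidean space, so this is only meant to illustrate the definition:

# Brute-force fuzzy min-entropy of a toy discrete distribution on the integers.
# H_fuzz = -log2( max_w Pr[ W in B_t(w) ] ), where B_t(w) = { x : |x - w| <= t }.
import math

pmf = {0: 0.25, 1: 0.05, 2: 0.05, 10: 0.25, 20: 0.4}   # toy source W
t = 2                                                   # error tolerance

def ball_mass(center):
    return sum(p for x, p in pmf.items() if abs(x - center) <= t)

best = max(ball_mass(w) for w in range(min(pmf) - t, max(pmf) + t + 1))
print("fuzzy min-entropy:", -math.log2(best), "bits")   # max ball mass is 0.4 -> ~1.32 bits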

Vault: Fast Bootstrapping for the Algorand Cryptocurrency

Wed, 12/19/2018 - 19:09
Decentralized cryptocurrencies rely on participants to keep track of the state of the system in order to verify new transactions. As the number of users and transactions grows, this requirement becomes a significant burden, requiring users to download, verify, and store a large amount of data to participate. Vault is a new cryptocurrency design based on Algorand that minimizes these storage and bootstrapping costs for participants. Vault’s design is based on Algorand’s proof-of-stake consensus protocol and uses several techniques to achieve its goals. First, Vault decouples the storage of recent transactions from the storage of account balances, which enables Vault to delete old account state. Second, Vault allows sharding state across participants in a way that preserves strong security guarantees. Finally, Vault introduces the notion of stamping certificates, which allow a new client to catch up securely and efficiently in a proof-of-stake system without having to verify every single block. Experiments with a prototype implementation of Vault’s data structures show that Vault’s design reduces the bandwidth cost of joining the network as a full client by 99.7% compared to Bitcoin and 90.5% compared to Ethereum when downloading a ledger containing 500 million transactions.

Multi-Target Attacks on the Picnic Signature Scheme and Related Protocols

Wed, 12/19/2018 - 15:01
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST's post-quantum standardization project. Among all submissions to NIST's project, Picnic is one of the most innovative, making use of recent progress in the construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to $S$ signatures, produced by a single user or by several users, our attack can (information theoretically) recover the $\kappa$-bit signing key of a user with complexity about $2^{\kappa - 7}/S$. This is faster than Picnic's claimed $2^{\kappa}$ security against classical (non-quantum) attacks by a factor of $2^7 \cdot S$ (as each signature contains about $2^7$ potential attack targets). Whereas in most multi-target attacks the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information-theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $\kappa = 128$ has time complexity $T = 2^{77}$ for $S = 2^{64}$. Alternatively, we can reach the information-theoretic complexity of $T = 2^{64}$ for $S = 2^{57}$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way the Picnic signing algorithm uses a pseudo-random generator. The attack is mitigated in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
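
As a quick sanity check on the stated bound, $T = 2^{\kappa}/(2^7 \cdot S) = 2^{\kappa-7}/S$; for $\kappa = 128$ and $S = 2^{57}$ this gives $T = 2^{121}/2^{57} = 2^{64}$, matching the information-theoretic figure quoted above.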

Countering Block Withholding Attack Efficiently

Wed, 12/19/2018 - 15:01
Bitcoin, the well-known cryptocurrency, adopted Proof-of-Work (PoW) for its security. The PoW mechanism incentivizes participants and deters attacks on the network, and Bitcoin appears to have maintained a stable distributed network with PoW so far. Researchers have, however, found vulnerabilities in PoW such as selfish mining and the block withholding attack. In particular, after Rosenfeld introduced the block withholding attack and Eyal made it practical, many variants and countermeasures have been proposed. Most countermeasures, however, require changes to the mining algorithm to make the attack impossible, which lowers their practical applicability. In this paper, we propose a countermeasure that effectively prevents the block withholding attack. Mining pools can adopt our method without changing their mining environment.

MProve: A Proof of Assets Protocol for Monero Exchanges

Wed, 12/19/2018 - 14:59
Theft from cryptocurrency exchanges due to cyberattacks or internal fraud is a major problem. Exchanges can partially alleviate customer concerns by providing periodic proofs of solvency. We describe MProve, a proof of assets protocol for Monero exchanges which can be combined with a known proof of liabilities protocol to provide a proof of solvency. MProve also provides a simple proof of non-collusion between exchanges.

Teleportation-based quantum homomorphic encryption scheme with quasi-compactness and perfect security

Wed, 12/19/2018 - 14:59
Quantum homomorphic encryption (QHE) is an important cryptographic technology for delegated quantum computation. It enables a remote server to perform quantum computation on encrypted quantum data, and the specific algorithm performed by the server need not be known to the client. Quantum fully homomorphic encryption (QFHE) is a QHE that satisfies both compactness and $\mathcal{F}$-homomorphism, i.e., it is homomorphic for all quantum circuits. However, Yu et al. [Phys. Rev. A 90, 050303 (2014)] proved a negative result: if interaction is not allowed, it is impossible to construct a perfectly secure QFHE scheme. This article therefore focuses on non-interactive, perfectly secure QHE schemes with a relaxed requirement, namely quasi-compactness. We define the encrypted gate, denoted by $EG[U]:|\alpha\rangle\rightarrow\left((a,b),Enc_{a,b}(U|\alpha\rangle)\right)$, and present a gate-teleportation-based two-party computation scheme for $EG[U]$, where one party provides an arbitrary quantum state $|\alpha\rangle$ as input and obtains the encrypted $U$-computing result $Enc_{a,b}(U|\alpha\rangle)$, while the other party obtains the random bits $a,b$. Based on $EG[P^x]$ $(x\in\{0,1\})$, we propose a method to remove the $P$-error generated in the homomorphic evaluation of the $T/T^\dagger$-gate. Using this method, we design two non-interactive and perfectly secure QHE schemes named \texttt{GT} and \texttt{VGT}. Both are $\mathcal{F}$-homomorphic and quasi-compact (the decryption complexity depends on the $T/T^\dagger$-gate complexity). Assuming $\mathcal{F}$-homomorphism, non-interaction, and perfect security are required, quasi-compactness is proved to be bounded by $O(M)$, where $M$ is the total number of $T/T^\dagger$-gates in the evaluated circuit. \texttt{VGT} is proved to be optimal and achieves $M$-quasi-compactness. In our QHE schemes, decryption becomes inefficient if the evaluated circuit contains an exponential number of $T/T^\dagger$-gates; thus our schemes are suitable for homomorphic evaluation of any quantum circuit with low $T/T^\dagger$-gate complexity, such as any polynomial-size quantum circuit or any quantum circuit with a polynomial number of $T/T^\dagger$-gates.
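
The $P$-error stems from commuting $T$ through the quantum one-time pad $X^aZ^b$: one has $TX = e^{-i\pi/4}PXT$, so $T(X^aZ^b|\psi\rangle)$ equals $P^aX^aZ^bT|\psi\rangle$ up to a global phase, and the extra $P^a$ must be removed. A short numerical check of this standard identity in Python (it verifies the identity only, not the paper's schemes):

# Numerical check of the identity behind the T-gate P-error: T X = e^{-i pi/4} P X T.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P = np.array([[1, 0], [0, 1j]], dtype=complex)          # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])     # T gate

assert np.allclose(T @ X, np.exp(-1j * np.pi / 4) * P @ X @ T)

# Consequence: applying T to a one-time-padded state X^a Z^b |psi> yields,
# up to a global phase, P^a X^a Z^b T |psi>, so a conditional P correction is needed.
a, b = 1, 1
psi = np.array([0.6, 0.8], dtype=complex)
lhs = T @ (np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi)
rhs = np.linalg.matrix_power(P, a) @ np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ (T @ psi)
phase = lhs[np.argmax(np.abs(lhs))] / rhs[np.argmax(np.abs(rhs))]
assert np.allclose(lhs, phase * rhs) and np.isclose(abs(phase), 1)
print("T X^a Z^b = (global phase) P^a X^a Z^b T verified")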

Revisiting Orthogonal Lattice Attacks on Approximate Common Divisor Problems and their Applications

Wed, 12/19/2018 - 14:57
In this paper, we revisit three existing types of orthogonal lattice (OL) attacks and propose optimized cases to solve approximate common divisor (ACD) problems. In order to reduce both space and time costs, we also construct an improved lattice using the rounding technique. Further, we present asymptotic formulas for the time complexities of our optimizations as well as of the three known OL attacks. Besides, we give specific conditions under which the optimized OL attacks can work and show how the attack ability depends on the blocksize $\beta$ of the BKZ-$\beta$ algorithm. We then put forward a method to estimate the concrete cost of solving random ACD instances, which can be used to choose practical parameters for ACD problems. Finally, we give security estimates for some ACD-based FHE constructions from the literature and also analyze the implicit factorization problem given a sufficient number of samples. In all the above situations, our optimized OL attack using the rounding technique performs fastest in practice.
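
For concreteness, a random ACD instance consists of samples $x_i = pq_i + r_i$ with a secret $p$, large random $q_i$, and small noise $r_i$, and the attacks recover $p$ from the $x_i$ alone. A Python sketch generating such samples with toy bit sizes (not the cryptographic parameters analyzed in the paper):

# Toy generator of approximate-common-divisor (ACD) samples x_i = p*q_i + r_i.
import secrets

eta, gamma, rho, m = 100, 300, 20, 10      # bit sizes of p, x_i, noise, and sample count (toy values)

p = secrets.randbits(eta) | (1 << (eta - 1)) | 1          # secret odd eta-bit common divisor
def sample():
    q = secrets.randbits(gamma - eta)
    r = secrets.randbits(rho) - (1 << (rho - 1))           # small signed noise
    return p * q + r

xs = [sample() for _ in range(m)]
# An orthogonal-lattice attack would recover p from xs; here we only check the noise is small.
assert all(abs(x % p if x % p < p // 2 else x % p - p) < (1 << rho) for x in xs)
print("generated", m, "ACD samples of ~", xs[0].bit_length(), "bits")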

On the Decoding Failure Rate of QC-MDPC Bit-Flipping Decoders

Wed, 12/19/2018 - 14:57
Quasi-cyclic moderate-density parity-check (QC-MDPC) codes allow the design of McEliece-like public-key encryption schemes with compact keys and a security that provably reduces to hard decoding problems for quasi-cyclic codes. In particular, QC-MDPC codes are among the most promising code-based key encapsulation mechanisms (KEMs) proposed to the NIST call for standardization of quantum-safe cryptography (two proposals, BIKE and QC-MDPC KEM). The first generation of decoding algorithms suffers from a small, but not negligible, decoding failure rate (DFR of the order of $10^{-7}$ to $10^{-10}$). This allows a key recovery attack presented by Guo, Johansson, and Stankovski (GJS attack) at Asiacrypt 2016, which exploits a small correlation between the faulty message patterns and the secret key of the scheme, and limits the usage of the scheme to KEMs using ephemeral public keys. It does not impact the interactive establishment of secure communications (e.g. TLS), but the use of static public keys for asynchronous applications (e.g. email) is rendered dangerous. Understanding and improving the decoding of QC-MDPC is thus of interest for cryptographic applications. In particular, finding parameters for which the failure rate is provably negligible (typically as low as $2^{-64}$ or $2^{-128}$) would allow static keys and increase the applicability of the mentioned cryptosystems. We study here a simple variant of bit-flipping decoding, which we call step-by-step decoding. It has a higher DFR, but its evolution can be modelled by a Markov chain, within the theoretical framework of Julia Chaulet's PhD thesis. We study two other, more efficient, decoders: one is the textbook algorithm, the other is (close to) the BIKE decoder. For all these algorithms we provide simulation results and, assuming an evolution similar to the step-by-step decoder, we extrapolate the value of the DFR as a function of the block length. This gives an indication of how much the code parameters must be increased to ensure resistance to the GJS attack.
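
For intuition, the textbook bit-flipping decoder repeatedly computes the syndrome and flips the positions whose columns of $H$ meet the most unsatisfied parity checks. The Python sketch below runs a dense toy version with a naive threshold; the matrix sizes and flipping rule are my simplifications, not QC-MDPC parameters or the BIKE decoder:

# Toy bit-flipping decoder (dense matrices, naive threshold; not QC-MDPC parameters).
import numpy as np

rng = np.random.default_rng(1)
r, n, w_col = 60, 120, 9                        # checks, length, column weight (toy values)
H = np.zeros((r, n), dtype=int)
for j in range(n):                              # parity-check matrix with w_col ones per column
    H[rng.choice(r, w_col, replace=False), j] = 1

def bitflip(H, s, max_iter=20):
    e = np.zeros(H.shape[1], dtype=int)         # current error estimate
    for _ in range(max_iter):
        syn = (s + H @ e) % 2                   # residual syndrome
        if not syn.any():
            return e                            # decoding success
        upc = H.T @ syn                         # unsatisfied parity checks per position
        e = (e + (upc >= upc.max())) % 2        # flip the most suspicious positions
    return None                                 # decoding failure (contributes to the DFR)

err = np.zeros(n, dtype=int); err[rng.choice(n, 5, replace=False)] = 1
s = H @ err % 2
dec = bitflip(H, s)
print("recovered error" if dec is not None and (dec == err).all() else "decoding failure")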

ARPA Whitepaper

Wed, 12/19/2018 - 14:56
We propose a secure computation solution for blockchain networks. The correctness of the computation is verifiable even under a malicious-majority condition using information-theoretic Message Authentication Codes (MACs), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation (MPC) protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain cryptographically, while offloading the heavy computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support private smart contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data while it is in use, so that data ownership and data usage are safely separated. Last but not least, the computation and verification processes are separated, which can be viewed as computational sharding; this effectively makes the transaction processing speed linear in the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution for any blockchain system. Smart contracts\cite{smartcontract} will be used as a bridge to link the blockchain and the computation network, and additionally as verifiers to ensure that outsourced computation is performed correctly. In order to achieve this, we first develop a general MPC network with advanced features: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
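
The two named ingredients, additive secret sharing and an information-theoretic MAC of the form $m = \alpha x \bmod p$ as in SPDZ-style protocols, can be illustrated in a few lines. The Python sketch below shows the sharing and the linearity of the MAC relation; it is a plaintext illustration of the primitives, not the ARPA protocol itself:

# Sketch: additive secret sharing with a SPDZ-style information-theoretic MAC m = alpha * x (mod p).
import secrets

p = (1 << 127) - 1                               # prime modulus (2^127 - 1 is a Mersenne prime)

def share(v, n):
    """Split v into n additive shares mod p."""
    parts = [secrets.randbelow(p) for _ in range(n - 1)]
    return parts + [(v - sum(parts)) % p]

def reconstruct(shares):
    return sum(shares) % p

n = 5
alpha = secrets.randbelow(p)                     # global MAC key (itself secret-shared in a real protocol)
x = 123456789
x_shares = share(x, n)
m_shares = share(alpha * x % p, n)               # shares of the MAC on x

# Linear operations work share-wise; the MAC relation is preserved.
c = 42
y_shares = [c * s % p for s in x_shares]
ym_shares = [c * s % p for s in m_shares]
assert reconstruct(ym_shares) == alpha * reconstruct(y_shares) % p   # MAC check would catch tampering
print("MAC check passed")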

Cryptanalysis of a code-based one-time signature

Wed, 12/19/2018 - 14:54
In 2012, Lyubashevsky introduced a new framework for building lattice-based signature schemes without resorting to any trapdoor (such as GPV [6] or NTRU [8]). The idea is to sample a set of short lattice elements and construct the public key as a Short Integer Solution (SIS for short) instance. Signatures are obtained using a small subset sum of the secret key, hidden by a (large) Gaussian mask. (Information leakage is dealt with using rejection sampling.) In this paper, we show that this framework cannot be adapted to coding theory. In particular, we show that any code-based signature obtained through a direct translation from the lattice setting is doomed to fail, due to an inherent difference between bounds in the Hamming and Euclidean metrics. The attack consists in rewriting a signature as a noisy syndrome decoding problem, which can be handled efficiently using the extended bit-flipping decoding algorithm. We illustrate our results by breaking Persichetti’s one-time signature scheme built upon this approach [13]: using a single signature, we recover the secret (signing) key in about the same amount of time as required for a couple of signature verifications.

Threshold Kleptographic Attacks on Discrete Logarithm Based Signatures

Wed, 12/19/2018 - 14:18
In an $\ell$ out of $n$ threshold scheme, $\ell$ out of $n$ members must cooperate to recover a secret. A kleptographic attack is a backdoor which can be implemented in an algorithm and further used to retrieve a user's secret key. We combine the notions of threshold scheme and kleptographic attack to construct the first $\ell$ out of $n$ threshold kleptographic attack on discrete logarithm based digital signatures and prove its security in the standard and random oracle models.

Adaptively Secure MPC with Sublinear Communication Complexity

Wed, 12/19/2018 - 12:10
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: 1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online-computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. 2) Under the same assumptions, we construct a "Bob-optimized" 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice's input size. We prove impossibility of "Alice-optimized" protocols. 3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size only depending on the witness size, and independent of the NP-relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS'18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a little sacrifice to the concrete round complexity and asymptotic communication complexity.

How to Correct More Errors in a Secure Sketch

Wed, 12/19/2018 - 05:40
A secure sketch produces public information about its input $w$ without revealing it, yet allows the exact recovery of $w$ given another value $w'$ that is close to $w$. Therefore, it can be used to reliably reproduce error-prone biometric data stored in a database without jeopardizing user privacy. In addition, a secure sketch enables a fuzzy extractor, by using a randomness extractor to convert the noisy reading $w'$ of the original value $w$ into the same uniform key $R$. A standard secure sketch should work for all types of available input sources. However, some sources have less entropy than the error itself, a setting formally called ``more error than entropy'', and a standard secure sketch cannot deliver its security promise for such sources. Besides, when the same input is reused to generate multiple sketches, the complex error process of the input further leads to security uncertainty and offers no security guarantee. Fuller et al. (Asiacrypt 2016) showed that fuzzy min-entropy is necessary to establish security for different kinds of sources over different distributions. This paper focuses on secure sketches. We propose a new technique to generate reusable secure sketches, show security for low-entropy sources, and enable error correction up to the Shannon bound. Our security is defined information-theoretically with min-entropy in the distribution-uncertain setting. In particular, our new technique offers a security guarantee for all families of input distributions, as long as the source possesses a ``meaningful amount'' of min-entropy, equivalent to the min-entropy of some random distribution over a larger metric space parametrized by a chosen error-correcting code.
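
The classical code-offset construction (Dodis et al.) is the template for such sketches: publish $s = w \oplus c$ for a random codeword $c$, and on a noisy reading $w'$ decode $w' \oplus s$ back to $c$ and output $c \oplus s$. A Python toy with a 3-repetition code, chosen here only for illustration and unrelated to the paper's construction:

# Code-offset secure sketch with a toy 3-repetition code (illustration, not the paper's scheme).
import secrets

K, N = 4, 12                                            # 4 message bits encoded into 12 bits

def encode(bits):      return [b for b in bits for _ in range(3)]
def decode(bits):      return [int(sum(bits[3*i:3*i+3]) >= 2) for i in range(K)]   # majority vote
def xor(a, b):         return [x ^ y for x, y in zip(a, b)]

def sketch(w):
    c = encode([secrets.randbits(1) for _ in range(K)])  # random codeword
    return xor(w, c)                                     # public sketch s = w XOR c

def recover(w_noisy, s):
    c = encode(decode(xor(w_noisy, s)))                  # decode w' XOR s back to the codeword
    return xor(c, s)                                     # w = c XOR s

w = [secrets.randbits(1) for _ in range(N)]              # original (e.g. biometric) reading
s = sketch(w)
w_noisy = w[:]; w_noisy[5] ^= 1                          # one bit of noise (within the code's capability)
assert recover(w_noisy, s) == w
print("exact recovery from a noisy reading")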

Ramanujan graphs in cryptography

Tue, 12/18/2018 - 22:56
In this paper we study the security of a proposal for post-quantum cryptography from both a number-theoretic and a cryptographic perspective. Charles-Goren-Lauter in 2006 proposed two hash functions based on the hardness of finding paths in Ramanujan graphs: one based on Lubotzky--Phillips--Sarnak (LPS) graphs and the other based on Supersingular Isogeny Graphs (Pizer graphs). A 2008 paper by Petit-Lauter-Quisquater breaks the hash function based on LPS graphs. On the Supersingular Isogeny Graphs proposal, recent work has continued to build cryptographic applications on the hardness of finding isogenies between supersingular elliptic curves. A 2011 paper by De Feo-Jao-Pl\^ut proposed a cryptographic system based on Supersingular Isogeny Diffie--Hellman as well as a set of five hard problems. In this paper we show that the security of the SIDH proposal relies on the hardness of the SIG path-finding problem introduced in [CGL06]. In addition, similarities between the number-theoretic ingredients in the LPS and Pizer constructions suggest that the hardness of the path-finding problems in the two graphs may be linked. By viewing both graphs from a number-theoretic perspective, we identify the similarities and differences between the Pizer and LPS graphs.
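
The CGL hashes walk a large expander: starting from a fixed vertex, each input bit selects one of the non-backtracking neighbours, and the final vertex is the digest, so a collision yields two distinct paths (two distinct isogenies in the supersingular case). The Python toy below, which needs the networkx package, walks a random 3-regular graph as a stand-in for a Ramanujan or isogeny graph, purely to show the structure:

# Toy CGL-style walk hash on a random 3-regular graph (stand-in for a Ramanujan/isogeny graph).
import networkx as nx

G = nx.random_regular_graph(3, 1024, seed=7)      # 3-regular: two non-backtracking choices per step

def walk_hash(bits, start=0):
    prev, cur = None, start
    for b in bits:
        nbrs = sorted(v for v in G.neighbors(cur) if v != prev)   # forbid backtracking
        prev, cur = cur, nbrs[b % len(nbrs)]                      # input bit picks the next edge
    return cur                                                    # final vertex is the digest

msg = b"hello"
bits = [(byte >> i) & 1 for byte in msg for i in range(8)]
print("digest vertex:", walk_hash(bits))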

The Lord of the Shares: Combining Attribute-Based Encryption and Searchable Encryption for Flexible Data Sharing

Tue, 12/18/2018 - 15:42
Secure cloud storage is considered one of the most important issues that both businesses and end-users take into account before moving their private data to the cloud. Lately, we have seen some interesting approaches that are based either on the promising concept of Symmetric Searchable Encryption (SSE) or on the well-studied field of Attribute-Based Encryption (ABE). In the first case, researchers have tried to design protocols where users' data are protected from both \textit{internal} and \textit{external} attacks, without paying the necessary attention to the problem of user revocation. In the second case, existing approaches do address revocation, but the overall efficiency of these systems is compromised: the proposed protocols are based solely on ABE schemes, and both the size of the produced ciphertexts and the decryption time grow with the complexity of the access formula. In this paper, we propose a protocol that combines \textit{both} SSE and ABE in a way that exploits the main advantages of each scheme. The proposed protocol allows users to search directly over encrypted data using an SSE scheme, while the corresponding symmetric key needed for decryption is protected via a Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme.
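
Structurally, the combination works like hybrid encryption: the data is encrypted under a fresh symmetric key, keywords are indexed with keyed tags for searching, and the symmetric key is wrapped under a ciphertext-policy ABE access policy. The Python sketch below shows this plumbing using the real AESGCM and HMAC primitives (the cryptography package), while cpabe_encrypt/cpabe_decrypt and the deterministic keyword tags are hypothetical placeholders for the paper's actual SSE and CP-ABE building blocks:

# Structural sketch of SSE + CP-ABE hybrid sharing; cpabe_* are hypothetical placeholders.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def cpabe_encrypt(policy, data):          # placeholder for a real CP-ABE scheme
    return {"policy": policy, "blob": data}
def cpabe_decrypt(user_attrs, ct):        # placeholder: checks attributes against the policy
    return ct["blob"] if set(ct["policy"]) <= set(user_attrs) else None

def share_document(doc, keywords, policy, index_key):
    dek = AESGCM.generate_key(bit_length=128)             # fresh data-encryption key
    nonce = os.urandom(12)
    ct = nonce + AESGCM(dek).encrypt(nonce, doc, None)    # encrypt the document
    tags = {hmac.new(index_key, kw, hashlib.sha256).hexdigest() for kw in keywords}  # searchable tags
    wrapped = cpabe_encrypt(policy, dek)                  # key retrievable only with matching attributes
    return {"ct": ct, "tags": tags, "wrapped_key": wrapped}

def search_and_open(record, query_kw, index_key, user_attrs):
    if hmac.new(index_key, query_kw, hashlib.sha256).hexdigest() not in record["tags"]:
        return None                                       # keyword not present
    dek = cpabe_decrypt(user_attrs, record["wrapped_key"])
    if dek is None:
        return None                                       # attributes do not satisfy the policy
    ct = record["ct"]
    return AESGCM(dek).decrypt(ct[:12], ct[12:], None)

rec = share_document(b"quarterly report", [b"finance", b"2018"], ["dept:finance"], b"k" * 32)
print(search_and_open(rec, b"finance", b"k" * 32, ["dept:finance", "role:analyst"]))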
