IACR Eprint feed

Automated Penalization of Data Breaches using Crypto-augmented Smart Contracts

Thu, 11/01/2018 - 22:05
This work studies the problem of automatically penalizing intentional or unintentional data breaches (APDB) by a receiver/custodian who receives confidential data from a sender. We solve this problem by augmenting a blockchain on-chain smart contract between the sender and receiver with an off-chain cryptographic protocol, such that any significant data breach by the receiver is penalized through a monetary loss. Towards this goal, we develop a natural extension of oblivious transfer called doubly oblivious transfer (DOT) which, when combined with robust watermarking and a claim-or-refund blockchain contract, provides the framework needed to realize the APDB protocol in a provably secure manner. In our APDB protocol, a public data breach by the receiver leads to her Bitcoin (or other blockchain) private signing key being revealed to the sender, which allows him to penalize the receiver by claiming the deposit from the claim-or-refund contract. Interestingly, the protocol also ensures that a malicious sender cannot steal the deposit, even though he knows the original document and can release it in any form. We implement our APDB protocol, develop the required smart contract for Bitcoin, and find our system to be efficient and easy to deploy in practice. We also analyze our DOT-based design against partial adversarial leakages and observe it to be robust even against small leakages of data.
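
A minimal sketch of the claim-or-refund idea described above, assuming a toy contract in which the receiver's escrowed deposit can be claimed by the sender only if he learns the receiver's signing key (simulated here with a symmetric HMAC key, so the verification key equals the signing key), and is otherwise refunded after a timeout. The class and method names are illustrative, not the paper's actual Bitcoin script:

```python
# Toy model of a claim-or-refund contract (illustration only, not the paper's construction).
import hmac, hashlib

class ClaimOrRefund:
    def __init__(self, deposit, receiver_key, timeout):
        self.deposit = deposit              # receiver's escrowed coins
        self.receiver_key = receiver_key    # stand-in for the receiver's verification key
        self.timeout = timeout              # refund becomes possible after this time
        self.settled = None

    def _verify(self, key, msg, tag):
        return hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), tag)

    def claim(self, signing_key):
        """Sender claims the deposit by proving knowledge of the receiver's signing key."""
        tag = hmac.new(signing_key, b"claim", hashlib.sha256).digest()
        if self.settled is None and self._verify(self.receiver_key, b"claim", tag):
            self.settled = ("sender", self.deposit)
        return self.settled

    def refund(self, now):
        """Receiver recovers the deposit if no claim happened before the timeout."""
        if self.settled is None and now >= self.timeout:
            self.settled = ("receiver", self.deposit)
        return self.settled

contract = ClaimOrRefund(deposit=5, receiver_key=b"receiver-secret", timeout=100)
print(contract.refund(now=50))             # None: too early to refund
print(contract.claim(b"receiver-secret"))  # ('sender', 5): key leaked after a breach
```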

Ouroboros-BFT: A Simple Byzantine Fault Tolerant Consensus Protocol

Thu, 11/01/2018 - 22:04
We present a simple, deterministic protocol for ledger consensus that tolerates Byzantine faults. The protocol is executed by $n$ servers over a synchronous network and can tolerate any number $t$ of Byzantine faults with $t<n/3$. Furthermore, the protocol offers (i) transaction processing at full network speed in the optimistic case where no faults occur, (ii) instant confirmation: the client can be assured within a single round-trip time that a submitted transaction will be settled, and (iii) instant proof of settlement: the client can obtain a receipt that a submitted transaction will be settled. An equally simple binary consensus protocol can be derived as well. We also analyze the protocol in the case of network splits and temporary loss of synchrony, arguing for the safety of the protocol once synchrony is restored. Finally, we examine the covert adversarial model, showing that Byzantine resilience increases to $t<n/2$.

Proof-of-Work Sidechains

Thu, 11/01/2018 - 22:02
During the last decade, the blockchain space has exploded with a plethora of new cryptocurrencies covering a wide array of features and performance and security characteristics. Nevertheless, each of these coins functions in a stand-alone manner. Sidechains have been envisioned as a mechanism to allow blockchains to communicate with one another and, among other applications, to allow the transfer of value from one chain to another, but so far there have been no decentralized constructions. In this paper, we put forth the first sidechain construction that allows communication between proof-of-work blockchains without trusted intermediaries. Our construction is generic in that it allows the passing of any information between blockchains. It gives rise to two illustrative examples: the ``remote ICO,'' in which an investor pays in currency on one blockchain to receive tokens on another, and the ``two-way peg,'' in which an asset can be transferred from one chain to another and back. We pinpoint the features needed for two chains to communicate: on the source side, a proof-of-work blockchain that has been interlinked, potentially with a velvet fork; on the destination side, a blockchain with any consensus mechanism that has sufficient expressiveness to implement verification. We model our construction mathematically and give a formal proof of security. At the heart of our construction, we use a recently introduced cryptographic primitive, Non-Interactive Proofs of Proof-of-Work (NIPoPoWs). Our security proof uses a standard reduction from our new proof-of-work sidechain protocol to the security of NIPoPoWs, which have, in turn, been shown to be secure in previous work. Our working assumption is an honest majority in each of the communicating chains. We demonstrate the feasibility of our construction by providing a pseudocode implementation in the form of a Solidity smart contract.

Constructing Infinite Families of Low Differential Uniformity $(n,m)$-Functions with $m>n/2$

Thu, 11/01/2018 - 22:01
Little theoretical work has been done on $(n,m)$-functions when $\frac {n}{2}<m<n$, even though these functions can be used in Feistel ciphers, and actually play an important role in several block ciphers. Nyberg has shown that the differential uniformity of such functions is bounded below by $2^{n-m}+2$ if $n$ is odd or if $m>\frac {n}{2}$. In this paper, we first characterize the differential uniformity of those $(n,m)$-functions of the form $F(x,z)=\phi(z)I(x)$, where $I(x)$ is the $(m,m)$-Inverse function and $\phi(z)$ is an $(n-m,m)$-function. Using this characterization, we construct an infinite family of differentially $\Delta$-uniform $(2m-1,m)$-functions with $m\geq 3$ achieving Nyberg's bound with equality, which also have high nonlinearity and not too low algebraic degree. We then discuss an infinite family of differentially $4$-uniform $(m+1,m)$-functions in this form, which leads to many differentially $4$-uniform permutations. We also present a method to construct infinite families of $(m+k,m)$-functions with low differential uniformity and construct an infinite family of $(2m-2,m)$-functions with $\Delta\leq2^{m-1}-2^{m-6}+2$ for any $m\geq 8$. The constructed functions in this paper may provide more choices for the design of Feistel ciphers.
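
For readers who want to experiment with such constructions, a straightforward (exponential-time) way to measure the differential uniformity of a small $(n,m)$-function given as a lookup table is sketched below; the example input is the 3-bit identity map, which attains the worst-case value $2^n$. This is a generic sanity-check utility, not code from the paper:

```python
# Differential uniformity of an (n, m)-function given as a lookup table of integers:
# max over nonzero input differences a and all output differences b of
# #{x : F(x ^ a) ^ F(x) = b}.
from collections import Counter

def differential_uniformity(table):
    n = len(table).bit_length() - 1          # table has 2^n entries
    assert len(table) == 1 << n
    best = 0
    for a in range(1, 1 << n):               # every nonzero input difference
        counts = Counter(table[x ^ a] ^ table[x] for x in range(1 << n))
        best = max(best, max(counts.values()))
    return best

# A linear map attains the worst case 2^n; good S-boxes approach Nyberg's lower bound.
print(differential_uniformity(list(range(8))))   # identity on 3 bits -> 8
```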

MPC Joins the Dark Side

Thu, 11/01/2018 - 21:57
We consider the issue of securing dark pools/markets in the financial services sector. These markets are currently operated by trusted third parties, which creates the potential for fraud by the market operators. We present a potential solution to this problem by using Multi-Party Computation (MPC) to emulate the trusted third party in software. Our experiments show that while the standard market-clearing mechanism of lit markets, the Continuous Double Auction, is not currently viable when executed under MPC, a popular mechanism for clearing dark markets, namely volume matching, is viable. We validate this conclusion experimentally by reporting the expected throughputs for such markets in two popular MPC paradigms: the two-party dishonest-majority setting and the three-party honest-majority setting.
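
A plain-text (non-MPC) sketch of the volume-matching functionality evaluated in the paper, under the common assumption that orders are crossed by quantity only, at an externally fixed reference price; the real system runs this computation obliviously inside MPC, and the exact matching rules may differ:

```python
# Clear-text volume matching: cross buy and sell orders FIFO by quantity only.
def volume_match(buys, sells):
    """buys/sells: lists of (order_id, quantity). Returns list of (buy_id, sell_id, qty)."""
    fills, b, s = [], 0, 0
    buys = [list(o) for o in buys]
    sells = [list(o) for o in sells]
    while b < len(buys) and s < len(sells):
        qty = min(buys[b][1], sells[s][1])
        fills.append((buys[b][0], sells[s][0], qty))
        buys[b][1] -= qty
        sells[s][1] -= qty
        if buys[b][1] == 0:
            b += 1
        if sells[s][1] == 0:
            s += 1
    return fills

print(volume_match([("B1", 100), ("B2", 50)], [("S1", 80), ("S2", 90)]))
# -> [('B1', 'S1', 80), ('B1', 'S2', 20), ('B2', 'S2', 50)]
```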

Strongly Unforgeable Signatures Resilient to Polynomially Hard-to-Invert Leakage under Standard Assumptions

Thu, 11/01/2018 - 21:57
A signature scheme is weakly unforgeable if it is hard to forge a signature on a message not signed before, and strongly unforgeable if it is hard to forge any new signature, even on a previously signed message. In some applications weak unforgeability is not enough and strong unforgeability is required, e.g., in the Canetti-Halevi-Katz transformation. Leakage resilience is a property which guarantees that security is maintained even if secret information such as the secret key is partially leaked. Several security models with leakage resilience have been proposed. The hard-to-invert leakage model, a.k.a. the auxiliary (input) leakage model, proposed by Dodis et al. at STOC '09, is an especially meaningful one, since it covers leakage functions that information-theoretically reveal the secret key, e.g., a one-way permutation of the key. In this work, we propose a generic construction of a digital signature scheme that is strongly unforgeable and resilient to polynomially hard-to-invert leakage, and that can be instantiated under standard assumptions such as the decisional linear assumption. We emphasize that our instantiated signature scheme is not only the first one resilient to polynomially hard-to-invert leakage under standard assumptions, but also the first one which is both strongly unforgeable and hard-to-invert leakage-resilient.

Improved Bootstrapping for Approximate Homomorphic Encryption

Thu, 11/01/2018 - 21:55
Since Cheon et al. introduced a homomorphic encryption scheme for approximate arithmetic (Asiacrypt '17), it has been recognized as suitable for important real-life use cases of homomorphic encryption, including training of machine learning models over encrypted data. A follow-up work by Cheon et al. (Eurocrypt '18) described an approximate bootstrapping procedure for the scheme. In this work, we improve upon the previous bootstrapping result. We improve the amortized bootstrapping time per plaintext slot by two orders of magnitude, from ∼1 second to ∼0.01 second. To achieve this result, we adopt a smart level-collapsing technique for evaluating DFT-like linear transforms on a ciphertext. Also, we replace the Taylor approximation of the sine function with a more accurate and numerically stable Chebyshev approximation, and design a modified version of the Paterson-Stockmeyer algorithm for fast evaluation of Chebyshev polynomials over encrypted data.
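
The switch from a Taylor to a Chebyshev approximation of the sine can be illustrated independently of the scheme; the short numpy sketch below compares the maximum error of the two degree-15 approximations on a symmetric interval (the interval and degree are illustrative, not the paper's parameters):

```python
# Compare a degree-15 Taylor expansion of sin with a degree-15 Chebyshev fit on [-pi, pi].
import math
import numpy as np

deg = 15
xs = np.linspace(-np.pi, np.pi, 10001)

# Degree-15 Taylor polynomial of sin(x) around 0: sum of (-1)^k x^(2k+1) / (2k+1)!
taylor = sum((-1) ** k * xs ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(8))

# Degree-15 Chebyshev least-squares fit on the same interval.
cheb = np.polynomial.chebyshev.Chebyshev.fit(xs, np.sin(xs), deg)

print("max Taylor error   :", np.max(np.abs(taylor - np.sin(xs))))
print("max Chebyshev error:", np.max(np.abs(cheb(xs) - np.sin(xs))))
```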

Laser-induced Single-bit Faults in Flash Memory: Instructions Corruption on a 32-bit Microcontroller

Thu, 11/01/2018 - 21:52
Physical attacks are a known threat to secure embedded systems. Notable among these is laser fault injection, probably the most powerful fault injection technique: its high spatial accuracy enables an attacker to induce bit-level faults. However, experience gained from attacking 8-bit targets might not carry over to more advanced micro-architectures, and these attacks become increasingly challenging on 32-bit microcontrollers. In this article, we show that the flash memory of a 32-bit microcontroller is sensitive to laser fault injection. The faults occur during the instruction fetch process, hence the stored value remains unaltered. After a thorough characterisation of the induced faults and the associated fault model, we provide detailed examples of bit-level corruption of instructions and demonstrate practical applications in compromising the security of real-life code. Based on these experimental results, we formulate a hypothesis about the underlying micro-architectural features that could explain the observed fault model.

Secure Outsourced Matrix Computation and Application to Neural Networks

Thu, 11/01/2018 - 21:50
Homomorphic Encryption (HE) is a powerful cryptographic primitive for addressing privacy and security issues when outsourcing computation on sensitive data to an untrusted environment. Compared to secure Multi-Party Computation (MPC), HE has the advantages of supporting non-interactive operations and saving on communication costs. However, it has not yet provided an optimal solution for modern learning frameworks, partially due to a lack of efficient matrix computation mechanisms. In this work, we present a practical solution to encrypt a matrix homomorphically and perform arithmetic operations on encrypted matrices. Our solution includes a novel matrix encoding method and an efficient evaluation strategy for basic matrix operations such as addition, multiplication, and transposition. We also explain how to encrypt more than one matrix in a single ciphertext, yielding better amortized performance. Our solution is generic in the sense that it can be applied to most existing HE schemes. It also achieves reasonable performance for practical use; for example, our implementation takes 9.21 seconds to multiply two encrypted square matrices of order 64 and 2.56 seconds to transpose a square matrix of order 64. Our secure matrix computation mechanism underlies our new framework EDM, which stands for encrypted data and encrypted model. To the best of our knowledge, this is the first work that supports secure evaluation of the prediction phase based on both encrypted data and an encrypted model, whereas previous work only supported applying a plain model to encrypted data. As a benchmark, we report an experimental result for classifying handwritten images using convolutional neural networks (CNNs). Our implementation on the MNIST dataset takes 28.59 seconds to compute ten likelihoods for 64 input images simultaneously, yielding an amortized rate of 0.45 seconds per image.
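
The packing idea behind such encodings can be illustrated on plaintext data: a $d \times d$ matrix occupies $d^2$ ciphertext slots, so matrix addition becomes slot-wise addition and transposition becomes a fixed slot permutation, both of which an HE scheme with slot rotations and linear transforms can perform on ciphertexts. The sketch below is a plaintext illustration only; the paper's actual encoding and evaluation strategy may differ:

```python
# Plaintext illustration of packing a d x d matrix into a single length-d^2 slot vector.
import numpy as np

def encode(mat):                       # row-major packing into d^2 slots
    return np.asarray(mat).reshape(-1)

def decode(vec, d):
    return vec.reshape(d, d)

def transpose_slots(vec, d):           # the permutation sending slot d*i+j to slot d*j+i
    perm = [d * (k % d) + (k // d) for k in range(d * d)]
    return vec[perm]

d = 3
A = np.arange(d * d).reshape(d, d)
B = np.ones((d, d), dtype=int)
assert np.array_equal(decode(encode(A) + encode(B), d), A + B)   # addition is slot-wise
assert np.array_equal(decode(transpose_slots(encode(A), d), d), A.T)
```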

RepuCoin: Your Reputation is Your Power

Thu, 11/01/2018 - 21:25
Existing proof-of-work (PoW) cryptocurrencies cannot tolerate attackers controlling more than 50% of the network's computing power at any time, but assume that such a condition is "unlikely". However, increasingly sophisticated attacks, e.g., flash attacks in which attackers rent mining capacity to obtain a majority of computing power temporarily, render this assumption unrealistic. This paper proposes RepuCoin, the first system to provide guarantees even when more than 50% of the system's computing power is temporarily dominated by an attacker. RepuCoin defines a miner's power by its reputation, as a function integrated over the entire blockchain, rather than by its sheer computing power, which can be obtained relatively quickly and temporarily. As an example, after a single year of operation, RepuCoin can tolerate attacks compromising 51% of the network's computing resources, even if such power remains maliciously seized for almost a whole year. Moreover, RepuCoin provides better resilience to known attacks than existing PoW systems, while achieving a high throughput of 10,000 transactions per second.
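
As a purely hypothetical illustration (this is not RepuCoin's actual reputation function), the sketch below shows why a capped, cumulative notion of contribution resists flash attacks: a miner who rents massive capacity for a single epoch gains almost no consensus power:

```python
# Hypothetical toy reputation rule: per-epoch contribution is capped, so consensus power
# can only be accumulated slowly over many epochs, not bought in a single flash attack.
def reputation(blocks_per_epoch, cap=10):
    return sum(min(b, cap) for b in blocks_per_epoch)

miners = {
    "longtime": [5] * 100,          # steady contributor over 100 epochs
    "flash":    [0] * 99 + [400],   # rents huge capacity for the latest epoch only
}
total = sum(reputation(h) for h in miners.values())
for name, history in miners.items():
    print(name, round(reputation(history) / total, 3))   # longtime ~0.98, flash ~0.02
```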

Linear Consistency for Proof-of-Stake Blockchains

Thu, 11/01/2018 - 18:50
Blockchain protocols achieve consistency by instructing parties to remove a suffix of a certain length from their local blockchain. The current state of the art in Proof of Stake (PoS) blockchain protocols, exemplified by Ouroboros (Crypto 2017), Ouroboros Praos (Eurocrypt 2018) and Sleepy Consensus (Asiacrypt 2017), suggests that the length of this segment should be $\Theta(k^2)$ for the consistency error to decrease exponentially in $k$. This is in contrast with Proof of Work (PoW) based blockchains, for which it is known that a suffix of length $\Theta(k)$ suffices for the same exponentially decreasing consistency error. This quadratic gap in the consistency guarantee is quite significant, as the length of the suffix is a lower bound on the time required for transactions to settle. Whether this is an intrinsic limitation of PoS, due to issues such as the "nothing-at-stake" problem, or whether it can be improved is an open question. In this work we put forth a novel and general probabilistic analysis for PoS consistency that improves the required suffix length from $\Theta(k^2)$ to $\Theta(k)$, thus showing, for the first time, that PoS protocols can match PoW blockchain protocols for exponentially decreasing consistency error. Moreover, our detailed analysis provides an explicit polynomial-time algorithm for exactly computing the (exponentially decaying) error function, which can directly inform practice.

Approximate and Probabilistic Differential Privacy Definitions

Thu, 11/01/2018 - 08:58
This technical report discusses three subtleties related to the widely used notion of differential privacy (DP). First, we discuss how the choice of a distinguisher influences the privacy notion and why we should always have a distinguisher if we consider approximate DP. Second, we draw a line between the very intuitive probabilistic differential privacy (with probability $1-\delta$ we have $\varepsilon$-DP) and the commonly used approximate differential privacy ($(\varepsilon,\delta)$-DP). Finally, we show that, and why, probabilistic differential privacy (and similar notions) is not complete under post-processing, which has significant implications for notions used in the literature.
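
For reference, the two notions being compared are commonly written as follows, with $M$ a mechanism, $D, D'$ neighbouring databases, $S$ an output set and $B$ a "bad" event (standard textbook formulations; the report's exact definitions may differ in detail):

```latex
% Approximate (\varepsilon,\delta)-DP: for all neighbouring D, D' and all output sets S,
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta .
% Probabilistic (\varepsilon,\delta)-DP: there is a bad event B with \Pr[M(D) \in B] \le \delta
% such that for all output sets S,
\Pr[M(D) \in S \setminus B] \le e^{\varepsilon} \, \Pr[M(D') \in S \setminus B] ,
% i.e. with probability at least 1 - \delta the mechanism behaves \varepsilon-DP.
```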

Time-space complexity of quantum search algorithms in symmetric cryptanalysis: applying to AES and SHA-2

Thu, 11/01/2018 - 00:55
The performance of cryptanalytic quantum search algorithms is mainly inferred from query complexity, which hides the overhead induced by an implementation. To shed light on quantitative complexity analysis without such hidden factors, we provide a framework for estimating time-space complexity that carefully accounts for the characteristics of the target cryptographic functions. Processor and circuit parallelization methods are taken into account, resulting in time-space trade-off curves in terms of circuit depth and qubit count. The method shows how to rank different circuit designs in order of their efficiency. The framework is applied to the representative cryptosystems that NIST refers to as guidelines for security parameters, reassessing the security strengths of AES and SHA-2.
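
A back-of-the-envelope version of such a trade-off estimate, using the standard Grover iteration count and purely hypothetical oracle costs (not the paper's AES or SHA-2 figures):

```python
# Grover search over a k-bit key needs about (pi/4) * 2^(k/2) oracle iterations; splitting the
# key space over p parallel machines divides the iteration count by sqrt(p) but multiplies the
# qubit count by p, which traces out the depth-vs-qubit trade-off curve.
import math

def grover_tradeoff(k, oracle_depth, oracle_qubits, parallel_machines):
    iters = math.ceil((math.pi / 4) * math.sqrt(2 ** k / parallel_machines))
    depth = iters * oracle_depth              # total circuit depth per machine
    qubits = oracle_qubits * parallel_machines
    return depth, qubits

# Hypothetical oracle costs, just to show the shape of the curve.
for p in (1, 2 ** 10, 2 ** 20):
    d, q = grover_tradeoff(k=128, oracle_depth=10_000, oracle_qubits=3_000, parallel_machines=p)
    print(f"p=2^{int(math.log2(p))}: depth ~ 2^{math.log2(d):.1f}, qubits ~ 2^{math.log2(q):.1f}")
```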

Cryptanalysis of OCB2

Wed, 10/31/2018 - 22:16
We present practical attacks against OCB2, an ISO-standard authenticated encryption (AE) scheme. OCB2 is a highly efficient blockcipher mode of operation that has been extensively studied and widely believed to be secure thanks to its provable-security proofs. Our attacks allow the adversary to create forgeries with a single encryption query of an almost-known plaintext. The source of our attacks is the way OCB2 implements AE using a tweakable blockcipher, called XEX*. We have verified our attacks using a reference implementation of OCB2. Our attacks do not break the privacy of OCB2, and they do not apply to the other versions, OCB1 and OCB3.

Adding Distributed Decryption and Key Generation to a Ring-LWE Based CCA Encryption Scheme

Wed, 10/31/2018 - 16:37
We show how to build distributed key generation and distributed decryption procedures for the LIMA Ring-LWE based post-quantum cryptosystem. Our protocols implement the CCA variants of distributed decryption and are actively secure (with abort) in the case of three parties and honest majority. Our protocols make use of a combination of problem specific MPC protocols, generic garbled circuit based MPC and generic Linear Secret Sharing based MPC. We also, as a by-product, report on the first run-times for the execution of the SHA-3 function in an MPC system.

Non-malleable Codes against Lookahead Tampering

Wed, 10/31/2018 - 14:06
There are natural cryptographic applications where an adversary only gets to tamper with a high-speed data stream on the fly, based on her view so far; this is the lookahead tampering model. Since the adversary can easily substitute transmitted messages with her own, it is far-fetched to insist on strong guarantees like error correction or even manipulation detection. Dziembowski, Pietrzak, and Wichs (ICS 2010) introduced the notion of non-malleable codes, which provide a useful notion of message integrity for such scenarios. Intuitively, a non-malleable code ensures that the tampered codeword encodes either the original message or a message that is entirely independent of the original one. Our work studies the following tampering model. We encode a message into k>=1 secret shares and transmit each share as a separate stream of data. Adversaries can perform lookahead tampering on each share, albeit independently. We call this the k-lookahead model. First, we show a hardness result for the k-lookahead model: to transmit an l-bit message, the cumulative length of the secret shares must be at least kl/(k-1). This result immediately rules out the possibility of a solution with k=1. Next, we construct a solution for the 2-lookahead model such that the total length of the shares is 3l, which is only 1.5x the optimal encoding indicated by our hardness result. Prior work considers the stronger model of split-state encoding, which creates k>=2 secret shares but protects against adversaries who perform arbitrary (but independent) tampering on each secret share. The share size of the most efficient 2-split-state encoding is l*log(l)/loglog(l) (Li, ECCC 2018). Even though k-lookahead is a weaker tampering class, our hardness result matches that of k-split-state tampering by Cheraghchi and Guruswami (TCC 2014). However, our explicit constructions above achieve much higher efficiency in encoding.

Differential Fault Attacks on Deterministic Lattice Signatures

Wed, 10/31/2018 - 09:58
In this paper, we extend the applicability of differential fault attacks to lattice-based cryptography. We show how two deterministic lattice-based signature schemes, Dilithium and qTESLA, are vulnerable to such attacks. In particular, we demonstrate that single random faults can result in a nonce-reuse scenario which allows key recovery. We also expand this to fault-induced partial nonce-reuse attacks, which do not corrupt the validity of the computed signatures and thus are harder to detect. Using linear algebra and lattice-basis reduction techniques, an attacker can extract one of the secret key elements after a successful fault injection. Some other parts of the key cannot be recovered, but we show that a tweaked signature algorithm can still successfully sign any message. We provide experimental verification of our attacks by performing clock glitching on an ARM Cortex-M4 microcontroller. In particular, we show that up to 65.2% of the execution time of Dilithium is vulnerable to an unprofiled attack, where a random fault is injected anywhere during the signing procedure and still leads to a successful key-recovery.
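
The nonce-reuse key recovery can be illustrated with a scalar toy computation: in Dilithium-style schemes the signature response has the form z = y + c*s, so two signatures that share a faulted nonce y but carry different challenges reveal s. The sketch below uses plain integers modulo the Dilithium modulus rather than module/ring elements, so it only conveys the algebra, not the actual attack:

```python
# Scalar toy of fault-induced nonce reuse (real Dilithium/qTESLA use polynomial vectors).
q = 8380417                      # the Dilithium modulus, used here simply as a prime modulus
s = 123456                       # secret key component (toy scalar)
y = 987654                       # nonce that the fault causes to repeat
c1, c2 = 42, 1337                # two different challenges for two signed messages

z1 = (y + c1 * s) % q
z2 = (y + c2 * s) % q

# z1 - z2 = (c1 - c2) * s, so s follows by inverting (c1 - c2) modulo q.
recovered = (z1 - z2) * pow(c1 - c2, -1, q) % q
assert recovered == s
print("recovered secret:", recovered)
```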

GIFT: A Small Present

Wed, 10/31/2018 - 03:12
In this article, we revisit the design strategy of PRESENT, leveraging all the advances provided by the research community in construction and cryptanalysis since its publication, to push the design to its limits. We obtain an improved version, named GIFT, that provides much-increased efficiency in all domains (smaller and faster) while correcting the well-known weakness of PRESENT with regard to linear hulls. GIFT is a very simple and clean design that outperforms even SIMON and SKINNY for round-based implementations, making it one of the most energy-efficient ciphers to date. It reaches a point where almost the entire implementation area is taken by the storage and the Sboxes, and where any cheaper choice of Sbox would lead to a very weak proposal. In essence, GIFT is composed of only an Sbox layer and bit-wiring, but its natural bitslice data flow ensures excellent performance in all scenarios, from area-optimised hardware implementations to very fast software implementations on high-end platforms. We conducted a thorough analysis of our design with regard to state-of-the-art cryptanalysis, and we provide strong bounds with regard to differential and linear attacks.
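
The "Sbox plus bit-wiring" structure can be sketched generically as below; the Sbox and wire permutation are placeholders chosen only for illustration and are NOT GIFT's actual tables:

```python
# Generic Sbox-plus-bit-permutation round in the PRESENT/GIFT style (placeholder tables).
SBOX = [0x0, 0x3, 0x5, 0x6, 0x9, 0xA, 0xC, 0xF, 0x1, 0x2, 0x4, 0x7, 0x8, 0xB, 0xD, 0xE]
PERM = [(4 * i) % 63 for i in range(63)] + [63]      # a simple 64-bit wire shuffle

def round_fn(state64, round_key):
    # 1. Sbox layer: substitute each of the 16 nibbles.
    nibbles = [(state64 >> (4 * i)) & 0xF for i in range(16)]
    state64 = sum(SBOX[v] << (4 * i) for i, v in enumerate(nibbles))
    # 2. Bit permutation: move bit i to position PERM[i].
    state64 = sum(((state64 >> i) & 1) << PERM[i] for i in range(64))
    # 3. Round-key addition.
    return state64 ^ round_key

print(hex(round_fn(0x0123456789ABCDEF, 0xFFFF)))
```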

Constrained PRFs for Bit-fixing from OWFs with Constant Collusion Resistance

Wed, 10/31/2018 - 01:31
Constrained pseudorandom functions (CPRFs) allow learning `constrained' PRF keys that can evaluate the PRF on a subset of the input space, or based on some sort of predicate. First introduced by Boneh and Waters [AC'13], Kiayias et al. [CCS'13] and Boyle et al. [PKC'14], they have been shown to be a useful cryptographic primitive with many applications. The full security definition of CPRFs requires the adversary to learn multiple constrained keys, a requirement for all of these applications. Unfortunately, existing constructions of CPRFs satisfying this security notion are known only from exceptionally strong cryptographic assumptions, such as indistinguishability obfuscation (IO) and the existence of multilinear maps, even for very weak predicates. CPRFs from more standard assumptions only satisfy security for a single constrained-key query. In this work, we give the first construction of a CPRF that can issue a constant number of constrained keys for bit-fixing predicates, requiring only the existence of one-way functions (OWFs). This is a much weaker assumption than those behind all previous constructions. In addition, we prove that the new scheme satisfies $1$-key privacy (otherwise known as constraint-hiding) and that it achieves fully adaptive security. This is the only construction to achieve adaptive security outside of the random oracle model and without sub-exponential security losses. Our technique represents a noted departure from existing CPRF constructions. We hope that it may lead to future constructions that can expose a greater number of keys, or consider more expressive predicates (such as bounded-depth circuit constraints).
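
To make the notion of a constrained key concrete, the sketch below shows the classic GGM-tree PRF with prefix constraints; it conveys the general CPRF idea (a constrained key evaluates the PRF only on a restricted input set) but is not the paper's bit-fixing construction:

```python
# GGM-tree PRF with prefix-constrained keys (illustration of the CPRF notion only).
import hashlib

def prg(seed, bit):                       # length-preserving PRG step, one branch per bit
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(key, bits):                  # full key: walk the GGM tree from the root
    node = key
    for b in bits:
        node = prg(node, b)
    return node

def constrain(key, prefix):               # constrained key for inputs starting with `prefix`
    return eval_prf(key, prefix)

def eval_constrained(ckey, prefix, bits):
    assert bits[: len(prefix)] == prefix, "input outside the constrained set"
    return eval_prf(ckey, bits[len(prefix):])

master = b"\x00" * 32
x = [1, 0, 1, 1]
ck = constrain(master, [1, 0])
assert eval_constrained(ck, [1, 0], x) == eval_prf(master, x)
```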

Aggregate Cash Systems: A Cryptographic Investigation of Mimblewimble

Tue, 10/30/2018 - 15:54
Mimblewimble is an electronic cash system proposed by an anonymous author in 2016. It combines several privacy-enhancing techniques initially envisioned for Bitcoin, such as Confidential Transactions (Maxwell, 2015), non-interactive merging of transactions (Saxena, Misra, Dhar, 2014), and cut-through of transaction inputs and outputs (Maxwell, 2013). As a remarkable consequence, coins can be deleted once they have been spent while maintaining public verifiability of the ledger, which is not possible in Bitcoin. This results in tremendous space savings for the ledger and efficiency gains for new users, who must verify their view of the system. In this paper, we provide a provable-security analysis for Mimblewimble. We give a precise syntax and formal security definitions for an abstraction of Mimblewimble that we call an aggregate cash system. We then formally prove the security of Mimblewimble in this definitional framework. Our results imply in particular that two natural instantiations (with Pedersen commitments and Schnorr or BLS signatures) are provably secure against inflation and coin theft under standard assumptions.
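
The role of Pedersen commitments in balancing Mimblewimble-style transactions can be illustrated with a toy multiplicative group; a real deployment uses an elliptic curve, and the owner proves knowledge of the excess blinding factor with a Schnorr or BLS signature. The group, generators and values below are illustrative only:

```python
# Toy Pedersen-commitment balance check: a transaction is balanced when the product of its
# output commitments divided by the product of its input commitments commits to the value 0,
# i.e. equals h^k for the "excess" blinding factor k known only to the owner.
p = 2 ** 127 - 1          # a prime; the group is Z_p^* (illustration only)
g, h = 3, 5               # two group elements whose discrete-log relation is assumed unknown

def commit(value, blinding):
    return pow(g, value, p) * pow(h, blinding, p) % p

# Inputs: one coin of value 7; outputs: values 4 and 3 (values and blindings stay hidden).
in_c = commit(7, 111)
out1 = commit(4, 222)
out2 = commit(3, 333)

excess = out1 * out2 * pow(in_c, -1, p) % p     # product of outputs / product of inputs
k = 222 + 333 - 111                             # net blinding factor, known only to the owner
assert excess == pow(h, k, p)                   # commits to 0 => no coins created or destroyed
```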
