
### Micro-Architectural Power Simulator for Leakage Assessment of Cryptographic Software on ARM Cortex-M3 Processors

Sat, 12/30/2017 - 14:31
Masking is a common technique to protect software implementations of symmetric cryptographic algorithms against Differential Power Analysis (DPA) attacks. The development of a properly masked version of a block cipher is an incremental and time-consuming process since each iteration of the development cycle involves a costly leakage assessment. To achieve a high level of DPA resistance, the architecture-specific leakage properties of the target processor need to be taken into account. However, for most embedded processors, a detailed description of these leakage properties is lacking, and often not even the HDL model of the micro-architecture is openly available. Recent research has shown that power simulators for leakage assessment can significantly speed up the development process. Unfortunately, few such simulators exist and even fewer take target-specific leakages into account. To fill this gap, we present MAPS, a micro-architectural power simulator for the M3 series of ARM Cortex processors, one of today's most widely used embedded platforms. MAPS is fast, easy to use, and able to model the Cortex-M3 pipeline leakages, in particular the leakage introduced by the pipeline registers. The leakages are inferred from an analysis of the HDL source code, and therefore MAPS does not need a complicated and expensive profiling phase. Taking first-order masked assembly implementations of the lightweight cipher Simon as an example, we study how the pipeline leakages manifest and discuss some guidelines on how to avoid them.
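The masking the abstract refers to is standard first-order Boolean masking; the paper's contribution is simulating how the Cortex-M3 pipeline can recombine shares. As a minimal sketch of the masking itself (a generic ISW-style secure AND in Python, our illustration, not the paper's Simon code, and with no model of the hardware):

```python
import secrets

W = 0xFFFFFFFF  # 32-bit words, as on the Cortex-M3

def mask(x):
    """Split x into two Boolean shares whose XOR equals x (first-order masking)."""
    m = secrets.randbits(32)
    return (x ^ m, m)

def unmask(shares):
    """Recombine the two shares."""
    return shares[0] ^ shares[1]

def masked_and(a, b):
    """ISW-style AND on 2-share maskings: fresh randomness r keeps every
    intermediate value statistically independent of the unmasked secrets."""
    (a0, a1), (b0, b1) = a, b
    r = secrets.randbits(32)
    c0 = (a0 & b0) ^ r
    c1 = ((r ^ (a0 & b1)) ^ (a1 & b0)) ^ (a1 & b1)
    # c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1), i.e. shares of the unmasked AND
    return (c0 & W, c1 & W)
```

Even a sequence that is correct at this algorithmic level can leak on real hardware when consecutive instructions place both shares of a value in the same pipeline register; that micro-architectural effect is what MAPS simulates and a software sketch like this cannot capture.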

### Breakdown Resilience of Key Exchange Protocols and the Cases of NewHope and TLS 1.3

Sat, 12/30/2017 - 14:30
Broken cryptographic algorithms and hardness assumptions are a constant threat to real-world protocols. Prominent examples are hash functions for which collisions become known, or number-theoretic assumptions which are threatened by advances in quantum computing. Especially when it comes to key exchange protocols, the switch to quantum-resistant primitives has begun and aims to protect today's secrets against future developments, moving from common Diffie-Hellman-based solutions to Learning-With-Errors-based approaches. Remarkably, the authentication step in such protocols is usually still carried out with quantum-vulnerable signature schemes. The intuition here is that the adversary would need to break this protocol primitive today, without having quantum power yet. The question we address here is whether this intuition is justified and, if so, whether we can show this rigorously. To date, there exists no security notion for key exchange protocols that could capture the scenario of breakdowns of arbitrary cryptographic primitives to argue security of prior sessions. In this work we introduce an extension to the common Bellare-Rogaway model that can provide security guarantees in what we call the breakdown scenario, and we term the resulting security notion breakdown resilience. The model allows security claims to be made even in the case of an unexpected failure of primitives in the protocol, be they hash functions, signature schemes, key derivation functions, etc. To validate the proposed security model with respect to real-world protocols, we show that breakdown resilience for certain primitives is achieved by both an authenticated variant of the recently introduced post-quantum secure key exchange protocol NewHope (Alkim et al., USENIX Security 2016), as well as by TLS 1.3, which is currently being developed by the Internet Engineering Task Force.

### A toolbox for software optimization of QC-MDPC code-based cryptosystems

Sat, 12/30/2017 - 14:29
The anticipated emergence of quantum computers in the foreseeable future drives the cryptographic community to start considering cryptosystems based on problems that remain intractable even with strong quantum computers. One example is the family of code-based cryptosystems that relies on the Syndrome Decoding Problem (SDP). Recent work by Misoczki et al. [34] showed a variant of McEliece encryption which is based on Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes, and has significantly smaller keys than the original McEliece encryption. It was followed by the newly proposed QC-MDPC-based cryptosystems CAKE [9] and Ouroboros [13]. These motivate dedicated new software optimizations. This paper lists the cryptographic primitives that QC-MDPC cryptosystems commonly employ, studies their software optimizations on modern processors, and reports the achieved speedups. It also assesses methods for side channel protection of the implementations, and their performance costs. These optimized primitives offer a useful toolbox that can be used, in various ways, by designers and implementers of QC-MDPC cryptosystems.
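The core arithmetic such implementations optimize is multiplication in the quasi-cyclic ring $F_2[x]/(x^r-1)$. A hedged sketch of that operation (our illustrative Python, representing a ring element as an $r$-bit integer; real toolboxes use vectorized carry-less multiplication and constant-time code instead):

```python
def qc_mul(a, b, r):
    """Multiply two elements of F2[x]/(x^r - 1), each encoded as an r-bit
    integer with bit i holding the coefficient of x^i."""
    # Carry-less (XOR-accumulated) schoolbook multiplication.
    acc = 0
    for i in range(r):
        if (b >> i) & 1:
            acc ^= a << i
    # Reduce modulo x^r - 1: since x^r = 1, fold the high half back
    # onto the low half. One fold suffices because deg(acc) < 2r - 1.
    return (acc & ((1 << r) - 1)) ^ (acc >> r)
```

For example, with $r = 5$, multiplying $x$ by $x^4$ folds back to the constant $1$. Side-channel-protected variants must avoid the secret-dependent branch on bits of `b`, which is one of the costs the paper assesses.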

### Non-Interactive Delegation for Low-Space Non-Deterministic Computation

Sat, 12/30/2017 - 14:26
We construct a delegation scheme for verifying non-deterministic computations, with complexity proportional only to the non-deterministic space of the computation. Specifically, letting $n$ denote the input length, we construct a delegation scheme for any language verifiable in non-deterministic time and space $(T(n), S(n))$ with communication complexity $poly(S(n))$, verifier runtime $n \cdot polylog(T(n)) + poly(S(n))$, and prover runtime $poly(T(n))$. Our scheme consists of only two messages and has adaptive soundness, assuming the existence of a sub-exponentially secure private information retrieval (PIR) scheme, which can be instantiated under standard (albeit, sub-exponential) cryptographic assumptions, such as the sub-exponential LWE assumption. Specifically, the verifier publishes a (short) public key ahead of time, and this key can be used by any prover to non-interactively prove the correctness of any adaptively chosen non-deterministic computation. Such a scheme is referred to as a non-interactive delegation scheme. Our scheme is privately verifiable, where the verifier needs the corresponding secret key in order to verify proofs. Prior to our work, such results were known only in the Random Oracle Model, or under knowledge assumptions. Our results yield succinct non-interactive arguments based on sub-exponential LWE, for many natural languages believed to be outside of P.

### Quantum Cryptanalysis on Some Generalized Feistel Schemes

Sat, 12/30/2017 - 14:25
Post-quantum cryptography has attracted much attention from cryptologists worldwide. At ISIT 2010, Kuwakado and Morii gave a polynomial-time quantum distinguisher against 3-round Feistel networks. However, generalized Feistel schemes (GFS) have not been systematically investigated against quantum attacks. In this paper, we study quantum distinguishers for several generalized Feistel schemes. For the $d$-branch Type-1 GFS (CAST256-like Feistel structure), we introduce polynomial-time ($2d-1$)-round quantum distinguishers. For the $2d$-branch Type-2 GFS (RC6/CLEFIA-like Feistel structure), we give polynomial-time ($2d+1$)-round quantum distinguishers. Classically, Moriai and Vaudenay proved that a 7-round $4$-branch Type-1 GFS and a 5-round $4$-branch Type-2 GFS are secure pseudo-random permutations. Obviously, they are no longer secure in the quantum setting. Using the above quantum distinguishers, we introduce generic quantum key-recovery attacks by applying the combination of Simon's and Grover's algorithms recently proposed by Leander and May. Let $n$ denote the bit length of a branch. For the $(d^2-d+2)$-round Type-1 GFS with $d$ branches, the time complexity is $2^{(\frac{1}{2}d^2-\frac{3}{2}d+2)\cdot \frac{n}{2}}$, which is better than quantum brute-force search (Grover search) by a factor of $2^{(\frac{1}{4}d^2+\frac{1}{4}d)n}$. For the $4d$-round Type-2 GFS with $2d$ branches, the time complexity is $2^{\frac{d^2 n}{2}}$, which is better than quantum brute-force search by a factor of $2^{\frac{3d^2 n}{2}}$.
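As a sanity check on these figures, the stated attack exponent plus the stated speedup exponent should equal the Grover brute-force exponent over all round subkeys. The subkey counts below are our assumption (one independent $n$-bit subkey per Type-1 round; $d$ subkeys per Type-2 round, one per F-function), not stated in the abstract:

```python
from fractions import Fraction as F

def type1_exponents(d, n):
    """Exponents of 2 for the (d^2-d+2)-round Type-1 GFS with d branches."""
    rounds = d * d - d + 2
    grover = F(rounds * n, 2)                        # Grover over all subkeys
    attack = (F(1, 2) * d * d - F(3, 2) * d + 2) * F(n, 2)
    speedup = (F(1, 4) * d * d + F(1, 4) * d) * n
    return grover, attack, speedup

def type2_exponents(d, n):
    """Exponents of 2 for the 4d-round Type-2 GFS with 2d branches."""
    rounds, subkeys_per_round = 4 * d, d             # d F-functions per round
    grover = F(rounds * subkeys_per_round * n, 2)
    attack = F(d * d * n, 2)
    speedup = F(3 * d * d * n, 2)
    return grover, attack, speedup

# attack + speedup = brute force, for a range of branch counts
for d in range(2, 8):
    for exponents in (type1_exponents, type2_exponents):
        grover, attack, speedup = exponents(d, 64)
        assert attack + speedup == grover
```

Under these assumptions the identity holds for both structures, e.g. for Type-1 with $d=4$, $n=64$: attack $2^{128}$ times speedup $2^{320}$ recovers the $2^{448}$ Grover baseline.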

### Foundations of Homomorphic Secret Sharing

Sat, 12/30/2017 - 14:06
Homomorphic secret sharing (HSS) is the secret sharing analogue of homomorphic encryption. An HSS scheme supports a local evaluation of functions on shares of one or more secret inputs, such that the resulting shares of the output are short. Some applications require the stronger notion of additive HSS, where the shares of the output add up to the output over a finite Abelian group. While strong feasibility results for HSS are known under specific cryptographic assumptions, many natural questions remain open. We initiate a systematic study of HSS, making the following contributions.

- *A definitional framework.* We present a general framework for defining HSS schemes that unifies and extends several previous notions from the literature, and cast known results within this framework.
- *Limitations.* We establish limitations on information-theoretic multi-input HSS with short output shares via a relation with communication complexity. We also show that additive HSS for non-trivial functions, even the AND of two input bits, implies non-interactive key exchange, and is therefore unlikely to be implied by public-key encryption or even oblivious transfer.
- *Applications.* We present two types of applications of HSS. First, we construct 2-round protocols for secure multiparty computation from a simple constant-size instance of HSS. As a corollary, we obtain 2-round protocols with attractive asymptotic efficiency features under the Decision Diffie-Hellman (DDH) assumption. Second, we use HSS to obtain nearly optimal worst-case to average-case reductions in P. This in turn has applications to fine-grained average-case hardness and verifiable computation.
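For *linear* functions, additive HSS follows trivially from plain additive secret sharing, which is why the hardness result above targets non-linear functions as simple as AND. A toy Python illustration of the linear case (the party count, modulus, and names are ours):

```python
import random

P = 2**61 - 1  # a Mersenne prime; secrets and shares live in Z_P

def share(x):
    """Additively split secret x into two shares summing to x mod P."""
    r = random.randrange(P)
    return (r, (x - r) % P)

def eval_share(coeffs, my_shares):
    """Each party locally evaluates the linear map sum_i c_i * x_i
    on its own shares of the inputs; no interaction is needed."""
    return sum(c * s for c, s in zip(coeffs, my_shares)) % P

# Two secrets shared between two parties; evaluate f(x, y) = 3x + 2y.
x0, x1 = share(5)
y0, y1 = share(7)
out0 = eval_share([3, 2], [x0, y0])   # party 0's output share
out1 = eval_share([3, 2], [x1, y1])   # party 1's output share
assert (out0 + out1) % P == (3 * 5 + 2 * 7) % P
```

The output shares add up to $f(x,y)$ over the group $\mathbb{Z}_P$, exactly the additive-HSS correctness condition; achieving the same for non-linear $f$ is where cryptographic assumptions enter.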

### Block encryption of quantum messages

Sat, 12/30/2017 - 14:05
In modern cryptography, block encryption is a fundamental cryptographic primitive. However, block encryption cannot achieve the same security as the one-time pad. Quantum mechanics has changed modern cryptography, and much research has shown that quantum cryptography can overcome the limitations of traditional cryptography. This article focuses on block encryption of quantum data. Based on pseudorandom functions, we construct a quantum block encryption (QBE) scheme and prove that it achieves indistinguishability under chosen-plaintext attack. Moreover, the combination of QBE with a quantum message authentication scheme achieves indistinguishability under chosen-ciphertext attack. In addition, QBE can achieve perfect security in a particular case. Compared with the quantum one-time pad (QOTP), the QBE scheme can be just as secure, while its secret key can be reused (whether or not eavesdropping occurs). Thus, block encryption based on quantum mechanics can break the key-reuse limitation of perfectly secure encryption and can serve as a new cryptographic primitive in place of QOTP. To physically implement the QBE scheme, only two kinds of single-qubit gates (the Pauli $X$ gate and the Hadamard gate) are needed, so it is within reach of current quantum technology.
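To illustrate just the gate set the abstract mentions (a single-qubit toy with our own key convention, not the paper's PRF-based construction): with two key bits $(a, b)$, encrypt by applying $H^b$ then $X^a$; since both gates are self-inverse, decryption applies them in reverse order:

```python
from math import sqrt, isclose
import secrets

# A pure single-qubit state as (amplitude of |0>, amplitude of |1>).
def X(s):
    """Pauli X: swaps the basis amplitudes (quantum bit flip)."""
    return (s[1], s[0])

def H(s):
    """Hadamard: maps |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)."""
    h = 1 / sqrt(2)
    return (h * (s[0] + s[1]), h * (s[0] - s[1]))

def encrypt(state, a, b):
    """Apply H^b, then X^a, controlled by the two key bits."""
    if b:
        state = H(state)
    if a:
        state = X(state)
    return state

def decrypt(state, a, b):
    """X and H are self-inverse, so undo the gates in reverse order."""
    if a:
        state = X(state)
    if b:
        state = H(state)
    return state

psi = (0.6, 0.8)
a, b = secrets.randbits(1), secrets.randbits(1)
out = decrypt(encrypt(psi, a, b), a, b)
assert isclose(out[0], 0.6) and isclose(out[1], 0.8)
```

In the actual scheme the key bits are derived from a pseudorandom function rather than fresh randomness, which is what permits key reuse while keeping the physical requirements to these two gate types.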

### Verification of FPGA-augmented trusted computing mechanisms based on Applied Pi Calculus

Sat, 12/30/2017 - 13:57
Trusted computing technologies may play a key role for cloud security as they enable users to relax the trustworthiness assumptions about the provider that operates the physical cloud infrastructure. This work focuses on the possibility of embodying Field-Programmable Gate Array (FPGA) devices in cloud-based infrastructures, where they can benefit compute-intensive workloads like data compression, machine learning, data encoding, etc. The focus is on the implications for cloud applications with security requirements. We introduce a general architecture model of a CPU+FPGA platform, pinpointing key roles and specific assumptions that are relevant for the trusted computing mechanisms and the associated security properties. In addition, we formally verify the proposed solution using Applied Pi Calculus, a descendant of Pi Calculus that introduces constructs allowing the symbolic description of cryptographic primitives. The verification phase was automated by means of ProVerif, a tool that takes as input a model expressed in Applied Pi Calculus along with queries and annotations defining the security properties to be proved or refuted. The results of the analysis confirm that the properties defined in our work hold under the Dolev-Yao attacker model.

### IntegriKey: End-to-End Integrity Protection of User Input

Sat, 12/30/2017 - 13:56
Networked critical systems, such as Programmable Logic Controllers in a factory plant, are often remotely configurable by administrators through web-based interfaces. However, administrative host machines have been compromised in recent incidents, allowing attackers to covertly alter user commands or configurations to disrupt the proper function of remote controllers. While most existing approaches focus on securing field devices from malicious programs, the integrity of configuration commands remains to be explored. In this paper, we consider the presence of an untrusted host machine and aim to ensure the integrity of user input to a web server directly from a peripheral, such as a keyboard. We propose IntegriKey, an end-to-end integrity protection system that leverages a user-side trusted device (the IntegriKey device) and a small server-side software component to ensure the integrity of the user's input. Based on our solution, we also identify a new form of attack, the (user interface) UI input integrity manipulation attack, where a compromised host alters the UI to mislead the user into entering incorrect data. We provide a comprehensive analysis of these attacks and the corresponding solutions. IntegriKey allows the server to accept only authentic user input even when the attacker compromises both the host machines and the network. IntegriKey requires no additional software on the user's host and does not significantly affect the way the user interacts with the system. We implement IntegriKey in the context of remotely configuring Programmable Logic Controllers and our evaluation shows that it incurs minimal overhead in securing user input integrity.