We construct two identity-based encryption (IBE) schemes. The first satisfies key-dependent message (KDM) security with respect to user secret keys. The second satisfies simulation-based receiver selective opening (RSO) security. Both schemes are secure against adaptive-ID attacks and place no a-priori bound on the number of challenge identities queried by adversaries in the security games. They are the first constructions of IBE achieving these levels of security.
Our constructions of IBE are very simple. We construct our KDM-secure IBE by transforming a KDM-secure secret-key encryption scheme using an IBE scheme satisfying only ordinary indistinguishability against adaptive-ID attacks (IND-ID-CPA security). Our simulation-based RSO-secure IBE is based only on IND-ID-CPA-secure IBE.
We also demonstrate that our construction technique for KDM-secure IBE can be used to construct KDM-secure public-key encryption. More precisely, we show how to construct KDM-secure public-key encryption from a KDM-secure secret-key encryption scheme and a public-key encryption scheme satisfying only ordinary indistinguishability against chosen-plaintext attacks.

Cryptosystems based on supersingular isogenies have been
proposed recently for use in post-quantum cryptography. Three problems have
emerged related to their hardness: computing an isogeny between two
curves, computing the endomorphism ring of a curve, and computing
the maximal order associated to a given curve. While some of these
problems are heuristically believed to be polynomial-time equivalent,
their precise relationship remained unknown. We give the first reduction
between these problems, with the aid of one more problem which we
call Action-on-$\ell$-Torsion. We show that computing $\ell$-power
isogenies reduces to computing maximal orders and
Action-on-$\ell$-Torsion.
We also define the notion of a compact representation of an
endomorphism, and use this to show that endomorphism rings always
have polynomial representation size. We then reduce the
endomorphism ring problem to computing maximal orders and
Action-on-$\ell$-Torsion, thus laying the foundation for analysis of
the hardness of endomorphism ring computation. This identifies
these last two problems as one possible way to attack some systems,
such as hash functions based on the $\ell$-isogeny graph of
supersingular elliptic curves. This gives the potential to use
algebraic tools in quaternion algebras to solve the problems.
We also discuss how these reductions apply to attacks on a
hash function of Charles, Goren, and Lauter.

Ed25519 is an instance of the elliptic-curve-based signature scheme EdDSA, which was recently introduced to solve an inconvenience of the more established ECDSA. Namely, both schemes require the generation of a random value (the scalar of the ephemeral key pair) during the signature generation process, and the secrecy of this random value is critical for security: knowledge of one such random value, or partial knowledge of a series of them, allows reconstruction of the signer's private key. ECDSA does not specify how to generate this random value, so implementations critically rely on the quality of random number generators and are challenging to implement securely. EdDSA removes this dependence by deriving the secret deterministically from the message and a long-term auxiliary key using a cryptographic hash function. This determinism has received wide support as enabling secure implementations, and the deployment of Ed25519 in particular has been spectacular. Today Ed25519 is used in numerous security protocols, networks, and both software and hardware security products, e.g., OpenSSH, Tor, and GnuPG.
In this paper we show that in use cases where power or electromagnetic leakage can be exploited, exactly the mechanism that makes EdDSA deterministic complicates its secure implementation. In particular, we break the Ed25519 implementation in WolfSSL, a library suited to IoT applications. We apply differential power analysis (DPA) to the underlying hash function, SHA-512, requiring only 4000 traces.
Finally, we present a tweak to the EdDSA protocol that is cheap and effective against the described attack, while keeping the claimed advantage of EdDSA over ECDSA of featuring fewer things that can go wrong, e.g., the need for high-quality randomness. However, our countermeasure does suggest that some randomness (which need not be perfect) may be hard to avoid.
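To make the attack surface concrete, the following is a schematic Python sketch (ours, not the paper's code; `ed25519_nonce` is a hypothetical helper) of the deterministic nonce derivation in Ed25519 whose SHA-512 computation the DPA targets:

```python
import hashlib

# Order of the Ed25519 base-point subgroup
L = 2**252 + 27742317777372353535851937790883648493

def ed25519_nonce(seed: bytes, message: bytes) -> int:
    """Schematic derivation of the deterministic Ed25519 nonce r.

    The second half of SHA-512(seed) serves as the long-term auxiliary
    key ("prefix"); the nonce is SHA-512(prefix || M) reduced mod L.
    This hash call mixes a long-term secret with attacker-known data,
    which is what makes it a DPA target.
    """
    h = hashlib.sha512(seed).digest()
    prefix = h[32:]  # long-term auxiliary key, kept secret
    return int.from_bytes(hashlib.sha512(prefix + message).digest(),
                          "little") % L

# A countermeasure in the spirit of the tweak discussed above would mix
# (possibly imperfect) randomness Z into the hash input, e.g.
# SHA-512(Z || prefix || M), so signing the same message twice no longer
# feeds identical inputs to the hash.
```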

We put forward the notion of self-guarding cryptographic protocols as a countermeasure to algorithm substitution attacks. Such self-guarding protocols can prevent undesirable leakage by subverted algorithms if one has the guarantee that the system has been properly working in an initialization phase. Unlike detection-based solutions they thus proactively thwart attacks, and unlike reverse firewalls they do not assume an online external party.
We present constructions of basic primitives for (public-key and private-key) encryption and for signatures. We also argue that the model captures attacks with malicious hardware tokens and show how to self-guard a PUF-based key exchange protocol.

Attribute-based encryption (ABE) is a cryptographic primitive which supports fine-grained
access control on encrypted data, making it an appealing building block for many applications. In this
paper, we propose, implement, and evaluate fully automated methods for proving security of ABE in
the Generic Bilinear Group Model (Boneh, Boyen, and Goh, 2005, Boyen, 2008), an idealized model
which admits simpler and more efficient constructions, and can also be used to find attacks. Our
method is applicable to Rational-Fraction Induced ABE, a large class of ABE that contains most of
the schemes from the literature, and relies on a Master Theorem, which reduces security in the GGM
to a (new) notion of symbolic security, which is amenable to automated verification using
constraint-based techniques. We relate our notion of symbolic security for Rational-Fraction Induced ABE to
prior notions for Pair Encodings. Finally, we present several applications, including automated proofs
for new schemes.

Secure messaging apps have enjoyed huge uptake, and with the headline figure of one billion active WhatsApp users there has been a corresponding burst of academic research on the topic. One might therefore wonder: how far is the academic community from providing concrete, applicable guarantees about the apps that are currently in widespread use?
We argue that there are still significant gaps between the security properties that users might expect from a communication app, and the security properties that have been formally proven. These gaps arise from dubious technical assumptions, tradeoffs in the name of reliability, or simply features out of scope of the analyses. We survey these gaps, and discuss where the academic community can contribute. In particular, we encourage more transparency about analyses' restrictions: the easier they are to understand, the easier they are to solve.

In this paper we present new fundamental properties of SPNs. These properties turn out to be particularly useful in the adaptive chosen ciphertext/plaintext setting, and we demonstrate this by introducing the first key-independent yoyo distinguishers for 3 to 5 rounds of AES. All of our distinguishers beat previous records and require respectively $3$, $4$, and $2^{25.8}$ data and essentially zero computation except for observing differences. In addition, we present the first key-independent distinguisher for 6-round AES, based on yoyos that preserve impossible zero differences in plaintexts and ciphertexts. This distinguisher requires an impractical amount of $2^{122.83}$ plaintext/ciphertext pairs and essentially no computation apart from observing the corresponding differences. We then present a very efficient key-recovery attack on 5 rounds of AES that requires only $2^{11.3}$ data complexity and $2^{31}$ computational complexity, which as far as we know is also a new record. All our attacks are in the adaptively chosen plaintext/ciphertext scenario. Our distinguishers for AES stem from new and fundamental properties of generic SPNs, including generic SAS and SASAS constructions, that can be used to preserve zero differences under the action of exchanging values between existing ciphertext and plaintext pairs. We provide a simple distinguisher for 2 generic SP-rounds that requires only 4 adaptively chosen ciphertexts and no computation on the adversary's side. We then describe a generic and deterministic yoyo game for 3 generic SP-rounds which preserves zero differences in the middle but which we are not capable of exploiting in the generic setting.

Linear regression with 2-norm regularization (i.e., ridge regression) is an important statistical technique that models the relationship between some explanatory values and an outcome value using a linear function. In many current applications (e.g., predictive modelling in personalized health-care), these values represent sensitive data owned by several different parties who are unwilling to share them. In this setting, training a linear regression model becomes challenging and needs specific cryptographic solutions. This problem was elegantly addressed by Nikolaenko et al. at S&P (Oakland) 2013. They suggested a two-server system that uses linearly-homomorphic encryption (LHE) and Yao’s two-party protocol (garbled circuits). In this work, we propose a novel system that can train a ridge linear regression model using only linearly-homomorphic encryption (i.e., without using Yao’s protocol). This greatly improves the overall performance (in both computation and communication), as Yao’s protocol was the main bottleneck in the previous solution. The efficiency of the proposed system is validated both on synthetically-generated and real-world datasets.
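For reference, the underlying plaintext computation that such a protocol must evaluate is ordinary ridge regression in closed form; a minimal NumPy sketch (our illustration only, not the paper's encrypted protocol):

```python
import numpy as np

def ridge_fit(X: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Plaintext ridge regression: minimize ||Xw - y||^2 + lam*||w||^2.

    Closed form: w = (X^T X + lam*I)^{-1} X^T y.  A privacy-preserving
    system evaluates essentially this linear algebra over encrypted
    (here: plaintext) data contributed by several parties.
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)   # regularized Gram matrix
    b = X.T @ y
    return np.linalg.solve(A, b)    # solve instead of explicit inverse
```

Because the closed form consists only of sums of products of data entries followed by one linear solve, the additions and scalings map naturally onto a linearly-homomorphic scheme, which is the intuition behind dropping Yao's protocol.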

Verifiable Random Functions (VRFs), as introduced by Micali, Rabin, and
Vadhan, are a special form of Pseudo Random Functions (PRFs) wherein a
secret key holder can also prove validity of the function evaluation
relative to a statistically binding commitment.
Prior works have approached the problem of constructing VRFs by
proposing a candidate in a specific number-theoretic setting ---
mostly in bilinear groups --- and then grappling with the challenges of
proving security in the VRF environments. These constructions achieve
different tradeoffs among practical efficiency, tightness of
reductions, and cryptographic assumptions.
In this work we take a different approach. Instead of tackling the VRF
problem as a whole we demonstrate a simple and generic way of building
Verifiable Random Functions from more basic and narrow cryptographic
primitives. Then we can turn to exploring solutions to these
primitives with a more focused mindset. In particular, we show that
VRFs can be constructed generically from the ingredients of: (1) a
1-bounded constrained pseudo random function for a functionality that
is ``admissible hash friendly'', (2) a non-interactive statistically
binding commitment scheme (without trusted setup), and (3)
non-interactive witness-indistinguishable proofs (NIWIs). The first
primitive can be replaced with a more basic puncturable PRF constraint
if one is willing to settle for selective security or to assume
sub-exponential hardness of assumptions.
In the second half of our work we support our generic approach by
giving new constructions of the underlying primitives. We first provide
new constructions of perfectly binding commitments from the
Learning with Errors (LWE) and Learning Parity with Noise (LPN)
assumptions. Second, we give two new constructions of 1-bounded
constrained PRFs for admissible hash friendly constructions. Our first
construction is from the $\nddh$ assumption. The next is from the
$\phi$-hiding assumption.

Lin and Tessaro (ePrint 2017) recently proposed indistinguishability obfuscation (IO) and functional encryption (FE) candidates and proved their security based on two assumptions: a standard assumption on bilinear maps and a non-standard assumption on ``Goldreich-like'' pseudorandom generators. In a nutshell, their second assumption requires the existence of pseudorandom generators $G:[q]^n \rightarrow \{0,1\}^m$ for some $\mathsf{poly}(n)$-size alphabet $q$, each of whose output bits depends on at most two input alphabet symbols, and which achieve sufficiently large stretch.
We show polynomial-time attacks against such generators, invalidating the security of the IO and FE candidates. Our attack uses tools from the literature on two-source extractors (Chor and Goldreich, SICOMP 1988) and efficient refutation of random $\mathsf{2}$-$\mathsf{XOR}$ instances (Charikar and Wirth, FOCS 2004).

The success rate is the classical metric for evaluating the
performance of side-channel attacks. It is generally computed empirically from measurements for a particular device or using simulations. Closed-form expressions of the success rate are desirable because they provide an explicit functional dependence on relevant parameters, such as the number of measurements and the signal-to-noise ratio, which helps in understanding the effectiveness of a given attack and how its threat can be mitigated by countermeasures. However, such closed-form expressions involve high-dimensional complex statistical functions that are hard to estimate.
In this paper, we define the success exponent (SE) of an arbitrary side-channel distinguisher as the first-order exponent of the success rate as the number of measurements increases. Under fairly general assumptions, such as soundness, we give a simple general formula for any arbitrary distinguisher and derive closed-form expressions of it for DoM, CPA, MIA, and the optimal distinguisher when the model is known (template attack). For DoM and CPA our results are in line with the literature.
Experiments confirm that the theoretical closed-form expression of the
SE coincides with the empirically computed one, even for reasonably
small numbers of measurements. Finally, we highlight that our study
raises many new perspectives for comparing and evaluating side-channel
attacks, countermeasures and implementations.
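As a reading aid (our hedged formulation; the paper's exact definition should be consulted), a "first-order exponent" of the success rate means the failure probability decays exponentially in the number of measurements $m$:
\[
\mathrm{SR}(m) \;\approx\; 1 - e^{-m\,\mathrm{SE}},
\qquad
\mathrm{SE} \;=\; -\lim_{m \to \infty} \frac{1}{m} \log\bigl(1 - \mathrm{SR}(m)\bigr),
\]
so a larger SE means fewer measurements are needed to reach a given success rate.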

We present “Ouroboros Praos”, a new proof-of-stake blockchain protocol that provides, for the first time, a robust distributed ledger that is provably secure in the semi-synchronous adversarial setting, i.e., assuming a delay $\Delta$ in message delivery which is unknown to protocol participants, and fully adaptively secure, i.e., the adversary can choose to corrupt any participant of an ever-evolving population of stakeholders at any moment, as long as the stakeholder distribution maintains an honest majority of stake at any given time. To achieve this, our protocol puts to use forward-secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation, a property we introduce and instantiate in the random oracle model. Our security proof entails a combinatorial analysis of a class of forkable strings tailored to semi-synchronous blockchains that may be of independent interest in the context of the security analysis of blockchain protocols.

A covering system of congruences can be defined as a set of congruence
relations of the form: $\{r_1 \pmod{m_1}, r_2 \pmod{m_2}, \dots,
r_t \pmod{m_t}\}$ for $m_1, \dots, m_t \in \mathbb{N}$, satisfying the property that for
every integer $k \in \mathbb{Z}$ there exists at least one index $i \in \{1, \dots,
t\}$ such that $k \equiv r_i \pmod{m_i}$. First, we show that most existing
scalar multiplication algorithms can be formulated in terms of covering
systems of congruences. Then, using a special form of covering systems called
exact \mbox{$n$-covers}, we present a novel uniformly randomized scalar
multiplication algorithm with built-in protections against various types of
side-channel attacks. This algorithm can serve as an alternative to Coron's
scalar blinding technique for elliptic curves, in particular when the choice
of a finite field tailored for speed compels the use of a large random
factor.
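The covering property is periodic with period $\mathrm{lcm}(m_1, \dots, m_t)$, so it can be checked mechanically over one full period; a short sketch (our illustration, with Erdős's classical covering system as the example):

```python
from functools import reduce
from math import lcm

def is_covering(system):
    """Check whether {(r_i, m_i)} covers Z, i.e. every integer k
    satisfies k ≡ r_i (mod m_i) for some i.  The condition is periodic
    with period lcm(m_1, ..., m_t), so testing one period suffices."""
    M = reduce(lcm, (m for _, m in system))
    return all(any(k % m == r % m for r, m in system) for k in range(M))

# Erdős's classical covering system:
# {0 mod 2, 0 mod 3, 1 mod 4, 5 mod 6, 7 mod 12}
print(is_covering([(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]))
```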

Extensions of linear cryptanalysis making use of multiple approximations, such as multiple and multidimensional linear cryptanalysis, are an important tool in symmetric-key cryptanalysis, being responsible, among others, for the best known attacks on ciphers such as Serpent and PRESENT. At CRYPTO~2015, Huang et al. provided a refined analysis of the key-dependent capacity leading to a refined key equivalence hypothesis, however at the cost of additional assumptions. Their analysis was recently extended by Blondeau and Nyberg to also cover an updated wrong-key randomization hypothesis, using similar assumptions. However, a recent result by Nyberg shows the equivalence of linear and statistical dependence of linear approximations, which essentially invalidates a crucial assumption on which all these multidimensional models are based.
In this paper, we develop a model for linear cryptanalysis using multiple linearly independent approximations which takes key dependence into account and complies with Nyberg's result. Our model considers an arbitrary multivariate joint distribution of the correlations, and in particular avoids any assumptions regarding normality. The analysis of this distribution is then tailored to concrete ciphers in a practically feasible way by combining a signal/noise decomposition approach for the linear hulls with a profiling of the actual multivariate distribution of the signal correlations for a large number of keys, thereby entirely avoiding assumptions regarding the shape of this distribution.
As an application of our model, we provide an attack on 26 rounds of PRESENT which is faster and requires less data than previous attacks, while using more realistic assumptions and far fewer approximations. We successfully extend the attack to present the first 27-round attack that takes key dependence into account.

In this paper, we report on our implementation of a lattice-based Key-Policy Attribute-Based Encryption (KP-ABE) scheme, which uses short secret keys. The particular KP-ABE scheme can be used directly for Attribute-Based Access Control (ABAC) applications, as well as a building block in more involved applications and cryptographic schemes such as audit log encryption, targeted broadcast encryption, functional encryption, and program obfuscation. We adapt a recently proposed KP-ABE scheme based on the Learning With Errors (LWE) problem to a more efficient scheme based on the Ring Learning With Errors (RLWE) problem, and demonstrate an implementation that can be used in practical applications. Our state-of-the-art GPU implementation shows that the homomorphic public key and ciphertext evaluation operations, which dominate the execution time of the KP-ABE scheme, can be performed in a reasonably short amount of time. Our practicality results also hold when scaled to a relatively large number of attributes. To the best of our knowledge, this is the first KP-ABE implementation that supports both ciphertext and public key homomorphism and the only experimental practicality results reported in the literature.

Recently, Döttling and Garg (CRYPTO 2017) showed how to build identity-based encryption (IBE) from a novel primitive termed Chameleon Encryption, which can, in turn, be realized from simple number theoretic hardness assumptions such as the computational Diffie-Hellman assumption (in groups without pairings) or the factoring assumption. In a follow-up work (TCC 2017), the same authors showed that IBE can also be constructed from a slightly weaker primitive called One-Time Signatures with Encryption (OTSE).
In this work, we show that OTSE can be instantiated from hard learning problems such as the Learning With Errors (LWE) and the Learning Parity with Noise (LPN) problems. This immediately yields the first IBE construction from the LPN problem and a construction based on a weaker LWE assumption compared to previous works.
Finally, we show that the notion of one-time signatures with encryption is also useful for the construction of key-dependent-message (KDM) secure public-key encryption. In particular, our results imply that a KDM-secure public key encryption can be constructed from any KDM-secure secret-key encryption scheme and any public-key encryption scheme.

In this paper, quantum attacks against symmetric-key schemes are presented in which adversaries only make classical queries but use quantum computers for offline computations.
Our attacks are not as efficient as polynomial-time attacks making quantum superposition queries, but they rely on a realistic model and substantially improve on the classical attacks.
Our attacks convert a class of classical meet-in-the-middle attacks into quantum ones. The attack cost depends on the number of available qubits and the way the quantum hardware is realized. The tradeoff between data complexity $D$ and time complexity $T$ against a problem of cardinality $N$ is $D^2 \cdot T^2 = N$ and $D \cdot T^6 = N^3$ in the best- and worst-case scenarios for the adversary, respectively, while the classical attack requires $D \cdot T = N$.
This improvement is meaningful from an engineering perspective, because several existing schemes claim beyond-birthday-bound security for $T$ by limiting the maximum $D$ to below $2^{n/2}$ according to the classical tradeoff $D \cdot T = N$. Those schemes are broken if adversaries perform quantum offline computations.
The attack can be applied to many schemes, such as the tweakable block-cipher construction TDR, the dedicated MAC scheme Chaskey, the online authenticated encryption scheme McOE-X, the hash-function-based MAC H$^2$-MAC, and the permutation-based keyed-sponge MAC.
The idea is then applied to the FX-construction to discover new tradeoffs in the classical query model.
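To see what the stated curves mean numerically, the balanced point $D = T$ on each tradeoff can be computed with a few lines of arithmetic (a hypothetical illustration; `balanced_costs` is our helper, not from the paper):

```python
def balanced_costs(n: int) -> dict:
    """Log2 of cost at the balanced point D = T on each tradeoff curve,
    for a problem of cardinality N = 2^n:

      classical        D * T     = N    ->  D = T = 2^(n/2)
      quantum (best)   D^2 * T^2 = N    ->  D = T = 2^(n/4)
      quantum (worst)  D * T^6   = N^3  ->  D = T = 2^(3n/7)
    """
    return {
        "classical": n / 2,
        "quantum_best": n / 4,
        "quantum_worst": 3 * n / 7,
    }

# For n = 128 the classical curve balances at 2^64 data and time,
# while the best-case quantum curve balances at only 2^32 each.
print(balanced_costs(128))
```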

Garbled circuits have been highly optimized for practice over the last several years. Today's most efficient constructions treat different types of gates (e.g., AND vs XOR) differently; as such, they leak the type of each gate. In many applications of garbled circuits, the circuit itself is public, so such leakage is tolerable. In other settings, however, it is desirable to hide the type of each gate.
In this paper we consider optimizing garbled circuits for the gate-hiding case. We observe that the best state-of-the-art constructions support only a limited class of gate functions, which turns out to undermine their improvements in several settings. These state-of-the-art constructions also require a non-minimal hardness assumption.
We introduce two new gate-hiding constructions of garbled circuits. Both constructions achieve the same communication complexity as the best state-of-the-art schemes, but support a more useful class of boolean gates and use only the minimal assumption of a secure PRF.

Deterministic signature schemes are becoming more popular, as illustrated by the deterministic variant of ECDSA and the popular EdDSA scheme, since eliminating the need for high-quality randomness might have some advantages in certain use-cases. In this paper we outline a range of differential fault attacks and a differential power analysis attack against such deterministic schemes. This shows, contrary to some earlier works, that such signature schemes are not naturally protected against such advanced attacks. We discuss different countermeasures and propose including entropy for low-cost protection against these attacks in scenarios where these attack vectors are a real threat: this does not require changing the key generation or verification methods, and it results in a signature scheme that offers high performance and security for a wide range of use-cases.

Bitcoin provides only pseudo-anonymous transactions, which can be exploited to link payers and payees – defeating the goal of anonymous payments. To thwart such attacks, several Bitcoin mixers have been proposed, with the objective of providing unlinkability between payers and payees. However, existing Bitcoin mixers are not under widespread use, and can be regarded as either insecure or inefficient.
We present Obscuro, a highly efficient and secure Bitcoin mixer that utilizes trusted execution environments (TEEs). With the TEE’s confidentiality and integrity guarantees for code and data, our mixer design ensures correct mixing operations and the protection of sensitive data (i.e., private keys and mixing logs), ruling out coin theft and de-anonymization attacks by a malicious operator. A TEE-based implementation does not by itself prevent the manipulation of inputs (e.g., deposit submissions, blockchain feeds, the TEE’s execution state) to the mixer, so Obscuro is designed to overcome such limitations: it (1) offers an indirect deposit mechanism to prevent a malicious operator from rejecting benign user deposits; and (2) removes the need for storing any operation state outside of the TEE, thereby denying the possibility of state-rewind in conjunction with eclipse attacks. Obscuro provides several unique anonymity features (e.g., a minimum mixing set size guarantee and resistance to dropping of user deposits) that are not available in existing centralized and decentralized mixers.
Our prototype of Obscuro is built using Intel SGX, and we demonstrate its effectiveness in the Bitcoin Testnet. Our implementation mixes 1000 inputs in just 6.49 seconds, which vastly outperforms all of the existing decentralized mixers.