In conventional PKI, CAs are assumed to be fully trusted. In practice, however, this absolute reliance on CAs for trustworthiness has caused major security and privacy issues. To prevent such issues, Google introduced the concept of Certificate Transparency (CT) in 2013. Later, several new PKI models (e.g., AKI, ARPKI, and DTKI) were proposed to reduce the level of trust placed in CAs. However, all of these proposals remain vulnerable to split-world attacks if the adversary is capable of presenting different views of the log to targeted victims. In this paper, we propose a new blockchain-based PKI architecture with certificate transparency, which we call CertLedger, to eliminate split-world attacks and to provide certificate/revocation transparency. The validation, storage, and entire revocation process of all TLS certificates, as well as trusted CA certificate management, are conducted in CertLedger. During a TLS connection, TLS clients receive an efficient proof of existence of the certificate directly from the domain owner. Hence, privacy is preserved by eliminating the traceability issue of OCSP servers. CertLedger also provides a unique, efficient, and trustworthy certificate validation process, eliminating the inadequate and mutually incompatible validation processes implemented by different software vendors. TLS clients in CertLedger no longer need to perform certificate validation or store trusted CA certificates. We analyze the security and performance of CertLedger and provide a comparison with previous proposals.

CSIDH is a recent proposal by Castryck, Lange, Martindale, Panny and Renes for post-quantum non-interactive key-exchange, to be presented at ASIACRYPT~2018. It is similar in design to a scheme by Couveignes, Rostovtsev and Stolbunov, but it replaces ordinary elliptic curves by supersingular elliptic curves, in order to make significant gains in time and key lengths.
Isogeny-based key-exchange on ordinary elliptic curves can be targeted by a quantum subexponential hidden-shift algorithm due to Childs, Jao and Soukharev. Although CSIDH uses supersingular curves, its structure is analogous to the ordinary case, hence this algorithm applies.
In the proposal, the authors suggest a choice of parameters that should ensure security against this.
In this paper, we reassess these security parameters. Our result relies on two steps: first, we propose a new quantum algorithm for the hidden shift problem and analyze its complexity precisely, which reduces the number of group actions to compute with respect to the authors' estimate; second, we show how to compute this group action efficiently.
For example, we show that only $2^{35}$ quantum equivalents of a key-exchange are sufficient to break the 128-bit classical, 64-bit quantum security parameters proposed, instead of $2^{62}$.
Finally, we extend our analysis to ordinary isogeny computations, and show that an instance proposed by De Feo, Kieffer and Smith (also accepted at ASIACRYPT 2018) and expected to offer $56$ bits of quantum security can be attacked in $2^{38}$ quantum evaluations of a key exchange.

Biometrics exhibit noise between repeated readings. Because of this noise, a biometric cannot simply be hashed, so devices store a plaintext template of it. This stored template is an appetizing target for an attacker. Due to this risk, the primary use case for biometrics is mobile device authentication, where templates are stored within the mobile device's secure processor. There has been little adoption in client-server applications.
Fuzzy extractors derive a stable cryptographic key from biometrics (Dodis et al., Eurocrypt 2004). In this work we describe an iris key derivation system with 32 bits of security even when multiple keys are derived from the same iris.
We are fully aware that 32 bits of security is insufficient for a secure system. The goal of this work is to inspire researchers to design multi-factor authentication systems that use our scheme as one component. Our system is based on repeated hashing, which simplifies incorporating additional factors (such as a password).
Our starting point is a recent fuzzy extractor due to Canetti et al. (Eurocrypt 2016). Achieving satisfactory parameters requires modifying and coupling the image-processing and cryptographic algorithms. Our scheme is implemented in C and Python and is open-sourced. On a moderately powerful server, authentication usually completes within 0.30 s.
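As a rough illustration of the repeated-hashing approach, the following Python sketch implements a toy "sample-then-lock" construction in the spirit of Canetti et al.: the key is locked under hashes of many random bit subsets of the iris, and a noisy reading recovers the key whenever some sampled subset avoids the flipped bits. The hash-based locker, the parameter choices, and all names are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib
import random
import secrets

def lock(bits: str, key: bytes):
    # toy digital locker: pad the key with zero bytes so unlocking is checkable
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(nonce + bits.encode()).digest()
    ct = bytes(a ^ b for a, b in zip(key + b"\x00" * 16, pad))
    return nonce, ct

def unlock(bits: str, nonce: bytes, ct: bytes):
    pad = hashlib.sha256(nonce + bits.encode()).digest()
    pt = bytes(a ^ b for a, b in zip(ct, pad))
    return pt[:16] if pt[16:] == b"\x00" * 16 else None

def enroll(iris: str, key: bytes, n_lockers=200, subset=16, rng=random):
    # sample-then-lock: lock the same 16-byte key under many random subsets
    lockers = []
    for _ in range(n_lockers):
        pos = rng.sample(range(len(iris)), subset)
        lockers.append((pos, *lock("".join(iris[p] for p in pos), key)))
    return lockers

def reproduce(reading: str, lockers):
    # a noisy reading unlocks as soon as one subset avoided all flipped bits
    for pos, nonce, ct in lockers:
        key = unlock("".join(reading[p] for p in pos), nonce, ct)
        if key is not None:
            return key
    return None
```

With 200 lockers of 16 bits each over a 256-bit reading and 10 flipped bits, roughly half the subsets stay clean, so recovery succeeds with overwhelming probability.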

The keyed sponge is a well-accepted method for message authentication. It processes data at a certain rate by sequential evaluation of an underlying permutation. If the key size $k$ is smaller than the rate, currently known bounds are tight, but if it exceeds the rate, the state of the art only guarantees security up to $2^{k/2}$. We take a closer look at the key-prediction security of the sponge and close the remaining gap in the existing security analysis: we confirm key security up to close to $2^k$, regardless of the rate. The result impacts all applications of the keyed sponge and duplex that process data at a rate smaller than the key size, including the STROBE protocol framework, as well as related constructions such as HMAC-SHA-3 and the sandwich sponge.

We present PanORAMa, the first Oblivious RAM construction that achieves communication overhead $O(\log N \cdot \log \log N)$ for a database of $N$ blocks and for any block size $B=\Omega(\log N)$ while requiring client memory of only a constant number of memory blocks. Our scheme can be instantiated in the ``balls and bins'' model in which Goldreich and Ostrovsky [JACM 96] showed an $\Omega(\log N)$ lower bound for ORAM communication.
Our construction follows the hierarchical approach to ORAM design and relies on two main building blocks of independent interest: a \emph{new oblivious hash table construction} with improved amortized $O\left( \log N + \text{poly}(\log \log \lambda) \right)$ communication overhead for security parameter $\lambda$ and $N = \text{poly}(\lambda)$, assuming its input is randomly shuffled; and a complementary \emph{new oblivious random multi-array shuffle construction}, which shuffles $N$ blocks of data with communication $O(N \log\log \lambda + \frac{N\log N}{\log \lambda})$ when the input has a certain level of entropy. We combine these two primitives to improve the shuffle time in our hierarchical ORAM construction by avoiding heavy oblivious shuffles and leveraging entropy remaining in the merged levels from previous shuffles. As a result, the amortized shuffle cost is asymptotically the same as the lookup complexity in our construction.

We introduce a new type of cryptanalytic algorithm against obfuscations based on branching programs. Applying this algorithm to two recent general-purpose obfuscation schemes, one by Chen et al. (CRYPTO 2018) and the other by Bartusek et al. (TCC 2018), we show that they do not achieve the desired security. In other words, there exist two functionally equivalent branching programs whose obfuscated programs can be distinguished in polynomial time. Our strategy is to reduce the security problem of indistinguishability obfuscation to the problem of distinguishing two distributions given polynomially many samples. More precisely, we perform the obfuscation process ourselves with randomly chosen secret values to obtain independent, identically distributed samples from the distribution of evaluations of the obfuscated programs. We then use the sample variance as a new distinguisher for two functionally equivalent obfuscated programs. This statistical attack gives a new perspective on the security of indistinguishability obfuscation: one should consider the shape of the distribution of evaluations to ensure security. In other words, while most previous (weak) security proofs have been carried out with respect to an algebraic attack model or an ideal model, our attack shows that such algebraic security is not enough to achieve indistinguishability obfuscation.
Disclaimer: The authors of the BGMZ obfuscation (TCC'18) report that there are flaws in the cryptanalysis of their scheme in Section 5. In particular, the current optimal parameter choice of the BGMZ obfuscation is robust against our attack, while the attack lies outside the provable security of the BGMZ obfuscation.
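A minimal sketch of the statistical idea, assuming evaluations of two functionally equivalent obfuscated programs behave like samples from distributions with equal means but different variances (the Gaussian stand-ins below are hypothetical, not the actual evaluation distributions of the attacked schemes):

```python
import random
import statistics

def variance_distinguisher(samples, threshold):
    # output 1 if the empirical variance exceeds the threshold, i.e. the
    # samples look like they came from the wider of the two distributions
    return 1 if statistics.pvariance(samples) > threshold else 0

# toy stand-ins for evaluations of two obfuscated programs: same mean,
# different variance, so they are indistinguishable by averages alone
rng = random.Random(1)
evals_a = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # variance ~ 1
evals_b = [rng.gauss(0.0, 2.0) for _ in range(2000)]  # variance ~ 4
```

With polynomially many samples the empirical variance concentrates, so a fixed threshold between the two variances separates the distributions with overwhelming probability.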

This paper demonstrates how to achieve simulation-based strong attribute hiding against adaptive adversaries for predicate encryption (PE) schemes supporting expressive predicate families under standard computational assumptions in bilinear groups. Our main result is a simulation-based adaptively strongly partially-hiding PE (PHPE) scheme for predicates computing arithmetic branching programs (ABPs) on public attributes, followed by an inner-product predicate on private attributes. This simultaneously generalizes attribute-based encryption (ABE) for boolean formulas and ABPs as well as strongly attribute-hiding PE schemes for inner products. The proposed scheme is proven secure for any a priori bounded number of ciphertexts and an unbounded (polynomial) number of decryption keys, which is the best possible in the simulation-based adaptive security framework. This directly implies that our construction also achieves indistinguishability-based strongly partially-hiding security against adversaries requesting an unbounded (polynomial) number of ciphertexts and decryption keys. The security of the proposed scheme is derived under an (asymmetric version of the) well-studied decisional linear (DLIN) assumption. Our work resolves an open problem posed by Wee in TCC 2017, where his result was limited to the semi-adaptive setting. Moreover, our result advances the current state of the art in both the fields of simulation-based and indistinguishability-based strongly attribute-hiding PE schemes. Our main technical contribution lies in extending the strong attribute hiding methodology of Okamoto and Takashima [EUROCRYPT 2012, ASIACRYPT 2012] to the framework of simulation-based security and beyond inner products.

We introduce a new form of encryption that we name matchmaking encryption (ME). Using ME, sender S and receiver R, each characterized by its own attributes, can both specify policies the other party must satisfy in order for the message to be revealed. The main security guarantee is that of privacy-preserving policy matching: During decryption nothing is leaked beyond the fact that a match occurred/did not occur.
ME opens up new and innovative ways of secretly communicating, and enables several new applications where both participants can specify fine-grained access policies to encrypted data. For instance, in social matchmaking, S can encrypt a file containing his/her personal details and specify a policy so that the file can be decrypted only by his/her ideal partner. In turn, a receiver R will be able to decrypt the file only if S corresponds to the ideal partner defined through R's own policy.
On the theoretical side, we put forward formal security definitions for ME, as well as generic frameworks for constructing ME from functional encryption. These constructions face the main technical challenge of simultaneously checking the policies established by S and R while avoiding any leakage.
On the practical side, we construct an efficient scheme for the identity-based setting, with provable security in the random oracle model under the standard BDH assumption. We implement and evaluate our scheme and provide experimental evidence that our construction is practical. We also apply identity-based ME to a concrete use case, in particular for creating an anonymous bulletin board over a Tor network.

Threshold implementations are well known as a provably first-order secure Boolean masking scheme, even in the presence of glitches. A precondition for their security proof is a uniform input distribution at each round function, which may require an injection of fresh randomness or an increase in the number of shares. However, it is unclear whether violating the uniformity assumption causes exploitable leakage in practice. Recently, Daemen undertook a theoretical study of lossy mappings to extend the understanding of uniformity violations. We complement his work with entropy simulations and practical measurements of Keccak's round function. Our findings shed light on the necessity of mixing operations, in addition to bit permutations, in a cipher's linear layer to propagate randomness between S-boxes and prevent exploitable leakage. Finally, we argue that this result cannot be obtained by current simulation methods, further stressing the continued need for practical leakage measurements.

OCB2 is a widely standardized mode of operation of a blockcipher that aims at providing authenticated encryption. A recent report by Inoue and Minematsu (IACR ePrint report 2018/1040) indicates that OCB2 does not meet this goal. Concretely, by describing simple forging attacks, the authors demonstrate that the (sub)goal of authenticity is not achieved. The report does not question the confidentiality offered by OCB2.
In this note we show how the attacks of Inoue and Minematsu can be extended to also break the confidentiality of OCB2. We do this by constructing both IND-CCA and plaintext recovering adversaries, all of which require minimal resources and achieve overwhelming success rates.

We concentrate on machine learning techniques used for profiled side-channel analysis in the presence of imbalanced data. Such scenarios are realistic and occur frequently, for instance in the Hamming weight or Hamming distance leakage models.
To deal with the imbalanced data, we apply various balancing techniques and show that most of them help in mounting successful attacks when the data is highly imbalanced. The results with the SMOTE technique are especially encouraging, since we observe scenarios where it reduces the number of necessary measurements by more than a factor of 8.
Next, we provide an extensive comparison of machine learning and side-channel metrics, showing that machine learning metrics (especially accuracy, the most commonly used one) can be extremely deceptive. This finding calls for revisiting previous works and their results in order to properly assess the performance of machine learning in side-channel analysis.
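The deceptiveness of accuracy under Hamming-weight imbalance can be seen with a small back-of-the-envelope computation: for a uniform 8-bit intermediate, a degenerate classifier that always outputs the majority class HW = 4 scores about 27% accuracy while extracting no key information whatsoever. The snippet below illustrates this point; it is not code from the paper.

```python
from math import comb

# class priors of the Hamming-weight leakage model for a uniform 8-bit value:
# P[HW = w] = C(8, w) / 2^8, a strongly imbalanced (binomial) distribution
priors = {w: comb(8, w) / 256 for w in range(9)}

# a trivial classifier that always predicts the majority class HW = 4
# reaches this accuracy without learning anything about the key
majority_accuracy = max(priors.values())  # = 70/256, roughly 0.273
```

This is why raw accuracy must be complemented by side-channel metrics such as guessing entropy or success rate.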

In this paper we investigate the impact of decryption failures on the chosen-ciphertext security of (Ring/Module)-Learning With Errors and (Ring/Module)-Learning with Rounding based primitives. Our analysis is split into three parts: First, we introduce a technique called failure boosting to increase the failure rate of these schemes. Based on this technique, we investigate the minimal effort for an adversary to obtain a failure in three cases: when the adversary has access to a quantum computer, when they mount a multi-target attack, and when they can only perform a limited number of oracle queries. Second, we examine the amount of information that an adversary can derive from failing ciphertexts. Finally, these techniques are combined in an attack on (Ring/Module)-Learning with Errors and (Ring/Module)-Learning with Rounding based schemes with decryption failures. We provide both a theoretical analysis and an implementation to calculate the security impact, and we show that an attacker can significantly reduce the security of several candidates in the NIST post-quantum standardization process if sufficient oracle queries can be performed.

Order-preserving encryption (OPE) and order-revealing encryption (ORE) are among the core ingredients of encrypted database (EDB) systems for secure cloud storage. In this work, we study the leakage of OPE and ORE and their forward security.
We propose generic yet powerful file-injection attacks (FIAs) on OPE/ORE, targeting scenarios where order-by and range queries are supported. The FIA schemes exploit only the ideal leakage of OPE/ORE (in particular, they require neither data density nor frequency information). We further improve their efficiency using frequency statistics and a hierarchical idea, so that high-frequency values are recovered more quickly. Compared with other attacks against OPE/ORE proposed in recent years, our FIAs rely on less demanding conditions and are more effective against systems with data sharing or transfer functionality, such as encrypted email systems. We ran experiments on real datasets to test the performance; the results show that our FIAs pose a severe threat to most existing OPE and ORE schemes, with high efficiency and a 100% recovery rate.
To resist FIAs, we propose a practical compilation framework for achieving forward-secure ORE. The framework uses only simple cryptographic tools, such as pseudorandom functions, hash functions, and trapdoor permutations. It can transform most existing OPE/ORE schemes into forward-secure ORE schemes while minimizing the extra computation and storage burden. We also present a security proof and run experiments to analyze the framework's performance.
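As a sketch of why order leakage is so damaging, the following hypothetical attacker recovers a victim's numeric value with O(log N) injected files, using only the ideal leakage of OPE/ORE: the relative order of the injected and victim ciphertexts. The `order_leak` oracle is an assumption standing in for what an attacker observes in the encrypted database; it is not code from the paper.

```python
def recover_via_injection(order_leak, lo, hi):
    """Recover the victim's value in [lo, hi) by binary search over injections.

    order_leak(x) -> True iff the ciphertext of the injected value x sorts at
    or above the victim's ciphertext (exactly the ideal OPE/ORE leakage).
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if order_leak(mid):      # injected E(mid) >= victim ciphertext
            hi = mid
        else:                    # injected E(mid) sorts strictly below
            lo = mid + 1
    return lo  # smallest x with x >= victim value, i.e. the value itself
```

Each injected file costs one ciphertext, so a value in a domain of size $N$ falls after about $\log_2 N$ injections, independent of data density or frequency.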

In a setting with one sender and multiple receivers, a receiver selective opening (RSO) attack on a public key encryption (PKE) scheme considers adversaries that can corrupt some of the receivers and obtain their secret keys and plaintexts. Security against RSO attacks ensures confidentiality of the ciphertexts of uncorrupted receivers. Simulation-based RSO security against chosen ciphertext attacks (SIM-RSO-CCA) is the strongest security notion among all RSO attack scenarios. Jia, Lu, and Li (INDOCRYPT 2016) proposed the first SIM-RSO-CCA secure PKE scheme. However, their scheme uses indistinguishability obfuscation, which is not known to be constructible from any standard computational assumption. In this paper, we make two contributions toward constructing SIM-RSO-CCA secure PKE from standard computational assumptions. First, we propose a generic construction of SIM-RSO-CCA secure PKE using an IND-CPA secure PKE scheme and a non-interactive zero-knowledge proof system satisfying one-time simulation soundness. Second, we propose an efficient and concrete construction of SIM-RSO-CCA secure PKE based on the decisional Diffie-Hellman (DDH) assumption. Moreover, we give a method for efficiently expanding the plaintext space of the DDH-based construction. By applying this method, we obtain the first DDH-based SIM-RSO-CCA secure PKE scheme supporting a super-polynomially large plaintext space with compact ciphertexts.

Inoue and Minematsu [Cryptology ePrint Archive: Report 2018/1040] presented efficient forgery attacks against OCB2, and Poettering [Cryptology ePrint Archive: Report 2018/1087] presented a distinguishing attack. In this short note, based on these results, we show a plaintext recovery attack against OCB2 in the chosen plaintext and ciphertext setting.

Enumerating cryptographic keys in order of likelihood based on side-channel leakage is of significant importance in cryptanalysis. Previous algorithms enumerate the keys in optimal order; however, their space complexity is $\Omega(n^{d/2})$ when there are $d$ subkeys and $n$ candidate values per subkey. We propose a new key enumeration algorithm with space complexity bounded by $O(d^2 w + dn)$, where $w$ is a design parameter, which allows the enumeration of many more keys without exceeding the available space. The trade-off is that the enumeration order is only near-optimal, with a bounded ratio between optimal and near-optimal ranks.
Before presenting our algorithm, we provide bounds on the guessing entropy of the full key in terms of the easy-to-compute guessing entropies of the individual subkeys. We use these results to quantify the near-optimality of our algorithm's ranking and to bound its guessing entropy.
We evaluated our algorithm through extensive simulations and show that, in realistic SCA scenarios, it continues its near-optimal-order enumeration far beyond the rank at which the optimal algorithm fails due to insufficient memory. Our simulations use a new model of the true rank distribution, based on long-tail Pareto distributions, that is validated by empirical data and may be of independent interest.
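For intuition on optimal-order enumeration and its memory cost, here is a classic frontier-based enumerator for the simplest case of $d = 2$ subkeys. This is illustrative background, not the paper's near-optimal algorithm: it yields full-key candidates in non-increasing probability order, and its heap/`seen` frontier is exactly the kind of state that grows like $n^{d/2}$ for larger $d$.

```python
import heapq

def enumerate_keys(p1, p2):
    """Yield (probability, i, j) over all subkey pairs in non-increasing
    probability order, given per-subkey probability lists p1 and p2 that
    are each sorted in decreasing order."""
    heap = [(-p1[0] * p2[0], 0, 0)]   # max-heap via negated probabilities
    seen = {(0, 0)}                   # frontier bookkeeping (the memory cost)
    while heap:
        negp, i, j = heapq.heappop(heap)
        yield -negp, i, j
        # push the two successors of (i, j) on the candidate lattice
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(p1) and nj < len(p2) and (ni, nj) not in seen:
                heapq.heappush(heap, (-p1[ni] * p2[nj], ni, nj))
                seen.add((ni, nj))
```

The staircase frontier always dominates every unvisited pair, which is why the pop order is optimal; bounding that frontier is what forces the near-optimal relaxation in the low-memory setting.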

We introduce a new class of irreducible pentanomials over ${\mathbb F}_{2^m}$ of the form $f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to define the finite field extension of degree $m$. We give the exact number of operations required for computing the reduction modulo $f$. We also provide a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$ combined with our reduction process. We give the total cost of the multiplier and find that the bit-parallel multiplier defined by this new class of polynomials has improved XOR and AND complexity. Our multiplier has a time delay comparable to that of other multipliers based on the Karatsuba algorithm.
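A naive software sketch of arithmetic modulo such a pentanomial may help fix ideas (bit-masks encode polynomials over $\mathbb{F}_2$; this schoolbook loop only illustrates the reduction, not the paper's optimized bit-parallel circuit):

```python
def pentanomial(b: int, c: int) -> int:
    # f(x) = x^(2b+c) + x^(b+c) + x^b + x^c + 1 encoded as a bitmask
    return (1 << (2 * b + c)) | (1 << (b + c)) | (1 << b) | (1 << c) | 1

def clmul(a: int, b: int) -> int:
    # carry-less (F2[x]) multiplication of bitmask polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce_mod(poly: int, f: int, m: int) -> int:
    # schoolbook reduction: cancel the leading term while deg(poly) >= m;
    # each cancellation XORs in the four lower terms of the pentanomial
    for i in range(poly.bit_length() - 1, m - 1, -1):
        if (poly >> i) & 1:
            poly ^= f << (i - m)
    return poly
```

For $b = 3$, $c = 2$ (so $m = 8$), reducing $x^8$ yields $x^5 + x^3 + x^2 + 1$, as dictated by $x^m \equiv x^{b+c} + x^b + x^c + 1 \pmod{f}$.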

We present TOPPSS, the most efficient Password-Protected Secret Sharing (PPSS) scheme to date. A (t, n)-threshold PPSS, introduced by Bagherzandi et al., allows a user to share a secret among n servers so that the secret can later be reconstructed by the user from any subset of t+1 servers using only a password. It is guaranteed that any coalition of up to t corrupt servers learns nothing about the secret (or the password). In addition to providing strong protection for secrets stored online, PPSS schemes give rise to efficient Threshold PAKE (T-PAKE) protocols that armor single-server password authentication against the inherent vulnerability to offline dictionary attacks in case of server compromise.
TOPPSS is password-only, i.e., it does not rely on public keys in reconstruction, and enjoys remarkable efficiency: a single communication round, a single exponentiation per server, and just two exponentiations per client, regardless of the number of servers. TOPPSS satisfies threshold security under the (Gap) One-More Diffie-Hellman (OMDH) assumption in the random-oracle model, as in several prior efficient realizations of PPSS/T-PAKE. Moreover, we show that TOPPSS realizes the Universally Composable PPSS notion of Jarecki et al. under a generalization of OMDH, the Threshold One-More Diffie-Hellman (T-OMDH) assumption. We show that the T-OMDH and OMDH assumptions are both hard in the generic group model.
The key technical tool we introduce is a universally composable Threshold Oblivious PRF, which is of independent interest and applicability.
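The flavor of the underlying Oblivious PRF can be sketched with a toy blinded Diffie-Hellman evaluation over a small prime-order subgroup. Everything here is an illustrative assumption: the toy safe prime, the hash-to-group map, and the single-server (non-threshold) setting; this is not TOPPSS itself. The exponentiation count, however, mirrors the claimed cost profile: one exponentiation for the server, two (blind and unblind) for the client.

```python
import hashlib

P = 1019  # toy safe prime: P = 2*Q + 1 (real deployments use ~256-bit groups)
Q = 509   # prime order of the quadratic-residue subgroup of Z_P^*

def hash_to_group(pw: bytes) -> int:
    # toy hash-to-group: hash the password, then square into the QR subgroup
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % (P - 2) + 2
    return pow(h, 2, P)

def blind(pw: bytes, r: int) -> int:
    # client exponentiation 1: a = H(pw)^r hides the password from the server
    return pow(hash_to_group(pw), r, P)

def server_eval(a: int, k: int) -> int:
    # server's single exponentiation with its PRF key share k: b = a^k
    return pow(a, k, P)

def unblind(b: int, r: int) -> int:
    # client exponentiation 2: b^(1/r) = H(pw)^k, the OPRF output
    return pow(b, pow(r, -1, Q), P)
```

The server learns nothing about the password, and the client learns only $H(pw)^k$, which is what makes the password-only reconstruction possible.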

In this paper, we construct a lattice-based one-time Linkable Ring Signature (L2RS) scheme, which enables the public to verify whether two or more signatures were generated by the same signatory, whilst still preserving the anonymity of the signatory. The L2RS provides unconditional anonymity and security guarantees under the Ring Short Integer Solution (Ring-SIS) lattice hardness assumption. The proposed L2RS scheme is extended to be applied in a protocol that we call Lattice Ring Confidential Transactions (Lattice RingCT) v1.0, which forms the foundation of the privacy-preserving protocol in any post-quantum secure cryptocurrency, such as Hcash.

This paper describes two FPGA implementations for the encryption and authentication of data, based on the AES algorithm running in Galois/Counter mode (AES-GCM). Both architectures are protected against side-channel analysis attacks through the use of a threshold implementation (TI). The first architecture is fully unrolled and optimized for throughput. The second architecture uses a round-based structure, fits on a relatively small FPGA board, and is evaluated for side-channel attack resistance. We perform a Test Vector Leakage Assessment (TVLA), which shows no first-order leakage in the power consumption of the FPGA. To the best of our knowledge, our work is (1) the first to describe a throughput-optimized FPGA architecture of AES-GCM, protected against first-order side-channel information leakage, and (2) the first to evaluate the side-channel attack resistance of a TI-protected AES-GCM implementation.