We introduce a new type of cryptanalytic algorithm against obfuscations based on branching programs. Applying this algorithm to two recent general-purpose obfuscation schemes, one by Chen et al. (CRYPTO 2018) and the other by Bartusek et al. (TCC 2018), we show that they do not achieve the desired security. In other words, there exist two functionally equivalent branching programs whose obfuscated programs can be distinguished in polynomial time. Our strategy is to reduce the security of indistinguishability obfuscation to the problem of distinguishing two distributions given polynomially many samples. More precisely, we perform the obfuscation process ourselves with randomly chosen secret values to obtain independent and identically distributed samples from the distribution of evaluations of the obfuscated programs. We then use the sample variance as a new distinguisher between two functionally equivalent obfuscated programs. This statistical attack gives a new perspective on the security of indistinguishability obfuscation: to ensure security, one should consider the shape of the distribution of evaluations of the obfuscated programs. In other words, while most previous (weak) security proofs were given with respect to an algebraic attack model or an ideal model, our attack shows that such algebraic security is not enough to achieve indistinguishability obfuscation.
Disclaimer: The authors of the BGMZ obfuscation (TCC'18) report that there are flaws in the cryptanalysis of their scheme in Section 5. In particular, the current optimal parameter choice of the BGMZ obfuscation is robust against our attack, although the attack lies outside the provable security of the scheme.
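The statistical idea behind the attack can be illustrated with a toy example (hypothetical distributions, not the actual evaluation distributions of either scheme): two "functionally equivalent" programs whose evaluations share the same mean but differ in variance can be told apart from i.i.d. samples by comparing sample variances.

```python
import random
import statistics

def sample_evaluations(variance, n):
    """Toy stand-in for evaluating a freshly obfuscated program n times:
    zero-mean noise whose variance depends on the (hidden) program."""
    sigma = variance ** 0.5
    return [random.gauss(0.0, sigma) for _ in range(n)]

def distinguish(samples_a, samples_b):
    """Guess which sample set came from the higher-variance program."""
    return 0 if statistics.variance(samples_a) > statistics.variance(samples_b) else 1

random.seed(1)
evals_p0 = sample_evaluations(4.0, 2000)  # hypothetical program P0
evals_p1 = sample_evaluations(1.0, 2000)  # functionally equivalent P1
assert distinguish(evals_p0, evals_p1) == 0
```

With 2000 samples per program, the gap between the two sample variances dwarfs their statistical fluctuation, so the distinguisher succeeds with overwhelming probability.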

This paper demonstrates how to achieve simulation-based strong attribute hiding against adaptive adversaries
for predicate encryption (PE) schemes supporting expressive predicate families under standard
computational assumptions in bilinear groups. Our main result is a simulation-based adaptively strongly
partially-hiding PE (PHPE) scheme for predicates computing arithmetic branching programs (ABP) on
public attributes, followed by an inner-product predicate on private attributes. This simultaneously generalizes
attribute-based encryption (ABE) for Boolean formulas and ABPs, as well as strongly attribute-hiding
PE schemes for inner products. The proposed scheme is proven secure for any a priori bounded
number of ciphertexts and an unbounded (polynomial) number of decryption keys, which is the best possible
in the simulation-based adaptive security framework. This directly implies that our construction
also achieves indistinguishability-based strongly partially-hiding security against adversaries requesting an
unbounded (polynomial) number of ciphertexts and decryption keys. The security of the proposed scheme
is derived under the (asymmetric version of the) well-studied decisional linear (DLIN) assumption. Our work
resolves an open problem posed by Wee in TCC 2017, where his result was limited to the semi-adaptive
setting. Moreover, our result advances the current state of the art in both the fields of simulation-based
and indistinguishability-based strongly attribute-hiding PE schemes. Our main technical contribution lies
in extending the strong attribute hiding methodology of Okamoto and Takashima [EUROCRYPT 2012,
ASIACRYPT 2012] to the framework of simulation-based security and beyond inner products.

We introduce a new form of encryption that we name matchmaking encryption (ME). Using ME, sender S and receiver R, each characterized by its own attributes, can both specify policies the other party must satisfy in order for the message to be revealed. The main security guarantee is that of privacy-preserving policy matching: During decryption nothing is leaked beyond the fact that a match occurred/did not occur.
ME opens up new and innovative ways of secretly communicating, and enables several new applications where both participants can specify fine-grained access policies to encrypted data. For instance, in social matchmaking, S can encrypt a file containing his/her personal details and specify a policy so that the file can be decrypted only by his/her ideal partner. On the other end, a receiver R will be able to decrypt the file only if S corresponds to his/her ideal partner defined through a policy.
On the theoretical side, we put forward formal security definitions for ME, as well as generic frameworks for constructing ME from functional encryption. These constructions need to face the main technical challenge of simultaneously checking the policies established by S and R to avoid any leakage.
On the practical side, we construct an efficient scheme for the identity-based setting, with provable security in the random oracle model under the standard BDH assumption. We implement and evaluate our scheme and provide experimental evidence that our construction is practical. We also apply identity-based ME to a concrete use case, in particular for creating an anonymous bulletin board over a Tor network.

Threshold Implementations are well-known as a provably first-order secure Boolean masking scheme even in the presence of glitches. A precondition for their security proof is a uniform input distribution at each round function, which may require an injection of fresh randomness or an increase in the number of shares. However, it is unclear whether violating
the uniformity assumption causes exploitable leakage in practice. Recently, Daemen undertook a theoretical study of lossy mappings to extend the understanding of uniformity violations. We complement his work with entropy simulations and practical measurements of Keccak’s round function. Our findings shed light on the necessity of mixing operations in addition to bit-permutations in a cipher’s linear layer to propagate
randomness between S-boxes and prevent exploitable leakage. Finally, we argue that this result cannot be obtained by current simulation methods, further stressing the continued need for practical leakage measurements.

OCB2 is a widely standardized mode of operation of a blockcipher that aims at providing authenticated encryption. A recent report by Inoue and Minematsu (IACR EPRINT report 2018/1040) indicates that OCB2 does not meet this goal. Concretely, by describing simple forging attacks the authors demonstrate that the (sub)goal of authenticity is not reached. The report does not question the confidentiality offered by OCB2.
In this note we show how the attacks of Inoue and Minematsu can be extended to also break the confidentiality of OCB2. We do this by constructing both IND-CCA and plaintext recovering adversaries, all of which require minimal resources and achieve overwhelming success rates.

We concentrate on machine learning techniques used for profiled side-channel analysis in the presence of imbalanced data. Such scenarios are realistic and occur frequently, for instance under the Hamming weight or Hamming distance leakage models.
In order to deal with the imbalanced data, we apply various balancing techniques and show that most of them help in mounting successful attacks when the data is highly imbalanced. In particular, the results with the SMOTE technique are encouraging, since we observe scenarios where it reduces the number of necessary measurements by more than a factor of 8.
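As background, the core of SMOTE can be sketched in a few lines of pure Python (a minimal, illustrative reimplementation, not the library version used in the paper): synthetic minority-class points are created by interpolating between a minority sample and one of its nearest minority-class neighbors.

```python
import math
import random

def smote(minority, k=3, n_synthetic=10):
    """Minimal SMOTE sketch: synthesize new minority-class points by
    interpolating between a sample and one of its k nearest neighbors."""
    synthetic = []
    for _ in range(n_synthetic):
        p = random.choice(minority)
        # k nearest minority-class neighbors of p (excluding p itself)
        neighbors = sorted((q for q in minority if q is not p),
                           key=lambda q: math.dist(p, q))[:k]
        q = random.choice(neighbors)
        lam = random.random()  # random point on the segment p -> q
        synthetic.append(tuple(pi + lam * (qi - pi) for pi, qi in zip(p, q)))
    return synthetic

random.seed(0)
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, k=2, n_synthetic=5)
assert len(new_points) == 5
# Convex interpolation keeps synthetic points inside the class's bounding box.
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in new_points)
```

Because each synthetic point is a convex combination of two real minority samples, the oversampled class stays within the convex hull of the original data.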
Next, we provide extensive results comparing machine learning and side-channel metrics, where we show that machine learning metrics (and especially accuracy, the most commonly used one) can be extremely deceptive. This finding indicates the need to revisit previous works and their results in order to properly assess the performance of machine learning in side-channel analysis.

In this paper we investigate the impact of decryption failures on the chosen-ciphertext security of (Ring/Module)-Learning With Errors and (Ring/Module)-Learning with Rounding based primitives. Our analysis is split into three parts: First, we introduce a technique, called failure boosting, to increase the failure rate of these schemes. Based on this technique we investigate the minimal effort for an adversary to obtain a failure in three cases: when he has access to a quantum computer, when he mounts a multi-target attack, and when he can only perform a limited number of oracle queries. Second, we examine the amount of information that an adversary can derive from failing ciphertexts. Finally, these techniques are combined into an attack on (Ring/Module)-Learning with Errors and (Ring/Module)-Learning with Rounding based schemes with decryption failures. We provide both a theoretical analysis and an implementation to calculate the security impact, and show that an attacker can significantly reduce the security of several candidates in the NIST post-quantum standardization process if sufficient oracle queries can be performed.

Order-preserving encryption (OPE) and order-revealing encryption (ORE) are among the core ingredients of encrypted database (EDB) systems for secure cloud storage. In this work, we study the leakage of OPE and ORE and their forward security.
We propose generic yet powerful file-injection attacks (FIAs) on OPE/ORE, targeting settings that support order-by and range queries. The FIAs exploit only the ideal leakage of OPE/ORE (in particular, they require neither dense data nor frequency information). We further improve their efficiency using frequency statistics and a hierarchical idea, so that high-frequency values are recovered more quickly. Compared with other attacks against OPE/ORE proposed in recent years, our FIAs rely on less demanding conditions and are more effective against systems with data-sharing or transfer functionality, such as encrypted email systems. We conducted experiments on real datasets to test the performance, and the results show that our FIAs pose a severe threat to most existing OPE and ORE schemes, with high efficiency and a 100% recovery rate.
In order to resist FIAs, we propose a practical compilation framework for achieving forward-secure ORE. The framework uses only simple cryptographic tools, such as pseudorandom functions, hash functions, and trapdoor permutations. It can transform most existing OPE/ORE schemes into forward-secure ORE schemes while minimizing the extra computation and storage overhead. We also present a security proof and conduct experiments to analyze its performance.

In the situation where there are one sender and multiple receivers, a receiver selective opening (RSO) attack on a public key encryption (PKE) scheme considers adversaries that can corrupt some of the receivers and learn their secret keys and plaintexts. Security against RSO attacks for a PKE scheme ensures confidentiality of the ciphertexts of uncorrupted receivers. Simulation-based RSO security against chosen ciphertext attacks (SIM-RSO-CCA) is the strongest security notion among all RSO attack scenarios. Jia, Lu, and Li (INDOCRYPT 2016) proposed the first SIM-RSO-CCA secure PKE scheme. However, their scheme relies on indistinguishability obfuscation, which is not known to be constructible from any standard computational assumption. In this paper, we make two contributions toward constructing SIM-RSO-CCA secure PKE from standard computational assumptions. First, we propose a generic construction of SIM-RSO-CCA secure PKE from an IND-CPA secure PKE scheme and a non-interactive zero-knowledge proof system satisfying one-time simulation soundness. Second, we propose an efficient concrete construction of SIM-RSO-CCA secure PKE based on the decisional Diffie-Hellman (DDH) assumption. Moreover, we give a method for efficiently expanding the plaintext space of the DDH-based construction. Applying this method, we obtain the first DDH-based SIM-RSO-CCA secure PKE scheme supporting a super-polynomially large plaintext space with compact ciphertexts.

Inoue and Minematsu [Cryptology ePrint Archive: Report 2018/1040] presented efficient forgery attacks against OCB2, and Poettering [Cryptology ePrint Archive: Report 2018/1087] presented a distinguishing attack. In this short note, based on these results, we show a plaintext recovery attack against OCB2 in the chosen plaintext and ciphertext setting.

The distributed discrete logarithm (DDL) problem was introduced by Boyle et al. at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs.
Let $g$ be a generator of a multiplicative group $\mathbb{G}$. Given a random group element $g^{x}$ and an unknown integer $b \in [-M,M]$ for a small $M$, two parties $A$ and $B$ (that cannot communicate) successfully solve DDL if $A(g^{x}) - B(g^{x+b}) = b$. Otherwise, the parties err. In the DDL protocol of Boyle et al., $A$ and $B$ run in time $T$ and have error probability that is roughly linear in $M/T$. Since the error probability has a significant impact on the HSS scheme's performance, a major open problem raised by Boyle et al. was to reduce it as a function of $T$.
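The distinguished-point idea underlying such DDL protocols can be sketched as follows (toy group and parameters assumed for illustration; this mirrors the basic linear-error protocol, not the improved random walk introduced below). Each party walks forward from its group element until hitting a pseudorandom "distinguished point"; since both walks merge, the difference of step counts recovers $b$ except when a distinguished point falls between the two starting elements.

```python
import hashlib

# Toy parameters (assumed for illustration): a prime-field group and budget T.
P = 2**61 - 1      # prime modulus; the walk multiplies by G in Z_P^*
G = 3
T = 10_000         # per-party step budget

def distinguished(h):
    """Pseudorandom 'distinguished point' predicate with density about 1/1000."""
    digest = hashlib.sha256(str(h).encode()).digest()
    return int.from_bytes(digest[:4], 'big') % 1000 == 0

def party(h):
    """Walk h, h*G, h*G^2, ... and output the number of steps taken to reach
    the first distinguished point (bounded by the budget T)."""
    for i in range(T):
        if distinguished(h):
            return i
        h = (h * G) % P
    return T

# A holds g^x, B holds g^(x+b); both walks merge at the same distinguished
# point, so the difference of step counts equals b except with prob ~ M/T.
b = 7
successes = 0
for x in range(100):
    out_a = party(pow(G, 1000 + 17 * x, P))
    out_b = party(pow(G, 1000 + 17 * x + b, P))
    successes += (out_a - out_b == b)
assert successes >= 90  # failures occur only when a distinguished point
                        # lies among the b elements between the start points
```

The per-trial error probability is roughly $b/1000$ here, matching the $M/T$-type linear error of the basic protocol.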
In this paper we devise a new DDL protocol that substantially reduces the error probability to $O(M \cdot T^{-2})$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size $S$ from $O(S^2)$ to $O(S^{3/2})$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a \emph{short} interval of length $R$ in time $o(\sqrt{R})$.
Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.

Enumeration of cryptographic keys in order of likelihood based on side-channel leakages has significant importance in cryptanalysis. Previous algorithms enumerate the keys in optimal order; however, their space complexity is $\Omega(n^{d/2})$ when there are $d$ subkeys and $n$ candidate values per subkey. We propose a new key enumeration algorithm whose space complexity is bounded by $O(d^2 w + dn)$, where $w$ is a design parameter, which allows the enumeration of many more keys without exceeding the available space. The trade-off is that the enumeration order is only near-optimal, with a bounded ratio between optimal and near-optimal ranks.
Before presenting our algorithm we provide bounds on the guessing entropy of the full key in terms of the easy-to-compute guessing entropies of the individual subkeys. We use these results to quantify the near-optimality of our algorithm's ranking, and to bound its guessing entropy.
We evaluated our algorithm through extensive simulations. We show that our algorithm continues its near-optimal-order enumeration far beyond the rank at which the optimal algorithm fails due to insufficient memory, on realistic SCA scenarios. Our simulations utilize a new model of the true rank distribution, based on long tail Pareto distributions, that is validated by empirical data and may be of independent interest.
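As a small numeric illustration of the guessing-entropy quantities discussed above (toy posteriors, not the paper's bounds): the guessing entropy of a distribution is the expected rank of the correct candidate when guessing in decreasing order of probability, and the full-key guessing entropy is that of the product distribution over all subkey combinations.

```python
from itertools import product

def guessing_entropy(probs):
    """Expected number of guesses to hit the correct value when candidates
    are tried in decreasing order of probability."""
    ranked = sorted(probs, reverse=True)
    return sum(rank * p for rank, p in enumerate(ranked, start=1))

# Hypothetical posteriors for two independent 4-value subkeys
# (e.g. as produced by a side-channel distinguisher).
sub1 = [0.5, 0.3, 0.15, 0.05]
sub2 = [0.4, 0.3, 0.2, 0.1]

# The full-key posterior is the product distribution; optimal enumeration
# ranks all 16 joint candidates by probability.
joint = [p * q for p, q in product(sub1, sub2)]

ge1, ge2, ge_full = map(guessing_entropy, (sub1, sub2, joint))
assert abs(ge1 - 1.75) < 1e-9 and abs(ge2 - 2.0) < 1e-9
assert abs(ge_full - 4.8) < 1e-9
```

Note that the full-key guessing entropy (4.8) exceeds the product of the subkey guessing entropies (3.5) in this example, which is why non-trivial bounds relating the two are useful.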

We introduce a new class of irreducible pentanomials over ${\mathbb F}_{2^m}$ of the form
$f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to
define the finite field extension of degree $m$. We give the exact number of
operations required for computing the reduction modulo $f$. We also provide
a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$
combined with our reduction process. We give the total cost of the multiplier and find that the
bit-parallel multiplier defined by this new class of polynomials
has improved XOR and AND complexity. Our multiplier has a time delay comparable to that of
other multipliers based on the Karatsuba algorithm.
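A minimal sketch of the reduction arithmetic (illustrative parameters $b=3$, $c=2$, so $m=8$; we do not verify irreducibility of this particular instance, and a schoolbook carry-less multiply stands in for the paper's Karatsuba multiplier):

```python
def clmul(a, b):
    """Carry-less multiplication: polynomials over GF(2) encoded as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce_mod(p, f, m):
    """Reduce polynomial p modulo f, where deg(f) = m, by repeatedly
    cancelling the leading term with a shifted copy of f."""
    while p.bit_length() - 1 >= m:
        p ^= f << (p.bit_length() - 1 - m)
    return p

# Pentanomial from the paper's class with b = 3, c = 2:
# f(x) = x^8 + x^5 + x^3 + x^2 + 1, m = 2b + c = 8.
b, c = 3, 2
m = 2 * b + c
f = (1 << m) ^ (1 << (b + c)) ^ (1 << b) ^ (1 << c) ^ 1

def gf_mul(u, v):
    return reduce_mod(clmul(u, v), f, m)

# x^4 * x^4 = x^8 == x^5 + x^3 + x^2 + 1 (mod f)
assert gf_mul(1 << 4, 1 << 4) == 0b101101
# Multiplication in the quotient ring is associative.
assert gf_mul(5, gf_mul(7, 11)) == gf_mul(gf_mul(5, 7), 11)
```

The reduction loop makes the structural point of the paper concrete: the cost of reducing modulo $f$ is determined entirely by the sparse exponent pattern $\{2b+c,\, b+c,\, b,\, c,\, 0\}$.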

We present TOPPSS, the most efficient Password-Protected Secret Sharing (PPSS) scheme to date. A (t, n)-threshold PPSS, introduced by Bagherzandi et al., allows a user to share a secret among n servers so that the secret can later be reconstructed by the user from any subset of t+1 servers using only a password. It is guaranteed that any coalition of up to t corrupt servers learns nothing about the secret (or the password). In addition to providing strong protection for secrets stored online, PPSS schemes give rise to efficient Threshold PAKE (T-PAKE) protocols that armor single-server password authentication against its inherent vulnerability to offline dictionary attacks in case of server compromise.
TOPPSS is password-only, i.e., it does not rely on public keys in reconstruction, and enjoys remarkable efficiency: a single communication round, a single exponentiation per server, and just two exponentiations per client regardless of the number of servers. TOPPSS satisfies threshold security under the (Gap) One-More Diffie-Hellman (OMDH) assumption in the random-oracle model, as in several prior efficient realizations of PPSS/T-PAKE. Moreover, we show that TOPPSS realizes the Universally Composable PPSS notion of Jarecki et al. under a generalization of OMDH, the Threshold One-More Diffie-Hellman (T-OMDH) assumption. We show that the T-OMDH and OMDH assumptions are both hard in the generic group model.
The key technical tool we introduce is a universally composable Threshold Oblivious PRF which is of independent interest and applicability.
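The blinded-exponentiation idea behind an oblivious PRF can be sketched in the 2HashDH style with toy, insecure parameters (a single-server illustration under assumed parameters, not the T-OMDH-based threshold protocol itself): the client blinds the hashed password, the server applies its key to the blinded value, and the client unblinds without the server ever seeing the password or the PRF output.

```python
import hashlib
import random

# Toy parameters (INSECURE, illustration only): safe prime p = 2q + 1,
# working in the order-q subgroup of squares mod p.
p, q = 23, 11

def hash_to_group(pw):
    """Hash the password into the order-q subgroup of squares mod p."""
    e = int.from_bytes(hashlib.sha256(pw.encode()).digest(), 'big')
    return pow(e % p, 2, p) or 1  # squaring lands in the subgroup; avoid 0

def oprf_eval(pw, server_key):
    h = hash_to_group(pw)
    r = random.randrange(1, q)              # client's blinding exponent
    blinded = pow(h, r, p)                  # client -> server
    answered = pow(blinded, server_key, p)  # server applies its key k
    r_inv = pow(r, -1, q)                   # exponents live mod group order q
    return pow(answered, r_inv, p)          # client unblinds: h^(r*k*r^-1) = h^k

random.seed(7)
k = 9  # server's secret OPRF key
v1 = oprf_eval("correct horse", k)
v2 = pow(hash_to_group("correct horse"), k, p)
assert v1 == v2  # blinding does not change the PRF output h^k
```

Because the exponent arithmetic happens modulo the subgroup order $q$, the unblinding step exactly cancels the blinding factor, which is what makes the evaluation oblivious.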

In this paper, we construct a lattice-based one-time Linkable Ring Signature (L2RS) scheme, which enables the public to verify whether two or more signatures were generated by the same signatory, whilst still preserving the anonymity of the signatory. The L2RS provides unconditional anonymity and security guarantees under the Ring Short Integer Solution (Ring-SIS) lattice hardness assumption. The proposed L2RS scheme is extended and applied in a protocol that we call Lattice Ring Confidential Transactions (Lattice RingCT v1.0), which forms the foundation of the privacy-preserving protocol in any post-quantum secure cryptocurrency such as Hcash.

This paper describes two FPGA implementations for the encryption and authentication of data, based on the AES algorithm running in Galois/Counter mode (AES-GCM). Both architectures are protected against side-channel analysis attacks through the use of a threshold implementation (TI). The first architecture is fully unrolled and optimized for throughput. The second architecture uses a round-based structure, fits on a relatively small FPGA board, and is evaluated for side-channel attack resistance. We perform a Test Vector Leakage Assessment (TVLA), which shows no first-order leakage in the power consumption of the FPGA. To the best of our knowledge, our work is (1) the first to describe a throughput-optimized FPGA architecture of AES-GCM, protected against first-order side-channel information leakage, and (2) the first to evaluate the side-channel attack resistance of a TI-protected AES-GCM implementation.
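The TVLA methodology mentioned above is easy to illustrate: Welch's t-test compares a "fixed" and a "random" trace set at each sample point, and $|t| > 4.5$ is the conventional first-order leakage threshold. The sketch below uses synthetic traces (the paper's evaluation uses real FPGA power measurements).

```python
import random
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t-statistic between two trace sets at one sample point."""
    vx, vy = variance(x), variance(y)
    return (mean(x) - mean(y)) / ((vx / len(x) + vy / len(y)) ** 0.5)

random.seed(42)
n = 5000
noise = lambda: random.gauss(0.0, 1.0)

# Leaky sample point: the fixed-input traces carry a small mean shift.
fixed_leaky  = [0.3 + noise() for _ in range(n)]
random_leaky = [noise() for _ in range(n)]
# Clean sample point: both trace sets are statistically identical.
fixed_clean  = [noise() for _ in range(n)]
random_clean = [noise() for _ in range(n)]

# TVLA convention: |t| > 4.5 indicates first-order leakage.
assert abs(welch_t(fixed_leaky, random_leaky)) > 4.5
assert abs(welch_t(fixed_clean, random_clean)) < 4.5
```

With 5000 traces per set, even the modest 0.3-sigma mean shift produces a t-statistic around 15, far beyond the 4.5 threshold, while the clean point stays well below it.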

We study the problem of discrete distribution testing in the two-party setting. For example, in the standard closeness testing
problem, Alice and Bob each have $t$ samples from, respectively,
distributions $a$ and $b$ over $[n]$, and they need to test whether
$a=b$ or $a,b$ are $\varepsilon$-far (in the $\ell_1$ distance) for some fixed $\varepsilon>0$. This is in contrast to the well-studied one-party case, where the tester has unrestricted access to samples of both distributions, for which optimal bounds are known for a number of variations. Despite being a natural constraint in applications, the two-party setting has evaded attention so far.
We address two fundamental aspects of the two-party setting: 1) what is the communication complexity, and 2) can it be accomplished securely, without Alice and Bob learning extra information about each other's input. Besides closeness testing, we also study the independence testing problem, where Alice and Bob have $t$ samples from distributions $a$ and $b$ respectively, which may be correlated; the question is whether $a,b$ are independent or $\varepsilon$-far from being independent.
Our contribution is three-fold:
$\bullet$ Communication: we show how to gain communication efficiency as we have more samples, beyond the information-theoretic bound on $t$. Furthermore, the gain is polynomially better than what one may obtain by adapting one-party algorithms.
For the closeness testing, our protocol has communication $s =
\tilde{\Theta}_{\varepsilon}\left(n^2/t^2\right)$ as long as $t$ is at least the information-theoretic minimum number of samples. For the independence testing over domain $[n] \times [m]$, where $n\ge m$, we obtain $s = \tilde{O}_{\varepsilon}(n^2 m/t^2 + n m/t + \sqrt{m})$.
$\bullet$ Lower bounds: we prove tightness of our trade-off for closeness testing, and show that independence testing requires a tight $\Omega(\sqrt{m})$ communication for an unbounded number of samples. These lower bounds are of independent interest as, to the best of our knowledge, they are the first two-party communication lower bounds for testing problems where the inputs represent a set of i.i.d. samples.
$\bullet$ Security: we define the concept of secure distribution testing and argue that it must leak at least some minimal information when the promise is not satisfied.
We then provide secure versions of the above protocols with an
overhead that is only polynomial in the security parameter.

This paper presents the complete description of the best differentials and linear hulls in 2-round Kuznyechik. We prove that 2-round $\mathrm{MEDP}=2^{-86.66...}$ and $\mathrm{MELP}=2^{-76.739...}$. A comparison is made with similar results for the AES cipher.

This paper studies a fundamental problem regarding the security of blockchains: how the existence of multiple misbehaving pools influences the profitability of selfish mining. Each selfish miner maintains a private chain and makes it public opportunistically in order to acquire rewards incommensurate with his hash rate. We establish a novel Markov chain model to characterize all the state transitions of the public and private chains. The minimum hash rate requirement, together with the minimum delay before selfish mining becomes profitable, is derived in closed form. The former reduces to 21.48% with symmetric selfish miners, while competition among miners with asymmetric hash rates raises the profitability threshold. The profitable delay increases as the hash rate of the selfish miners decreases, making mining pools more cautious about performing selfish mining.
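For background, the classic single-attacker selfish-mining strategy (Eyal and Sirer) is easy to simulate; the paper's multi-pool Markov chain model generalizes this setting. The sketch below is a Monte Carlo of the standard single-attacker state machine, not the paper's model.

```python
import random

def selfish_revenue(alpha, gamma, n_blocks=200_000):
    """Monte Carlo of the classic single-attacker selfish-mining strategy;
    returns the attacker's share of blocks on the main chain.
    alpha = attacker hash-rate fraction, gamma = fraction of honest miners
    that build on the attacker's block during a fork race."""
    sm, honest = 0, 0
    lead = 0          # private-chain lead of the attacker
    race = False      # one attacker block vs one honest block at equal height
    for _ in range(n_blocks):
        if random.random() < alpha:            # attacker finds a block
            if race:
                sm += 2                        # extends own branch, wins race
                race = False
            else:
                lead += 1
        else:                                  # honest network finds a block
            if race:
                if random.random() < gamma:    # honest miner on attacker branch
                    sm += 1; honest += 1
                else:
                    honest += 2
                race = False
            elif lead == 0:
                honest += 1
            elif lead == 1:
                race = True                    # attacker publishes; fork race
                lead = 0
            elif lead == 2:
                sm += 2                        # publish both, orphan honest block
                lead = 0
            else:
                sm += 1                        # reveal one block, keep the lead
                lead -= 1
    return sm / (sm + honest)

random.seed(0)
assert selfish_revenue(0.45, 0.0) > 0.50  # profitable: share exceeds hash rate
assert selfish_revenue(0.20, 0.0) < 0.20  # below threshold: selfish mining loses
```

At 45% hash rate the simulated revenue share is well above 45%, while at 20% (below the single-attacker threshold) the strategy earns less than honest mining, consistent with the existence of a profitability threshold.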

Private information retrieval (PIR) is a fundamental tool for preserving query privacy when accessing outsourced data. All previous PIR constructions have significant costs preventing widespread use. In this work, we present private stateful information retrieval (PSIR), an extension of PIR that allows clients to be stateful and maintain information between multiple queries. Our design of the PSIR primitive maintains three important properties of PIR: multiple clients may query simultaneously without complex concurrency primitives, query privacy is maintained even if the server colludes with other clients, and new clients can enroll in the system by interacting exclusively with the server.
We present a PSIR framework that reduces an online query to performing one single-server PIR on a sub-linear number of database records. All other operations beyond the single-server PIR consist of cryptographic hashes or plaintext operations. In practice, the dominant resource costs stem from the public-key operations involved in PIR. By reducing the size of the database input to PIR, we are able to limit expensive computation and avoid transmitting large ciphertexts. We show that various instantiations of PSIR reduce server CPU by up to 10x and online network costs by up to 10x over the previous best PIR construction.