Randomized moduli in the Residue Number System (RNS) generate effectively large noise and
make it quite difficult to recover a secret key $K$ from only a few observations of the Hamming distances
$H=(H_0, ..., H_{d-1})$ that result from changes to the state variable. Since Hamming distances follow a Gaussian distribution, while most statistical test suites, such as NIST's, evaluate discrete uniform distributions, we choose to use side-channel attacks as a tool to evaluate the randomisation of Hamming distances. This paper analyses the resilience against Correlation Power Analysis (CPA) and Differential Power Analysis (DPA) when the cryptographic system is protected against Simple Power Analysis (SPA) by a Montgomery Powering Ladder (MPL). While both analyses use only information on the current state, DPA Square crosses the information of all the states. We emphasize that DPA Square performs better than DPA and CPA, and we show that the number of observations $S$ needed to perform an attack increases with the number of moduli $n$. For Elliptic Curve Cryptography (ECC), using a Monte Carlo simulation, we conjecture that $S = O((2n)!/(n!)^2)$.
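The conjectured bound is the central binomial coefficient $\binom{2n}{n} = (2n)!/(n!)^2$, which grows roughly like $4^n/\sqrt{\pi n}$. A minimal sketch of how quickly the required number of observations grows with the number of moduli (the example values of $n$ are ours, not the paper's):

```python
from math import comb

# S is conjectured to grow like the central binomial
# coefficient C(2n, n) = (2n)!/(n!)^2.
def conjectured_observations(n: int) -> int:
    return comb(2 * n, n)

for n in (2, 4, 8, 16):
    print(n, conjectured_observations(n))
# n = 2 -> 6, n = 4 -> 70, n = 8 -> 12870, ...
```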

Deciding whether a Boolean equation system has a solution is an NP-complete problem, and finding a solution is NP-hard. In this paper, we present a quantum algorithm to decide whether a Boolean equation system F has a solution and, if it does, to compute one with any given success probability. The complexity of the algorithm is polynomial in the size of F and the condition number of F. As a consequence, we achieve exponential speedup for solving sparse Boolean equation systems whose condition numbers are small. We apply the quantum algorithm to the cryptanalysis of the stream cipher Trivium, the block cipher AES, the hash function SHA-3/Keccak, and multivariate public key cryptosystems, and show that they are secure under quantum algebraic attack only if the condition numbers of the corresponding equation systems are large.

How to efficiently search over encrypted data is an important and interesting problem in the cloud era. To solve it, Boneh et al. introduced the notion of public key encryption with keyword search
(PEKS), in 2004. However, in almost all existing PEKS schemes an inside adversary may recover the keyword from a given trapdoor by exhaustively guessing the keywords offline. How to resist the inside keyword guessing attack in PEKS remains a hard problem. In this paper we introduce the notion of Public-key Authenticated Encryption with Keyword Search (PAEKS) to solve the problem, in which the data sender not only encrypts a keyword but also authenticates it, so that a verifier is convinced that the encrypted keyword can only have been generated by the sender. We propose a concrete and efficient construction of PAEKS, and prove its security based on simple and static assumptions in the random oracle model under the given security models. Experimental results show that our scheme is comparable in efficiency to Boneh et al.'s scheme.

Masking and hiding schemes represent a well-researched and successful option when considering side-channel countermeasures. Still, such measures increase the implementation cost in terms of power consumption, clock cycles, and random number generation. In fact, the higher the order of protection against side-channel adversaries, the higher the implementation cost of the countermeasures.
S-boxes represent the most vulnerable part of an implementation when considering a side-channel adversary. In this paper, we investigate how to generate S-boxes that have improved resilience against varying orders of side-channel attacks while minimising the implementation cost. We examine whether S-boxes generated against a certain order of attack also represent a good solution when considering different orders of attack.
We demonstrate that we can successfully generate S-boxes resilient against a given attack order, although the improvements are small. As a result, S-boxes that are resilient against first-order attacks remain resilient against higher-order attacks, which saves computational effort when designing S-boxes resilient to higher-order side-channel attacks.

We show a protocol for two-server oblivious RAM (ORAM) that is simpler and more efficient than the best prior work. Our construction combines any tree-based ORAM with an extension of a two-server private information retrieval scheme by Boyle et al., and is able to avoid recursion and thus use only one round of interaction. In addition, our scheme has a very cheap initialization phase, making it well suited for applications to RAM-based secure computation.
A practical instantiation of our protocol has excellent concrete parameters: for storing an $N$-element array of arbitrary-size data blocks with statistical security parameter $\lambda$, the servers each store $4N$ encrypted blocks, the client stores $\lambda+2 \log N$ blocks, and the total communication per logical access is roughly $10\log N$ encrypted blocks.
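As a sketch of what these concrete parameters mean in practice (the example values $N = 2^{20}$ and $\lambda = 40$ are our own illustration, not an experiment from the paper):

```python
from math import log2

def two_server_oram_costs(N: int, lam: int) -> dict:
    """Concrete costs stated in the abstract: each server stores 4N blocks,
    the client stores lam + 2*log N blocks, and each logical access costs
    roughly 10*log N blocks of communication."""
    logN = int(log2(N))
    return {
        "server_blocks": 4 * N,
        "client_blocks": lam + 2 * logN,
        "comm_blocks_per_access": 10 * logN,
    }

# For a million-entry array with lambda = 40: the client keeps only 80 blocks
# and each access moves about 200 blocks.
print(two_server_oram_costs(2**20, 40))
```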

Profiled side-channel attacks represent the most powerful category of side-channel attacks, and numerous methods promise to work well across different scenarios. Still, the area is constantly improving: we started with the template attack and then moved to various machine learning techniques that outperform the template attack in certain settings. Recently, deep learning techniques have brought the promise of even better results.
In this paper, we ask whether deep learning is actually better than machine learning and, if so, in exactly which situations. To this end, we compare several machine learning techniques with a well-known deep learning technique -- convolutional neural networks -- in a number of scenarios.
Our results show that convolutional neural networks indeed outperform machine learning in several scenarios, but that often there is no compelling reason to use such a complex technique. In fact, when comparing techniques without extra steps such as pre-processing, we see a clear advantage for deep learning only when the level of noise is small and the numbers of measurements and features are high. In all other tested situations, machine learning performs the same or even better, at a significantly lower computational cost. Finally, we conduct a small experiment that opens the question of whether convolutional neural networks are actually the best choice in the SCA context.

Poorly chosen passwords were, and remain, a pervasive problem. Most password alternatives (such as two-factor authentication) may increase cost and arguably hurt the usability of the system. This is of special significance for low-cost IoT devices.
Users choosing weak passwords not only compromise themselves, but the whole ecosystem. For example, common and default passwords in IoT devices were exploited by hackers to create botnets and mount severe attacks on large Internet services, such as the Mirai botnet DDoS attack.
We present a method to help protect the Internet from such large-scale attacks. We enable a server to identify popular passwords (heavy hitters) and publish a list of over-popular passwords that must be avoided. This filter ensures that no single password can be used to compromise a large percentage of the users. The list is dynamic and can be changed as new users are added or when current users change their passwords. We apply maliciously secure two-party computation and differential privacy to protect the users' password privacy. Our solution does not require extra hardware or cost, and is transparent to the user.
The construction is secure even against a malicious coalition of devices which tries to manipulate the protocol in order to hide the popularity of some password that the attacker is exploiting. We show a proof-of-concept implementation and analyze its performance.
Our construction can also be used in any setting where one wishes to privately learn heavy hitters in the presence of an active malicious adversary, for example, learning the most popular sites accessed via the Tor network.

The multiplicative complexity of a Boolean function is the minimum number of AND gates that are necessary and sufficient to implement the function over the basis (AND, XOR, NOT). Finding the multiplicative complexity of a given function is computationally intractable, even for functions with a small number of inputs. Turan et al. showed that $n$-variable Boolean functions can be implemented with at most $n-1$ AND gates for $n\leq 5$. A counting argument shows that, for $n \geq 7$, there exist $n$-variable Boolean functions with multiplicative complexity of at least $n$. In this work, we propose a method to find the multiplicative complexity of Boolean functions by analyzing circuits with a particular number of AND gates and utilizing the affine equivalence of functions. We use this method to study the multiplicative complexity of 6-variable Boolean functions, and calculate the multiplicative complexities of all 150357 affine equivalence classes. We show that any 6-variable Boolean function can be implemented using at most 6 AND gates.
Additionally, we exhibit specific 6-variable Boolean functions which have multiplicative complexity 6.
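As a rough sanity check on the figure of 150357 classes (a back-of-the-envelope estimate of our own, not the paper's exact computation), dividing the $2^{2^6}$ Boolean functions of 6 variables by the size of the affine transformation group $g(x) = f(Ax + a) + b\cdot x + c$ gives an orbit-counting lower bound on the number of classes:

```python
# Lower bound on the number of affine equivalence classes of 6-variable
# Boolean functions: total functions divided by the size of the acting group
# (each orbit has at most |group| elements, so #classes >= |X| / |group|).
n = 6
total_functions = 2 ** (2 ** n)      # 2^64 functions of 6 variables

gl = 1                               # |GL(n, 2)| = prod_{i<n} (2^n - 2^i)
for i in range(n):
    gl *= 2 ** n - 2 ** i

# Transformations (A, a, b, c): invertible A, input shift a (2^n choices),
# added linear mask b (2^n choices), added constant c (2 choices).
group_size = gl * (2 ** n) * (2 ** n) * 2
lower_bound = total_functions // group_size
print(lower_bound)  # about 1.1e5, below the exact count of 150357
```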

Consider the following secret-sharing problem. Your goal is to distribute a long file $s$ between $n$ servers such that $(d-1)$-subsets cannot recover the file, $(d+1)$-subsets can recover the file, and $d$-subsets should be able to recover $s$ if and only if they appear in some predefined list $L$. How small can the information ratio (i.e., the number of bits stored on a server per each bit of the secret) be?
We initiate the study of such $d$-uniform access structures, and view them as a useful scaled-down version of general access structures. Our main result shows that, for constant $d$, any $d$-uniform access structure admits a secret sharing scheme with a *constant* asymptotic information ratio of $c_d$ that does not grow with the number of servers $n$. This result is based on a new construction of $d$-party Conditional Disclosure of Secrets (Gertner et al., JCSS '00) for arbitrary predicates over $n$-size domain in which each party communicates at most four bits per secret bit.
In both settings, previous results achieved a non-constant information ratio that grows asymptotically with $n$, even for the simpler (and widely studied) special case of $d=2$. Moreover, our results provide a unique example of a natural class of access structures $F$ that can be realized with an information rate smaller than its bit-representation length $\log |F|$ (i.e., $\Omega( d \log n)$ for $d$-uniform access structures), showing that amortization can beat the representation size barrier.
Our main result applies to exponentially long secrets, and so it should be mainly viewed as a barrier against amortizable lower-bound techniques. We also show that in some natural simple cases (e.g., low-degree predicates), amortization kicks in even for quasi-polynomially long secrets. Finally, we prove some limited lower-bounds, point out some limitations of existing lower-bound techniques, and describe some applications to the setting of private simultaneous messages.

We investigate the relationships between theoretical studies of leaking cryptographic devices and concrete security evaluations with standard side-channel attacks. Our contributions are in four parts. First, we connect the formal analysis of the masking countermeasure proposed by Duc et al. (Eurocrypt 2014) with the Eurocrypt 2009 evaluation framework for side-channel key recovery attacks. In particular, we re-state their main proof for the masking countermeasure based on a mutual information metric, which is frequently used in concrete physical security evaluations. Second, we discuss the tightness of the Eurocrypt 2014 bounds based on experimental case studies. This allows us to conjecture a simplified link between the mutual information metric and the success rate of a side-channel adversary, ignoring technical parameters and proof artifacts. Third, we introduce heuristic (yet well-motivated) tools for the evaluation of the masking countermeasure when its independent leakage assumption is not perfectly fulfilled, as is frequently encountered in practice. Thanks to these tools, we argue that masking with non-independent leakages may provide improved security levels in certain scenarios. Finally, we consider the tradeoff between the measurement complexity and the key enumeration time complexity in divide-and-conquer side-channel attacks, and show that these complexities can be lower-bounded based on the mutual information metric, using simple and efficient algorithms. The combination of these observations enables significant reductions of the evaluation costs for certification bodies.

Cryptocurrencies allow users to securely transfer money without relying on a trusted intermediary, and the transparency of their underlying ledgers also enables public verifiability. This openness, however, comes at a cost to privacy, as even though the pseudonyms users go by are not linked to their real-world identities, all movement of money among these pseudonyms is traceable. In this paper, we present M{\"o}bius, an Ethereum-based tumbler or mixing service. M{\"o}bius achieves strong notions of anonymity, as even malicious senders cannot identify which pseudonyms belong to the recipients to whom they sent money, and is able to resist denial-of-service attacks. It also achieves a much lower off-chain communication complexity than all existing tumblers, with senders and recipients needing to send only two initial messages in order to engage in an arbitrary number of transactions.

The introduction of summation polynomials for elliptic curves by Semaev has opened
up new avenues of investigation in
index calculus type algorithms for the elliptic curve discrete logarithm problem, and several recent
papers have explored their use.
Most papers use Gröbner basis computations at some point. We question whether Gröbner
bases are needed at all, and we propose a faster algorithm to solve the ECDLP that involves
neither Gröbner basis computations nor a linear algebra step.
We further propose an even faster algorithm that involves no
Gröbner basis computations, no linear algebra step, and no summation polynomials.
Our algorithms are aimed at prime-order fields, although they are valid for any finite field.
We give a complexity analysis of our algorithms and provide
extensive computational data.

In Attribute-Based Signatures (ABS; first defined by Maji, Prabhakaran and Rosulek, CT-RSA 2011) an authority can generate multiple signing keys, where each key is associated with an attribute $x$. Messages are signed with respect to a constraint $f$, such that a key for $x$ can sign messages with respect to $f$ only if $f(x) = 0$. The security requirements are unforgeability and key privacy (signatures should not expose the specific signing key used). In (single-hop) Homomorphic Signatures (HS; first defined by Boneh and Freeman, PKC 2011), given a signature for a data-set $x$, one can evaluate a signature for the pair $(f(x),f)$, for functions $f$. In context-hiding HS, evaluated signatures do not reveal information about the original (pre-evaluated) signatures.
In this work we start by showing that these two notions are in fact equivalent. The first implication of this equivalence is a new lattice-based ABS scheme for polynomial-depth circuits, based on the HS construction of Gorbunov, Vaikuntanathan and Wichs (GVW; STOC 2015).
We then construct a new ABS candidate from a worst case lattice assumption (SIS), with different parameters. Using our equivalence again, now in the opposite direction, our new ABS implies a new lattice-based HS scheme with different parameter trade-off, compared to the aforementioned GVW.

Authenticated Encryption with Associated Data (AEAD) plays a significant role in cryptography because of its ability to provide integrity, confidentiality and authenticity at the same time. With the emergence of security at the edge of the computing fabric, in devices such as sensors and smartphones, there is a growing need for lightweight AEAD ciphers. Currently, a worldwide contest, titled CAESAR, is being held to decide on a set of AEAD ciphers distinguished by their security, run-time performance, energy efficiency and low area budget. For an accurate evaluation of CAESAR candidates, it is of utmost importance to have independent and thorough optimization for each of the ciphers, for both their hardware and software implementations.
In this paper, we have carried out an evaluation of optimized hardware implementations of the AEAD ciphers selected in the CAESAR third round. We specifically focus on manual optimization of the micro-architecture, evaluations for ASIC technology libraries, and the effect of the CAESAR APIs on performance. While these have been studied for FPGA platforms and standalone cipher implementations, to the best of our knowledge this is the first detailed ASIC benchmarking of CAESAR candidates including manual optimization. In this regard, we benchmarked all prior reported designs, including the code generated by high-level synthesis flows.
Detailed optimization studies are reported for NORX, CLOC and Deoxys-I. Our pre-layout results using commercial ASIC technology library and synthesis tools show that optimized NORX is 40.81% faster and 18.02% smaller, optimized CLOC is 38.30% more energy efficient and 20.65% faster and optimized Deoxys-I is 35.16% faster, with respect to the best known results. Similar or better performance results are also achieved for FPGA platforms.

The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications such as authentication protocols, pseudorandom generators/functions and even cryptomania tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. In this paper, we answer this question affirmatively by showing that CRH is implied by (the two most common variants of) LPN. More specifically, for any constant $\epsilon>0$, assume that
(1) the low-noise LPN problem (i.e., at noise rate $1/\sqrt{n}$) is $2^{4\sqrt{n}/\log n}$-hard given $q=n^{3+\epsilon}$ samples;
(2) or that the constant-noise LPN problem is $2^{n^{0.5+\epsilon}}$-hard;
then there exist CRH functions with constant (resp., poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates. Our technical route LPN$\rightarrow$bSVP$\rightarrow$CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE$\rightarrow$SIS$\rightarrow$CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) and enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem.
In addition to establishing feasibility, we also discuss the practical relevance of the CRH functions constructed (from the hardness of LPN). Interestingly, the SHA-3 proposal Fast Syndrome Based (FSB) hash resembles a concrete (but aggressive) instantiation of the LPN-based CRH construction. Furthermore, we show how to implement the polynomially shrinking CRH functions more efficiently using idealized heuristics, such as assuming that a block cipher (keyed by a public random string) behaves like a random permutation.

Very recently, a key exchange scheme called HK17 was submitted to NIST as a candidate of the standard of post-quantum cryptography. The HK17 scheme employs some hypercomplex numbers as the basic objects, such as quaternions and octonions. In this paper, we show that HK17 is insecure since a passive adversary can recover the shared key in polynomial time.

In November 2017, Juan Pedro Hecht and Jorge Alejandro Kamlofsky submitted HK17, a quaternion/octonion-based Diffie-Hellman key agreement protocol, to the NIST post-quantum cryptography project. Daniel J. Bernstein and Tanja Lange showed how to break the scheme in O(p) steps, where p is the modulus used in the scheme. One may wonder whether the scheme could be secure if p were sufficiently large (e.g., 1000 bits long). In this note, we show that the scheme can be broken by solving a homogeneous quadratic equation system of eight equations in four unknowns. Thus, no matter how large p is, the scheme can be trivially broken using Kipnis and Shamir's relinearization techniques.
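To illustrate why the attack cost is independent of the size of p: when a quadratic system has more independent equations than distinct monomials, linearization (the trivial case of relinearization) reduces it to linear algebra. A toy sketch over GF(p) with two unknowns and our own made-up coefficients, not the actual HK17 equations:

```python
# Toy linearization of a homogeneous quadratic system over GF(p): treat each
# monomial x^2, x*y, y^2 as a fresh linear unknown, solve the resulting
# linear system, then recover x and y from the monomial values.
p = 103  # p % 4 == 3, so square roots mod p are a^((p+1)//4)

def solve_linear_mod_p(A, b, p):
    """Gauss-Jordan elimination over GF(p) for a square invertible system."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] % p:
                f = M[r][col]
                M[r] = [(v - f * w) % p for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Secret solution (x, y) = (5, 7); monomials (x^2, x*y, y^2) = (25, 35, 49).
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]   # invertible coefficient matrix
secret_monomials = (25, 35, 49)
b = [sum(c * m for c, m in zip(row, secret_monomials)) % p for row in A]

X, Y, Z = solve_linear_mod_p(A, b, p)     # recover the monomial values
x = pow(X, (p + 1) // 4, p)               # square root of x^2 mod p
y = Y * pow(x, p - 2, p) % p              # y = (x*y) / x
assert (x * x % p, x * y % p, y * y % p) == (X, Y, Z)
print(x, y)  # prints 98 96, i.e. (-5, -7): a homogeneous system fixes (x, y) only up to sign
```

The cost is a single linear solve, polynomial in the number of monomials and in log p, which is why enlarging p cannot rescue the scheme.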

DPA attacks usually exhibit a "divide-and-conquer" property: the adversary needs to enumerate only a small space of the key (a key sub-space) when performing the DPA attack. This is achieved trivially in the outer rounds of a cryptographic implementation, since intermediates depend on only a few key bits. In the inner rounds, however, intermediates depend on too many key bits to make DPA practical, or even to pose an advantage over cryptanalysis. For this reason, DPA countermeasures may be deployed only in the outer rounds if performance or efficiency are critical. This paper shows a DPA attack exploiting leakage from the third round of a Feistel cipher, such as DES. We require the ability to fix inputs, but we do not place any special restriction on the leakage model. The complexity of the attack is that of two to three DPA attacks on the first round of DES plus some minimal differential cryptanalysis.
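To make the divide-and-conquer property concrete, here is a minimal toy simulation (our own illustration, not the paper's third-round attack): a correlation-style DPA that recovers one 4-bit key nibble from simulated first-round leakage, enumerating only 16 candidates. The PRESENT S-box, Hamming-weight model, noise level and trace count are all our assumptions.

```python
import random

# PRESENT's 4-bit S-box, used here only as a convenient nonlinear target.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
HW = [bin(v).count("1") for v in range(16)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(1)
true_key = 0xA
plaintexts = [random.randrange(16) for _ in range(500)]
# Simulated traces: Hamming weight of the S-box output plus Gaussian noise.
traces = [HW[SBOX[pt ^ true_key]] + random.gauss(0, 0.5) for pt in plaintexts]

def cpa_recover(plaintexts, traces):
    # Divide and conquer: enumerate only the 16 candidate nibbles,
    # never the full key, and keep the best-correlated hypothesis.
    scores = [
        abs(pearson([HW[SBOX[pt ^ k]] for pt in plaintexts], traces))
        for k in range(16)
    ]
    return max(range(16), key=lambda k: scores[k])

print(hex(cpa_recover(plaintexts, traces)))  # recovers the true key, 0xa
```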

The security of almost any real-world distributed system today depends on the participants having some "reasonably accurate" sense of current real time. Indeed, to name one example, the very authenticity of practically any communication on the Internet today hinges on the ability of the parties to accurately detect revocation of certificates, or expiration of passwords or shared keys.
However, as recent attacks show, the standard protocols for determining time are subvertible, resulting in widespread security loss. Worse yet, we do not have security notions for network time protocols that (a) can be rigorously asserted and (b) rigorously guarantee security of applications that require a sense of real time.
We propose such notions, within the universally composable (UC) security framework. That is, we formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols, and how they can be used to assert security of time-reliant applications --- specifically, certificates with revocation and expiration times. This allows for relatively clear and modular treatment of the use of time in security-sensitive systems.
Our modeling and analysis are done within the existing UC framework, in spite of its asynchronous, event-driven nature. This allows incorporating the use of real time within the existing body of analytical work done in this framework. In particular it allows for rigorous incorporation of real time within cryptographic tools and primitives.

Selfish mining is a well-known mining attack strategy discovered by Eyal and Sirer in 2014. Since then, the attackers' strategy space has been extended by many works, but these works analyze the strategy and behavior of a single attacker, under the assumption that there is only one attacker in the blockchain network. However, a proof-of-work blockchain is likely to have several attackers, who may act independently of one another instead of sharing information and attacking the blockchain as a whole. In this paper, we are, to the best of our knowledge, the first to analyze miners' behavior in a proof-of-work blockchain with several attackers, by establishing a new model. Based on our model, we extend the attackers' strategy space by proposing a new strategy set, publish-n. Meanwhile, we revisit other attacking strategies, such as selfish mining and stubborn mining, in our model to explore whether these strategies still work when there are several attackers. We compare the performance of different strategies through the relative stale block rate of the attackers. In a proof-of-work blockchain model with two attackers, strategy publish-n can beat selfish mining by up to 26.3%.
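For context on the single-attacker baseline this work extends: Eyal and Sirer's closed-form expression gives a selfish miner's relative revenue as a function of its hash-power share alpha and the fraction gamma of honest miners that mine on the attacker's block in a tie. A sketch using their published formula (the example values of alpha are our own):

```python
def selfish_relative_revenue(alpha: float, gamma: float) -> float:
    """Eyal-Sirer relative revenue of a single selfish-mining pool.
    alpha: pool's share of the total hash power;
    gamma: fraction of honest miners that build on the pool's block in a tie."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) \
          - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

# With gamma = 0 the profitability threshold is alpha = 1/3: below it selfish
# mining earns less than honest mining, above it more.
print(selfish_relative_revenue(0.30, 0.0))  # below 0.30: honest mining wins
print(selfish_relative_revenue(0.40, 0.0))  # above 0.40: selfish mining wins
```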