We describe a new Schnorr-based multi-signature scheme (i.e., a protocol which allows a group of signers to produce a short, joint signature on a common message) called MuSig, provably secure in the plain public-key model (meaning that signers are only required to have a public key, but do not have to prove knowledge of the corresponding private key to a certification authority or to other signers before engaging in the protocol). MuSig improves over the state-of-the-art scheme of Bellare and Neven (ACM-CCS 2006) and its variants by Bagherzandi et al. (ACM-CCS 2008) and Ma et al. (Des. Codes Cryptogr., 2010) in two respects: (i) it is simple and efficient, having the same key and signature size as standard Schnorr signatures; (ii) it allows key aggregation, which informally means that the joint signature can be verified exactly as a standard Schnorr signature with respect to a single ``aggregated'' public key which can be computed from the individual public keys of the signers. To the best of our knowledge, this is the first multi-signature scheme provably secure in the plain public-key model which allows key aggregation. As an application, we explain how our new multi-signature scheme could improve both performance and user privacy in Bitcoin.
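Key aggregation can be sketched as follows. This is a simplified, insecure toy (a tiny Schnorr group, and none of MuSig's nonce-commitment rounds), intended only to show that verification reduces to a standard Schnorr check against a single aggregated key; all parameters and helper names are illustrative, not the paper's specification.

```python
import hashlib

# Toy Schnorr group: p = 2q + 1 with q prime, g of order q.
# Far too small for real use -- illustration only.
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def key_aggregate(pubkeys):
    # a_i = H(L, X_i) binds each key to the full key set L (rogue-key defense);
    # the aggregated key is X~ = prod_i X_i^{a_i}.
    L = sorted(pubkeys)
    coeffs = [H(L, X) for X in pubkeys]
    X_agg = 1
    for X, a in zip(pubkeys, coeffs):
        X_agg = X_agg * pow(X, a, p) % p
    return X_agg, coeffs

def cosign(secrets, nonces, msg):
    # Each signer i contributes R_i = g^{r_i} and s_i = r_i + c*a_i*x_i.
    pubkeys = [pow(g, x, p) for x in secrets]
    X_agg, coeffs = key_aggregate(pubkeys)
    R = 1
    for r in nonces:
        R = R * pow(g, r, p) % p
    c = H(X_agg, R, msg)
    s = sum(r + c * a * x for r, a, x in zip(nonces, coeffs, secrets)) % q
    return R, s

def verify(X_agg, msg, sig):
    # Exactly the standard Schnorr check, against the single key X~.
    R, s = sig
    c = H(X_agg, R, msg)
    return pow(g, s, p) == R * pow(X_agg, c, p) % p
```

The check works because $g^s = g^{\sum r_i} \cdot g^{c \sum a_i x_i} = R \cdot \tilde{X}^c$, so a verifier never needs the individual public keys.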

In this work, we significantly improve the efficiency of non-malleable codes in the split-state model by constructing a code with codeword length $|s|+O(k)$, where $|s|$ is the length of the message and $k$ is the security parameter. This is a substantial improvement over previous constructions, both asymptotically and concretely.
Our construction relies on a new primitive which we define and study, called
$\ell$-more extractable hash functions. This notion, which may be of
independent interest, guarantees that if an adversary who is given access to
$\ell \in \mathbb{N}$ precomputed hash values $v_{1},\dots, v_{\ell}$
produces a new valid hash value $\tilde v$, then it must know a pre-image of
$\tilde v$. This is a stronger notion than the one by Bitansky et al. (Eprint
'11) and Goldwasser et al. (ITCS '12, Eprint '14), which considers adversaries
that get no access to precomputed hash values prior to producing their own
value. By appropriately relaxing the extractability requirement
(without hurting the applicability of the primitive),
we instantiate $\ell$-more extractable hash functions under the same
assumptions used for the previous extractable hash functions by Bitansky et al. and Goldwasser et al. (a variant of the
Knowledge of Exponent Assumption).

In this paper, we ask whether convolutional neural networks (CNNs) are more suitable for side-channel analysis (SCA) scenarios than other machine learning techniques, and if so, in which situations.
Our results indicate that convolutional neural networks indeed outperform other machine learning techniques in several scenarios when considering accuracy. Still, there is often no compelling reason to use such a complex technique. In fact, when comparing techniques without extra steps such as preprocessing, we see an obvious advantage for convolutional neural networks only when the level of noise is small and the number of measurements and features is high. The other tested settings show that simpler machine learning techniques, at a significantly lower computational cost, perform similarly or even better. The experiments with the guessing entropy metric indicate that simpler methods like random forest or XGBoost perform better than convolutional neural networks for the datasets we investigated. Finally, we conduct a small experiment that opens the question whether convolutional neural networks are actually the best choice in the side-channel analysis context, since there seems to be no advantage in preserving the topology of the measurements.

In August 2015 the U.S. National Security Agency (NSA)
released a major policy statement on the need for post-quantum cryptography (PQC). This announcement will be a great stimulus to the
development, standardization, and commercialization of new quantum-safe
algorithms. However, certain peculiarities in the wording and timing
of the statement have puzzled many people and given rise to much
speculation concerning the NSA, elliptic curve cryptography (ECC), and
quantum-safe cryptography. Our purpose is to attempt to evaluate some
of the theories that have been proposed.

This paper presents an FPGA implementation
of the Niederreiter cryptosystem
using binary Goppa codes,
including modules for encryption, decryption,
and key generation.
We improve over previous implementations in terms of
efficiency (time-area product and raw performance)
and security level.
Our implementation is constant time
in order to protect against timing side-channel analysis.
The design is fully parameterized,
using code-generation scripts,
in order to support a wide range of parameter choices
for security, including binary field size,
the degree of the Goppa polynomial,
and the code length.
The parameterized design allows us to choose
design parameters for time-area trade-offs
in order to support a wide variety of applications
ranging from smart cards
to server accelerators.
For parameters that are considered to provide
``128-bit post-quantum security'',
our time-optimized implementation
requires 966,400 cycles
for the generation of both
public and private portions of a key
and 14,291 cycles to decrypt a ciphertext.
The time-optimized design uses
only 121,806 ALMs (52~% of the available logic)
and 961 RAM blocks (38~% of the available memory),
and results in a design that runs at about 250~MHz
on a medium-size Stratix V FPGA.
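At the stated 250 MHz clock, these cycle counts correspond to the following wall-clock latencies (simple arithmetic on the figures above):

```python
# Convert the reported cycle counts to wall-clock latency at the
# reported 250 MHz clock (values taken from the text above).
freq_hz = 250e6
keygen_cycles, decrypt_cycles = 966_400, 14_291

keygen_ms = keygen_cycles / freq_hz * 1e3    # ~3.87 ms per key pair
decrypt_us = decrypt_cycles / freq_hz * 1e6  # ~57.2 us per decryption
print(f"key generation: {keygen_ms:.2f} ms, decryption: {decrypt_us:.1f} us")
```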

T-310 is an important Cold War cipher. It was the principal encryption algorithm used to protect various state communication lines in Eastern Germany throughout the 1980s. The cipher seems to be quite robust,
and until now, no cryptography researcher has proposed an attack on T-310. In this paper we provide a detailed analysis of T-310 in the context of modern cryptography research and other important or similar ciphers developed in the same period. We introduce new notation that shows the peculiar internal structure of this cipher in a new light.
We point out a number of significant strong and weak properties of this cipher. Finally, we propose several new attacks on T-310.

Proxy Re-Encryption (PRE) allows a ciphertext encrypted under Alice's public key to be transformed to an encryption under Bob's public key without revealing either the plaintext or the decryption keys.
PRE schemes have clear applications to cryptographic access control by allowing outsourced data to be selectively shared to users via re-encryption to appropriate keys.
One concern for this application is that the server should not be able to perform unauthorised re-encryptions.
We argue that current security notions do not adequately address this concern.
We revisit existing definitions for PRE, starting by challenging the concept of unidirectionality, which states that re-encryption tokens from $A$ to $B$ cannot be used to re-encrypt from $B$ to $A$.
We strengthen this definition to reflect realistic scenarios in which adversaries may try to reverse a re-encryption by retaining information about prior ciphertexts and re-encryption tokens.
We then strengthen the adversarial model to consider malicious adversaries that may collude with corrupt users and attempt to perform unauthorised re-encryptions; this models a malicious cloud service provider aiming to subvert the re-encryption process to leak sensitive data.
Finally we revisit the notion of authenticated encryption for PRE.
This currently assumes the same party who created the message also encrypted it, which is not necessarily the case in re-encryption.
We thus introduce the notion of \emph{ciphertext origin authentication} to determine who encrypted the message (initiated a re-encryption) and show how to fulfil this requirement in practice.
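The bidirectionality concern can be made concrete with the classic ElGamal-based re-encryption scheme in the style of Blaze, Bleumer and Strauss. The following is our own minimal sketch over a toy group, not a scheme from this paper: the token $rk_{A\to B} = x_B/x_A$ is trivially invertible, which is exactly the kind of reversal the strengthened definitions are meant to rule out.

```python
# Toy BBS-style proxy re-encryption over a small Schnorr group
# (p = 2q+1, g of order q). Illustration only -- insecure at this size.
p, q, g = 2039, 1019, 4

def keygen(x):            # secret exponent x, public key g^x
    return x, pow(g, x, p)

def enc(pk, m, r):        # ciphertext (m * g^r, pk^r) = (m g^r, g^{x r})
    return m * pow(g, r, p) % p, pow(pk, r, p)

def rekey(x_a, x_b):      # token rk_{A->B} = x_b / x_a mod q
    return x_b * pow(x_a, -1, q) % q

def reenc(rk, ct):        # (m g^r, g^{x_a r}) -> (m g^r, g^{x_b r})
    c1, c2 = ct
    return c1, pow(c2, rk, p)

def dec(x, ct):           # recover g^r = c2^{1/x}, then m = c1 / g^r
    c1, c2 = ct
    gr = pow(c2, pow(x, -1, q), p)
    return c1 * pow(gr, -1, p) % p
```

Note that `rekey(x_b, x_a)` is the inverse token, so a proxy holding a token and a re-encrypted ciphertext can translate it straight back: the scheme is bidirectional by construction.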

The introduction of summation polynomials for elliptic curves by Semaev has opened up new avenues of
investigation in index calculus type algorithms for the elliptic curve discrete logarithm problem,
and several recent papers have explored their use.
Most papers use Gr{\"o}bner basis computations at
some point.
We propose a faster algorithm to solve the Elliptic Curve Discrete Logarithm Problem
using summation polynomials that does not involve Gr{\"o}bner basis computations.
Our algorithm also does not
involve a linear algebra step, and
makes use of a technique for fast evaluation of the summation
polynomials, which we describe in the paper. We further propose an even faster algorithm that does not involve Gr{\"o}bner basis computations, or a linear algebra step, or summation polynomials.
We give a complexity analysis of our algorithms and provide extensive computational data.
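For background on summation polynomials: for $E\colon y^2 = x^3 + Ax + B$ over a field of characteristic $> 3$, Semaev's third summation polynomial has the standard explicit form (reproduced here for concreteness, not taken from this paper):

```latex
\[
S_3(x_1, x_2, x_3) = (x_1 - x_2)^2 x_3^2
  - 2\bigl((x_1 + x_2)(x_1 x_2 + A) + 2B\bigr) x_3
  + (x_1 x_2 - A)^2 - 4B(x_1 + x_2),
\]
```

which vanishes exactly when there exist $y_i$ with $(x_i, y_i) \in E$ and $(x_1,y_1) + (x_2,y_2) + (x_3,y_3) = \mathcal{O}$.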

Supersingular isogeny-based cryptography is one of the more recent families of post-quantum
proposals. An interesting feature is the comparatively low bandwidth occupation in key
agreement protocols, which stems from the possibility of key compression. However, compression
and decompression introduce a significant overhead to the overall processing cost despite
recent progress. In this paper we address the main processing bottlenecks involved in key
compression and decompression, and suggest substantial improvements for each of them. Some of
our techniques may be of independent interest for other, more conventional areas of elliptic
curve cryptography as well.

Bad choices of passwords were and are a pervasive problem. Most password alternatives (such as two-factor authentication) may increase cost and arguably hurt the usability of the system. This is of special significance for low-cost IoT devices. Users choosing weak passwords do not only compromise themselves, but the whole ecosystem. For example, common and default passwords in IoT devices were exploited by hackers to create botnets and mount severe attacks on large Internet services, such as the Mirai botnet DDoS attack.
We present a method to help protect the Internet from such large scale attacks. We enable a server to identify popular passwords (heavy hitters), and publish a list of over-popular passwords that must be avoided. This filter ensures that no single password can be used to compromise a large percentage of the users. The list is dynamic and can be changed as new users are added or when current users change their passwords. We apply maliciously secure two-party computation and differential privacy to protect the users' password privacy. Our solution does not require extra hardware or cost, and is transparent to the user.
Our private heavy hitters construction is secure even against a malicious coalition of devices which tries to manipulate the protocol in order to hide the popularity of some password that the attacker is exploiting. It also ensures differential privacy under continuous observation of the blacklist as it changes over time.
As a reality check, we performed the following three evaluations: we computed the guarantees that the system provides with respect to a few publicly available data points, ran simulations on various distributions, and implemented and analyzed a proof-of-concept on an IoT device.
Our construction can also be used in other settings to privately learn heavy hitters in the presence of an active malicious adversary, e.g., learning the most popular sites accessed by the Tor network.
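The "publish over-popular passwords" idea can be sketched in the central model with a trusted curator (our simplification; the paper replaces the curator with maliciously secure two-party computation, and its privacy analysis is more careful than this). All names, the threshold and $\epsilon$ below are illustrative.

```python
import math
import random
from collections import Counter

def dp_blacklist(passwords, epsilon=1.0, threshold=50):
    """Publish passwords whose Laplace-noised count exceeds a threshold.

    Central-model sketch only: a trusted curator sees all passwords,
    adds Laplace(1/epsilon) noise to each count, and publishes the
    passwords whose noisy count crosses the threshold.
    """
    rng = random.Random(0)          # fixed seed for reproducibility
    counts = Counter(passwords)
    blacklist = []
    for pw, n in counts.items():
        u = rng.random() - 0.5      # Laplace noise via inverse CDF
        noise = -math.copysign(math.log(1 - 2 * abs(u)), u) / epsilon
        if n + noise > threshold:
            blacklist.append(pw)
    return blacklist
```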

The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications such as authentication protocols, pseudorandom generators/functions and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Based on the recent work of Applebaum et al. (ITCS 2017), we introduce a general framework for constructing CRH from LPN for various parameter choices. We show that, to mention just a few notable ones, under any of the following hardness assumptions (for the two most common variants of LPN):
1) constant-noise LPN is $2^{n^{0.5+\epsilon}}$-hard for any constant $\epsilon>0$;
2) constant-noise LPN is $2^{\Omega(n/\log n)}$-hard given $q=poly(n)$ samples;
3) low-noise LPN (of noise rate $1/\sqrt{n}$) is $2^{\Omega(\sqrt{n}/\log n)}$-hard given $q=poly(n)$ samples,
there exist CRH functions with constant (or even poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND and XOR gates. Our technical route LPN$\rightarrow$bSVP$\rightarrow$CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE$\rightarrow$SIS$\rightarrow$CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017) and enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem.
Furthermore, under additional (arguably minimal) idealized assumptions such as small-domain random functions or random permutations (that trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender that is (asymptotically) more parallel and efficient than previously known. In particular, assuming $2^{n^{0.5+\epsilon}}$-hard constant-noise LPN or $2^{n^{0.25+\epsilon}}$-hard low-noise LPN, we obtain a polynomially shrinking collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and produces their XOR sum as output.

In [AJPS17], Aggarwal, Joux, Prakash & Santha described
an elegant public-key cryptosystem (AJPS-1) mimicking NTRU over the
integers. This algorithm relies on the properties of Mersenne primes
instead of polynomial rings.
A later ePrint [BCGN17] by Beunardeau et al. revised AJPS-1’s initial
security estimates. While lower than initially thought, the best known
attack on AJPS-1 still seems to leave the defender with an exponential
advantage over the attacker [dBDJdW17]. However, this lower exponential
advantage implies enlarging AJPS-1’s parameters. This, plus the fact that
AJPS-1 encodes only a single plaintext bit per ciphertext, made AJPS-1
impractical. In a recent update, Aggarwal et al. overcame this limitation
by extending AJPS-1’s bandwidth. This variant (AJPS-ECC) modifies the
definition of the public-key and relies on error-correcting codes.
This paper presents a different high-bandwidth construction. In contrast
to AJPS-ECC, we do not modify the public-key, avoid using error-correcting
codes and use backtracking to decrypt. The new algorithm
is orthogonal to AJPS-ECC as both mechanisms may be concurrently
used in the same ciphertext and cumulate their bandwidth improvement
effects. Alternatively, we can increase AJPS-ECC’s information rate by a
factor of 26 for the parameters recommended in [AJPS17].
The obtained bandwidth improvement and the fact that encryption and
decryption are reasonably efficient make our scheme an interesting post-quantum
candidate.
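To make the single-bit AJPS-1 mechanism concrete, here is a toy sketch of Mersenne-prime bit encryption. The weights, the decryption threshold, and the helper names are our simplifications for illustration, not the parameters or exact algorithms of [AJPS17].

```python
import random

# Toy AJPS-1-style bit encryption over the Mersenne prime p = 2^127 - 1.
n, h = 127, 4                       # illustrative (insecure) parameters
p = 2**n - 1

def low_weight(rng):
    # Random n-bit number with exactly h bits set.
    return sum(1 << i for i in rng.sample(range(n), h))

rng = random.Random(1)
F, G = low_weight(rng), low_weight(rng)
pk = F * pow(G, -1, p) % p          # public key H = F/G mod p

def enc(bit, rng):
    A, B = low_weight(rng), low_weight(rng)
    c = (A * pk + B) % p
    return (p - c) % p if bit else c    # (-1)^b (A*H + B) mod p

def dec(c):
    # C*G = +/-(A*F + B*G) mod p: low Hamming weight when b = 0,
    # the bitwise complement (high weight) when b = 1.
    d = c * G % p
    return 0 if bin(d).count("1") <= n // 2 else 1
```

Decryption works because multiplication modulo $2^n-1$ acts as a cyclic convolution, so products and sums of low-weight numbers stay low-weight, while negation modulo $2^n-1$ is bitwise complement.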

One of the most important tools for assessing hardness assumptions in cryptography is the Generic Group Model (GGM). Over the past two decades, numerous assumptions have been analyzed within this model. While a proof in the GGM can certainly provide some measure of confidence in an assumption, its scope is rather limited since it does not capture group-specific algorithms that make use of the representation of the group.
To overcome this limitation, we propose the Algebraic Group Model (AGM), a model that lies in between the standard model and the GGM. It is the first restricted model of computation covering group-specific algorithms yet allowing one to derive simple and meaningful security statements.
We show that several important assumptions, among them the Computational Diffie-Hellman, the Strong Diffie-Hellman, and the interactive LRSW assumptions, are equivalent to the Discrete Logarithm (DLog) assumption in the AGM. On the more practical side, we prove tight security reductions for two important schemes in the AGM to DLog or a variant thereof: the BLS signature scheme and Groth's zero-knowledge SNARK (Eurocrypt '16), which is the most efficient SNARK for which only a proof in the GGM was known.
Moreover, in combination with known lower bounds on the Discrete Logarithm assumption in the GGM, our results can be used to derive lower bounds for all the above-mentioned results in the GGM.

The Fiat-Shamir construction (Crypto 1986) is an efficient
transformation in the random oracle model for creating non-interactive
proof systems and signatures from sigma-protocols. In classical
cryptography, Fiat-Shamir is a zero-knowledge proof of knowledge
assuming that the underlying sigma-protocol has the zero-knowledge and
special soundness properties. Unfortunately, Ambainis, Rosmanis, and
Unruh (FOCS 2014) ruled out non-relativizing proofs under those
conditions in the quantum setting.
In this paper, we show under which strengthened conditions the
Fiat-Shamir proof system is still post-quantum secure. Namely, we show
that if we require the sigma-protocol to have computational
zero-knowledge and statistical soundness, then Fiat-Shamir is a
zero-knowledge simulation-sound proof system (but not a proof of
knowledge!). Furthermore, we show that Fiat-Shamir leads to a
post-quantum secure unforgeable signature scheme when additionally
assuming a "dual-mode hard instance generator" for generating key
pairs.
Finally, we study the extractability (proof of knowledge) property of
Fiat-Shamir. While we have no proof of the extractability itself, we
show that if we can prove extractability, then other desired
properties such as simulation-sound extractability (i.e.,
non-malleability), and unforgeable signatures follow.
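For reference, the classical transform instantiated for the Schnorr sigma-protocol: the verifier's random challenge is replaced by a hash of the commitment and the message. This is our own sketch over a toy group, with no post-quantum claims; the paper's question is precisely which properties of this construction survive quantum adversaries.

```python
import hashlib

p, q, g = 2039, 1019, 4   # toy Schnorr group: p = 2q+1, g of order q

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def fs_prove(x, msg, r):
    """Non-interactive proof of knowledge of x for pk = g^x (mod p)."""
    a = pow(g, r, p)       # sigma-protocol commitment
    c = H(a, msg)          # Fiat-Shamir: challenge = hash, not verifier
    z = (r + c * x) % q    # sigma-protocol response
    return a, z

def fs_verify(pk, msg, proof):
    a, z = proof
    c = H(a, msg)
    return pow(g, z, p) == a * pow(pk, c, p) % p
```

Binding the message into the challenge turns the proof system into a signature scheme: `msg` is the signed message and `(a, z)` the signature.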

In this paper, we propose a series of techniques that can be used to
determine the missing IV terms of a complex multivariable Boolean polynomial. Using these techniques, we revisit the dynamic cube attack
on Grain-128. By choosing one more nullified state bit and one
more dynamic bit, we are able to obtain the IV terms of degree $43$; combined with various reduction techniques, fast monomial-discarding techniques and an IV representation technique for polynomials, the missing IV terms can then be determined. As a result, we improve the time complexity of the best previous attack on Grain-128 by a factor of $2^{16}$. Moreover, our attack applies to all keys.
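The cube-attack principle behind this line of work can be illustrated on a toy Boolean function (our own example, unrelated to Grain-128): summing the output over all assignments to a chosen cube of IV bits cancels every monomial not divisible by the cube product, leaving the superpoly in the key bits.

```python
from itertools import product

def f(k1, k2, v1, v2):
    # Toy cipher output bit: the maxterm v1*v2 has superpoly k1 + k2.
    return (v1 & v2 & (k1 ^ k2)) ^ (v1 & k1) ^ v2 ^ k2

def cube_sum(k1, k2):
    # XOR-summing over the cube {v1, v2} cancels all terms not divisible
    # by v1*v2 (each appears an even number of times), leaving k1 ^ k2.
    s = 0
    for v1, v2 in product((0, 1), repeat=2):
        s ^= f(k1, k2, v1, v2)
    return s
```

An attacker who can only query `f` as a black box thus obtains a linear equation in the key bits per cube.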

We propose a detailed construction of Collision Resistance Preimage Sampleable Functions $($CRPSF$)$ over any cyclotomic field based on NTRU, and hence give a provably secure NTRU signature scheme $($NTRUSign$)$, which is strongly existentially unforgeable under adaptive chosen-message attacks in the random oracle model. The security of CRPSF $($NTRUSign$)$ is reduced to the corresponding small integer solution problem over rings $($Ring-SIS$)$. More precisely, the security of our scheme is based on the worst-case approximate shortest independent vectors problem $($SIVP$_\gamma$$)$ over ideal lattices. For any fixed cyclotomic field, we give a probabilistic polynomial time $($PPT$)$ key generation algorithm which shows how to extend the secret key of NTRUEncrypt to the secret key of NTRUSign. This conversion is important for constructions of many cryptographic primitives based on NTRU, for example, CRPSF, NTRUSign, identity-based encryption and identity-based signatures. We also revisit the earlier construction of NTRUEncrypt and enrich the choices of the modulus $q$. Some useful results about $q$-ary lattices, regularity and uniformity of distribution of the public key of NTRUEncrypt are also generalized to more general algebraic fields $K$, as long as $K$ is Galois over $\mathbb{Q}$.

Oblivious transfer (OT) is a fundamental primitive in cryptography. Halevi-Kalai OT (Halevi, S. and Y. Kalai (2012), Journal of Cryptology 25(1)), which is based on smooth projective hashing (SPH), is a famous and the most efficient framework for $1$-out-of-$2$ oblivious transfer ($\mbox{OT}^{2}_{1}$) against malicious adversaries in the plain model. However, it does not provide simulation-based security, which makes it harder to use as a building block in secure multiparty computation (SMPC) protocols. A natural question, which so far has not been answered, is whether it can be made fully-simulatable. In this paper, we give a positive answer. Further, we present a fully-simulatable framework for general $\mbox{OT}^{n}_{t}$ ($n,t\in \mathbb{N}$ and $n>t$). Our framework can be interpreted as a constant-round black-box reduction of $\mbox{OT}^{n}_{t}$ (or $\mbox{OT}^{2}_{1}$) to SPH. To our knowledge, this is the first such reduction. Combining this with Kilian's famous completeness result, we immediately obtain a black-box reduction of SMPC to SPH.
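For readers unfamiliar with the $\mbox{OT}^{2}_{1}$ functionality being reduced, here is a minimal DH-based sketch in the spirit of Chou and Orlandi's "simplest OT". This is not the Halevi-Kalai SPH framework and carries none of its security guarantees; the toy group and all names are illustrative.

```python
import hashlib

p, q, g = 2039, 1019, 4   # toy group (p = 2q+1, g of order q)

def kdf(x):
    # Derive a one-time pad from a group element.
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big")

def ot_demo(m0, m1, choice, a=21, b=33):
    # Sender publishes A = g^a.
    A = pow(g, a, p)
    # Receiver: B = g^b if choice = 0, else A * g^b (hides the choice bit).
    B = pow(g, b, p) if choice == 0 else A * pow(g, b, p) % p
    # Sender derives two keys; the receiver can match only one of them.
    k0 = kdf(pow(B, a, p))
    k1 = kdf(pow(B * pow(A, -1, p) % p, a, p))
    e0, e1 = m0 ^ k0, m1 ^ k1
    # Receiver computes k_choice = KDF(A^b) and opens one ciphertext.
    k = kdf(pow(A, b, p))
    return (e0 ^ k) if choice == 0 else (e1 ^ k)
```

The sender learns nothing about `choice` (both forms of `B` are uniformly distributed), and the receiver's key matches exactly one of `k0`, `k1`.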

Fully homomorphic encryption (FHE) is a powerful notion of encryption which allows data to be encrypted in such a way that anyone can perform arbitrary computations over the encrypted data without decryption or knowledge of the secret key. Traditionally, FHE only allows for computations over data encrypted under a single public key. Lopez-Alt et al. (STOC 2012) introduced a new notion of FHE, called multi-key FHE (MFHE), which permits joint computations over data encrypted under multiple independently-generated (unrelated) keys such that any evaluated ciphertext could be (jointly) decrypted by the parties involved in the computation. Such MFHE schemes could be readily used to delegate computation to the cloud securely.
Recently a number of works have studied the problem of constructing quantum homomorphic encryption (QHE) which is to perform quantum computations over encrypted quantum data. In this work we initiate the study of quantum multi-key homomorphic encryption (QMHE) and obtain the following results:
1) We formally define the notion of quantum multi-key homomorphic encryption and construct such schemes from their classical counterpart. Building on the framework of Broadbent and Jeffery (Crypto 2015) and Dulek et al. (Crypto 2016), we show that any classical multi-key leveled homomorphic encryption can be used to build a quantum multi-key leveled homomorphic encryption if we also have certain suitable error-correcting quantum gadgets. The length of the evaluation key grows linearly with the number of $T$-gates in the quantum circuit, thereby giving us a quantum multi-key leveled homomorphic encryption for circuits with polynomial but bounded number of $T$-gates.
2) To enable a generic transformation from any classical multi-key scheme, we introduce and construct a new cryptographic primitive which we call conditional oblivious quantum transform (COQT). A COQT is a distributed non-interactive encoding scheme that captures the essence of error-correcting gadgets required for quantum homomorphic encryption in the multi-key setting. We then build COQTs themselves from any classical multi-key leveled homomorphic encryption with $\boldsymbol{\mathrm{NC}}^1$ decryption. We believe that COQTs might be an object of independent interest.
3) We also show that our quantum multi-key homomorphic encryption schemes support distributed decryption of multi-key ciphertexts and allow ciphertext re-randomizability (thereby achieving quantum circuit privacy), provided the underlying classical scheme also supports distributed decryption and satisfies classical circuit privacy. We show the usefulness of distributed decryption and ciphertext re-randomizability for QMHE by providing efficient templates for building multi-party delegated/server-assisted quantum computation protocols from QMHE.
Additionally, due to our generic transformation, our quantum multi-key HE scheme inherits various features of the underlying classical scheme such as: identity/attribute-based, multi-hop, etc.

Constraint-hiding constrained PRFs (CHCPRFs), initially studied by Boneh, Lewi and Wu [PKC 2017], are constrained PRFs where the constrained key hides the description of the constraint. Envisioned with powerful applications such as searchable encryption, private-detectable watermarking and symmetric deniable encryption, the only known candidates of CHCPRFs are based on indistinguishability obfuscation or multilinear maps with strong security properties.
In this paper we construct CHCPRFs for all NC1 circuits from the Learning with Errors assumption. The construction draws heavily from the graph-induced multilinear maps by Gentry, Gorbunov and Halevi [TCC 2015], as well as the existing lattice-based PRFs. Our construction gives an instance of the GGH15 applications with a security reduction to LWE.
We also show how to build from CHCPRFs reusable garbled circuits (RGC), or equivalently private-key function-hiding functional encryption with 1-key security. This provides a different approach to constructing RGC from that of Goldwasser et al. [STOC 2013].

With the advancement of Cloud computing, people now store their
data on remote Cloud servers for larger computation and storage resources. However,
users' data may contain sensitive information and should not be
disclosed to the Cloud servers. If users encrypt their data and store the encrypted
data in the servers, the search capability supported by the servers will be significantly
reduced because the server has no access to the data content. In this paper,
we propose a Fine-grained Multi-keyword Ranked Search (FMRS) scheme over
encrypted Cloud data. Specifically, we leverage novel techniques to realize multi-keyword
ranked search, which supports mixed “AND”, “OR” and “NO”
operations of keywords as well as ranking according to the preference factor and relevance
score. Through security analysis, we prove that data confidentiality,
privacy protection of the index and trapdoor, and unlinkability of the trapdoor can
be achieved in our FMRS. In addition, extensive experiments show that the FMRS
possesses better performance than existing schemes in terms of functionality and
efficiency.