We study the isogeny graphs of supersingular elliptic curves over finite
fields, with an emphasis on the vertices corresponding to elliptic
curves of $j$-invariant 0 and 1728.

Lattice-based cryptography is a promising candidate for protecting against the threat of quantum attacks. At USENIX Security 2016, Alkim, Ducas, Pöppelmann, and Schwabe proposed a post-quantum key exchange scheme called NewHope, based on a variant of the learning-with-errors lattice problem, the ring-learning-with-errors (RLWE) problem.
In this work, we propose a high-performance hardware architecture for NewHope. Our implementation requires 6,680 slices, 9,412 FFs, 18,756 LUTs, 8 DSPs and 14 BRAMs on a Xilinx Zynq-7000 equipped with a 28nm Artix-7 7020 FPGA. In our hardware design of the NewHope key exchange, the three phases of the key exchange cost 51.9, 78.6 and 21.1 microseconds, respectively. It achieves a more than 4.8 times better area-time product compared with the previous hardware implementation of NewHope-Simple by Oder and Güneysu at Latincrypt 2017.

In 2014, Smart and Vercauteren introduced a packing technique for homomorphic encryption schemes by decomposing the plaintext space using the Chinese Remainder Theorem. This technique makes it possible to encrypt multiple data values simultaneously in one ciphertext and to execute Single Instruction Multiple Data operations homomorphically. In this paper we improve and generalize their results by introducing a flexible Laurent polynomial encoding technique and by using a more fine-grained CRT decomposition of the plaintext space. The Laurent polynomial encoding provides a convenient common framework for all conventional ways in which input data types can be represented, e.g. finite field elements, integers, rationals, floats and complex numbers. Our methods greatly increase the packing capacity of the plaintext space, as well as one’s flexibility in optimizing the system parameters with respect to efficiency and/or security.
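The slot-wise arithmetic enabled by the CRT decomposition can be illustrated on plain integers (a toy model: real schemes split a polynomial quotient ring into slots, but the component-wise behaviour is the same; the moduli and values below are illustrative choices, not parameters from the paper):

```python
from math import gcd
from functools import reduce

def crt_pack(values, moduli):
    """Pack several residues into one integer via the CRT.
    Toy model of SIMD slot packing: the plaintext space splits as a
    product of slots, one residue per pairwise-coprime modulus."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli) for b in moduli[i + 1:])
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x += v * Mi * pow(Mi, -1, m)   # standard CRT reconstruction term
    return x % M, M

def crt_unpack(x, moduli):
    """Recover the individual slot values by reducing modulo each slot."""
    return [x % m for m in moduli]
```

Adding or multiplying two packed integers modulo M acts slot-wise on every packed value at once, which is exactly the SIMD behaviour the abstract describes.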

Sampling integers with Gaussian distribution is a fundamental problem that arises in almost every application of lattice cryptography, and it can be both time consuming and challenging to implement. Most previous work has focused on the optimization and implementation of integer Gaussian sampling in the context of specific applications, with fixed sets of parameters. We present new algorithms for discrete Gaussian sampling that are generic (application independent) and efficient, and that are more easily implemented in constant time without incurring a substantial slow-down, making them more resilient to side-channel (e.g., timing) attacks. As an additional contribution, we present new analytical techniques that can be used to simplify the precision/security evaluation of floating point cryptographic algorithms, and an experimental comparison of our algorithms with previous algorithms from the literature.
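For intuition, here is a minimal and deliberately naive rejection sampler for the discrete Gaussian over the integers. Unlike the algorithms of this paper it is neither constant time nor optimized, and the tail-cut parameter is an arbitrary illustrative choice:

```python
import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail=12):
    """Rejection-sample an integer from the discrete Gaussian D_{Z,sigma,c}.
    Illustrative sketch only: the data-dependent loop below leaks timing,
    which is exactly the problem constant-time samplers avoid."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        x = random.randint(lo, hi)                        # uniform candidate in the tail-cut range
        rho = math.exp(-(x - center) ** 2 / (2 * sigma ** 2))
        if random.random() < rho:                         # accept with probability rho(x)
            return x
```

The acceptance probability is proportional to the Gaussian weight of each integer, so the accepted outputs follow the tail-cut discrete Gaussian.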

A hash function family is called correlation intractable if for all sparse relations, it is hard to find, given a random function from the family, an input-output pair that satisfies the relation (Canetti et al., STOC 98). Correlation intractability (CI) captures a strong Random-Oracle-like property of hash functions. In particular, when security holds for all sparse relations, CI suffices for guaranteeing the soundness of the Fiat-Shamir transformation from any constant round, statistically sound interactive proof to a non-interactive argument. However, to date, the only CI hash function for all sparse relations (Kalai et al., Crypto 17) is based on general program obfuscation with exponential hardness properties.
We construct a simple CI hash function for arbitrary sparse relations, from any symmetric encryption scheme that satisfies some natural structural properties, and in addition guarantees that key recovery attacks mounted by polynomial-time adversaries have only exponentially small success probability - even in the context of key-dependent messages (KDM). We then provide parameter settings where ElGamal encryption and Regev encryption plausibly satisfy the needed properties. Our techniques are based on those of Kalai et al., with the main contribution being substituting a statistical argument for the use of obfuscation, therefore greatly simplifying the construction and basing security on better-understood intractability assumptions.
In addition, we extend the definition of correlation intractability to handle moderately sparse relations so as to capture the properties required in proof-of-work applications (e.g. Bitcoin). We also discuss the applicability of our constructions and analyses in that regime.

Satisfiability modulo theories (SMT) can be seen as a generalization of the Boolean satisfiability problem (SAT). The core idea behind SMT solvers is to reduce complexity by providing more information about the problem environment.
In this paper, we take advantage of a similar idea and feed the SMT solver itself with extra information, provided through middle-state cube characteristics, to introduce a new method which we call the SMT-based cube attack, and apply it to improve the solver's success in attacking reduced-round versions of the Simeck32/64 lightweight block cipher.
We first propose a new algorithm to find cubes with the largest number of middle-state characteristics. Then, we supply these cubes and their characteristics as extra information in the SMT definition of the cryptanalysis problem, to evaluate their effectiveness. Our cryptanalysis yields a full key-recovery attack on 12 rounds of the cipher using 64 plaintext/ciphertext pairs in just 122.17 seconds. This is the first practical attack presented so far against reduced-round versions of Simeck32/64.
We also conduct the plain cube attack on Simeck32/64 to compare with the SMT-based cube attack. The results indicate that the proposed attack is more powerful than the cube attack.

In the past years, the security of Bitcoin-like protocols has been
intensively studied. However, previous investigations are mainly
focused on the single-mode version of the Bitcoin protocol, where the
protocol is running among full nodes (miners). In this paper, we
initiate the study of multi-mode cryptocurrency protocols. We generalize the recent framework by Garay et al. (Eurocrypt 2015) with
new security definitions that capture the security of realistic cryptocurrency systems (e.g. Bitcoin with full and lightweight nodes).
We provide the first rigorous security model for addressing the
"blockchain bloat" issue. As an immediate application of our new
framework, we analyze the security of existing blockchain pruning
proposals for Bitcoin aiming to improve the storage efficiency
of network nodes by pruning unnecessary information from the
ledger.

We study instantiating the random permutation of the block-cipher mode of operation IAPM (Integrity-Aware Parallelizable Mode) with the public random permutation of Keccak, on which the draft standard SHA-3 is built. IAPM and the related mode OCB are single-pass, highly parallelizable authenticated-encryption modes; while they were
originally proven secure in the private random permutation model, Kurosawa has shown that they are also secure in the public random permutation model, assuming the whitening keys are uniformly chosen with double the usual entropy. In this paper, we prove a general composability result: the whitening key can be obtained from the usual entropy source by a key-derivation function which is itself built on Keccak. We stress that this does not follow directly from the usual indifferentiability of key-derivation-function constructions from Random Oracles. We also show that a simple and general construction, again employing Keccak, can be used to make the
IAPM scheme key-dependent-message secure. Finally, implementations on
modern AMD-64 architecture supporting 128-bit SIMD instructions, and not supporting the native AES instructions,
show that IAPM with Keccak runs three times faster than IAPM with AES.

A new paradigm in secure protocol design is to hold parties accountable for
misbehaviour instead of postulating that they are trustworthy. Recent
approaches to defining this property, called accountability, have highlighted
the difficulty of characterising malicious behaviour. So far, no satisfactory
solution has been found. Consequently, existing definitions are either not
truly protocol agnostic or require complete monitoring of all parties.
To our knowledge, this work is the first to formalise misbehaviour in the
following sense: a deviation from the behaviour prescribed by the protocol that
caused a security violation. We propose a definition for the case where it is
known which parties deviated in which respect, and extend this definition to
the case where neither these deviations are known, nor the complete trace of
the protocol. We point out that, under realistic assumptions, it is impossible
to determine all misbehaving parties; however, we show that completeness can be
relaxed to exclude spurious causal dependencies. We demonstrate the use of our
definition with two case studies, a delegation protocol with a central trusted
authority, and an actual accountability protocol from the literature. In both
cases, we discover accountability violations and apply our definition to the
fixed protocols.

Nested symmetric encryption is a well-known
technique for low-latency communication privacy. But
just what problem does this technique aim to solve?
In answer, we provide a provable-security treatment for
onion authenticated-encryption (onion-AE). Extending
the conventional notion for authenticated-encryption,
we demand indistinguishability from random bits and
time-of-exit authenticity verification. We show that the
encryption technique presently used in Tor does not
satisfy our definition of onion-AE security, but that a
construction by Mathewson (2012), based on a strong,
tweakable, wideblock PRP, does do the job. We go on
to discuss three extensions of onion-AE, giving definitions
to handle inbound flows, immediate detection of
authenticity errors, and corrupt ORs.

Ransomware has become one of the major threats nowadays due to its
huge impact and increased rate of infections around the world. CryptoWall 3 was responsible for damages of over 325 million
dollars since its discovery in 2015. Recently, another ransomware family called WannaCry appeared in cyberspace; over
230,000 computers around the world, in over 150 countries, were infected. Ransomware usually uses the RSA algorithm to protect the encryption key and AES to encrypt the files. If these algorithms are correctly implemented, then it is impossible to recover the encrypted information. Some attacks, nonetheless, work against the implementation of RSA. These attacks target not the basic algorithm, but the protocol. In the following sections we present a full analysis of three representative
ransomware families: Spora, DMA Locker and WannaCry.

Protocols for securely comparing private values are among the most fundamental building blocks of multiparty computation. Introduced by Yao under the name millionaire's problem, they have found numerous applications in a variety of privacy-preserving protocols; however, due to their inherent non-arithmetic structure, existing constructions often remain an important bottleneck in large-scale secure protocols.
In this work, we introduce new protocols for securely computing the greater-than and the equality predicate between two parties. Our protocols rely solely on the existence of oblivious transfer, and are UC-secure against passive adversaries. Furthermore, our protocols are well suited for use in large-scale secure computation protocols, where secure comparisons (SC) and equality tests (ET) are commonly used as basic routines: they perform particularly well in an amortized setting, and can be preprocessed efficiently (they enjoy an extremely efficient, information-theoretic online phase). We perform a detailed comparison of our protocols to the state of the art, showing that they improve over the most practical existing solutions regarding both communication and computation, while matching the asymptotic efficiency of the best theoretical constructions.

We identify a flaw in the security proof and a flaw in the concrete security analysis of the WOTS-PRF variant of the Winternitz one-time signature scheme, and discuss the implications to its concrete security.

In 2008 Satoshi Nakamoto invented the basis for what would come to be known as blockchain technology. The core concept of this system is an open and anonymous network of nodes, or miners, which together maintain a public ledger of transactions. The ledger takes the form of a chain of blocks, the blockchain, where each block is a batch of new transactions collected from users.
One primary problem with Satoshi's blockchain is its highly limited scalability. The security of Satoshi's longest chain rule, more generally known as the Bitcoin protocol, requires that all honest nodes be aware of each other's blocks in real time. To this end, the throughput is artificially suppressed so that each block fully propagates before the next one is created, and so that no ``orphan blocks'' that fork the chain are created spontaneously.
In this paper we present PHANTOM, a protocol for transaction confirmation that is secure under any throughput that the network can support. PHANTOM thus does not suffer from the security-scalability tradeoff which Satoshi's protocol suffers from. PHANTOM utilizes a Directed Acyclic Graph of blocks, aka blockDAG, a generalization of Satoshi's chain which better suits a setup of fast or large blocks. PHANTOM uses a greedy algorithm on the blockDAG to distinguish between blocks mined properly by honest nodes and those mined by non-cooperating nodes that deviated from the DAG mining protocol. Using this distinction, PHANTOM provides a full order on the blockDAG in a way that is eventually agreed upon by all honest nodes.
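The anticone-based colouring idea behind PHANTOM can be sketched on a toy blockDAG. Here the block names and the direct |anticone| ≤ k rule are illustrative simplifications: PHANTOM itself greedily approximates the NP-hard maximum k-cluster sub-DAG problem rather than applying this rule block by block.

```python
def pasts(parents):
    """parents: dict mapping each block to the set of its direct parents.
    Returns a dict mapping each block to its full 'past' (all ancestors)."""
    memo = {}
    def past(b):
        if b not in memo:
            s = set()
            for p in parents[b]:
                s.add(p)
                s |= past(p)
            memo[b] = s
        return memo[b]
    for b in parents:
        past(b)
    return memo

def blue_blocks(parents, k):
    """Toy block classification: colour a block blue if its anticone
    (blocks neither in its past nor in its future) has size at most k.
    Honest blocks propagate quickly, so they tend to have small anticones."""
    past = pasts(parents)
    blocks = set(parents)
    blue = set()
    for b in blocks:
        future = {c for c in blocks if b in past[c]}
        anticone = blocks - past[b] - future - {b}
        if len(anticone) <= k:
            blue.add(b)
    return blue
```

With k = 0 this degenerates to accepting only blocks that every other block orders against (a chain-like structure); larger k tolerates the forks that fast block creation inevitably produces.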

We study the problem of encrypting and authenticating quantum data in the presence of adversaries making adaptive chosen plaintext and chosen ciphertext queries. Classically, security games use string copying and comparison to detect adversarial cheating in such scenarios. Quantumly, this approach would violate no-cloning. We develop new techniques to overcome this problem: we use entanglement to detect cheating, and rely on recent results for characterizing quantum encryption schemes. We give definitions for (i) ciphertext unforgeability, (ii) indistinguishability under adaptive chosen-ciphertext attack, and (iii) authenticated encryption. The restriction of each definition to the classical setting is at least as strong as the corresponding classical notion: (i) implies INT-CTXT, (ii) implies IND-CCA2, and (iii) implies AE. All of our new notions also imply QIND-CPA privacy. Combining one-time authentication and classical pseudorandomness, we construct schemes for each of these new quantum security notions, and provide several separation examples. Along the way, we also give a new definition of one-time quantum authentication which, unlike all previous approaches, authenticates ciphertexts rather than plaintexts.

We discuss the design rationale and analysis of the SIMON and SPECK lightweight block ciphers.

We design a group key exchange protocol where most of the participants remain offline until they wish to compute the key. This is well suited to a cloud storage environment where users are often offline, but have online access to the server which can assist in key exchange. We define and instantiate a new primitive, a blinded KEM, which we show can be used in a natural way as part of our generic protocol construction. Our new protocol has a security proof based on a well-known model for group key exchange. Our protocol provides a restricted form of forward secrecy which we argue is as strong as can be achieved in practice. Our protocol is efficient, requiring Diffie--Hellman with a handful of standard public key operations per user in our concrete instantiation.
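The blinded-KEM primitive can be sketched with a toy Diffie-Hellman-style instantiation (the group, parameter sizes, and function names below are illustrative assumptions, not the paper's concrete scheme): a blinder randomizes an encapsulation before the decapper sees it, then strips the blinding from the returned key, so the decapper assists without learning the key.

```python
import random
from math import gcd

P = 2**61 - 1          # toy prime modulus (illustrative; far too small for real use)
G = 3                  # base element of the toy group Z_P^*

def keygen():
    """Decapsulation key x and public key G^x."""
    x = random.randrange(2, P - 1)
    return x, pow(G, x, P)

def encap(pk):
    """Return (shared key k = pk^r, encapsulation c = G^r)."""
    r = random.randrange(2, P - 1)
    return pow(pk, r, P), pow(G, r, P)

def blind(c):
    """Blind the encapsulation with a random exponent b invertible mod P-1."""
    while True:
        b = random.randrange(2, P - 1)
        if gcd(b, P - 1) == 1:
            return b, pow(c, b, P)

def decap(x, c_blinded):
    """The decapper raises the blinded encapsulation to its secret key."""
    return pow(c_blinded, x, P)

def unblind(b, k_blinded):
    """Remove the blinding: (G^{rbx})^{b^{-1}} = G^{rx} = k."""
    return pow(k_blinded, pow(b, -1, P - 1), P)
```

The decapper only ever sees c^b = G^{rb}, a fresh-looking encapsulation, yet the unblinded result equals the originally encapsulated key.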

In this paper, we consider the indistinguishability of XTS in several security models, for both the full final block and the partial final block cases. First, some evaluations of indistinguishability up-to-block are presented. Then, we present a new security model, based on an $\epsilon$-collision-resistant function, in which the adversary cannot control the sector number. In this model, we give a bound on the distinguishing advantage an adversary can obtain when attacking XTS. The results obtained extend \cite{6}.

In this paper, we consider the implications of parallelizing time-memory tradeoff attacks using a large number of distributed processors. It is shown that Hellman’s original tradeoff method and the Biryukov-Shamir attack on stream ciphers, which incorporates data into the tradeoff, can be effectively distributed to reduce both time and memory, while other approaches benefit less from distribution. Distributed tradeoff attacks are discussed specifically as applied to stream ciphers and the counter mode of operation of block ciphers, where their feasibility is considered in relation to distributed exhaustive key search. In particular, for counter mode with an unpredictable initial count, we show that distributed tradeoff attacks are applicable, but can be made infeasible if the entropy of the initial count is at least as large as that of the key. In general, the analyses of this paper illustrate the effectiveness of the distributed tradeoff approach and show that, when enough processors are involved in the attack, some systems, such as lightweight cipher implementations, may be practically susceptible to attack.
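The basic, non-distributed Hellman tradeoff underlying these attacks can be sketched as follows; the toy key space, step function, and parameters are illustrative assumptions. Offline, m chains of length t are computed and only their endpoints stored; online, inverting a value takes roughly t steps against m stored pairs instead of an exhaustive search.

```python
import hashlib

N = 2 ** 16                        # toy key-space size (illustrative)

def f(k):
    """One-way step: hash the key and reduce back into the key space."""
    h = hashlib.sha256(k.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % N

def build_table(m, t, seed=0):
    """Offline phase: m chains of length t, storing only endpoint -> start."""
    table = {}
    for i in range(m):
        start = (seed + i * 2654435761) % N
        k = start
        for _ in range(t):
            k = f(k)
        table[k] = start
    return table

def invert(table, y, t):
    """Online phase: walk y forward; on an endpoint hit, replay that chain
    from its start point to recover a candidate preimage of y."""
    k = y
    for j in range(t):
        if k in table:
            c = table[k]
            for _ in range(t - 1 - j):   # replay up to the step just before y
                c = f(c)
            if f(c) == y:                # guard against false alarms
                return c
        k = f(k)
    return None
```

The distributed variants discussed above split such tables (or the online walks) across processors, reducing the per-node time and memory.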

Linear regression with 2-norm regularization (i.e., ridge regression) is an important statistical technique that models the relationship between some explanatory values and an outcome value using a linear function. In many applications (e.g., predictive modelling in personalised health care), these values represent sensitive data owned by several different parties who are unwilling to share them. In this setting, training a linear regression model becomes challenging and needs specific cryptographic solutions. This problem was elegantly addressed by Nikolaenko et al. in S&P (Oakland) 2013. They suggested a two-server system that uses linearly-homomorphic encryption (LHE) and Yao’s two-party protocol (garbled circuits). In this work, we propose a novel system that can train a ridge linear regression model using only LHE (i.e., without using Yao’s protocol). This greatly improves the overall performance (both in computation and communication) as Yao’s protocol was the main bottleneck in the previous solution. The efficiency of the proposed system is validated both on synthetically-generated and real-world datasets.
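As a plaintext baseline for what such a system computes, single-variable ridge regression without an intercept has the closed form w = (sum of x_i*y_i) / (sum of x_i^2 + lambda); it is precisely such aggregate sums that parties could combine under linearly-homomorphic encryption before a final division. The function below is an illustrative sketch, not the paper's protocol:

```python
def ridge_fit(xs, ys, lam):
    """Closed-form ridge estimate for one explanatory variable, no intercept.
    The two sums are the aggregate statistics that data owners could each
    compute locally and combine under an LHE scheme."""
    sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i
    sxx = sum(x * x for x in xs)               # sum of x_i squared
    return sxy / (sxx + lam)                   # regularizer shrinks the slope
```

With lam = 0 this reduces to ordinary least squares; increasing lam shrinks the fitted slope toward zero, trading bias for variance.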