In this paper we show how some recent ideas regarding the discrete logarithm problem (DLP) in finite fields of small characteristic may be applied to compute logarithms in some very large fields extremely efficiently. By combining the polynomial time relation generation from the authors' CRYPTO 2013 paper, an improved degree two elimination technique, and an analogue of Joux's recent small-degree elimination method, we solved a DLP in the record-sized finite field of $2^{6120}$ elements, using just a single core-month. Relative to the previous record set by Joux in the field of $2^{4080}$ elements, this represents a $50\%$ increase in the bitlength, using just $5\%$ of the core-hours. We also show that for the fields considered, the parameters for Joux's $L_Q(1/4 + o(1))$ algorithm may be optimised to produce an $L_Q(1/4)$ algorithm.

While there has been a lot of progress in designing efficient custom protocols for computing Private Set Intersection (PSI), there has been less research on using generic Multi-Party Computation (MPC) protocols for this task. However, there are many variants of the set intersection functionality that are not addressed by the existing custom PSI solutions and are easy to compute with generic MPC protocols (e.g., comparing the cardinality of the intersection with a threshold or measuring ad conversion rates).
Generic PSI protocols work over circuits that compute the intersection. For sets of size $n$, the best known circuit constructions conduct $O(n \log n)$ or $O(n \log n / \log\log n)$ comparisons (Huang et al., NDSS'12 and Pinkas et al., USENIX Security'15). In this work, we propose new circuit-based protocols for computing variants of the intersection with an almost linear number of comparisons. Our constructions are based on new variants of Cuckoo hashing in two dimensions.
We present an asymptotically efficient protocol as well as a protocol with better concrete efficiency. For the latter protocol, we determine the required sizes of tables and circuits experimentally, and show that the run-time is concretely better than that of existing constructions.
The protocol can be extended to a larger number of parties. The proof technique for analyzing Cuckoo hashing in two dimensions is new and can be generalized to analyzing standard Cuckoo hashing as well as other new variants of it.
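As background for readers unfamiliar with the hashing technique underlying these constructions, here is a minimal sketch of standard Cuckoo hashing (two tables, two hash functions, eviction chains); the two-dimensional variants introduced in the paper are more involved and are not shown. The class name, seeds, and parameters are illustrative only:

```python
class CuckooHashTable:
    """Toy standard Cuckoo hashing: each key has one candidate slot in each
    of two tables, and insertions evict occupants along a bounded chain."""

    def __init__(self, size=16, max_kicks=32):
        self.size = size
        self.max_kicks = max_kicks
        self.tables = [[None] * size, [None] * size]
        self.seeds = [0x9E3779B9, 0x85EBCA6B]  # arbitrary hash seeds

    def _h(self, i, key):
        return hash((self.seeds[i], key)) % self.size

    def lookup(self, key):
        # Exactly two candidate slots per key -> O(1) worst-case lookup.
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return True
        i = 0
        for _ in range(self.max_kicks):
            pos = self._h(i, key)
            key, self.tables[i][pos] = self.tables[i][pos], key
            if key is None:
                return True
            i = 1 - i  # the evicted key moves to its slot in the other table
        return False   # insertion failed; a real table would rehash
```

The constant number of candidate slots per key is what makes Cuckoo hashing attractive for circuit-based PSI: the circuit only needs to compare each item against a constant number of positions.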

We propose a simple and general framework for constructing oblivious
transfer (OT) protocols that are \emph{efficient}, \emph{universally
composable}, and \emph{generally realizable} from a variety of
standard number-theoretic assumptions, including the decisional
Diffie-Hellman assumption, the quadratic residuosity assumption, and
\emph{worst-case} lattice assumptions.
Our OT protocols are round-optimal (one message each way), quite
efficient in computation and communication, and can use a single
common string for an unbounded number of executions. Furthermore, the
protocols can provide \emph{statistical} security to either the sender
or receiver, simply by changing the distribution of the common string.
For certain instantiations of the protocol, even a common
\emph{random} string suffices.
Our key technical contribution is a simple abstraction that we call a
\emph{dual-mode} cryptosystem. We implement dual-mode cryptosystems
by taking a unified view of several cryptosystems that have what we
call ``messy'' public keys, whose defining property is that a
ciphertext encrypted under such a key carries \emph{no information}
(statistically) about the encrypted message.
As a contribution of independent interest, we also provide a multi-bit
version of Regev's lattice-based cryptosystem (STOC 2005) whose time
and space efficiency are improved by a linear factor in the security
parameter $n$. The amortized encryption and decryption time is only
$\tilde{O}(n)$ bit operations per message bit, and the ciphertext
expansion can be made as small as a constant; the public key size and
underlying lattice assumption remain essentially the same.
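To make the starting point concrete, here is a toy sketch of Regev's basic single-bit encryption scheme; the multi-bit, amortized version described above packs messages more densely and is not shown. All parameters are illustrative and far too small for security:

```python
import random

def keygen(n=8, m=40, q=3329):
    """LWE key generation: public key is (A, b = A*s + e) with small error e."""
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(a * x for a, x in zip(row, s)) + random.randint(-2, 2)) % q
         for row in A]
    return s, (A, b, q)

def enc_bit(pk, bit):
    """Encrypt one bit as a random subset-sum of public-key rows."""
    A, b, q = pk
    S = [i for i in range(len(A)) if random.random() < 0.5]
    u = [sum(A[i][j] for i in S) % q for j in range(len(A[0]))]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def dec_bit(s, pk, ct):
    """v - <u, s> is close to 0 for bit 0 and close to q/2 for bit 1."""
    _, _, q = pk
    u, v = ct
    d = (v - sum(a * x for a, x in zip(u, s))) % q
    return int(q // 4 < d < 3 * q // 4)
```

With errors bounded by 2 and at most 40 rows summed, the accumulated error stays below q/4, so decryption is always correct at these toy parameters.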

We describe obfuscation schemes for matrix-product branching programs that are purely algebraic and employ matrix groups and tensor algebra over a finite field. In contrast to the obfuscation schemes of Garg et al. (SICOMP 2016), which were based on multilinear maps, these schemes do not use noisy encodings. We prove that there is no efficient attack on our scheme based on the re-linearization techniques of Kipnis and Shamir (CRYPTO '99) or their generalization, the XL methodology (Courtois et al., EUROCRYPT 2000). We also provide analysis supporting the claim that general Gröbner-basis computation attacks will be inefficient. In a generic colored matrix model, our construction yields a virtual-black-box obfuscator for NC$^1$ circuits. We also provide cryptanalysis based on computing tangent spaces of the underlying algebraic sets.
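For readers unfamiliar with matrix-product branching programs, the following toy sketch shows plaintext evaluation: each step multiplies in one of two matrices selected by an input bit, and the final product determines the output. An obfuscator would additionally randomize and encode these matrices, which is not shown; the XOR example and all names are illustrative:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def eval_mbp(bits, steps, inp_pos):
    """Evaluate a matrix-product branching program: at step k, multiply in
    M0 or M1 depending on the input bit read at position inp_pos[k]."""
    n = len(steps[0][0])
    prod = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for k, (M0, M1) in enumerate(steps):
        prod = mat_mul(prod, M1 if bits[inp_pos[k]] else M0)
    return prod

# Toy program computing XOR with 2x2 permutation matrices: the product is
# the identity iff an even number of swaps occurred.
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
steps = [(I, X), (I, X)]          # step k reads input bit k

def mbp_xor(a, b):
    return int(eval_mbp([a, b], steps, [0, 1]) != I)
```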

In this note we describe some general-purpose, high-efficiency elliptic curves tailored for security levels beyond $2^{128}$. For completeness, we also include legacy-level curves at standard security levels. The choice of curves was made to facilitate state-of-the-art implementation techniques.

Blocking microarchitectural (digital) side channels is one of the most pressing challenges in hardware security today. Recently, there has been a surge of effort to block these leakages by writing programs data obliviously. In this model, programs are written to avoid placing sensitive data-dependent pressure on shared resources. Despite these efforts, however, running data oblivious programs on modern machines today is both insecure and slow. First, writing programs obliviously assumes that certain instructions in today's ISAs will not leak privacy, whereas today's ISAs and hardware provide no such guarantees. Second, writing programs to avoid data-dependent behavior inherently incurs high performance overhead.
This paper tackles both the security and performance aspects of this problem by proposing a Data Oblivious ISA extension (OISA). On the security side, we present ISA design principles to block microarchitectural side channels, and embody these ideas in a concrete ISA capable of safely executing existing data oblivious programs. On the performance side, we design the OISA with support for efficient memory oblivious computation, and with safety features that allow modern hardware optimizations, e.g., out-of-order speculative execution, to remain enabled in the common case.
We provide a complete hardware prototype of our ideas, built on top of the RISC-V out-of-order, speculative BOOM processor, and prove that the OISA can provide the advertised security through a formal analysis of an abstract BOOM-style machine. We evaluate area overhead of hardware mechanisms needed to support our prototype, and provide performance experiments showing how the OISA speeds up a variety of existing data oblivious codes (including ``constant time'' cryptography and memory oblivious data structures), in addition to improving their security and portability.

We investigate the security properties of the three deterministic random bit generator (DRBG) mechanisms in the NIST SP 800-90A standard [2]. This standard received a considerable amount of negative attention due to the controversy surrounding the now-retracted DualEC-DRBG, which was included in earlier versions. Perhaps because of the attention paid to the DualEC, the other algorithms in the standard have received surprisingly patchy analysis to date, despite widespread deployment. This paper addresses a number of these gaps in analysis, with a particular focus on HASH-DRBG and HMAC-DRBG. We uncover a mix of positive and less positive results. On the positive side, we prove (with a caveat) the robustness [16] of HASH-DRBG and HMAC-DRBG in the random oracle model (ROM). Regarding the caveat, we show that if an optional input is omitted, then, contrary to claims in the standard, HMAC-DRBG does not even achieve the (weaker) property of forward security. We also conduct a more informal and practice-oriented exploration of the flexibility in implementation choices permitted by the standard. Specifically, we argue that these DRBGs have the property that partial state leakage may lead security to break down in unexpected ways. We highlight implementation choices allowed by the overly flexible standard that exacerbate both the likelihood and the impact of such attacks. While our attacks are theoretical, an analysis of two open-source implementations of CTR-DRBG shows that potentially problematic implementation choices are made in the real world.
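To make the role of the optional additional input concrete, here is a toy sketch of HMAC-DRBG (with SHA-256) following the update/generate structure of SP 800-90A. It omits reseeding, reseed counters, and length checks, and is an illustration only, not a vetted implementation:

```python
import hmac
import hashlib

class HmacDrbg:
    """Toy HMAC-DRBG over SHA-256; state is the pair (K, V)."""

    def __init__(self, entropy, personalization=b""):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(entropy + personalization)

    def _hmac(self, data):
        return hmac.new(self.K, data, hashlib.sha256).digest()

    def _update(self, data=b""):
        # SP 800-90A HMAC_DRBG_Update: second pass only if data is provided.
        self.K = self._hmac(self.V + b"\x00" + data)
        self.V = self._hmac(self.V)
        if data:
            self.K = self._hmac(self.V + b"\x01" + data)
            self.V = self._hmac(self.V)

    def generate(self, n, additional_input=b""):
        if additional_input:            # the optional input discussed above
            self._update(additional_input)
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.V)
            out += self.V
        self._update(additional_input)  # state refresh after each request
        return out[:n]
```

The `additional_input` parameter is exactly the optional input whose omission is at issue: when callers never supply it, the state evolves purely deterministically between reseeds.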

Bitcoin, the most popular blockchain system, does not scale even under very optimistic assumptions. The Lightning Network, a layer on top of Bitcoin composed of one-to-one lightning channels, makes it scale to up to 105 million users.
Recently, Duplex Micropayment Channel (DMC) factories have been proposed, based on opening multiple one-to-one payment channels at once. DMC factories rely on time-locks to update and close their channels. This mechanism yields situations in which users' funds are time-locked for periods that grow with the lifetime of the factory and the number of users, making DMC factories inapplicable in real-life scenarios.
In this paper, we propose the first channel factory construction, the Lightning Factory, that offers a constant collateral cost, independent of the lifetime of the channel and the number of members of the factory.
We compare our proposed design with DMC factories and obtain a worst-case collateral cost, incurred when malicious users use the factory, that is better by a factor of more than 3000. The message complexity of our factory is $n$, whereas DMC factories need $n^2$ messages, where $n$ is the number of users. Moreover, our factory supports an unbounded number of updates, while in DMC factories the number of updates is bounded by the initial time-lock.
Finally, we discuss why our Lightning Factory requires BNN, a non-interactive aggregate signature scheme, and compare it with the Schnorr and ECDSA schemes used in Bitcoin and Duplex Micropayment Channels.

RC4 is one of the most widely used stream ciphers, so it is important that it run cost-effectively, with minimal encryption time; in other words, it should deliver high throughput. In this paper, a mechanism is proposed to improve the throughput of the RC4 algorithm on multicore processors using multithreading. The proposed mechanism does not parallelize RC4 itself; instead, it introduces a way to use multithreading for encryption when the plaintext is in the form of a text file. In this research, the source code was written in Java (JDK version 1.6.0_21) in Windows environments. Experiments to analyze the throughput were conducted separately on an Intel P4 machine (OS: Windows XP), a Core 2 Duo machine (OS: Windows XP), and a Core i3 machine (OS: Windows 7).
Outcome of the research: higher throughput of the RC4 algorithm can be achieved on multicore processors using the proposed mechanism, demonstrating that multithreading can be used effectively for encryption on multicore machines.
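For reference, the cipher whose throughput is being measured is small enough to state in full; the paper's multithreading mechanism itself is not reproduced here. A textbook sketch of RC4 (in Python rather than the paper's Java, for brevity):

```python
def rc4(key, data):
    """Textbook RC4: key-scheduling (KSA) followed by keystream
    generation (PRGA); encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for byte in data:                         # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because each keystream byte depends on the evolving state `S`, the keystream itself is inherently sequential, which is why the paper parallelizes across data rather than within the cipher.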

De Bruijn sequences are a class of nonlinear recurring sequences that have wide applications in cryptography and modern communication systems. One main method for constructing them is to join the cycles of a feedback shift register (FSR) into a full cycle, which is called the cycle joining method. Jansen et al. (IEEE Trans. Inf. Theory, 1991) proposed an algorithm for joining the cycles of an arbitrary FSR. This classical algorithm is further studied in this paper. Motivated by their work, we propose a new algorithm for joining cycles that doubles the efficiency of the classical cycle joining algorithm. Since both algorithms need FSRs that generate only short cycles, we also propose efficient ways to construct short-cycle FSRs. These FSRs are nonlinear and easy to obtain. As a result, a large number of de Bruijn sequences are constructed from them, and we explicitly determine the size of this family of de Bruijn sequences. In addition, we show a property of the pure circulating register that is important for searching for short-cycle FSRs.
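As a point of comparison, binary de Bruijn sequences can also be generated by the classical greedy "prefer-one" method, sketched below. This is not the FSR cycle-joining algorithm studied in the paper, but it illustrates the objects being constructed: a cyclic sequence of length $2^n$ in which every $n$-bit string appears exactly once as a window.

```python
def de_bruijn_prefer_one(n):
    """Greedy 'prefer-one' construction of a binary de Bruijn sequence of
    order n: repeatedly append 1 if the resulting n-bit window is new,
    else 0, starting from the all-zero state."""
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        window = seq[-(n - 1):] if n > 1 else []
        for bit in (1, 0):               # prefer appending a 1
            state = tuple(window + [bit])
            if state not in seen:
                seen.add(state)
                seq.append(bit)
                break
        else:
            break                        # no new window: sequence complete
    return seq[: 2 ** n]                 # cyclic sequence of length 2^n
```

For example, order 3 yields 00011101, whose eight cyclic windows run through all eight 3-bit strings.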

In a seminal paper, Dolev et al. (STOC'91) introduced the notion of non-malleable encryption (NM-CPA). This notion is very intriguing since it suffices for many applications of chosen-ciphertext secure encryption (IND-CCA), and, yet, can be generically built from semantically secure (IND-CPA) encryption, as was shown in the seminal works by Pass et al. (CRYPTO'06) and by Choi et al. (TCC'08), the latter of which provided a black-box construction. In this paper we investigate three questions related to NM-CPA security:
- Can the rate of the construction by Choi et al. of NM-CPA from IND-CPA be improved?
- Is it possible to achieve multi-bit NM-CPA security more efficiently from a single-bit NM-CPA scheme than from IND-CPA?
- Is there a notion stronger than NM-CPA that has natural applications and can be achieved from IND-CPA security?
We answer all three questions in the positive. First, we improve the rate in the construction of Choi et al. by a factor O(k), where k is the security parameter. Still, encrypting a message of size O(k) would require ciphertext and keys of size O(k^2) times that of the IND-CPA scheme, even in our improved scheme. Therefore, we show a more efficient domain extension technique for building a k-bit NM-CPA scheme from a single-bit NM-CPA scheme with keys and ciphertext of size O(k) times that of the NM-CPA one-bit scheme. To achieve our goal, we define and construct a novel type of continuous non-malleable code (NMC), called secret-state NMC, as we show that standard continuous NMCs are not enough for the natural "encode-then-encrypt-bit-by-bit" approach to work.
Finally, we introduce a new security notion for public-key encryption (PKE) that we dub non-malleability under (chosen-ciphertext) self-destruct attacks (NM-SDA). After showing that NM-SDA is a strict strengthening of NM-CPA and allows for more applications, we nevertheless show that both of our results---(faster) construction from IND-CPA and domain extension from one-bit scheme---also hold for our stronger NM-SDA security. In particular, the notions of IND-CPA, NM-CPA, and NM-SDA security are all equivalent, lying (plausibly, strictly?) below IND-CCA security.

Card-based cryptography makes it possible to perform secure multiparty computation in simple and elegant ways, using only a deck of playing cards, as first proposed by den Boer (EUROCRYPT 1989). Many protocols to date come with an “honest-but-curious” disclaimer. However, a central goal of modern cryptography is to provide security also in the presence of malicious attackers. In the few places where authors argue for the active security of their protocols, this is done ad hoc and restricted to the concrete operations needed, often even using additional physical tools, such as envelopes or sliding cover boxes.
This paper provides the first systematic approach to active security in card-based protocols. We show how a large and natural class of shuffling operations, namely those which (opaquely) permute the cards according to a uniform distribution on a permutation group, can be implemented using only a linear number of helping cards. This ensures that any (information-theoretically) secure cryptographic protocol in the abstract model of Mizuki and Shizuya (Int. J. Inf. Secur., 2014), restricted to this natural class of shuffles, can be realized in an actively secure fashion. These shuffles already allow for securely computing any circuit (Mizuki and Sone, FAW 2009). In the process, we develop a more concrete model for card-based cryptographic protocols with two players, which we believe to be of independent interest.

Hard learning problems are central topics in recent cryptographic research. Many cryptographic primitives relate their security to difficult problems in lattices, such as the shortest vector problem, and such schemes include the possibility of decryption errors with some very small probability. In this paper we propose and discuss a generic attack for secret key recovery based on generating decryption errors. In a standard PKC setting, the attack proceeds in three phases: first, a precomputation phase in which special messages and their corresponding error vectors are generated; second, the messages are submitted for decryption and some decryption errors are observed; finally, a statistical analysis of the messages/errors causing the decryption errors reveals the secret key. The idea is that, conditioned on certain secret keys, the decryption error probability is significantly higher than the average case used in the error probability estimation. The attack is demonstrated in detail on one NIST Post-Quantum Proposal, ss-ntru-pke, which is attacked with complexity below the claimed security level.

This paper presents a lightweight, sponge-based authenticated encryption (AE) family called Beetle.
When instantiated with the PHOTON permutation from CRYPTO 2011, Beetle achieves the smallest footprint, consuming only slightly more than 600 LUTs on FPGA while maintaining 64-bit security. This is significantly smaller than all known lightweight AE candidates, which consume more than 1,000 LUTs, including the latest COFB-AES from CHES~2017. To realize such a small hardware implementation, we equip Beetle with an ``extremely tight'' security bound. The trick is to use combined feedback to create a difference between the ciphertext block and the rate part of the next feedback (in a traditional sponge these two values are the same). We are then able to show that Beetle is provably secure up to $\min \{c-\log r, {b/2}, r\}$ bits, where $b$ is the permutation size and $r$ and $c$ are parameters called the rate and capacity, respectively. The tight security bound allows us to select the smallest security parameters, which in turn yield the smallest footprint.

We present NTTRU -- an IND-CCA2 secure NTRU-based key encapsulation scheme that uses the number theoretic transform (NTT) over the cyclotomic ring $Z_{7681}[X]/(X^{768}-X^{384}+1)$ and produces public keys and ciphertexts of approximately $1.25$ KB at the $128$-bit security level. The number of cycles on a Skylake CPU of our constant-time AVX2 implementation of the scheme for key generation, encapsulation and decapsulation is approximately $6.4$K, $6.1$K, and $7.9$K, which is more than 30X, 5X, and 8X faster than these respective procedures in the NTRU schemes that were submitted to the NIST post-quantum standardization process. These running times are also, by a large margin, smaller than those for all the other schemes in the NIST process. We also give a simple transformation that allows one to provably deal with small decryption errors in OW-CPA encryption schemes (such as NTRU) when using them to construct an IND-CCA2 key encapsulation.
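As background, the NTT is a discrete Fourier transform over a finite field, which turns polynomial multiplication into cheap pointwise multiplication. The toy sketch below performs cyclic convolution modulo $X^8-1$ over $Z_{7681}$; the scheme itself works with the trinomial $X^{768}-X^{384}+1$, which requires a somewhat different (truncated) transform not shown here:

```python
def ntt(a, root, p):
    """Recursive radix-2 NTT: evaluates polynomial a at the powers of root,
    a primitive n-th root of unity mod p (n a power of two)."""
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], root * root % p, p)
    odd = ntt(a[1::2], root * root % p, p)
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = w * odd[k] % p
        out[k] = (even[k] + t) % p
        out[k + n // 2] = (even[k] - t) % p
        w = w * root % p
    return out

def intt(a, root, p):
    """Inverse NTT: transform with root^-1, then scale by n^-1 (Fermat)."""
    n = len(a)
    out = ntt(a, pow(root, p - 2, p), p)
    n_inv = pow(n, p - 2, p)
    return [x * n_inv % p for x in out]

def find_root(n, p):
    # Brute-force a primitive n-th root of unity mod p (fine for toy sizes).
    for g in range(2, p):
        if pow(g, n, p) == 1 and pow(g, n // 2, p) != 1:
            return g

p, n = 7681, 8
w = find_root(n, p)
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
# Pointwise product in the NTT domain = cyclic convolution mod (X^8 - 1, p).
c = intt([x * y % p for x, y in zip(ntt(a, w, p), ntt(b, w, p))], w, p)
```

The modulus 7681 is NTT-friendly because $7680 = 2^9 \cdot 15$, so the field contains the power-of-two roots of unity the transform needs.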

A verifiable random function (VRF) is a pseudorandom function, where outputs can be publicly verified. That is, given an output value together with a proof, one can check that the function was indeed correctly evaluated on the corresponding input. At the same time, the output of the function is computationally indistinguishable from random for all non-queried inputs.
We present the first construction of a VRF which meets the following properties at once: it supports an exponential-sized input space, it achieves full adaptive security based on a non-interactive constant-size assumption, and its proofs consist of only a logarithmic number of group elements for inputs of arbitrary polynomial length.
Our construction can be instantiated in symmetric bilinear groups with security based on the decision linear assumption.
We build on the work of Hofheinz and Jager (TCC 2016), who were the first to construct a verifiable random function with security based on a non-interactive constant-size assumption. Basically, their VRF is a matrix product in the exponent, where each matrix is chosen according to one bit of the input. In order to allow verification given a symmetric bilinear map, a proof consists of all intermediary results. This entails a proof size of $\Omega(L)$ group elements, where $L$ is the bit-length of the input.
Our key technique, which we call hunting and gathering, allows us to break this barrier by rearranging the function, which, combined with the partitioning techniques of Bitansky (TCC 2017), results in a proof size of $\ell$ group elements for arbitrary $\ell \in \omega(1)$.

We propose an authenticated encryption scheme for the VMPC-R stream cipher. VMPC-R is an RC4-like algorithm proposed in 2013. It was created in a challenge to find a bias-free cipher within the RC4 design scope and to the best of our knowledge no security weakness in it has been published to date. The contribution of this paper is an algorithm to compute Message Authentication Codes (MACs) along with VMPC-R encryption. We also propose a simple method of transforming the MAC computation algorithm into a hash function.

Protean Signatures (PS), recently introduced by Krenn et al. (CANS '18), allow a semi-trusted third party, named the sanitizer, to modify a signed message in a controlled way.
The sanitizer can edit signer-chosen parts to arbitrary bitstrings and redact admissible parts, which are also chosen by the signer. Thus, PSs generalize both redactable signatures (RSS) and sanitizable signatures (SSS) into a single notion.
However, the current definition of invisibility does not prevent an outsider from deciding which parts of a message are redactable; only the parts that can be edited are hidden. This negatively impacts the privacy guarantees provided by the state-of-the-art definition.
We extend PSs to be fully invisible.
This strengthened notion guarantees that an outsider can neither decide which parts of a message can be edited nor which
parts can be redacted. To achieve our goal, we introduce the new notions of Invisible RSSs and Invisible Non-Accountable SSSs (SSS'), along with a consolidated framework for aggregate signatures.
Using those building blocks, our resulting construction is significantly
more efficient than the original scheme by Krenn et al., which we demonstrate in a prototypical implementation.

Identity-based broadcast encryption (IBBE) is an effective method to protect data security and privacy in multi-receiver scenarios, and it can make broadcast encryption more practical. This paper further expands the study of scalable revocation in the IBBE setting, where a key authority periodically releases key-update material in such a way that only non-revoked users can update their decryption keys. Following the binary-tree data structure approach, a concrete revocable IBBE scheme is proposed using asymmetric pairings over prime-order bilinear groups. Moreover, the scheme withstands decryption key exposure and is proven semi-adaptively secure under chosen-plaintext attacks in the standard model, by reduction to static complexity assumptions. In particular, the proposed scheme is very efficient in terms of both computation costs and communication bandwidth, as the ciphertext size is constant, regardless of the number of recipients. To demonstrate its practicality, the scheme is further implemented in Charm, a framework for rapid prototyping of cryptographic primitives.

This paper presents a very practical key recovery attack on Speck32/64 reduced to 11 rounds based on a novel type of differential distinguisher using machine learning. These distinguishers exceed distinguishers based on the entire differential distribution table of Speck32/64 in accuracy, specificity and sensitivity. We show that they obtain significant gain from features of the output distribution that are invisible to the differential distribution table. The key recovery attack has been completely verified empirically and has an average runtime of approximately three minutes on a desktop computer with a fast graphics card or about 30 minutes on the same machine when not using the graphics card. This corresponds to roughly 41 bits of remaining security for 11-round Speck32/64, which is a substantial improvement over previous literature. The average data complexity of our attack is slightly lower than the best previous attack on the same number of rounds.
While our attack is based on a known input difference taken from the literature, we also show that neural networks can be used to rapidly (within a matter of minutes on our machine) find good input differences without using prior human cryptanalysis.
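For reference, the attacked cipher is compact enough to state in full. Below is a minimal sketch of Speck32/64 encryption following the public specification; the neural distinguishers and key-recovery machinery are not reproduced here:

```python
MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck32_64_expand(key):
    """Key schedule: key is the word tuple (l2, l1, l0, k0), producing the
    22 round keys via the cipher's own round function."""
    l = [key[2], key[1], key[0]]
    ks = [key[3]]
    for i in range(21):
        l.append((ks[i] + ror(l[i], 7)) & MASK ^ i)
        ks.append(rol(ks[i], 2) ^ l[i + 3])
    return ks

def speck32_64_encrypt(pt, key):
    """Full 22-round encryption; rotation constants are alpha=7, beta=2."""
    x, y = pt
    for k in speck32_64_expand(key):
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
    return x, y
```

An 11-round attack as in the paper would run only the first 11 of these rounds; the ARX round function (modular addition, rotation, XOR) is the sole source of nonlinearity.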