We revisit the factoring with known bits problem for general RSA moduli of the form $N=p^r q^s$ with $r,s\ge 1$, where the two primes $p$ and $q$ have the same bit-size. These moduli include $pq$, $p^r q$ for $r>1$, and $p^r q^s$ for $r,s>1$, which are used in the standard RSA scheme and other RSA-type variants. Previous works obtained their results mainly by solving univariate modular equations.
In contrast, in this paper we investigate how to efficiently factor $N=p^r q^s$, given leakage of the primes, via the integer method based on lattice techniques. More precisely, factoring general RSA moduli with known most significant bits (MSBs) of the primes can be reduced to solving bivariate integer equations, an approach first proposed by Coppersmith to factor $N=pq$ with known high bits. Our results provide a unified solution to the factoring with known bits problem for general RSA moduli. Furthermore, we show that the integer method yields an improved factoring attack for particular RSA moduli such as $p^3 q^2$ and $p^5 q^3$.
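
To make the problem setting concrete, here is a toy brute-force sketch of factoring with known MSBs (a hypothetical helper for illustration only; the lattice-based methods discussed above recover far larger unknown portions than exhaustive search could):

```python
# Toy "factoring with known bits": given the most significant bits of p,
# recover the factorization of N = p*q by enumerating the unknown low
# bits. Coppersmith-style lattice methods succeed even when roughly half
# of p is unknown; brute force only handles a handful of unknown bits.
def factor_with_known_msbs(N, p_msbs, unknown_bits):
    base = p_msbs << unknown_bits          # known high part, low bits zeroed
    for low in range(1 << unknown_bits):   # enumerate the unknown low bits
        p = base + low
        if p > 1 and N % p == 0:
            return p, N // p
    return None

p, q = 1000003, 1299709                    # toy primes of similar size
N = p * q
# the attacker knows all but the lowest 8 bits of p
recovered = factor_with_known_msbs(N, p >> 8, 8)
```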

Post-quantum Lattice-Based Cryptography (LBC) schemes are gaining increasing attention in
traditional and emerging security applications, such as encryption, digital signatures, key exchange, and
homomorphic encryption, to address the security needs of both short- and long-lived devices — thanks to their
foundational properties and ease of implementation. However, LBC schemes incur higher computational demands
than classic schemes (e.g., DSA, ECDSA) at equivalent security levels, making domain-specific acceleration a
viable option for improving security and favoring early adoption of LBC schemes by the semiconductor industry.
In this paper, we present a workflow to explore the design space of domain-specific accelerators for LBC
schemes, to target a diverse set of host devices, from resource-constrained IoT devices to high-performance
computing platforms. We present design exploration results on workloads executing NewHope and BLISSB-I
schemes accelerated by our domain-specific accelerators, with respect to a baseline without acceleration.
We show that achieved performance with acceleration makes the execution of NewHope and BLISSB-I
comparable to classic key exchange and digital signature schemes while retaining some form of general-purpose
programmability. In addition to 44% and 67% improvements in energy-delay product (EDP), we
improve the performance (cycles) of the sign and verify steps of the BLISSB-I scheme by 24% and 47%, respectively.
Performance of the server and client sides of the NewHope key exchange improves by 37%
and 33% (52% and 48% in EDP), demonstrating the utility of the design space exploration framework.

The amount of digital data that requires long-term protection of integrity, authenticity, and confidentiality is growing rapidly. Examples include electronic health records, genome data, and tax data. In this paper we present the secure storage system LINCOS, which provides protection of integrity, authenticity, and confidentiality in the long term, i.e., for an indefinite time period. It is the first such system. It uses the long-term integrity scheme COPRIS, which is also presented here and is the first such scheme that does not leak any information about the protected data. COPRIS uses information-theoretically hiding commitments for confidentiality-preserving integrity and authenticity protection. LINCOS uses proactive secret sharing for confidential storage of secret data. We also present implementations of COPRIS and LINCOS. A special feature of our LINCOS implementation is the use of quantum key distribution and one-time pad encryption for information-theoretically private channels within the proactive secret sharing protocol. The technological platform for this is the Tokyo QKD Network, one of the world's most advanced networks of its kind. Our experimental evaluation establishes the feasibility of LINCOS and shows that, in view of the expected progress in quantum communication technology, LINCOS is a promising solution for protecting very sensitive data in the cloud.
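
A minimal sketch of two information-theoretic ingredients named above — XOR-based additive secret sharing with a proactive refresh (toy code, not the actual COPRIS/LINCOS protocols):

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Additive (XOR) secret sharing: each share alone is uniformly random;
# all n shares together reconstruct the secret.
def share(secret, n=3):
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def reconstruct(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor(out, s)
    return out

# Proactive refresh: add random masks that XOR to zero, so the secret is
# unchanged but shares leaked in different periods are useless together.
# (In LINCOS the refresh messages travel over information-theoretically
# private channels realized with QKD keys and one-time pad encryption.)
def refresh(shares):
    masks = [secrets.token_bytes(len(shares[0])) for _ in range(len(shares) - 1)]
    masks.append(reconstruct(masks))  # final mask cancels the others
    return [xor(s, m) for s, m in zip(shares, masks)]
```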

By design, existing (pre-processing)
zk-SNARKs embed a secret trapdoor in a relation-dependent common reference string (CRS). The trapdoor is exploited
by a (hypothetical) simulator to prove the scheme is zero knowledge, and the secret-dependent
structure facilitates a linear-size CRS and linear-time prover computation.
If known by a real party, however, the trapdoor can be used to subvert the
security of the system. The structured CRS that makes zk-SNARKs practical
also makes deploying zk-SNARKs problematic, as it is difficult to argue why the
trapdoor would not be available to the entity responsible for generating the
CRS. Moreover, for pre-processing zk-SNARKs a new trusted CRS needs to be
computed every time the relation is changed.
In this paper, we address both issues by proposing a model where a number of users can update a universal CRS. The updatable CRS model guarantees security if at least one of the users updating the CRS is honest.
We provide both a negative result, by showing that
zk-SNARKs with private secret-dependent polynomials in the CRS cannot be
updatable, and a positive result by constructing a zk-SNARK based on a CRS consisting only of secret-dependent monomials.
The CRS is of
quadratic size, is updatable, and is universal in the sense that it can be specialized into one or more relation-dependent
CRS of linear size with linear-time prover computation.

In both information-theoretic and computationally-secure Multi-Party Computation (MPC) protocols the parties are usually assumed to be connected by a complete network of secure or authenticated channels, respectively. Taking inspiration from a recent, highly efficient, three-party honest-majority computationally-secure MPC protocol of Araki et al., we show how to perform the most costly part of a computationally secure MPC protocol for an arbitrary $Q_2$ access structure over an incomplete network. We present both passive and actively secure (with abort) variants of our protocol. In all cases we require fewer communication channels for secure multiplication than Maurer's ``MPC-Made-Simple'' protocol, at the expense of requiring pre-shared secret keys for Pseudo-Random Functions (PRFs).
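
A minimal sketch of the 2-out-of-3 replicated secret sharing underlying the Araki et al. protocol (toy code; secure multiplication, which requires communication and the pre-shared PRF keys mentioned above, is omitted):

```python
import random

P = 2**61 - 1  # prime modulus for the toy arithmetic

# Replicated (2-out-of-3) sharing: write x = x1 + x2 + x3 mod P and give
# party i the pair (x_i, x_{i+1 mod 3}). Any single pair reveals nothing;
# any two parties jointly hold all three summands.
def share(x):
    x1, x2 = random.randrange(P), random.randrange(P)
    x3 = (x - x1 - x2) % P
    parts = [x1, x2, x3]
    return [(parts[i], parts[(i + 1) % 3]) for i in range(3)]

def reconstruct(share_i, share_j, i, j):
    parts = {}
    parts[i], parts[(i + 1) % 3] = share_i
    parts[j], parts[(j + 1) % 3] = share_j   # two parties see all summands
    return sum(parts.values()) % P

# Addition of shared values is purely local: no communication channels
# are used at all.
def add(share_a, share_b):
    return tuple((a + b) % P for a, b in zip(share_a, share_b))
```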

We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.
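
A natural delay-function candidate in this setting is iterated modular squaring; the sketch below shows only evaluation (the succinct proofs that make verification exponentially cheaper than evaluation — the defining feature of a VDF — are omitted):

```python
# T sequential squarings: y = x^(2^T) mod N. Each squaring depends on
# the previous result, so evaluation takes T sequential steps even given
# massive parallelism. A full VDF pairs this with a short, publicly
# verifiable proof of correct evaluation.
def vdf_eval(x, T, N):
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

N = 1000003 * 1299709   # toy modulus; real schemes use a group of
                        # unknown order, e.g. a 2048-bit RSA modulus
y = vdf_eval(5, 1000, N)
```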

Private Set-Intersection (PSI) is one of the most popular and practically relevant secure two-party computation (2PC) tasks. Therefore, designing special-purpose PSI protocols (which are more efficient than generic 2PC solutions) is a very active line of research. In particular, a recent line of work has proposed PSI protocols based on oblivious transfer (OT) which, thanks to recent advances in OT-extension techniques, is nowadays a very cheap cryptographic building block.
Unfortunately, these protocols cannot be plugged into larger 2PC applications since in these protocols one party (by design) learns the output of the intersection. Therefore, it is not possible to perform secure post-processing of the output of the PSI protocol.
In this paper we propose a novel and efficient OT-based PSI protocol that produces an "encrypted" output that can therefore be later used as an input to other 2PC protocols. In particular, the protocol can be used in combination with all common approaches to 2PC including garbled circuits, secret sharing and homomorphic encryption. Thus, our protocol can be combined with the right 2PC techniques to achieve more efficient protocols for computations of the form $z=f(X\cap Y)$ for arbitrary functions $f$.

Recent developments in multi-party computation (MPC) and fully homomorphic encryption (FHE) have promoted the design and analysis of symmetric cryptographic schemes that minimize multiplications in one way or another. In this paper, we propose Rasta, a design strategy for symmetric encryption that has AND-depth $d$ and at the same time needs only $d$ ANDs per encrypted bit. Even for very low values of $d$ between 2 and 6 we can give strong evidence that attacks may not exist. This contributes to a better understanding of the limits of what concrete symmetric-key constructions can theoretically achieve with respect to AND-related metrics, and is, to the best of our knowledge, the first attempt to minimize both metrics simultaneously. Furthermore, we give evidence that for choices of $d$ between 4 and 6 the resulting implementation properties may well be competitive, by testing our construction in the use-case of removing the large ciphertext expansion when using the BGV scheme.

When a group of individuals and organizations wish to compute a stable matching---for example, when medical students are matched to medical residency programs---they often outsource the computation to a trusted arbiter in order to preserve the privacy of participants' preferences. Secure multi-party computation offers the possibility of private matching processes that do not rely on any common trusted third party. However, stable matching algorithms have previously been considered infeasible for execution in a secure multi-party context on non-trivial inputs because they are computationally intensive and involve complex data-dependent memory access patterns.
We adapt the classic Gale-Shapley algorithm for use in such a context, and show experimentally that our modifications yield a lower asymptotic complexity and more than an order of magnitude in practical cost improvement over previous techniques. Our main improvements stem from designing new oblivious data structures that exploit the properties of
the matching algorithms. We apply a similar strategy to scale the Roth-Peranson instability chaining algorithm, currently in use by the National Resident Matching Program. The resulting protocol is efficient enough to be useful at the scale required for matching medical residents nationwide, taking just over 18 hours to complete an execution simulating the 2016 national resident match with more than 35,000 participants and 30,000 residency slots.
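
For reference, the cleartext deferred-acceptance algorithm that the protocol adapts can be sketched as follows (the oblivious data structures and secure-computation machinery that constitute the actual contribution are not shown):

```python
# Textbook Gale-Shapley deferred acceptance. Proposers work down their
# preference lists; each reviewer tentatively keeps the best proposer
# seen so far and releases the displaced one.
def gale_shapley(proposer_prefs, reviewer_prefs):
    n = len(proposer_prefs)
    # rank[r][p]: how reviewer r ranks proposer p (lower is better)
    rank = [{p: i for i, p in enumerate(prefs)} for prefs in reviewer_prefs]
    next_choice = [0] * n      # next preference index to try, per proposer
    match_of = [None] * n      # reviewer -> currently held proposer
    free = list(range(n))
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        cur = match_of[r]
        if cur is None:
            match_of[r] = p
        elif rank[r][p] < rank[r][cur]:
            match_of[r] = p
            free.append(cur)   # displaced proposer becomes free again
        else:
            free.append(p)     # rejected; p will try its next choice
    return {match_of[r]: r for r in range(n)}  # proposer -> reviewer
```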

Goldreich and Izsak (Theory of Computing, 2012) initiated the research on understanding the role of negations in circuits implementing cryptographic primitives, notably, considering one-way functions and pseudo-random generators. More recently, Guo, Malkin, Oliveira and Rosen (TCC, 2014) determined tight bounds on the minimum number of negations gates (i.e., negation complexity) of a wide variety of cryptographic primitives including pseudo-random functions, error-correcting codes, hardcore-predicates and randomness extractors.
We continue this line of work to establish the following results:
1. First, we determine tight lower bounds on the negation complexity of collision-resistant and target collision-resistant hash-function families.
2. Next, we examine the role of injectivity and surjectivity on the negation complexity of one-way functions. Here we show that,
a) Assuming the existence of one-way injections, there exists a monotone one-way injection. Furthermore, we complement our result by showing that, even in the worst-case, there cannot exist a monotone one-way injection with constant stretch.
b) Assuming the existence of one-way permutations, there exists a monotone one-way surjection.
3. Finally, we show that there exist list-decodable codes with monotone decoders.
In addition, we observe some interesting corollaries to our results.

Signcryption is a public-key cryptographic primitive, originally introduced by Zheng (Crypto '97), that allows parties to establish secure communication without the need of prior key agreement. Instead, a party registers its public key at a certificate authority (CA), and only needs to retrieve the public key of the intended partner from the CA before being able to protect the communication. Signcryption schemes provide both authenticity and confidentiality of sent messages and can offer a simpler interface to applications and better performance compared to generic compositions of signature and encryption schemes.
Although introduced two decades ago, the question of which security notions of signcryption are adequate in which applications has still not reached a fully satisfactory answer. To resolve this question, we conduct a constructive analysis of this public-key primitive. As in previous constructive studies of other important primitives, this treatment allows us to identify the natural goal that signcryption schemes should achieve and to formalize this goal in a composable framework. More specifically, we capture the goal of signcryption as a gracefully-degrading secure network, which is essentially a network of independent parties that allows secure communication between any two parties. However, when a party is compromised, its respective security guarantees are lost, while all guarantees for the remaining users remain unaffected. We show which security notions for signcryption are sufficient to construct this kind of secure network from a certificate authority (or key registration resource) and insecure communication. Our study not only unveils that it is the so-called insider-security notion that enables this construction, but also that a weaker version thereof would already be sufficient. This may be of interest in the context of practical signcryption schemes that do not achieve the stronger notions.
Last but not least, we observe that the graceful-degradation property is actually an essential feature of signcryption that stands out in comparison to alternative and more standard constructions that achieve secure communication from the same assumptions. This underlines the vital importance of the insider security notion for signcryption and strongly supports, in contrast to the initial belief, the recent trend to consider the insider security notion as the standard notion for signcryption.

The ransomware nightmare is taking over the internet, impacting common users, small businesses, and large ones alike. The interest and investment pushed into this market each month tell us a few things about the evolution of both the technical and the social-engineering sides, and about what to expect from them in the near future. In this paper we analyze how ransomware programs have developed over the last few years and how they were released into certain market segments throughout the deep web via RaaS, exploits, or spam, while learning from their own mistakes to bring profit to the next level. We also highlight some mistakes that were made which allowed recovery of the encrypted data, along with the ransomware authors' preference for specific encryption types, how they handle distribution, the silent agreement between ransomware, coin-miners, and botnets, and some edge cases of encryption which may prove to be exploitable in the near future.

In the problem of Byzantine agreement (BA), a set of n parties wishes to agree
on a value v by jointly running a distributed protocol. The protocol is deemed
secure if it achieves this goal in spite of a malicious adversary that
corrupts a certain fraction of the parties and can make them behave in
arbitrarily malicious ways. Since its first formalization by Lamport et al.
(TOPLAS `82), the problem of BA has been extensively studied in the literature
under many different assumptions. One common way to classify protocols for BA
is by their synchrony and network assumptions. For example, some protocols
offer resilience against up to a one-half fraction of corrupted parties by
assuming a synchronized, but possibly slow network, in which parties share a
global clock and messages are guaranteed to arrive after a given time D. By
comparison, other protocols achieve much higher efficiency and work without
these assumptions, but can tolerate only a one-third fraction of corrupted
parties. A natural question is whether it is possible to combine protocols
from these two regimes to achieve the ``best of both worlds'': protocols that
are both efficient and robust. In this work, we answer this question in the
affirmative. Concretely, we make the following contributions:
* We give the first generic compilers that combine BA protocols under
different network and synchrony assumptions and preserve both the efficiency
and robustness of their building blocks. Our constructions are simple and rely
solely on a secure signature scheme.
* We prove that our constructions achieve optimal corruption bounds.
* Finally, we give the first efficient protocol for (binary) asynchronous
Byzantine agreement (ABA) which tolerates adaptive corruptions and matches the
communication complexity of the best protocols in the static case.

Non-Malleable Codes (NMC) were introduced by Dziembowski, Pietrzak, and Wichs (ICS 2010) as a relaxation of error-correcting codes and error-detecting codes. Faust, Mukherjee, Nielsen, and Venturi (TCC 2014) introduced an even stronger notion, continuous non-malleable codes (CNMC), where security is achieved against continuous tampering of a single codeword without re-encoding. We construct an information-theoretically secure CNMC resilient to bit permutations and overwrites; this is the first continuous NMC constructed outside of the split-state model. In this work we also study relations between CNMC and parallel CCA commitments. We show that CNMC can be used to bootstrap a self-destruct parallel CCA bit commitment to a self-destruct parallel CCA string commitment, where self-destruct parallel CCA is a weak form of parallel CCA security. We can then remove the self-destruct limitation, obtaining a parallel CCA commitment assuming only one-way functions.

In recent years, some of the most popular online chat services such as iMessage and WhatsApp have deployed end-to-end encryption to mitigate some of the privacy risks to the transmitted messages. But facilitating end-to-end encryption requires a Public Key Infrastructure (PKI), so these services still require the service provider to maintain a centralized directory of public keys. A downside of this design is placing a lot of trust in the service provider; a malicious or compromised service provider can still intercept and read users' communication just by replacing the user's public key with one for which they know the corresponding secret.
A recent work by Melara et al. builds a system called CONIKS where the service provider is required to prove that it is returning a consistent public key for each user. This allows each user to monitor their own key and reduces some of the risks of placing so much trust in the service provider.
New systems [EthIKS,Catena] are already being built on CONIKS. While these systems are extremely relevant in practice, the security and privacy guarantees of these systems are still based on some ad-hoc analysis rather than on a rigorous foundation. In addition, without modular treatment, improving on the efficiency of these systems is challenging.
In this work, we formalize the security and privacy requirements of a verifiable key service for end-to-end communication in terms of the primitive called {\em Verifiable Key Directories} (VKD). Our abstraction captures the functionality of all three existing systems: CONIKS, EthIKS and Catena.
We quantify the leakage from these systems giving us a better understanding of their privacy in concrete terms. Finally, we give a VKD construction (with concrete efficiency analysis) which improves significantly on the existing ones in terms of privacy and efficiency. Our design modularly builds from another primitive that we define as append-only zero knowledge sets (aZKS) and from append-only Strong Accumulators. By providing modular constructions, we allow for the independent study of each of these building blocks: an improvement in any of them would directly result in an improved VKD construction. Our definition of aZKS generalizes the definition of the zero knowledge set for updates, which is a secondary contribution of this work, and can be of independent interest.
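
The consistency guarantee at the heart of such systems can be sketched with a plain Merkle tree: the provider commits to the entire user-to-key directory in a single root and serves membership proofs against it (a simplification; CONIKS-style systems and the aZKS primitive above use Merkle prefix trees with privacy-preserving leaves):

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

# Commit to the whole directory of user->key bindings with one root hash.
def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])               # duplicate odd tail
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Membership proof: sibling hashes on the path from a leaf to the root.
def merkle_proof(leaves, index):
    level = [H(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, is-right-child)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    h = H(leaf)
    for sib, is_right in proof:
        h = H(sib + h) if is_right else H(h + sib)
    return h == root

directory = [b"alice:pkA", b"bob:pkB", b"carol:pkC"]  # user:key bindings
root = merkle_root(directory)                          # published by provider
proof = merkle_proof(directory, 1)                     # proof for bob's key
```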

Non-malleable codes for the split-state model allow encoding a message into two parts such that arbitrary independent tampering on each part, followed by decoding of the modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, since otherwise generic attacks become possible.
In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure: whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e.,\ without any interaction), which makes it possible to avoid the self-destruct mechanism in some applications. Additionally, the refreshing procedure can be exploited to obtain security against continual leakage attacks. We give an abstract framework for building refreshable continuously non-malleable codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption.
Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, and in a model where refreshing of the memory happens only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fuijisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient read-only RAM programs. In comparison to other tamper-resilient RAM compilers, ours has several advantages, among which the fact that, in some cases, it does not rely on the self-destruct feature.

In this paper, we propose a new type of non-recursive Mastrovito multiplier for $GF(2^m)$ using an $n$-term Karatsuba algorithm (KA), where $GF(2^m)$ is defined by an irreducible trinomial $x^m+x^k+1$ with $m=nk$. We show that this type of trinomial, combined with the $n$-term KA, can fully exploit the spatial correlation of entries in the related Mastrovito product matrices and leads to a low-complexity architecture. The optimal parameter $n$ is studied further.
As the main contribution of this study, the lower bound of the space complexity of our proposal is about $O(\frac{m^2}{2}+m^{3/2})$. Meanwhile, the time complexity matches that of the best Karatsuba multiplier known to date. To the best of our knowledge, this is the first time a Karatsuba-based multiplier has reached such a space-complexity bound while maintaining a relatively low time delay.
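
The underlying Karatsuba recursion over $GF(2)[x]$ can be illustrated in software with bit-packed polynomials (a 2-term toy, not the paper's $n$-term hardware architecture; `clmul_school` is a hypothetical schoolbook helper used as the base case):

```python
# Carry-less (GF(2)[x]) multiplication: polynomial addition is XOR, so
# there are no carries.
def clmul_school(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# 2-term Karatsuba over GF(2)[x]: split each operand into high/low
# halves and combine three half-size products (instead of four) using
# XORs only.
def clmul_karatsuba(a, b, bits=64):
    if bits <= 8:
        return clmul_school(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = clmul_karatsuba(a0, b0, half)                 # a0*b0
    hi = clmul_karatsuba(a1, b1, half)                 # a1*b1
    mid = clmul_karatsuba(a0 ^ a1, b0 ^ b1, half) ^ lo ^ hi
    return (hi << bits) ^ (mid << half) ^ lo
```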

Time-lock encryption is a method to encrypt a message such that it can only be decrypted after a certain deadline has passed. We propose a novel time-lock encryption scheme whose main advantage over prior constructions is that even receivers with relatively weak computational resources should be able to decrypt immediately after the deadline, without any interaction with the sender, other receivers, or a trusted third party. We build our time-lock encryption on top of the new concept of computational reference clocks and an extractable witness encryption scheme, and we explain how to construct a computational reference clock based on Bitcoin. We show how to achieve a constant level of multilinearity for witness encryption by using SNARKs. We also propose a new construction of a witness encryption scheme which is of independent interest: our scheme, based on Subset-Sum, achieves extractable security without relying on obfuscation. The scheme employs multilinear maps of arbitrary order and is independent of the underlying multilinear-map implementation.

Universally composable protocols provide security even in highly complex environments like the Internet. Without setup assumptions, however, UC-secure realizations of cryptographic tasks are impossible. Tamper-proof hardware tokens, e.g. smart cards and USB tokens, can be used for this purpose. Apart from the fact that they are widely available, they are also cheap to manufacture and well understood.
Currently considered protocols, however, suffer from two major drawbacks that impede their practical realization:
- The functionality of the tokens is protocol-specific, i.e. each protocol requires a token functionality tailored to its need.
- Different protocols cannot reuse the same token even if they require the same functionality from the token, because this would render the protocols insecure in current models of tamper-proof hardware.
In this paper we address these problems. First and foremost, we propose formalizations of tamper-proof hardware as an untrusted and global setup assumption. Modeling the token as a global setup naturally allows reuse of the tokens across arbitrary protocols. As a versatile token functionality we choose a simple signature functionality, i.e., the tokens can be instantiated with currently available signature cards. Based on this, we present solutions for a large class of cryptographic tasks.

In a functional encryption scheme, secret keys are associated with functions and ciphertexts are associated with messages. Given a secret key for a function f and a ciphertext for a message x, a decryptor learns f(x) and nothing else about x. Inner product encryption is a special case of functional encryption where both secret keys and ciphertexts are associated with vectors. The combination of a secret key for a vector x and a ciphertext for a vector y reveals <x, y> and nothing more about y. An inner product encryption scheme is function-hiding if the keys and ciphertexts reveal no additional information about x and y beyond their inner product.
In the last few years, there has been a flurry of works on the construction of function-hiding inner product encryption, starting with the work of Bishop, Jain, and Kowalczyk (Asiacrypt 2015) to the more recent work of Tomida, Abe, and Okamoto (ISC 2016). In this work, we focus on the practical applications of this primitive. First, we show that the parameter sizes and the run-time complexity of the state-of-the-art construction can be further reduced by another factor of 2, though we compromise by proving security in the generic group model. We then show that function privacy enables a number of applications in biometric authentication, nearest-neighbor search on encrypted data, and single-key two-input functional encryption for functions over small message spaces. Finally, we evaluate the practicality of our encryption scheme by implementing our function-hiding inner product encryption scheme. Using our construction, encryption and decryption operations for vectors of length 50 complete in a tenth of a second in a standard desktop environment.
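
As an illustration of the biometric use-case, the Hamming distance between two bit-vectors is expressible as an inner product after a $\pm 1$ encoding, so a function-hiding inner product scheme can reveal exactly the match distance and nothing else (plaintext arithmetic only; no encryption shown):

```python
# Encode bits as +1/-1. Matching positions contribute +1 to the inner
# product and differing positions -1, so <x', y'> = n - 2*d, where d is
# the Hamming distance. Under a function-hiding inner product encryption
# scheme, only this value would be revealed.
def pm1(bits):
    return [1 if b else -1 for b in bits]

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def hamming_from_inner(x_bits, y_bits):
    n = len(x_bits)
    return (n - inner(pm1(x_bits), pm1(y_bits))) // 2
```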