We formally define and give the first construction of (leveled) threshold fully homomorphic encryption for any access structure induced by a monotone Boolean formula, and in particular for the threshold access structure. Our construction is based on the learning with errors assumption and can be instantiated with any existing homomorphic encryption scheme that satisfies fairly general conditions, such as those of Gentry, Sahai, and Waters (CRYPTO 2013) and Brakerski, Gentry, and Vaikuntanathan (ITCS 2012).
From threshold homomorphic encryption, we construct function secret sharing and distributed pseudorandom functions for the aforementioned access structures. No such constructions were known prior to this work.

Blockchain protocols such as Bitcoin provide decentralized consensus mechanisms based on proof-of-work (PoW). In this work we introduce and instantiate a new primitive for blockchain protocols called Non-Interactive Proofs of Proof-of-Work (NIPoPoWs), which can be adapted into existing proof-of-work-based cryptocurrencies. Unlike a traditional blockchain client, which must verify the entire linearly growing chain of PoWs, clients based on NIPoPoWs require resources only logarithmic in the length of the blockchain. NIPoPoWs solve two important open questions for PoW-based consensus protocols: the problem of constructing efficient transaction-verification clients, sometimes called ``simple payment verification'' or SPV, and the problem of constructing efficient sidechain proofs. We provide a formal model for NIPoPoWs, prove our construction secure in the backbone model, and show that the communication is succinct. We provide simulations and experimental data to measure concrete communication efficiency and security. Finally, we describe two ways in which our NIPoPoWs can be adopted by existing blockchain protocols: first via a soft fork, and second via a new update mechanism that we term a ``velvet fork,'' which enables harnessing some of the performance benefits of NIPoPoWs even with a minority upgrade.

We consider the endomorphism ring computation problem for supersingular elliptic curves, constructive versions of Deuring's correspondence, and the security of Charles-Goren-Lauter's cryptographic hash function. We show that constructing Deuring's correspondence is easy in one direction and equivalent to the endomorphism ring computation problem in the other direction.
We also provide a collision attack for special but natural parameters of the hash function, and we prove that for general parameters its preimage and collision resistance are also equivalent to the endomorphism ring computation problem. Our reduction and attack techniques are of independent interest and may find further applications in both cryptanalysis and the design of new protocols.

Password Authenticated Key Exchange (PAKE) allows a user to establish a strong cryptographic key with a server, using only knowledge of a pre-shared password. One of the basic security requirements of PAKE is to prevent offline dictionary attacks.
In this paper, we revisit zkPAKE, an augmented PAKE recently proposed by Mochetti, Resende, and Aranha (SBSeg 2015). Our work shows that the zkPAKE protocol is vulnerable to an offline password-guessing attack, even against an adversary that has only eavesdropping capabilities. Therefore, zkPAKE is insecure and should not be used as a key exchange mechanism.

We study the problem of encrypting and authenticating quantum data in the presence of adversaries making adaptive chosen plaintext and chosen ciphertext queries. Classically, security games use string copying and comparison to detect adversarial cheating in such scenarios. Quantumly, this approach would violate no-cloning. We develop new techniques to overcome this problem: we use entanglement to detect cheating, and rely on recent results for characterizing quantum encryption schemes. We give definitions for (i) ciphertext unforgeability, (ii) indistinguishability under adaptive chosen-ciphertext attack, and (iii) authenticated encryption. The restriction of each definition to the classical setting is at least as strong as the corresponding classical notion: (i) implies INT-CTXT, (ii) implies IND-CCA2, and (iii) implies AE. All of our new notions also imply QIND-CPA privacy. Combining one-time authentication and classical pseudorandomness, we construct schemes for each of these new quantum security notions, and provide several separation examples. Along the way, we also give a new definition of one-time quantum authentication which, unlike all previous approaches, authenticates ciphertexts rather than plaintexts.

Albrecht et al. at Crypto 2016 and Cheon et al. at ANTS 2016 independently presented a subfield attack on the overstretched NTRU problem. Their idea is to map the public key down to a subfield (by the norm and trace maps, respectively) and thereby obtain a lattice of smaller dimension to which a lattice reduction algorithm can be applied efficiently. At Eurocrypt 2017, Kirchner and Fouque proposed a variant attack that exploits the presence of orthogonal bases within cyclotomic number rings: instead of using the matrix of the public key in the subfield, they use the multiplication matrix of the public key in the full field and apply a lattice reduction algorithm to a suitable projected lattice of smaller dimension. They also gave a tight estimate of the parameters broken by lattice reduction, together with implementation results showing that their attack outperforms the subfield attack.
In this paper, we exploit technical results of Kirchner and Fouque on the relative norms of field elements in the subfield, and we use the Hermite factor to estimate the output of a lattice basis reduction algorithm, in order to analyze general parameter choices for the subfield attack of Albrecht et al. As a result, we obtain estimates identifying better choices of subfields for which the attack works with a smaller modulus. Our experimental results show that we can attack overstretched NTRU with a modulus smaller than those of Albrecht et al. and of Kirchner and Fouque.
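The dimension reduction underlying the norm-map variant of the subfield attack can be sketched as follows; this is our own informal notation and hides polynomial factors, so see the cited papers for precise statements.

```latex
% K: the full cyclotomic field, L \subset K a subfield,
% r = [K : L], and N_{K/L} the (multiplicative) relative norm.
% For an overstretched NTRU key h \equiv f/g \pmod{q} with f, g short:
\[
  N_{K/L}(h) \;\equiv\; N_{K/L}(f) \,/\, N_{K/L}(g) \pmod{q},
  \qquad
  \lVert N_{K/L}(f) \rVert \;\approx\; \lVert f \rVert^{\,r}.
\]
% When q is overstretched, \lVert f \rVert^{r} and \lVert g \rVert^{r}
% remain far below q, so the image key still hides an unusually short
% vector in an NTRU lattice of dimension only 2[L : \mathbb{Q}],
% within reach of lattice reduction.
```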

The deployment of genome-wide association studies (GWASs) requires genomic information of a large population to produce reliable results. This raises significant privacy concerns, making people hesitate to contribute their genetic information to such studies.
We propose two provably secure solutions to address this challenge: (1) a somewhat homomorphic encryption approach, and (2) a secure multiparty computation approach. Unlike previous work, our approach does not rely on adding noise to the input data, nor does it reveal any information about the patients. Our protocols calculate the $\chi^2$ statistic in a privacy-preserving manner, revealing nothing other than whether the statistic is significant; not even the statistic's value itself is disclosed. We significantly increase the efficiency of our protocols by introducing a new masking technique for the secure comparison. Our implementations demonstrate that both approaches are efficient: the secure multiparty computation technique completes its execution in approximately 2 ms for data contributed by one million subjects.
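For context, the $\chi^2$ statistic in question is the standard Pearson statistic on a 2x2 case/control contingency table. A plaintext sketch follows; this is our own illustration, since in the abstract's protocols this computation happens under encryption or secret sharing and only the significance bit is released.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 contingency table
    [[a, b], [c, d]], e.g. case/control counts vs. allele counts in a GWAS."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Critical value for 1 degree of freedom at p = 0.05.
CRITICAL_1DF_P05 = 3.841

def is_significant(a, b, c, d):
    """The single bit the privacy-preserving protocols would output."""
    return chi2_2x2(a, b, c, d) > CRITICAL_1DF_P05
```

A secure realization would evaluate the numerator/denominator comparison against the scaled critical value inside the protocol, so the value of the statistic itself never appears in the clear.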

We develop a general approach to thresholdizing a large class of (non-threshold) cryptographic schemes. We show how to add threshold functionality to CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. To do so, we introduce a general tool, called a universal thresholdizer, from which many threshold systems are possible. The tool builds upon a lattice-based fully homomorphic encryption (FHE) system. Applying the tool to a (non-threshold) lattice-based signature gives the first single-round threshold signature from the learning with errors (LWE) problem. Applying the tool to a (non-threshold) lattice-based CCA-secure PKE gives a single-round lattice-based threshold CCA-secure PKE.

Starting with any selectively secure identity-based encryption (IBE) scheme, we give generic constructions of fully secure IBE and selectively secure hierarchical IBE (HIBE) schemes. Our HIBE scheme allows for delegation arbitrarily many times.

We study the problem of two-round oblivious evaluation of cryptographic functionalities. In this setting, one party P1 holds a private key sk for a provably secure instance of a cryptographic functionality F, and the second party P2 wishes to evaluate F_sk on a value x. Although it has been known for 22 years that general functionalities cannot be computed securely in the presence of malicious adversaries with only two rounds of communication, we show the existence of a round-optimal protocol that obliviously evaluates cryptographic functionalities. Our protocol is provably secure against malicious receivers under standard assumptions and does not rely on heuristic (setup) assumptions. Our main technical contribution is a novel non-black-box technique, which makes non-black-box use of the security reduction of F_sk. Specifically, our proof of malicious receiver security uses the code of the reduction, which reduces the security of F_sk to some hard problem, in order to break that problem directly. Instantiating our framework, we obtain the first two-round oblivious pseudorandom function that is secure in the standard model. This question had been open since the invention of OPRFs in 1997.

We develop a general approach to adding a threshold functionality to a large class of (non-threshold) cryptographic schemes. A threshold functionality enables a secret key to be split into a number of shares, so that only a threshold of parties can use the key, without reconstructing the key. We begin by constructing a threshold fully homomorphic encryption scheme (TFHE) from the learning with errors (LWE) problem. We next introduce a new concept, called a universal thresholdizer, from which many threshold systems are possible. We show how to construct a universal thresholdizer from our TFHE. A universal thresholdizer can be used to add threshold functionality to many systems, such as CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. In particular, by applying this paradigm to a (non-threshold) lattice signature system, we obtain the first single-round threshold signature scheme from LWE.

We present a new code-based digital signature scheme. The scheme uses $(U,U+V)$ codes where both $U$ and $V$ are random. We show that the distribution of signatures is uniform via suitable rejection sampling. This is one of the key ingredients in our proof that the scheme achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model (ROM), under two assumptions from coding theory, both strongly related to the hardness of decoding a random linear code. Another crucial ingredient is the proof that the syndromes produced by $(U,U+V)$ codes are statistically indistinguishable from random syndromes. Note that these two key properties are also required for applying a recent generic proof for code-based signature schemes in the QROM [CD17]. As noticed there, this allows instantiating the required code family and yields a security proof of our scheme in the QROM. Our scheme also enjoys efficient signature generation and verification. For a (classical) security level of 128 bits, the signature size is less than one kilobyte. Contrary to a current trend in code-based and lattice-based cryptography, which reduces key sizes by using structured codes or lattices based on rings, we avoid such structure here and still obtain reasonable public key sizes (less than 2 megabytes for the aforementioned security level). Our key sizes compare favorably with those of TESLA-2, an (unstructured) lattice-based signature scheme that also has a security reduction in the QROM. This gives the first practical signature scheme based on binary codes that comes with a security proof and scales well with the security parameter: for a security level of $2^\lambda$, the signature size is of order $O(\lambda)$, the public key size of order $O(\lambda^2)$, the signature generation cost of order $O(\lambda^3)$, and the signature verification cost of order $O(\lambda^2)$.

In this paper, we construct an anonymous and decentralized cryptocash protocol that is secure in the quantum computation model. To achieve this, we propose a linkable ring signature based on ideal lattices. The size of a signature in our scheme is O(log N), where N is the number of participants in the ring. The framework of our cryptocash system follows that of CryptoNote, with some modifications. By adopting the logarithmic-size quantum-resistant linkable ring signature scheme, our protocol is efficient and anonymous. We also show how to generate temporary verification and signing key pairs for the linkable ring signature. With these techniques, the privacy of both the sender and the receiver in a transaction is protected even though the transactions are published in the public ledger.

Crowdsourcing systems, which utilize human intelligence to solve complex tasks, have gained considerable interest and adoption in recent years. However, the majority of existing crowdsourcing systems rely on central servers, which are subject to the weaknesses of the traditional trust-based model, such as a single point of failure and privacy disclosure. They are also vulnerable to distributed denial-of-service (DDoS) and Sybil attacks due to the involvement of malicious users. In addition, high service fees charged by crowdsourcing platforms may hinder the development of crowdsourcing. Addressing these issues has both research and practical value. In this paper, we conceptualize a blockchain-based decentralized framework for crowdsourcing named CrowdBC, in which a requester's task can be solved by a crowd of workers without relying on any trusted third party, users' privacy is guaranteed, and only low transaction fees are required. In particular, we introduce the architecture of our proposed framework, based on which we give a concrete scheme. We further implement a software prototype on Ethereum with a real-world dataset. Experimental results show the validity and effectiveness of our proposed crowdsourcing system.

This work investigates the fundamental constraints of anonymous communication (AC) protocols.
We analyze the relationship between bandwidth overhead, latency overhead, and sender anonymity or recipient anonymity against the global passive (network-level) adversary.
We confirm the trilemma
that an AC protocol can only achieve two out of the following three properties:
strong anonymity (i.e., anonymity up to a negligible chance),
low bandwidth overhead, and low latency overhead.
We further study anonymity against a stronger global passive adversary that can additionally passively compromise some of the AC protocol nodes.
For a given number of compromised nodes,
we derive necessary constraints between bandwidth and latency overhead whose violation makes it impossible for an AC protocol to achieve strong anonymity.
We analyze prominent AC protocols from the literature and show to what extent they satisfy our necessary constraints.
Our fundamental necessary constraints offer a guideline not only for improving existing AC systems but also for designing novel AC protocols with non-traditional bandwidth and latency overhead choices.

Iterative collision search procedures play a key role in developing combinatorial algorithms for the subset sum and learning parity with noise (LPN) problems.
In both scenarios, the single-list pair-wise iterative collision search finds the most solutions and offers the best efficiency.
However, due to its complex probabilistic structure, no rigorous analysis of it appears to be available, to the best of our knowledge.
As a result, theoretical works often resort to overly constrained and sub-optimal iterative collision search variants in exchange for analytic simplicity.
In this paper, we present rigorous analysis for the single-list pair-wise iterative collision search method and its applications in subset sum and LPN.
In the LPN literature, the method is known as the LF2 heuristic.
Besides LF2, we also present rigorous analysis of other LPN solving heuristics and show that they work well when combined with LF2.
Putting it together, we significantly narrow the gap between theoretical and heuristic algorithms for LPN.
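One round of the single-list pair-wise reduction (LF2) described above can be sketched in a few lines. This is our own toy illustration with integer a-parts; real LPN solvers operate on packed bit vectors and manage list sizes carefully.

```python
from itertools import combinations

def lf2_reduce(samples, b):
    """One round of single-list pair-wise collision search (LF2):
    bucket LPN samples (a, c) by the low b bits of a, then XOR every
    pair within a bucket, cancelling those b bits of the a-part."""
    mask = (1 << b) - 1
    buckets = {}
    for a, c in samples:
        buckets.setdefault(a & mask, []).append((a, c))
    out = []
    for group in buckets.values():
        for (a1, c1), (a2, c2) in combinations(group, 2):
            out.append((a1 ^ a2, c1 ^ c2))  # low b bits of a1 ^ a2 are zero
    return out
```

The probabilistic subtlety the abstract refers to is visible here: a bucket of size m contributes m(m-1)/2 pairs, so the output list size is a random variable depending on how the input distributes over buckets, which is what makes a rigorous analysis of this variant nontrivial.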

An important benchmark for multi-party computation protocols (MPC) is their round complexity. For several important MPC tasks, such as broadcast, (tight) lower bounds on the round complexity are known. However, some of these lower bounds can be circumvented when the termination round of every party is not a priori known, and simultaneous termination is not guaranteed. Protocols with this property are called \emph{probabilistic-termination (PT)} protocols.
Running PT protocols in parallel affects the round complexity of the resulting protocol in somewhat unexpected ways. For instance, an execution of m protocols with constant expected round complexity might take O(log m) rounds to complete. In a seminal work, Ben-Or and El-Yaniv (Distributed Computing '03) developed a technique for parallel execution of arbitrarily many broadcast protocols, while preserving expected round complexity. More recently, Cohen et al. (CRYPTO '16) devised a framework for universal composition of PT protocols, and provided the first composable parallel-broadcast protocol with a simulation-based proof. These constructions crucially rely on the fact that broadcast is "privacy free," and do not generalize to arbitrary protocols in a straightforward way. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel, without increasing the round complexity.
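The logarithmic blow-up for parallel runs of constant-expected-round protocols can be seen in a toy simulation (our own illustration, unrelated to the paper's construction): if each protocol terminates in any given round with constant probability, the batch finishes only when the slowest one does, and the maximum of m independent geometric variables grows as log m.

```python
import random

def rounds_until_done(p, rng):
    """Rounds for one protocol that terminates each round with
    probability p (geometric distribution; expected 1/p rounds)."""
    r = 1
    while rng.random() >= p:
        r += 1
    return r

def parallel_rounds(m, p, rng):
    """m independent protocols run in lockstep; the parallel execution
    completes when the slowest protocol does."""
    return max(rounds_until_done(p, rng) for _ in range(m))

rng = random.Random(0)
trials = 300
avg_1 = sum(parallel_rounds(1, 0.5, rng) for _ in range(trials)) / trials
avg_1024 = sum(parallel_rounds(1024, 0.5, rng) for _ in range(trials)) / trials
# avg_1 is close to 2, while avg_1024 is close to log2(1024) = 10 plus a constant
```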
In this paper we tackle this question and provide both feasibility and infeasibility results. We construct a round-preserving protocol compiler, secure against a dishonest minority of actively corrupted parties, that compiles arbitrary protocols into a protocol realizing their parallel composition, while having black-box access to the underlying \emph{protocols}. Furthermore, we prove that the same cannot be achieved, using known techniques, given only black-box access to the \emph{functionalities} realized by the protocols, unless merely security against semi-honest corruptions is required, in which case we provide such a protocol.
To prove our results, we utilize the language and results by Cohen et al., which we extend to capture parallel composition and reactive functionalities, and to handle the case of an honest majority.

We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudorandom functions. Compared to previous results, which require a collision-resistant hash function, our result provides significantly smaller signatures at the same security level. We also consider security in the strong sense and show that the Winternitz one-time signature scheme is strongly unforgeable assuming additional properties of the pseudorandom function. In this context we formally define several key-based security notions for function families and investigate their relation to pseudorandomness. All our reductions are exact and in the standard model, and can directly be used to estimate the output length of the hash function required to meet a certain security level.
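For readers unfamiliar with the scheme, a minimal Winternitz one-time signature can be sketched as follows. Note the assumptions: we use plain SHA-256 as the chain function purely for illustration, whereas the abstract's result instantiates the chains with a keyed pseudorandom function family, and the parameters W, L1, L2 here are our own choices for a 256-bit digest.

```python
import hashlib
import os

W = 16   # Winternitz parameter: each chain has W - 1 hash steps
N = 32   # bytes per secret chain seed / hash value
L1 = 64  # base-16 digits of a 256-bit message digest
L2 = 3   # checksum digits: max checksum L1 * (W - 1) = 960 < 16**3

def _chain(x, steps):
    """Iterate the chain function (plain SHA-256 here) `steps` times."""
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def _digits(msg):
    """Base-16 digits of H(msg), followed by the Winternitz checksum digits.
    The checksum stops an attacker from advancing chains to forge."""
    dig = hashlib.sha256(msg).digest()
    ds = []
    for byte in dig:
        ds += [byte >> 4, byte & 15]
    csum = sum(W - 1 - d for d in ds)
    for i in reversed(range(L2)):
        ds.append((csum >> (4 * i)) & 15)
    return ds

def keygen():
    sk = [os.urandom(N) for _ in range(L1 + L2)]
    pk = [_chain(x, W - 1) for x in sk]    # chain ends are the public key
    return sk, pk

def sign(msg, sk):
    return [_chain(sk[i], d) for i, d in enumerate(_digits(msg))]

def verify(msg, sig, pk):
    # Completing each chain from the signature must reach the public key.
    return all(_chain(sig[i], W - 1 - d) == pk[i]
               for i, d in enumerate(_digits(msg)))
```

The trade-off the abstract analyzes lives in the chain function: with a collision-resistant hash, its output must be long enough to resist birthday attacks, while the PRF-based instantiation permits shorter outputs and hence smaller signatures.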

Algebraic manipulation detection codes are a class of error-detecting codes that have found numerous applications in cryptography. In this paper we extend these codes to defeat general algebraic attacks; we call such codes general algebraic manipulation detection (GAMD) codes. Positive results are shown for the existence of GAMDs for the families of tampering functions corresponding to point additions and affine functions over a finite field. Compared to non-malleable codes, we demonstrate both positive and negative results regarding the existence of GAMDs for arbitrary families of tampering functions.

The GGH13, CLT13, and GGH15 constructions of multilinear maps suffer from zeroizing attacks. In this paper, we present a new construction of multilinear maps using a variant of ring-LWE (vRLWE). Furthermore, we present two new variants of vRLWE, which support the applications of multipartite key exchange and witness encryption, respectively. We also present a new variant of GGH13 using a matrix form. The security of our construction depends upon new hardness assumptions.