A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto's blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes.
We present SPECTRE, a new protocol for the consensus core of cryptocurrencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50\% of the computational power (up to the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip time in the network).
Key to SPECTRE's achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system.
Our framework formalizes this weaker set of requirements for a cryptocurrency's distributed ledger.
We then provide a formal proof that SPECTRE satisfies these requirements.

In contrast to classical signature schemes, such as RSA or ECDSA signatures, the lattice-based signature scheme ring-TESLA is expected to be resistant even against quantum adversaries. Due to a recent key recovery from a lattice-based implementation, it becomes clear that cache side channels are a serious threat for lattice-based implementations. In this article, we analyze an existing implementation of ring-TESLA against cache side channels. To reduce the effort for manual code inspection, we selectively employ automated program analysis. The leakage bounds we compute with program analysis are sound overapproximations of cache-side-channel leakage. We detect four cache-side-channel vulnerabilities in the implementation of ring-TESLA. Since two vulnerabilities occur in implementations of techniques common to lattice-based schemes, they are also interesting beyond ring-TESLA. Finally, we show how the detected vulnerabilities can be mitigated effectively.

Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes can claim concrete quantum security because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e., that we can obtain tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum-secure signature schemes.

Threshold secret sharing schemes enable a dealer to share a secret among $n$ parties such that only subsets of parties of cardinality at least $k = k(n)$ can reconstruct the secret. Komargodski, Naor and Yogev (TCC 2016-B) proposed an efficient scheme for sharing a secret among an unbounded number of parties such that only subsets of $k$ parties can recover the secret, where $k$ is any fixed constant. This access structure is known as $k$-threshold. They left open the possibility of an efficient scheme for the dynamic threshold access structure, in which the qualified sets are of increasing size as the number of parties increases. We resolve this open problem and present a construction in which the share size of the $t$-th party is $O(t^4\cdot \log t)$ bits.
Furthermore, we show how to generically translate any scheme for $k$-threshold into a scheme which is robust, where a shared secret can be recovered even if some parties hand-in incorrect shares. This answers another open problem of Komargodski et al. Our construction is based on the construction of robust (classical) secret sharing schemes of Cramer et al. (EUROCRYPT 2008) using algebraic manipulation detection codes.
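For concreteness, the fixed $k$-threshold access structure discussed above is classically realized by Shamir's polynomial scheme. The following is a minimal sketch of that standard construction (not the unbounded-parties or robust schemes of the paper); `share` and `reconstruct` are hypothetical helper names.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; the field must be larger than the secret

def share(secret, k, n):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    # Share i is the polynomial evaluated at x = i, for i = 1..n.
    return [(i, sum(c * pow(i, j, PRIME) for j, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Multiply yi by the Lagrange basis value L_i(0) = num / den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any $k$ shares determine the degree-$(k-1)$ polynomial and hence the secret; fewer shares reveal nothing information-theoretically.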

Keeping correct and informative log files is crucial for system maintenance, security and forensics. Cryptographic logging schemes offer integrity checks that protect a log file even in the case where an attacker has broken into the system.
A relatively recent feature of these schemes is resistance against truncation, i.e., the deletion and/or replacement of the end of the log file. This is especially relevant, as system intruders are typically interested in manipulating the later log entries that point towards their attack. However, few schemes are resistant against truncating the log file, and those that are suffer from at least one of the following disadvantages: they are memory-intensive (storing at least one signature per log entry), or fragile (a single error in the log renders the signature invalid and useless in determining where the error occurred).
We obtain a publicly-verifiable secure logging scheme that is simultaneously robust, space-efficient and truncation secure with provable security under simple assumptions. Our generic construction uses forward-secure signatures, in a plain and a sequential aggregate variant, where the latter is additionally fault-tolerant, as recently formalized by Hartung et al. (PKC 2016). Fault-tolerant schemes can cope with a number of manipulated log entries (bounded a priori) and offer strong robustness guarantees while still retaining space efficiency. Our implementation and the accompanying performance measurements confirm the practicality of our scheme.
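The truncation problem can be seen in a minimal, hypothetical hash-chain log (a toy, not the paper's scheme): in-place tampering with an entry is detected, but chopping off the tail of the log still verifies cleanly, which is precisely the weakness the schemes above address.

```python
import hashlib

def append(log, entry):
    """Append an entry whose tag chains to the previous entry's tag."""
    prev_tag = log[-1][1] if log else b"\x00" * 32
    tag = hashlib.sha256(prev_tag + entry.encode()).digest()
    log.append((entry, tag))

def verify(log):
    """Recompute the chain; return the index of the first bad entry, or None."""
    prev_tag = b"\x00" * 32
    for i, (entry, tag) in enumerate(log):
        if tag != hashlib.sha256(prev_tag + entry.encode()).digest():
            return i
        prev_tag = tag
    return None
```

Note that `verify(log[:2])` on a three-entry log returns `None`: a plain chain has no way to notice that entries were removed from the end.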

Austrin, Chung, Mahmoody, Pass and Seth (Crypto'14) studied the notion of bitwise $p$-tampering attacks over randomized algorithms in which an efficient `virus' gets to control each bit of the randomness with independent probability $p$ in an online way. The work of Austrin et al. showed how to break certain `privacy primitives' (e.g., encryption, commitments, etc.) through bitwise $p$-tampering, by giving a bitwise $p$-tampering biasing attack for increasing the average $E[f(U_n)]$ of any efficient function $f \colon \{0,1\}^n \to [-1,+1]$ by $\Omega(p \cdot Var[f(U_n)])$ where $Var[f(U_n)]$ is the variance of $f(U_n)$.
In this work, we revisit and extend the bitwise tampering model of Austrin et al. to the blockwise setting, where blocks of randomness become tamperable with independent probability $p$. Our main result is an efficient blockwise $p$-tampering attack to bias the average $E[f(X)]$ of any efficient function $f$ mapping arbitrary $X$ to $[-1,+1]$ by $\Omega(p \cdot Var[f(X)])$ regardless of how $X$ is partitioned into individually tamperable blocks $X=(X_1,\dots,X_n)$. Relying on previous works, our main biasing attack immediately implies efficient attacks against the privacy primitives as well as seedless multi-source extractors, in a model where the attacker gets to tamper with each block (or source) of the randomness with independent probability $p$. Further, we show how to increase the classification error of deterministic learners in the so-called `targeted poisoning' attack model under Valiant's adversarial noise. In this model, an attacker has a `target' test data point $d$ in mind and wishes to increase the error of classifying $d$ while she gets to tamper with each training example with independent probability $p$, in an online way.
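The greedy intuition behind such tampering attacks can be illustrated with a toy exact computation (a sketch under simplifying assumptions, not the attack of the paper): with probability $p$ the attacker controls the next bit and picks the continuation with the higher conditional expectation; otherwise the bit is uniform.

```python
from functools import lru_cache

def tampered_mean(f, n, p):
    """Exact E[f(X)] when each of the n bits of X is, independently with
    probability p, set greedily by an online attacker."""
    @lru_cache(maxsize=None)
    def value(prefix):
        if len(prefix) == n:
            return f(prefix)
        options = [value(prefix + (b,)) for b in (0, 1)]
        # With prob. p the attacker picks the better continuation;
        # otherwise the bit is uniform.
        return p * max(options) + (1 - p) * sum(options) / 2
    return value(())

# Majority of 5 bits, mapped to [-1, +1].
maj = lambda x: 1.0 if sum(x) > len(x) / 2 else -1.0
unbiased = tampered_mean(maj, 5, 0.0)   # plain E[f(U_n)] = 0
biased   = tampered_mean(maj, 5, 0.2)   # strictly positive
```

For a monotone function such as majority, the greedy choice amounts to biasing each bit toward 1, which visibly raises the mean, in line with the $\Omega(p \cdot Var[f(X)])$ bias guarantee quoted above.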

We consider the problem of constant-round secure two-party computation in the presence of active (malicious) adversaries. We present the first protocol that has only a constant multiplicative communication overhead compared to Yao's protocol for passive adversaries and can be implemented in the plain model by only making a black-box use of (parallel) oblivious transfer and a pseudo-random generator. This improves over the polylogarithmic overhead of the previous best protocol. A similar result could previously be obtained only in an amortized setting, using preprocessing, or by assuming bit-oblivious-transfer as an ideal primitive that has a constant cost.
We present two variants of this result, one which is aimed at minimizing the number of oblivious transfers and another which is aimed at optimizing concrete efficiency. Our protocols are based on a novel combination of previous techniques together with a new efficient protocol to certify that pairs of strings transmitted via oblivious transfer satisfy a global relation. The communication complexity of the second variant of our protocol can beat the best previous protocols even for realistic values of the circuit size and the security parameter. This variant is particularly attractive in the offline-online setting, where the online cost is dominated by a single evaluation of an authenticated garbled circuit, and can also be made non-interactive using the Fiat-Shamir heuristic.

We revisit the exact round complexity of secure two-party computation. While four rounds are known to be sufficient for securely computing general functions that provide output to one party [Katz-Ostrovsky, CRYPTO'04], Goldreich-Krawczyk [SIAM J. Computing'96] proved that three rounds are insufficient for this task w.r.t. black-box simulation.
In this work, we study the feasibility of secure computation in three rounds using non-black-box simulation. Our main result is a three-round two-party computation protocol for general functions against adversaries with auxiliary inputs of bounded size. This result relies on a new two round input-extraction protocol based on succinct randomized encodings.
We also provide a partial answer to the question of achieving security against non-uniform adversaries. Assuming sub-exponentially secure iO and one-way functions, we rule out three-round protocols that achieve polynomial simulation-based security against the output party and exponential indistinguishability-based security against the other party.

We devise the first weak multilinear map model for CLT13 multilinear maps (Coron et al., CRYPTO 2013) that captures all known classical polynomial-time attacks on the maps. We then show important applications of our model. First, we show that in our model, several existing obfuscation and order-revealing encryption schemes, when instantiated with CLT13 maps, are secure against known attacks under a mild algebraic complexity assumption used in prior work. These are schemes that are actually being implemented for experimentation. However, until our work, they had no rigorous justification for security.
Next, we turn to building constant degree multilinear maps on top of CLT13 for which there are no known attacks. Precisely, we prove that our scheme achieves the ideal security notion for multilinear maps in our weak CLT13 model, under a much stronger variant of the algebraic complexity assumption used above. Our multilinear maps do not achieve the full functionality of multilinear maps as envisioned by Boneh and Silverberg (Contemporary Mathematics, 2003), but do allow for re-randomization and for encoding arbitrary plaintext elements.

Several cryptographic schemes and applications are based on functions that are both reasonably efficient to compute and moderately hard to invert, including client puzzles for Denial-of-Service protection, password protection via salted hashes, and recent proof-of-work blockchain systems. Despite their wide use, a definition of this concept has not yet been distilled and formalized explicitly. Instead, either the applications are proven directly based on the assumptions underlying the function, or some property of the function is proven, but the security of the application is argued only informally. The goal of this work is to provide a (universal) definition that decouples the efforts of designing new moderately hard functions and of building protocols based on them, serving as an interface between the two.
On a technical level, beyond the mentioned definitions, we instantiate the model for four different notions of hardness. We extend the work of Alwen and Serbinenko (STOC 2015) by providing a general tool for proving security for the first notion of memory-hard functions that allows for provably secure applications. The tool allows us to recover all of the graph-theoretic techniques developed for proving security under the older, non-composable, notion of security used by Alwen and Serbinenko. As an application of our definition of moderately hard functions, we prove the security of two different schemes for proofs of effort (PoE). We also formalize and instantiate the concept of a non-interactive proof of effort (niPoE), in which the proof is not bound to a particular communication context but rather any bit-string chosen by the prover.
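A non-interactive proof of effort in the hashcash style (a toy sketch, not one of the paper's PoE constructions; `solve` and `verify` are hypothetical names) binds the work to an arbitrary prover-chosen bit-string, matching the niPoE flavor described above:

```python
import hashlib
from itertools import count

def solve(statement: bytes, difficulty: int) -> int:
    """Find a nonce so that SHA-256(statement || nonce) starts with
    `difficulty` zero bits -- roughly 2^difficulty expected hash calls."""
    for nonce in count():
        digest = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce

def verify(statement: bytes, difficulty: int, nonce: int) -> bool:
    """Checking a proof costs a single hash evaluation."""
    digest = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0
```

The asymmetry between solving (moderately hard) and verifying (cheap) is the property the framework above makes precise.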

We present a unified framework for obtaining black-box constructions of Universally Composable (UC) protocols in trusted setup models. Our result is analogous to the unified framework of Lin, Pass, and Venkitasubramaniam [STOC'09, Asiacrypt'12] that, however, only yields non-black-box constructions of UC protocols. Our unified framework shows that to obtain black-box constructions of UC protocols, it suffices to implement a special-purpose commitment scheme that is, in particular, concurrently extractable using a given trusted setup. Using our framework, we improve black-box constructions in the common reference string and tamper-proof hardware token models by weakening the underlying computational and setup assumptions.

Realizing indistinguishability obfuscation (IO) based on well-understood computational assumptions is an important open problem. Recently, realizing functional encryption (FE) has emerged as a promising direction towards that goal. This is because: (1) compact single-key FE (where the functional secret-key is of length double the ciphertext length) is known to imply IO [Ananth and Jain, CRYPTO 2015; Bitansky and Vaikuntanathan, FOCS 2015] and (2) several strong variants of single-key FE are known based on various standard computational assumptions.
In this work, we study when FE can be used for obtaining IO.
We show that any single-key FE for function families with ``short'' enough outputs (specifically, outputs shorter than the ciphertext length by at least $\omega(n + k)$, where $n$ is the message length and $k$ is the security parameter) is insufficient for IO, even when non-black-box use of the underlying FE is allowed to some degree. Namely, our impossibility result holds even if we are allowed to plant FE sub-routines as gates inside the circuits for which functional secret-keys are issued (which is exactly how the known FE-to-IO constructions work).
Complementing our negative result, we show that our ``short enough'' condition is almost tight. More specifically, we show that any compact single-key FE with functional secret-key output length strictly larger than the ciphertext length is sufficient for IO. Furthermore, we show that non-black-box use of the underlying FE is necessary for such a construction, by ruling out any fully black-box construction of IO from FE even with arbitrarily long output.

As data sharing has become one of the most popular features offered by cloud storage services, designing public auditing mechanisms for the integrity of shared data stored at the cloud becomes much more important. Two unique problems which arise in shared data auditing mechanisms are preserving signer identity privacy and collusion-resistant revocation of cloud users. When the data stored at the cloud is shared among a group of users, different users may modify different data blocks; therefore, data blocks are signed by different users, which accordingly leaks signer identities to the public verifier. Also, when a user is revoked from the group, the signatures generated by this user become invalid to the group and should be re-signed by the cloud server using re-signature keys. In addition, the collusion of the cloud server, who possesses the re-signature keys, with the revoked user should leak no information about the private keys of other users. In this paper, by employing a collusion-resistant proxy re-signature scheme, we propose a public auditing mechanism for the integrity of shared data that provides identity privacy and collusion-resistant user revocation simultaneously. We also formally prove the mentioned properties based on exact security definitions and well-known hard problems in the random oracle model. To the best of our knowledge, this paper presents the first public auditing mechanism based on collusion-resistant proxy re-signatures. Moreover, our protocol efficiently supports large dynamic groups of users, batch verification of multiple auditing tasks, and fully dynamic data operations. Overhead analysis and experimental results demonstrate the excellent efficiency of our scheme in comparison to the state of the art.

A secret-sharing scheme realizes the forbidden graph access structure determined by a graph $G=(V,E)$ if a pair of vertices can reconstruct the secret if and only if they are connected by an edge in $G$. Secret-sharing schemes for forbidden graph access structures of bipartite graphs are equivalent to conditional disclosure of secrets protocols, a primitive that is used to construct attribute-based encryption schemes.
We study the complexity of realizing a forbidden graph access structure by linear secret-sharing schemes. A secret-sharing scheme is linear if the reconstruction of the secret from the shares is a linear mapping. In many applications of secret-sharing, it is required that the scheme will be linear. We provide efficient constructions and lower bounds on the share size of linear secret-sharing schemes for sparse and dense graphs, closing the gap between upper and lower bounds: Given a sparse graph with $n$ vertices and at most $n^{1+\beta}$ edges, for some $ 0 \leq \beta < 1$, we construct a linear secret-sharing scheme realizing its forbidden graph access structure in which the total size of the shares is $\tilde{O} (n^{1+\beta/2})$. We provide an additional construction showing that every dense graph with $n$ vertices and at least $\binom{n}{2} - n^{1+\beta}$ edges can be realized by a linear secret-sharing scheme with the same total share size.
We provide lower bounds on the share size of linear secret-sharing schemes realizing forbidden graph access structures. We prove that for most forbidden graph access structures, the total share size of every linear secret-sharing scheme realizing these access structures is $\Omega(n^{3/2})$, which shows that the construction of Gay, Kerenidis, and Wee [CRYPTO 2015] is optimal. Furthermore, we show that for every $ 0 \leq \beta < 1$ there exists a graph with at most $n^{1+\beta}$ edges and a graph with at least $\binom{n}{2}-n^{1+\beta}$ edges, such that the total share size of every linear secret-sharing scheme realizing these forbidden graph access structures is $\Omega (n^{1+\beta/2})$. This shows that our constructions are optimal (up to poly-logarithmic factors).

An attacker or evaluator can detect more information leakage if he improves the Signal-to-Noise Ratio (SNR) of the power traces used in his tests. For this purpose, pre-processing techniques such as de-noising and distribution-based trace biasing are used. However, the existing trace-biasing schemes cannot accurately express the characteristics of power traces with high SNR, making them less than ideal for leakage detection. Moreover, if the SNR of the power traces is very low, it is very difficult to use the existing de-noising and trace-biasing schemes to enhance leakage detection. In this paper, we propose a known-key pre-processing tool named Traces Linear Optimal Biasing (TLOB), which performs very well even on power traces with very low SNR. It accurately evaluates the noise of time samples and provides reliable optimal trace biasing. Experimental results show that TLOB significantly reduces the number of traces needed for detection; correlation coefficients in $\rho$-tests using TLOB approach 1.00, so the confidence of the tests is significantly improved. As far as we know, no pre-processing tool is more efficient than TLOB. TLOB is very simple and incurs only very limited time and memory consumption. We strongly recommend using it to pre-process traces in side-channel evaluations.
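For reference, the standard first-order SNR used in such leakage evaluations, the variance of the per-class signal means over the mean of the within-class noise variances at each time sample, can be computed as follows (a generic sketch of common practice, not TLOB itself):

```python
from collections import defaultdict
from statistics import mean, pvariance

def snr(traces, labels):
    """Per-sample SNR: traces is a list of equal-length sample lists,
    labels gives the known intermediate value for each trace."""
    by_label = defaultdict(list)
    for trace, label in zip(traces, labels):
        by_label[label].append(trace)
    out = []
    for t in range(len(traces[0])):
        # Signal: variance of the class means at sample t.
        means = [mean(tr[t] for tr in group) for group in by_label.values()]
        # Noise: mean of the within-class variances at sample t.
        noises = [pvariance([tr[t] for tr in group])
                  for group in by_label.values()]
        out.append(pvariance(means) / mean(noises)
                   if mean(noises) else float("inf"))
    return out
```

Samples where the trace depends on the known intermediate value stand out with a much higher SNR than purely noisy samples.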

We identify a flaw in the security proof and a flaw in the concrete security analysis of the WOTS-PRF variant of the Winternitz one-time signature scheme, and discuss the implications to its concrete security.

We revisit security proofs for various cryptographic primitives in the auxiliary-input random-oracle model (AI-ROM), in which an attacker $A$ can compute arbitrary $S$ bits of leakage about the random oracle $\mathcal O$ before attacking the system and then make $T$ additional oracle queries to $\mathcal O$ during the attack. This model has natural applications in settings where traditional random-oracle proofs are not useful: (a) security against non-uniform attackers; (b) security against preprocessing.
We obtain a number of new results about the AI-ROM:
Unruh (CRYPTO '07) introduced the pre-sampling technique, which generically reduces security proofs in the AI-ROM to those in a much simpler $P$-bit-fixing random-oracle model (BF-ROM), where the attacker can arbitrarily fix the values of $\mathcal O$ on some $P$ coordinates, but then the remaining coordinates are chosen at random. Unruh's security loss for this transformation is $\sqrt{ST/P}$. We improve this loss to the optimal value $O(ST/P)$, which implies nearly tight bounds for a variety of indistinguishability applications in the AI-ROM.
While the basic pre-sampling technique cannot give tight bounds for unpredictability applications, we introduce a novel "multiplicative version" of pre-sampling, which allows one to dramatically reduce the size $P$ of the pre-sampled set to $P=O(ST)$ and yields nearly tight security bounds for a variety of unpredictability applications in the AI-ROM. Qualitatively, it validates Unruh's "polynomial pre-sampling conjecture"---disproved in general by Dodis et al. (EUROCRYPT '17)---for the special case of unpredictability applications.
Using our techniques, we reprove nearly all AI-ROM bounds obtained by Dodis et al. (using a much more laborious compression technique), and we also apply them to many settings where the compression technique is either inapplicable (e.g., computational reductions) or appears intractable (e.g., Merkle-Damgard hashing).
We show that for any salted Merkle-Damgard hash function with $m$-bit output there exists a collision-finding circuit of size $\Theta(2^{m/3})$ (taking salt as the input), which is significantly below the $2^{m/2}$ birthday security conjectured against uniform attackers.
We build two general compilers showing how to generically extend the security of applications proven in the traditional ROM to the AI-ROM. One compiler simply prepends a public salt to the random oracle and shows that salting generically provably defeats preprocessing.
Overall, our results make it much easier to get concrete security bounds in the AI-ROM. These bounds in turn give concrete conjectures about the security of these applications (in the standard model) against non-uniform attackers.
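The intuition that salting provably defeats preprocessing can be seen in a dictionary-attack toy (illustrative only; the precomputed table plays the role of the attacker's $S$ bits of advice about the oracle):

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

COMMON = ["123456", "password", "qwerty"]

# Preprocessing: advice computed before the attack -- a table mapping
# hashes of likely inputs back to the inputs.
precomputed = {h(pw.encode()): pw for pw in COMMON}

# Unsalted oracle: the precomputed advice applies directly.
hit_unsalted = precomputed.get(h(b"password"))          # a hit

# Salted oracle H(salt || x): a public salt chosen after preprocessing
# renders the table useless, since it was built for the unsalted oracle.
salt = b"a3f1"
hit_salted = precomputed.get(h(salt + b"password"))     # no hit
```

The salting compiler above generalizes exactly this effect: prepending a fresh public salt forces any advice computed before the salt was chosen to miss.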

One of the most important tools for assessing hardness assumptions in cryptography is the Generic Group Model (GGM). Over the past two decades, numerous assumptions have been analyzed within this model. While a proof in the GGM can certainly provide some measure of confidence in an assumption, its scope is rather limited since it does not capture group-specific algorithms that make use of the representation of the group.
To overcome this limitation, we propose the Algebraic Group Model (AGM), a model that lies in between the standard model and the GGM. It is the first restricted model of computation covering group-specific algorithms that still allows one to derive simple and meaningful security statements.
We show that several important assumptions, among them the Computational Diffie-Hellman, the Strong Diffie-Hellman, and the interactive LRSW assumptions, are equivalent to the Discrete Logarithm (DLog) assumption in the AGM. On the more practical side, we prove tight security reductions for two important schemes in the AGM to DLog or a variant thereof: the BLS signature scheme and Groth's zero-knowledge SNARK (Eurocrypt '16), which is the most efficient SNARK for which only a proof in the GGM was known.
Moreover, in combination with known lower bounds on the Discrete Logarithm assumption in the GGM, our results can be used to derive lower bounds for all the above-mentioned results in the GGM.

Private Information Retrieval (PIR) allows a client to obtain data from a public database without disclosing the locations accessed. Traditionally, the stress is on preserving sublinear work for the client, while the server's work is taken to inevitably be at least linear in the database size.
Beimel, Ishai and Malkin (JoC 2004) show PIR schemes where, following a linear-work preprocessing stage, the server's work per query is sublinear in the database size. However, that work only addresses the case of multiple non-colluding servers; the existence of single-server PIR with sublinear server work remained unaddressed.
We consider single-server PIR schemes where, following a preprocessing stage in which the server obtains an encoded version of the database and the client obtains a short key, the per-query work of both server and client is polylogarithmic in the database size. We call such schemes doubly efficient.
Concentrating on the case where the client's key is secret, we show:
- A scheme, based on one-way functions, that works for a bounded number of queries, and where the server storage is linear in the number of queries plus the database size.
- A family of schemes for an unbounded number of queries, whose security follows from a corresponding family of new hardness assumptions related to the hardness of solving a system of noisy linear equations.
We also show the insufficiency of a natural approach for obtaining doubly efficient PIR in the setting where the preprocessing is public.
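For background, the classic information-theoretic two-server scheme, whose linear per-query server work is what the doubly efficient notion improves upon, can be sketched as follows (hypothetical helper names; each server sees only a uniformly random subset and learns nothing about the queried index):

```python
import random

def xor_bits(db, indices):
    """A server's answer: XOR of the requested positions (linear work)."""
    out = 0
    for i in indices:
        out ^= db[i]
    return out

def pir_query(db, i):
    """Two-server PIR: query index i of the bit-database db."""
    n = len(db)
    s1 = {j for j in range(n) if random.random() < 0.5}  # uniform subset
    s2 = s1 ^ {i}                  # same subset with index i flipped
    a1 = xor_bits(db, s1)          # sent to server 1
    a2 = xor_bits(db, s2)          # sent to server 2
    return a1 ^ a2                 # all positions cancel except db[i]
```

Correctness is immediate since every position other than $i$ appears in both subsets or neither; privacy holds because each of $s_1$ and $s_2$ is individually uniform.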

Protocols for Private Set Intersection (PSI) are important cryptographic primitives that perform joint operations on datasets in a privacy-preserving way. They allow two parties to compute the intersection of their private sets without revealing any additional information beyond the intersection itself. PSI implementations in the literature usually do not use the best possible cryptographic implementation techniques. This results in protocols with computational and communication complexities that are prohibitive, particularly when one of the protocol's participants is a low-powered device and there are bandwidth restrictions. This paper builds on modern cryptographic engineering techniques and proposes optimizations for a promising one-way PSI protocol based on public-key cryptography. For the case when one of the parties holds a set much smaller than the other (a realistic assumption in many scenarios), we show that our improvements and optimizations yield a protocol that outperforms previous proposals in communication complexity and running time by up to a factor of one thousand.
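A Diffie-Hellman-style one-way PSI in the spirit of such public-key protocols can be sketched as follows (toy parameters far too small for real security; `psi` and `hash_to_group` are hypothetical names, and both parties' keys are generated in one place only for brevity):

```python
import hashlib
import secrets

# A 127-bit Mersenne prime; fine for a sketch, not for real deployments.
P = 2**127 - 1

def hash_to_group(item: str) -> int:
    """Map an item to a group element (simplified hashing for the sketch)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def psi(set_a, set_b):
    """DH-based PSI: each party exponentiates hashed elements with its
    secret key; the double-exponentiations match iff the items do."""
    a = secrets.randbelow(P - 2) + 1   # party A's secret key
    b = secrets.randbelow(P - 2) + 1   # party B's secret key
    # A sends H(x)^a for each x; B raises each to b, yielding H(x)^(ab).
    a_to_ab = {pow(pow(hash_to_group(x), a, P), b, P): x for x in set_a}
    # B sends H(y)^b for each y; A raises each to a and compares.
    return {a_to_ab[v] for y in set_b
            if (v := pow(pow(hash_to_group(y), b, P), a, P)) in a_to_ab}
```

Because exponentiation commutes, $H(x)^{ab} = H(x)^{ba}$, so matches reveal exactly the intersection, while non-matching blinded values hide the remaining elements under a DDH-style assumption.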