We consider Symmetric Searchable Encryption with Sharing and Unsharing (SSEwSU), a notion that
models Symmetric Searchable Encryption (SSE) in a multi-user setting in which documents can be
dynamically shared and unshared among users. Previous work on multi-user SSE has assumed that all
users have access to the same set of documents and/or that all users in the system are trusted.
As in SSE, every construction of an SSEwSU scheme is a trade-off between efficiency and security, as
measured by the amount of leakage. In multi-user settings, we must also consider cross-user leakage
(x-user leakage), in which a query performed by one user leaks information about the content of
documents shared with a different user.
We start by presenting two strawman solutions that sit at opposite ends of the efficiency-leakage
space: x-uz, which has zero x-user leakage but is very inefficient, and x-uL, which is very
efficient but highly insecure, with very large x-user leakage. We give a third construction, x-um, that is as
efficient as x-uL (and thus more efficient than x-uz) while being considerably more secure than
x-uL. Construction x-um is based on the concept of Re-writable Deterministic Hashing (RDH), which
can be thought of as a two-argument hash function with tokens that add re-writing capabilities. Sharing
and unsharing in x-um are supported in time constant in the number of users, documents, and keywords.
We give a concrete instantiation whose security is based on the Decisional Diffie-Hellman assumption.
We provide a rigorous analysis of x-um and show a tight bound on its leakage in the presence of an
active adversary that corrupts a subset of the users. We report on experimental work showing that
x-um is very efficient and that x-user leakage grows very slowly as users perform queries.
Additionally, we present extensions of x-um. We modify x-um to support finer-grained access
control, so that a document can be shared with a user either only for reading (i.e., searching) or for writing
(i.e., editing). We also extend x-um to the bilinear setting to further reduce leakage.
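To make the RDH abstraction concrete, the following toy sketch shows one plausible DDH-style instantiation of a two-argument hash with re-writing tokens. The group parameters are toy-sized and completely insecure, and the function and variable names are ours; the paper's actual construction may differ in its details.

```python
import hashlib

# Toy parameters (illustration only, far too small for security):
# p = 2q + 1 is a safe prime; we work in the order-q subgroup of Z_p*.
p, q = 1019, 509
g = 4  # a quadratic residue, hence a generator of the order-q subgroup

def hash_to_group(x: bytes) -> int:
    """Map a keyword deterministically into the order-q subgroup."""
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q
    return pow(g, e, p)

def rdh(k: int, x: bytes) -> int:
    """Two-argument deterministic hash H(k, x) = hash_to_group(x)^k."""
    return pow(hash_to_group(x), k, p)

def token(a: int, b: int) -> int:
    """Re-writing token from key a to key b."""
    return (b * pow(a, -1, q)) % q

def rewrite(h: int, delta: int) -> int:
    """Re-write H(a, x) into H(b, x) without knowing x."""
    return pow(h, delta, p)

a, b = 123, 456
h_a = rdh(a, b"keyword")
assert rewrite(h_a, token(a, b)) == rdh(b, b"keyword")
```

The rewriting works because group elements have order dividing q, so exponent arithmetic modulo q cancels: (H(x)^a)^(b/a) = H(x)^b.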

We present new constructions of multi-input functional encryption (MIFE) schemes for the inner-product functionality that improve the state-of-the-art solution of Abdalla et al. (Eurocrypt 2017) in two main directions.
First, we put forward a novel methodology to convert single-input functional encryption for inner products into multi-input schemes for the same functionality. Our transformation is surprisingly simple, general, and efficient. In particular, it does not require pairings and it can be instantiated with all known single-input schemes. This leads to two main advances. First, we enlarge the set of assumptions this primitive can be based on, notably obtaining new MIFEs for inner products from plain DDH, LWE and Composite Residuosity. Second, we obtain the first MIFE schemes from standard assumptions where decryption works efficiently even for messages of super-polynomial size.
Our second main contribution is the first function-hiding MIFE scheme for inner products based on standard assumptions. To this end, we show how to extend the original, pairing-based, MIFE by Abdalla et al. in order to make it function hiding, thus obtaining a function-hiding MIFE from the MDDH assumption.
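For reference, the inner-product functionality computed by such multi-input schemes is standardly written (notation ours) as:

```latex
\[
f\big(\mathbf{x}_1,\ldots,\mathbf{x}_n\big) \;=\; \sum_{i=1}^{n} \langle \mathbf{x}_i, \mathbf{y}_i \rangle ,
\]
```

where slot $i$ encrypts $\mathbf{x}_i$ independently and the functional decryption key is associated with the full tuple $(\mathbf{y}_1,\ldots,\mathbf{y}_n)$.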

We propose a protocol to securely compute the solution to the (single-source) Shortest Path Problem, based on Dijkstra's algorithm and Secure Multiparty Computation. Our protocol improves on the state of the art by Aly et al. [FC 2013 \& ICISC 2014] and offers perfect security against both semi-honest and malicious adversaries. Moreover, it can easily be adapted to serve as a subroutine in other combinatorial mechanisms, and we show how it can help solve certain combinatorial auctions. Finally, we demonstrate the efficiency of our protocol through experiments and benchmarking.

Functional encryption, which has emerged in the community recently, is a generalization of traditional encryption (e.g., RSA and AES).
In a traditional encryption scheme, decrypting a ciphertext with a correct decryption key outputs the original plaintext associated with the ciphertext.
In contrast, in a functional encryption scheme, decrypting a ciphertext with a correct decryption key outputs a value derived from both the plaintext and the decryption key, and the decryption output changes when a different correct decryption key is used to decrypt the same ciphertext.
We propose a new functional encryption scheme for multidimensional range query.
Consider a ciphertext that is the encryption of some secret plaintext under a public attribute (a multidimensional point), together with a decryption key corresponding to a query range and a function key.
If the public attribute point is within the query range,
a user is able to decrypt the ciphertext with the decryption key to obtain a value, namely the output of a pre-defined \emph{one-way} function with the secret plaintext and the function key as input. In comparison, in previous functional encryption schemes for range queries, decryption simply outputs the original secret plaintext when the attribute point is within the query range.
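The decryption semantics above can be illustrated with a toy sketch, in which HMAC-SHA256 stands in for the pre-defined one-way function. This is purely illustrative: the range check is done with a plain if-statement and nothing is actually encrypted, whereas a real scheme enforces both cryptographically.

```python
import hmac, hashlib

def oneway(plaintext: bytes, function_key: bytes) -> bytes:
    """Stand-in for the pre-defined one-way function F(m, fk)."""
    return hmac.new(function_key, plaintext, hashlib.sha256).digest()

def in_range(point, box) -> bool:
    """Multidimensional range test: each coordinate inside its interval."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, box))

def decrypt(ciphertext, attribute_point, query_range, function_key):
    """Toy semantics only: returns F(plaintext, fk), never the plaintext."""
    if not in_range(attribute_point, query_range):
        return None                 # decryption fails outside the query range
    plaintext = ciphertext          # stand-in: no actual encryption here
    return oneway(plaintext, function_key)

out = decrypt(b"secret", (3, 7), [(0, 5), (5, 10)], b"fk")
assert out == oneway(b"secret", b"fk")   # a derived value, not the plaintext
```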

Indistinguishability obfuscation (iO) is a powerful cryptographic
tool often employed to construct a variety of core cryptographic
primitives such as public key encryption and signatures. In this paper, we focus on the employment of iO in order to construct short signatures with strong security guarantees (i.e., adaptive security) that provide a very efficient signing process for resource constrained devices. Sahai and Waters (SW) (STOC 2014) initially explored the construction of iO-based short signature schemes but their proposal provides selective security. Ramchen and Waters (RW) (CCS 2014) attempted to provide stronger security guarantees (i.e., adaptive security) but their proposal is much more computationally expensive than the SW proposal.
In this work, we propose an iO-based short signature scheme that provides adaptive security and fast signing for resource-constrained devices, and that is much more cost-efficient than the RW signature scheme. More precisely, we employ a puncturable PRF with a fixed-length input to get a fast and adaptively secure signature scheme without any additional hardness assumption, as in the SW signature scheme. To achieve this goal, we employ the technique of Hofheinz et al. called "delayed backdoor programming" using a random oracle, which allows embedding an execution thread that will only be invoked by special inputs generated using secret-key information. Furthermore, we compare the cost of our signature scheme in terms of the cost of the underlying PRG used by the puncturable PRF. Our scheme has a much lower cost than the RW scheme, while providing strong security guarantees (i.e., adaptive security).

Recently, a new framework, called secure server-designation public key encryption with keyword search (SPEKS), was introduced to improve the security of dPEKS (which suffers from the on-line keyword guessing attack) by defining a new security model, ‘original ciphertext indistinguishability’. In this paper, we note that an off-line keyword guessing attack can be launched by a malicious server to find the keyword used for generating a trapdoor, an attack not considered in the related work; SPEKS can suffer from this kind of attack. Moreover, the security model defined for TD-IND in SPEKS is incomplete. Owing to these weaknesses, we enhance the existing security models for trapdoor indistinguishability by defining two new security models. Finally, we propose a new framework.

In August 2015 the U.S. National Security Agency (NSA) released a major policy statement on the need for post-quantum
cryptography (PQC). This announcement will be a great stimulus to the development, standardization, and commercialization of new quantum-safe algorithms. However, certain peculiarities in the wording and timing of the statement have puzzled many people and given rise to much speculation concerning the NSA, elliptic curve cryptography (ECC), and quantum-safe cryptography. Our purpose is to attempt to evaluate some of the theories that have been proposed.

A fuzzy extractor (Dodis et al., Eurocrypt 2004) is a pair of procedures that turns a noisy secret into a uniformly distributed key R. To eliminate noise, the generation procedure takes as input an enrollment value w and outputs R and a helper string P that enables later reproduction of R from some close reading w'.
Boyen (CCS 2004) immediately highlighted the need for reusable fuzzy extractors, which remain secure even when numerous calls to the generation procedure are made on a user's noisy secret, and proved that any information-theoretically secure reusable fuzzy extractor is subject to strong limitations. In subsequent work, Simoens et al. (IEEE S&P 2009) showed that reusability was indeed a practical vulnerability.
More recently, Canetti et al. (Eurocrypt 2016) proposed moving to computational security and constructed a computationally secure reusable fuzzy extractor for the Hamming metric that corrects a sublinear fraction of errors.
We propose a generic approach to building reusable fuzzy extractors where the main idea is to separate the reusability property from the key recovery. To do so, we define a new primitive called a reusable pseudoentropic isometry that projects an input metric space in a distance-and-entropy-preserving manner
even if applied multiple times. Generation of multiple randomized secrets $\Omega$s via such a tool does not reveal information about the original fuzzy secret w and can be used to “decorrelate” noisy versions of w.
We show that building a reusable fuzzy extractor from a reusable pseudoentropic isometry is straightforward by 1) randomizing the noisy secret w into $\Omega$ and 2) applying a traditional fuzzy extractor to derive a secret key from $\Omega$.
To show the promise of our framework, we propose instantiations that handle the set difference and Hamming metrics. The first is an original construction based on composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008), yielding the first reusable fuzzy extractor that corrects a linear fraction of errors. For the second, we show that Construction 2 proposed by Canetti et al. at Eurocrypt 2016 (Section 5.1) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires the noisy secret's distribution to have entropy in each symbol of the alphabet.
Finally, we describe two practical solutions that reap the benefits of our results while dealing with the aforementioned limitation.
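The two-step composition can be sketched as follows. The XOR-pad "isometry" and the repetition-code fuzzy extractor are deliberately naive stand-ins chosen by us: XOR with a pad does preserve Hamming distance, but neither stand-in is a secure instantiation of the paper's primitives.

```python
import secrets

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Step 1: stand-in for the pseudoentropic isometry. XOR with a pad preserves
# Hamming distance; a real reusable pseudoentropic isometry must also hide w.
def randomize(w_bits, pad):
    return xor(w_bits, pad)

# Step 2: a traditional fuzzy extractor (code-offset with a repetition code).
REP = 5  # each key bit is repeated 5 times; corrects up to 2 flips per block

def gen(omega):
    r = [secrets.randbelow(2) for _ in range(len(omega) // REP)]
    codeword = [bit for bit in r for _ in range(REP)]
    return r, xor(codeword, omega)          # key R and helper string P

def rep(omega_noisy, p):
    noisy_code = xor(p, omega_noisy)        # codeword XOR error pattern
    return [int(sum(noisy_code[i * REP:(i + 1) * REP]) > REP // 2)
            for i in range(len(noisy_code) // REP)]

# Composition: randomize w into omega, then extract a key from omega.
w = [secrets.randbelow(2) for _ in range(20)]
pad = [secrets.randbelow(2) for _ in range(20)]
omega = randomize(w, pad)
r_key, helper = gen(omega)

w_noisy = list(w); w_noisy[3] ^= 1; w_noisy[11] ^= 1   # two bit errors
assert rep(randomize(w_noisy, pad), helper) == r_key
```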

In recent years, performance counters have been used as a side-channel source for branch mispredictions, which in turn have been used to attack ciphers with only user privileges. Existing research considers blinding techniques, such as scalar blinding and scalar splitting, as mechanisms for thwarting such attacks. In this work, we reverse engineer the undisclosed model of the branch predictors in Intel's Broadwell and Sandy Bridge processors, and further utilize the largely unexplored perf ioctl calls in sampling mode to granularly monitor branch prediction events asynchronously while a victim cipher is executing. With these artifacts in place, we target the scalar blinding and splitting countermeasures and develop a key retrieval process that we call Deduce & Remove. The Deduce step uses templates, based on the number of branch misses expected from the 3-bit model of the BPU, to infer matched candidate values. In the Remove step, we correct any erroneous conclusions by using the properties of the blinding technique under attack. We emphasize that, since in iterated attacks the cost of a mistaken deduction can be significant, the blinding technique actually aids in removing wrong guesses and, in a way, auto-corrects the key retrieval process. Finally, we provide detailed experimental results for point blinding, scalar blinding, and scalar splitting, showing that the secret scalar can be correctly recovered with high confidence. The paper concludes with recommendations on suitable algorithm-level countermeasures to thwart such attacks.

In anonymous identity-based encryption (IBE), ciphertexts not only hide their corresponding messages, but also their target identity. We construct an anonymous IBE scheme based on the Computational Diffie-Hellman (CDH) assumption in general groups (and thus, as a special case, based on the hardness of factoring Blum integers).
Our approach extends and refines the recent tree-based approach of Cho et al. (CRYPTO '17) and Döttling and Garg (CRYPTO '17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any one-way function), and blind batch encryption (which we construct based on CDH).
We then further demonstrate the applicability of our newly-developed tools by showing that batch encryption implies a public-key encryption scheme that is both resilient to leakage of a $(1-o(1))$-fraction of its secret key, and KDM secure (or circular secure) with respect to all linear functions of its secret key (which, in turn, is known to imply KDM security for bounded-size circuits). These yield the first high-rate leakage-resilient encryption scheme and the first KDM-secure encryption scheme based on the CDH or Factoring assumptions.
Finally, relying on our techniques we also construct a batch encryption scheme based on the hardness of the Learning Parity with Noise (LPN) problem, albeit with very small noise rate $\Omega(\log^2(n)/n)$. Although this batch encryption scheme is not blind, we show that it still implies standard (i.e., non-anonymous) IBE, leakage resilience and KDM security. IBE and high-rate leakage resilience were not previously known from LPN, even with extremely low noise.

We introduce Multi Tree XMSS (XMSS^MT), a hash-based signature scheme that can be used to sign a virtually unlimited number of messages. It is provably forward secure and hence EU-CMA secure in the standard model, and it improves key and signature generation times compared to previous schemes. Like all practical hash-based signature schemes, XMSS^MT has many parameters that control different trade-offs between security, runtimes, and sizes. Using linear optimization, we show how to select provably optimal parameter sets for different use cases.

We present WOTS+, a Winternitz-type one-time signature scheme (WOTS). We prove that WOTS+ is strongly unforgeable under chosen message attacks in the standard model. Our proof is exact and tight. The first property allows us to compute the security of the scheme for given parameters. The second property allows for shorter signatures than previous proposals without lowering the security. This improvement in signature size directly carries over to all recent hash-based signature schemes: for example, we can reduce the signature size by more than 50% for XMSS+ at a security level of 80 bits. As the main drawback of hash-based signature schemes is assumed to be the signature size, this is a further step toward making hash-based signatures practical.
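As background, the core Winternitz idea that WOTS+ refines can be sketched as a plain hash-chain scheme with a checksum. This deliberately omits the bitmasks and the tight standard-model proof that distinguish WOTS+; parameter choices here are toy-sized.

```python
import hashlib, secrets

W, L1 = 16, 8     # Winternitz parameter and number of message digits
L2 = 2            # checksum digits (max checksum is 8 * 15 = 120 < 16^2)

def chain(x: bytes, steps: int) -> bytes:
    """Apply the hash chain `steps` times (plain Winternitz, no bitmasks)."""
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def to_digits(msg: bytes):
    """Base-16 message digits plus checksum digits."""
    d = []
    for byte in hashlib.sha256(msg).digest()[:L1 // 2]:
        d += [byte >> 4, byte & 0xF]
    c = sum(W - 1 - x for x in d)      # checksum blocks "hash forward" forgery
    return d + [(c >> 4) & 0xF, c & 0xF]

def keygen():
    sk = [secrets.token_bytes(32) for _ in range(L1 + L2)]
    return sk, [chain(s, W - 1) for s in sk]      # pk = top of each chain

def sign(msg, sk):
    return [chain(s, d) for s, d in zip(sk, to_digits(msg))]

def verify(msg, sig, pk):
    return all(chain(s, W - 1 - d) == p
               for s, d, p in zip(sig, to_digits(msg), pk))

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)
```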

Designing a secure permissionless distributed ledger that performs on
par with centralized payment processors such as Visa is challenging.
Most existing distributed ledgers are unable to "scale out'' --
growing total processing capacity with the number of participants --
and those that do compromise security or decentralization.
This work presents OmniLedger, the first scale-out
distributed ledger that can preserve long-term security
under permissionless operation.
OmniLedger ensures strong correctness and security
by using a bias-resistant public randomness protocol
to choose large statistically representative shards to process transactions,
and by introducing an efficient cross-shard commit protocol
to handle transactions affecting multiple shards atomically.
In addition, OmniLedger optimizes performance via
scalable intra-shard parallel transaction processing,
ledger pruning via collectively-signed state blocks,
and optional low-latency "trust-but-verify'' validation
of low-value transactions.
Evaluation of our working experimental prototype shows that
OmniLedger's throughput scales linearly in the number of validators available,
supporting Visa-level workloads and beyond,
while confirming typical transactions in under two seconds.

We present DECIM, an approach to the challenge of detecting endpoint compromise in messaging. DECIM manages and refreshes encryption/decryption keys in an automatic and transparent way: it requires every use of a key to be inserted in an append-only log, which the device owner can interrogate in order to detect misuse.
We propose a multi-device messaging protocol that exploits our concept to allow users to detect unauthorised usage of their device keys. It is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing the main features required for deployment. We find that DECIM messaging is efficient even for millions of users.
The methods we introduce are not intended to replace existing methods used to keep keys safe (such as hardware devices, careful procedures, or key refreshment techniques). Rather, our methods provide a useful and effective additional layer of security.
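The append-only-log idea can be sketched with a hash-chained log; the class and field names below are illustrative stand-ins of ours, not DECIM's actual data structures or transparency-log machinery.

```python
import hashlib, json

class KeyUsageLog:
    """Toy append-only log: each head commits to the entire history,
    so an entry cannot later be altered or removed unnoticed."""

    def __init__(self):
        self.entries = []
        self.heads = [hashlib.sha256(b"genesis").hexdigest()]

    def append(self, key_id: str, purpose: str):
        entry = json.dumps({"key": key_id, "purpose": purpose})
        head = hashlib.sha256((self.heads[-1] + entry).encode()).hexdigest()
        self.entries.append(entry)
        self.heads.append(head)

    def audit(self, key_id: str):
        """The device owner interrogates the log for uses of its key."""
        return [e for e in self.entries if json.loads(e)["key"] == key_id]

log = KeyUsageLog()
log.append("device-A", "send message")
log.append("device-B", "send message")
assert len(log.audit("device-A")) == 1
```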

Secure and highly efficient authenticated encryption (AE) algorithms that achieve data confidentiality and authenticity in the symmetric-key setting have existed for well over a decade. By all conventional measures, AES-OCB seems to be the AE algorithm of choice on any platform with AES-NI: it has a proof showing it is secure assuming AES is, and it is one of the fastest out of all such algorithms. However, algorithms such as AES-GCM and ChaCha20+Poly1305 have seen more widespread adoption, even though they will likely never outperform AES-OCB on platforms with AES-NI. Given that changing algorithms is a long and costly process, some have set out to maximize the security that can be achieved with the already deployed algorithms, without sacrificing efficiency: ChaCha20+Poly1305 already improves over GCM in how it authenticates, GCM-SIV uses GCM's underlying components to provide nonce misuse resistance, and TLS 1.3 introduces a randomized nonce in order to improve GCM's multi-user security. We continue this line of work by looking more closely at GCM and ChaCha20+Poly1305 to see what robustness they already provide over algorithms such as OCB, and whether minor variants of the algorithms can be used for applications where defense in depth is critical. We formalize and illustrate how GCM and ChaCha20+Poly1305 offer varying degrees of resilience to nonce misuse, as they can recover quickly from repeated nonces, as opposed to OCB, which loses all security. More surprisingly, by introducing minor tweaks such as an additional XOR, we can create a GCM variant which provides security even when unverified plaintext is released.
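As background for the nonce-misuse comparison, the basic confidentiality failure of any CTR-style encryption layer under a repeated nonce can be reproduced with a toy stream cipher. Here HMAC-SHA256 stands in for AES-CTR; note that real GCM additionally risks authentication-key recovery under nonce reuse, which this sketch does not model.

```python
import hashlib, hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream (stand-in for GCM's AES-CTR layer)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"k" * 16, b"n" * 12
p1, p2 = b"attack at dawn!", b"retreat at dusk"
c1, c2 = encrypt(key, nonce, p1), encrypt(key, nonce, p2)

# Nonce reuse cancels the keystream: c1 XOR c2 = p1 XOR p2.
assert bytes(a ^ b for a, b in zip(c1, c2)) == \
       bytes(a ^ b for a, b in zip(p1, p2))
```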

The multi-key, or multi-user, setting challenges cryptographic algorithms to maintain high levels of security when used with many different keys, by many different users. Its significance lies in the fact that in the real world, cryptography is rarely used with a single key in isolation. A folklore result, proved by Bellare, Boldyreva, and Micali for public-key encryption in EUROCRYPT 2000, states that the success probability in attacking any one of many independently keyed algorithms can be bounded by the success probability of attacking a single instance of the algorithm, multiplied by the number of keys present. Although sufficient for settings in which not many keys are used, once cryptographic algorithms are used on an internet-wide scale, as is the case with TLS, the effect of multiplying by the number of keys can drastically erode security claims. We establish a sufficient condition on cryptographic schemes and security games under which multi-key degradation is avoided. As illustrative examples, we discuss how AES and GCM behave in the multi-key setting, and prove that GCM, as a mode, does not have multi-key degradation. Our analysis allows limits on the amount of data that can be processed per key by GCM to be significantly increased. This leads directly to improved security for GCM as deployed in TLS on the Internet today.
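A quick calculation illustrates the erosion caused by the folklore multi-key bound; the advantage and key-count values below are illustrative, not taken from any particular deployment.

```python
from math import log2

single_key_advantage = 2.0 ** -60   # assumed single-instance attack probability
n_keys = 2 ** 30                    # e.g. keys across an internet-wide deployment

# Folklore hybrid bound: Adv_multi <= n_keys * Adv_single.
multi_key_bound = n_keys * single_key_advantage
assert log2(multi_key_bound) == -30  # 60 bits of security erode to 30
```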

In the context of the security evaluation of cryptographic implementations, profiling attacks (aka Template Attacks) play a fundamental role. Nowadays the most popular Template Attack strategy consists of approximating the information leakage by Gaussian distributions. Nevertheless, this approach suffers from the difficulty of dealing with both trace misalignment and the high dimensionality of the data, forcing the attacker to perform critical preprocessing phases, such as selecting points of interest and realigning measurements. Some software and hardware countermeasures have been conceived exactly to create such misalignment. In this paper we propose an end-to-end profiling attack strategy based on Convolutional Neural Networks (CNNs): this strategy greatly simplifies the attack roadmap, since it requires neither a prior trace realignment nor a precise selection of points of interest. To significantly increase the performance of the CNN, we moreover propose to equip it with the data augmentation technique that is classical in other applications of Machine Learning. As validation, we present several experiments against traces misaligned by different kinds of countermeasures, including the augmentation of the clock jitter effect in a secure hardware implementation on a modern chip. The excellent results achieved in these experiments show that the Convolutional Neural Network approach combined with data augmentation gives a very efficient alternative to state-of-the-art profiling attacks.
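A minimal sketch of shifting-style data augmentation for side-channel traces: random shifts simulate clock-jitter-like misalignment so the CNN learns shift invariance. The exact deformations used in the paper may differ; the function name is ours.

```python
import random

def shift_augment(trace, max_shift):
    """Return a randomly shifted copy of a trace, zero-padded at the ends,
    simulating misalignment so a CNN trained on it becomes shift-tolerant."""
    s = random.randint(-max_shift, max_shift)
    if s >= 0:
        return [0.0] * s + trace[:len(trace) - s]
    return trace[-s:] + [0.0] * (-s)

trace = [float(i) for i in range(10)]
aug = shift_augment(trace, 3)
assert len(aug) == len(trace)   # augmentation preserves the trace length
```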

In the RFC 7748 memorandum, the Internet Research Task Force specified a Montgomery-ladder scalar multiplication function based on two recently adopted elliptic curves, ``curve25519'' and ``curve448''. The purpose of this function is to support the Diffie-Hellman key exchange algorithm that will be included in the forthcoming version of the Transport Layer Security cryptographic protocol. In this paper, we describe a ladder variant that accelerates the fixed-point multiplication function inherent to the Diffie-Hellman key-pair generation phase. Our proposal combines a right-to-left version of the Montgomery ladder with the pre-computation of constant values directly derived from the base point and its multiples. To our knowledge, this is the first proposal of a Montgomery ladder procedure for prime elliptic curves that admits the extensive use of pre-computation. In exchange for very modest memory resources and a small extra programming effort, the proposed ladder obtains significant speedups for software implementations.
Moreover, our proposal fully complies with the RFC 7748 specification.
Our estimates suggest that a full implementation of our pre-computable ladder should outperform state-of-the-art software implementations of the X25519 and X448 functions by a 40\% speedup in the fixed-point scenario.
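For background, the classical left-to-right Montgomery ladder has the following structure, shown here for modular exponentiation rather than elliptic-curve point arithmetic; the right-to-left, pre-computable variant proposed above reorganizes this loop.

```python
def montgomery_ladder_pow(g: int, k: int, p: int) -> int:
    """Left-to-right Montgomery ladder for g^k mod p. Both branches perform
    one multiply and one square, the regularity that helps against simple
    side-channel analysis; for X25519/X448 the same structure operates on
    curve points instead of integers."""
    r0, r1 = 1, g                       # invariant: r1 = r0 * g
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            r0, r1 = (r0 * r1) % p, (r1 * r1) % p
        else:
            r0, r1 = (r0 * r0) % p, (r0 * r1) % p
    return r0

p = 2**255 - 19
assert montgomery_ladder_pow(9, 31415926535, p) == pow(9, 31415926535, p)
```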

We present Recursive Square Root ORAM (R-SQRT), a simple and flexible ORAM that can be instantiated for different client storage requirements. R-SQRT requires significantly less bandwidth than Ring and Partition ORAM, the previous two best practical constructions in their respective classes of ORAM according to client storage requirements. Specifically, R-SQRT is a 4x improvement in amortized bandwidth over Ring ORAM for similar server storage. R-SQRT is also a 1.33-1.5x improvement over Partition ORAM under the same memory restrictions. R-SQRT-AHE, a variant of R-SQRT, is a 1.67-1.75x improvement over the reported Partition ORAM results in the same settings. All the while, R-SQRT maintains a single data roundtrip per query. We emphasize the simplicity of R-SQRT, whose security and performance proofs are straightforward.
Additionally, we present Twice-Recursive Square Root ORAM (TR-SQRT), which has smaller client storage requirements. Due to its flexibility, we construct several instantiations under different memory requirements. TR-SQRT is asymptotically competitive with previous results, yet remarkably simple.

Cryptographic hash functions have wide applications, including password hashing, pricing functions for spam and denial-of-service countermeasures, and proofs of work in cryptocurrencies. Recent progress on ASIC (Application-Specific Integrated Circuit) hash engines raises concerns about the security of the above applications. This has led to a growing interest in ASIC-resistant hash functions and ASIC-resistant proof-of-work schemes, i.e., those that do not give ASICs a huge advantage. The standard approach toward ASIC resistance today is through memory-hard functions or memory-hard proof-of-work schemes. However, we observe that the memory-hardness approach is an incomplete solution: it only attempts to counter an ASIC's area advantage but overlooks the more important energy advantage. In this paper, we propose the notion of bandwidth-hard functions to reduce an ASIC's energy advantage. CPUs cannot compete with ASICs in the energy efficiency of computation, but we can rely on memory accesses to reduce an ASIC's energy advantage, because the energy costs of memory accesses are comparable for ASICs and CPUs. We propose a model for hardware energy cost that has sound foundations in practice. We then analyze the bandwidth-hardness property of ASIC-resistant candidates, finding that scrypt, Catena-BRG, and Balloon are bandwidth hard with suitable parameters. Lastly, we observe that a capacity-hard function is not necessarily bandwidth hard, with a stacked double butterfly graph being a counterexample.