Private function evaluation (PFE) aims to securely compute a function f(x_1, ..., x_n) without leaking any information beyond what is revealed by the output, where f is a private input of one of the parties (say Party_1) and x_i is a private input of the i-th party Party_i. In this work, we propose a novel and secure two-party private function evaluation (2PFE) scheme based on the DDH assumption. Our scheme introduces a reusability feature that significantly improves on the state of the art. Accordingly, our scheme has two variants: one is used in the first evaluation of the function f, and the other in its subsequent evaluations. To the best of our knowledge, this is the first special-purpose PFE scheme that enjoys a reusability feature, and the most efficient one to date. Our protocols achieve linear communication and computation complexities and a constant number of rounds (at most three).

We point out an implicit unproven assumption underlying the security of rational proofs of storage, related to a concept we call weak randomized compression. Rational proofs of storage are a class of interactive proofs designed in a security model with a rational prover, who is encouraged to store data (possibly in a particular format) because a deviating prover either fails verification or does not save any storage costs (in some cases it may even increase costs by ``wasting'' space).
Weak randomized compression is a scheme that takes a random seed $r$ and a compressible string $s$ and outputs a compression of the concatenation $r \circ s$. Strong compression would compress $s$ by itself (and store the random seed separately).
In the context of a storage protocol, it is plausible that the adversary knows a weak compression that uses its incompressible storage advice as a seed to help compress other useful data it is storing, and yet it does not know a strong compression that would perform just as well.
It therefore may be incentivized to deviate from the protocol in order to save space. This would be particularly problematic for proofs of replication, designed to encourage provers to store data in a redundant format, which weak compression would likely destroy.
We thus motivate the question of whether weak compression can always be used to efficiently construct strong compression, and find (negatively) that a black-box reduction would imply a universal compression scheme in the random oracle model for all compressible efficiently sampleable sources. Implausibility of universal compression aside, we conclude that constructing this black-box reduction for a class of sources is at least as hard as directly constructing a universal compression scheme for that class.

In this paper, we propose an improved cryptanalysis of the double-branch hash function RIPEMD-160, standardized by ISO/IEC. First, we show how to theoretically calculate the step differential probability of RIPEMD-160, which was stated as an open problem by Mendel et al. at ASIACRYPT 2013. Second, based on the method proposed by Mendel et al. to automatically find a differential path for RIPEMD-160, we construct a 30-step differential path in which the left branch is sparse and the right branch is kept as sparse as possible. To ensure that message modification techniques can be applied to RIPEMD-160, some extra bit conditions must be pre-deduced and carefully controlled; these extra bit conditions ensure that the modular difference propagates correctly. In this way, we can find a collision for 30-step RIPEMD-160 with complexity $2^{70}$. This is the first collision attack on round-reduced RIPEMD-160. Moreover, by a different choice of the message words used to merge the two branches, and by adding some conditions to the starting point, the semi-free-start collision attack on the first 36 steps of RIPEMD-160 from ASIACRYPT 2013 can be improved. However, the previous way to pre-compute the equation $T^{\lll S_0}\boxplus C_0=(T\boxplus C_1)^{\lll S_1}$ is too costly. To overcome this obstacle, inspired by the work of Daum et al. on MD5, we describe a method that reduces the time and memory complexity of pre-computing that equation. Combining all these techniques, the time complexity of the semi-free-start collision attack on the first 36 steps of RIPEMD-160 can be reduced by a factor of $2^{15.3}$, to $2^{55.1}$.
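To make the shape of the pre-computed equation concrete, the following sketch brute-forces all solutions $T$ of $T^{\lll S_0}\boxplus C_0=(T\boxplus C_1)^{\lll S_1}$ over toy 8-bit words. This is only an illustration of the equation itself, not the authors' efficient pre-computation technique; for RIPEMD-160's 32-bit words, naive enumeration costs $2^{32}$ per equation, which is exactly the cost the paper's method avoids.

```python
def rotl(x, s, w=8):
    """Rotate the w-bit word x left by s positions (0 <= s < w)."""
    mask = (1 << w) - 1
    return ((x << s) | (x >> (w - s))) & mask

def solve(s0, c0, s1, c1, w=8):
    """Brute-force all T with rotl(T, s0) + c0 == rotl(T + c1, s1) (mod 2^w)."""
    m = 1 << w
    return [t for t in range(m)
            if (rotl(t, s0, w) + c0) % m == rotl((t + c1) % m, s1, w)]

# When s0 == s1 and c0 == c1 == 0 the equation is an identity,
# so every 8-bit word is a solution.
print(len(solve(3, 0, 3, 0)))  # 256
```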

Unspent Transaction Outputs (UTXOs) are the internal mechanism used in many cryptocurrencies to represent coins. This representation has some clear benefits, but it also entails complexities that, if not properly handled, may leave the system in an inefficient state. Specifically, inefficiencies arise when wallets (the software responsible for transferring coins between parties) do not manage UTXOs properly when performing payments. In this paper, we study three cryptocurrencies: Bitcoin, Bitcoin Cash, and Litecoin, by analyzing the actual state of their UTXO sets, that is, the status of their sets of spendable coins. These are the top-3 UTXO-based cryptocurrencies by market capitalization. Our analysis shows that each cryptocurrency is used quite differently, which leads to different results. It also points out that transactions have not always been managed efficiently, and thus the actual state of the UTXO set of each cryptocurrency is far from ideal.
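To make the object of study concrete, here is a minimal Python model of a UTXO set (identifiers and amounts are illustrative, not actual Bitcoin serialization): a transaction consumes existing outputs and creates new ones, and careless change handling grows the set instead of consolidating it.

```python
class UTXOSet:
    """Minimal model of a set of spendable coins: (txid, index) -> amount."""
    def __init__(self):
        self.utxos = {}

    def apply_tx(self, txid, inputs, output_amounts):
        """Spend `inputs` (a list of (txid, index) pairs) and create new outputs."""
        in_value = sum(self.utxos.pop(outpoint) for outpoint in inputs)
        out_value = sum(output_amounts)
        assert in_value >= out_value, "outputs exceed inputs"
        for i, amount in enumerate(output_amounts):
            self.utxos[(txid, i)] = amount
        return in_value - out_value  # implicit fee

s = UTXOSet()
s.utxos[("coinbase", 0)] = 50_000  # initial coin
# A payment of 1_000 whose change is fragmented into many small outputs
# bloats the UTXO set: one spendable coin becomes eleven.
s.apply_tx("tx1", [("coinbase", 0)], [1_000] + [100] * 10)
print(len(s.utxos))  # 11
```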

The braid group is an important non-commutative group; it is also an important tool in quantum field theory and has good topological properties. This paper focuses on the provable security of cryptosystems over braid groups, in two respects. First, we prove that the braid-group-based cryptosystem of Ko et al., proposed at CRYPTO 2000, is secure against chosen-plaintext attacks (CPA), although it does not resist active attacks. Second, we propose a new public-key cryptosystem over braid groups that is secure against adaptive chosen-ciphertext attacks (CCA2). Our proofs are in the random oracle model, under the computational conjugacy search (CCS) assumption. To the best of our knowledge, results of this kind have not appeared before.

The ICAO-standardized Password Authenticated Connection Establishment (PACE) protocol is used all over the world to secure access to electronic passports. Key secrecy of PACE is proven by first modeling it as an Observational Transition System (OTS) in CafeOBJ, and then proving invariant properties by induction.

Private linear key agreement (PLKA) enables a group of users to agree upon a common session key in a broadcast encryption (BE) scenario, while a traitor tracing (TT) system allows a tracer to identify a coalition of colluding pirate users. This paper introduces a key encapsulation mechanism for BE that provides the functionalities of both PLKA and TT in a unified, cost-effective primitive. Our PLKA-based traitor tracing offers a solution to the problem of achieving full collusion resistance and public traceability simultaneously, with significant efficiency and storage gains compared to a sequential combination of PLKA and traitor tracing systems. Our PLKA builds on a prime-order multilinear group setting, employing indistinguishability obfuscation (iO) and a pseudorandom function (PRF). The resulting scheme has fair communication, storage, and computational efficiency compared to that of composite-order groups. Our PLKA is adaptively chosen-ciphertext attack (CCA)-secure in the standard model, based on the hardness of a multilinear assumption, namely the Decisional Hybrid Diffie-Hellman Exponent (DHDHE) assumption, and is so far a plausible improvement in the literature. More precisely, our PLKA design significantly reduces the ciphertext size, public parameter size, and user secret key size. We frame a traitor tracing algorithm with shorter running time that can be executed publicly.

A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT '14) and Asharov et al. (STOC '16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed.
We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the ``pad-and-split'' framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega ( N \log N / \log L )$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD '17) which is captured by our pad-and-split framework.
Then, within the ``statistical-independence'' framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log \log \log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1 - \epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1}+ O(\log\log\log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log \log \log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log \log \log N)$ when retrieving its identifiers.

We consider information-theoretic secure two-party computation in the plain model, where no reliable channels are assumed and all communication is performed over a binary symmetric channel (BSC) that flips each bit with fixed probability. In this reality-driven setting, we investigate the feasibility of communication-optimal noise-resilient semi-honest two-party computation, i.e., efficient computation that is both private and correct despite channel noise.
We devise an information-theoretic technique that converts any correct, but not necessarily private, two-party protocol that assumes reliable channels into a protocol that is both correct and private against semi-honest adversaries, assuming BSC channels alone. Our results also apply to other types of noisy channels, such as the elastic channel.
Our construction combines tools from the cryptographic literature with tools from the literature on interactive coding, and achieves, to our knowledge, the best known communication overhead. Specifically, if $f$ is given as a circuit of size $s$, our scheme communicates $O(s + \kappa)$ bits for $\kappa$ a security parameter.
This improves the state of the art (Ishai et al., CRYPTO '11), where the communication is $O(s) + \text{poly}(\kappa \cdot \text{depth}(s))$.

Post-quantum cryptography has attracted much attention from cryptologists worldwide. However, most research concerns public-key cryptosystems, due to Shor's attack on the RSA and ECC ciphers. At CRYPTO 2016, Kaplan et al. broke many secret-key (symmetric) systems using the quantum period-finding algorithm, which has drawn researchers' attention to evaluating symmetric systems against quantum attackers.
In this paper, we continue the study of symmetric ciphers against quantum attackers. First, we convert the classical advanced slide attacks (introduced by Biryukov and Wagner) into quantum ones, gaining an exponential speed-up in time complexity; thus, we can break 2/4K-Feistel and 2/4K-DES in polynomial time. Second, we give a new quantum key-recovery attack on full-round GOST, a Russian standard, with $2^{112}$ Grover iterations, which is faster than a quantum brute-force search by a factor of $2^{16}$.

By representing data in a unary way, the identity of the bits can be used as a printing pad to stain the data with the identity of its handlers. Passed-along data will identify its custodians, its pathway, and its bona fides. This technique will allow databases to recover from a massive breach, as the thieves will be caught when trying to use this 'sticky data'. Heavily traveled data on networks will accumulate the 'fingerprints' of its holders, allowing forensic analysis of fraud attempts or data abuse. There are special applications for the financial industry and for intellectual property management. Fingerprinting data may also offer new ways to balance privacy concerns against public statistical interests. The technique might restore the identification power of the US Social Security Number, despite the fact that millions of them have been compromised. Another specific application regards credit card fraud: once credit card numbers are 'sticky', they are safe. The most prolific application, though, may be in conjunction with digital money technology; the BitMint protocol, for example, establishes its superior security on 'sticky digital coins'. Advanced fingerprinting applications require high-quality randomization. The price paid for the fingerprinting advantage is a larger data footprint -- more bits per content, impacting both storage and transmission. This price is reasonable relative to the gained benefit.
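The core idea can be illustrated with a toy sketch (our own illustration, not the BitMint protocol or any deployed format): if a coin's value is simply its bit count, then the bit values themselves are free to record fixed-width handler identities, and the value survives every stamping.

```python
class UnaryCoin:
    """Toy 'sticky data': value = number of bits; bit identities = fingerprints."""
    def __init__(self, value):
        self.bits = [0] * value   # blank pad; the value is the bit count
        self.cursor = 0           # next free position for a handler stamp

    @property
    def value(self):
        return len(self.bits)

    def stamp(self, handler_id, width=8):
        """Record a handler's fixed-width id in the next unused bit positions."""
        if self.cursor + width > len(self.bits):
            raise ValueError("coin too small for another fingerprint")
        for i in range(width):
            self.bits[self.cursor + i] = (handler_id >> (width - 1 - i)) & 1
        self.cursor += width

    def history(self, width=8):
        """Recover the ordered list of handler ids stamped so far."""
        ids = []
        for start in range(0, self.cursor, width):
            chunk = self.bits[start:start + width]
            ids.append(sum(b << (width - 1 - i) for i, b in enumerate(chunk)))
        return ids

coin = UnaryCoin(64)   # a coin worth 64 units costs 64 bits: the unary price
coin.stamp(0xA5)       # first custodian
coin.stamp(0x3C)       # second custodian
print(coin.value, coin.history())  # 64 [165, 60]
```

The larger footprint is visible directly: a value of 64 occupies 64 bits rather than 7.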

Secure multi-party computation (MPC) is a general cryptographic technique that allows distrusting parties to compute a function of their individual inputs while revealing only the output of the function. It has found applications in areas such as auctioning, email filtering, and secure teleconferencing. Given their importance, it is crucial that such protocols are specified and implemented correctly. In the programming language community, it has become good practice to use computer proof assistants to verify correctness proofs. In the field of cryptography, EasyCrypt is the state-of-the-art proof assistant. It provides an embedded language for probabilistic programming, together with a specialized logic, embedded into an ambient general-purpose higher-order logic, which allows us to conveniently express cryptographic properties. EasyCrypt has been used successfully on many applications, including public-key encryption, signatures, garbled circuits, and differential privacy. Here we show for the first time that it can also be used to prove security of MPC against a malicious adversary.
We formalize additive and replicated secret sharing schemes and apply them to Maurer’s MPC protocol for secure addition and multiplication. Our method extends to general polynomial functions. We follow the insights from EasyCrypt that security proofs can often be reduced to proofs about program equivalence, a topic that is well understood in the verification of programming languages. In particular, we show that for a class of MPC protocols in the passive case the non-interference-based (NI) definition is equivalent to a standard simulation-based security definition. For the active case, we provide a new non-interference based alternative to the usual simulation-based cryptographic definition that is tailored specifically to our protocol.
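The additive scheme underlying the formalization can be sketched concretely (a plain Python illustration of the arithmetic, not EasyCrypt code; the modulus and party count are illustrative): each party holds one random-looking share, and secure addition is purely local.

```python
import secrets

Q = 2**61 - 1  # share-space modulus (illustrative choice)

def share(x, n):
    """Split x into n additive shares: random values summing to x mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Secure addition needs no interaction: each party adds its shares of x
# and y, and the results form an additive sharing of x + y.
n = 3
x, y = 20, 22
xs, ys = share(x, n), share(y, n)
zs = [(a + b) % Q for a, b in zip(xs, ys)]
print(reconstruct(zs))  # 42
```

Any n-1 shares are uniformly distributed and reveal nothing about x, which is the non-interference-style property the proofs exploit; multiplication, by contrast, requires interaction.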

The MapReduce programming paradigm allows processing big data sets in parallel on a large cluster. We focus on a scenario where the data owner outsources her data to an honest-but-curious server. Our aim is to evaluate grouping and aggregation with the SUM, COUNT, AVG, MIN, and MAX operations for an authorized user. For each of these five operations, we assume that the public cloud provider and the user do not collude, i.e., the public cloud does not know the secret key of the user. We prove the security of our approach for each operation.
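In the clear, the five operations amount to the following grouping computation (a plaintext Python sketch of the functionality only; the contribution of the work is evaluating this securely on the honest-but-curious server):

```python
from collections import defaultdict

def group_aggregate(records, key, value):
    """Group `records` (a list of dicts) by `key` and aggregate `value`
    with SUM, COUNT, AVG, MIN, and MAX per group."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r[value])
    return {k: {"SUM": sum(v), "COUNT": len(v), "AVG": sum(v) / len(v),
                "MIN": min(v), "MAX": max(v)}
            for k, v in groups.items()}

sales = [{"city": "Paris", "amount": 10},
         {"city": "Paris", "amount": 30},
         {"city": "Lyon", "amount": 5}]
print(group_aggregate(sales, "city", "amount")["Paris"])
# {'SUM': 40, 'COUNT': 2, 'AVG': 20.0, 'MIN': 10, 'MAX': 30}
```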

All known multilinear map candidates have suffered from a class of attacks known as ``zeroizing'' attacks, which render them unusable for many applications. We provide a new construction of polynomial-degree multilinear maps and show that our scheme is provably immune to zeroizing attacks under a strengthening of the Branching Program Un-Annihilatability Assumption (Garg et al., TCC 2016-B).
Concretely, we build our scheme on top of the CLT13 multilinear maps (Coron et al., CRYPTO 2013). In order to justify the security of our new scheme, we devise a weak multilinear map model for CLT13 that captures zeroizing attacks and generalizations, reflecting all known classical polynomial-time attacks on CLT13. In our model, we show that our new multilinear map scheme achieves ideal security, meaning no known attacks apply to our scheme. Using our scheme, we give a new multiparty key agreement protocol that is several orders of magnitude more efficient than what was previously possible.
We also demonstrate the general applicability of our model by showing that several existing obfuscation and order-revealing encryption schemes, when instantiated with the CLT13 maps, are secure against known attacks. These are schemes that are actually being implemented for experimentation, but until our work had no rigorous justification for security.

We consider the problem of removing subexponential assumptions from cryptographic constructions based on indistinguishability obfuscation (iO). Specifically, we show how to apply complexity absorption (Zhandry, Crypto 2016) to the recent notion of probabilistic indistinguishability obfuscation (piO, Canetti et al., TCC 2015). As a result, we obtain a variant of piO which allows us to obfuscate a large class of probabilistic programs, from polynomially secure indistinguishability obfuscation and extremely lossy functions. We then revisit several (direct or indirect) applications of piO, and obtain
- a fully homomorphic encryption scheme (without circular security assumptions),
- a multi-key fully homomorphic encryption scheme with threshold decryption,
- an encryption scheme secure under arbitrary key-dependent messages,
- a spooky encryption scheme for all circuits,
- a function secret sharing scheme with additive reconstruction for all circuits,
all from polynomially secure iO, extremely lossy functions, and, depending on the scheme, also other (but polynomial and comparatively mild) assumptions. All of these assumptions are implied by polynomially secure iO and the (non-polynomial, but very well-investigated) exponential DDH assumption. Previously, all the above applications required assuming the *subexponential* security of iO (and more standard assumptions).

We give a construction of a secure multi-party computation (MPC) protocol from a special type of key agreement, where the distribution of the messages sent by one of the parties is computationally close to the uniform distribution over an efficiently sampleable group, even when the other party is malicious. We term the latter strongly uniform key agreement (SU-KA).
First, we show that for any odd t, t-round SU-KA and statistically binding commitments are sufficient for a black-box construction of (t+1)-round maliciously secure oblivious transfer (M-OT). By invoking a recent result of Benhamouda and Lin (Eurocrypt 2017), the latter implies maliciously secure MPC within max(t+1,5) rounds in the plain model.
Additionally, we investigate the relationship between SU-KA, and similar types of public-key encryption and semi-honestly secure OT protocols where we also demand strong uniformity. This finally allows us to instantiate our result for t=2 and t=3 under standard assumptions, including any of low-noise LPN, LWE, Subset Sum, DDH, CDH, and RSA (all with polynomial hardness), so that under the same set of assumptions we also obtain 5-round maliciously secure MPC (and 4-round M-OT) in the plain model.

A major approach to overcoming the performance and scalability limitations of current blockchain protocols is sharding, which splits the overheads of processing transactions among multiple, smaller groups of nodes. These groups work in parallel to maximize performance while requiring significantly smaller communication, computation, and storage per node, allowing the system to scale to large networks. However, existing sharding-based blockchain protocols still require a linear amount of communication (in the number of participants) per transaction, and hence only partially attain the potential benefits of sharding. We show that this introduces a major bottleneck to the throughput and latency of these protocols. Aside from the limited scalability, these protocols achieve weak security guarantees due to either a small fault resiliency (e.g., 1/8 and 1/4) or high failure probability, or they rely on strong assumptions (e.g., trusted setup) that limit their applicability to mainstream payment systems.
We propose RapidChain, the first sharding-based public blockchain protocol that is resilient to Byzantine faults from up to a 1/3 fraction of its participants, and achieves complete sharding of the communication, computation, and storage overhead of processing transactions without assuming any trusted setup. We introduce an optimal intra-committee consensus algorithm that can achieve very high throughputs via block pipelining, a novel gossiping protocol for large blocks, and a provably-secure reconfiguration mechanism to ensure robustness.
Using an efficient cross-shard transaction verification technique, RapidChain avoids gossiping transactions to the entire network. Our empirical evaluations suggest that RapidChain can process (and confirm) more than 7,300 tx/sec with an expected confirmation latency of roughly 8.7 seconds in a network of 4,000 nodes with an overwhelming time-to-failure of more than 4,500 years.

In the simplest setting of proxy reencryption, there are three parties: Alice, Bob, and Polly (the proxy). Alice keeps some encrypted data that she can decrypt with a secret key known only to her. She wants to communicate the data to Bob, but not to Polly (nor anybody else).
Using proxy reencryption, Alice can create a reencryption key that will enable Polly to reencrypt the data for Bob's use, but which will not help Polly learn anything about the data.
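For intuition, here is a toy ElGamal-based sketch in the style of the classic Blaze-Bleumer-Strauss scheme (our own illustration with tiny parameters, not a secure instantiation and not the constructions analyzed in this work): Polly holds rk = a/b mod q and can turn ciphertexts under Alice's key into ciphertexts Bob can decrypt, without learning the plaintext herself.

```python
import secrets

# Toy group: the subgroup of order q = 233 in Z_467^*, generated by g = 4.
p, q, g = 467, 233, 4

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def enc(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p   # (g^r, m * pk^r)

def rekey(sk_a, sk_b):
    return (sk_a * pow(sk_b, -1, q)) % q           # rk = a / b mod q

def reenc(rk, ct):
    c1, c2 = ct
    return pow(c1, rk, p), c2                      # (g^{r*a/b}, m * g^{a*r})

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), -1, p)) % p

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
rk = rekey(sk_a, sk_b)
m = pow(g, 42, p)                                  # message in the subgroup
assert dec(sk_b, reenc(rk, enc(pk_a, m))) == m
```

Note that in this toy scheme, anyone holding both rk and b can compute a = rk * b mod q, hinting at why the security Alice enjoys against Bob requires careful definition.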
There are two well-studied notions of security for proxy reencryption schemes: security under chosen-plaintext attacks (CPA) and security under chosen-ciphertext attacks (CCA). Both definitions aim to formalize the security that Alice enjoys against both Polly and Bob.
In this work, we demonstrate that CPA security guarantees much less security against Bob than was previously understood. In particular, CPA security does not prevent Bob from learning Alice's secret key after receiving a single honestly reencrypted ciphertext.
As a result, CPA security provides scant guarantees in common applications.
We propose security under honest-reencryption attacks (HRA), a strengthening of CPA security that better captures the goals of proxy reencryption. In applications, HRA security provides much more robust security. We identify a property of proxy reencryption schemes that suffices to amplify CPA security to HRA security and show that two existing proxy reencryption schemes are in fact HRA secure.

The information ratio of a secret sharing scheme $\Sigma$ is the ratio between the length of the largest share and the length of the secret, and it is denoted by $\sigma(\Sigma)$. The optimal information ratio of an access structure $\Gamma$ is the infimum of $\sigma(\Sigma)$ among all schemes $\Sigma$ with access structure $\Gamma$, and it is denoted by $\sigma(\Gamma)$. The main result of this work is that for every two access structures $\Gamma$ and $\Gamma'$, $|\sigma(\Gamma)-\sigma(\Gamma')|\leq |\Gamma\cup\Gamma'|-|\Gamma\cap\Gamma'|$. We prove it constructively. Given any secret sharing scheme $\Sigma$ for $\Gamma$, we present a method to construct a secret sharing scheme $\Sigma'$ for $\Gamma'$ that satisfies that $\sigma(\Sigma')\leq \sigma(\Sigma)+|\Gamma\cup\Gamma'|-|\Gamma\cap\Gamma'|$. As a consequence of this result, we see that \emph{close} access structures admit secret sharing schemes with similar information ratio. We show that this property is also true for particular classes of secret sharing schemes and models of computation, like the family of linear secret sharing schemes, span programs, Boolean formulas and circuits.
In order to understand this property, we also study the limitations of the techniques for finding lower bounds on the information ratio and other complexity measures. We analyze the behavior of these bounds when we add or delete subsets from an access structure.
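As a simple instance of the main bound (our own illustration, not necessarily the paper's exact construction): if $\Gamma'=\Gamma\cup\{A\}$ adds a single minimal qualified set $A$, then $|\Gamma\cup\Gamma'|-|\Gamma\cap\Gamma'|=1$, and one plausible way to realize $\sigma(\Sigma')\leq\sigma(\Sigma)+1$ is to run $\Sigma$ unchanged and superimpose an additive scheme for $A$:

```latex
% Run Sigma for Gamma; in addition, share the secret s additively
% among the parties of the new qualified set A:
\begin{align*}
  s &= s_1 \oplus s_2 \oplus \cdots \oplus s_{|A|}
      && \text{share } s_i \text{ given to the } i\text{-th party of } A,\\
  \sigma(\Sigma') &\leq \sigma(\Sigma) + 1
      && \text{each share grows by at most } |s| \text{ bits.}
\end{align*}
```

Privacy is preserved because any set unqualified in $\Gamma'$ is unqualified in $\Gamma$ and misses at least one additive share $s_i$.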

Conventional (M, N)-threshold signature schemes leave users with a painful choice. Setting M = N offers maximum resistance to key compromise. With this choice, though, loss of a single key renders the signing capability unavailable, creating paralysis in systems that use signatures for access control. Lower M improves availability, but at the expense of security. For example, a (3, 3)-multisig wallet experiences access-control paralysis upon loss of a single key, but a (2, 3)-multisig allows any two players to collude and steal funds from the third.

In this paper, we introduce techniques that address this impasse by making general cryptographic access structures dynamic. Our schemes permit, e.g., a (3, 3)-multisig to be downgraded to a (2, 3)-multisig if a player goes missing. This downgrading is secure in the sense that it occurs only if a player is provably unavailable.

Our main tool is what we call a Paralysis Proof: evidence that players, i.e., key holders, are unavailable or incapacitated. Using Paralysis Proofs, we show how to construct a Dynamic Access Structure System (DASS), which can securely and flexibly update target access structures without a trusted third party such as a system administrator. We present DASS constructions that combine a trust anchor (a trusted execution environment or smart contract) with a censorship-resistant channel in the form of a blockchain. We offer a formal framework for specifying DASS policies, and define and show how to achieve critical security and usability properties (safety, liveness, and paralysis-freeness) in a DASS.

Paralysis Proofs can help address pervasive key-management challenges in many different settings. We present DASS schemes for three important example use cases: recovery of cryptocurrency funds should players become unavailable, returning funds to users when cryptocurrency custodians fail, and remediating critical smart-contract failures such as frozen funds. We report on practical implementations for Bitcoin and Ethereum.