One of the greatest challenges in exchanging seemingly random nonces or data, whether over a trusted or untrusted channel, is the hardness of verifying the correctness of such output. If one of the parties or an eavesdropper can gain a game-theoretic advantage by manipulating this seed, the others can neither efficiently detect the modification nor hold the oracle accountable. Decentralized applications in which an oracle can go unnoticed with biased outputs are highly vulnerable to attacks of this kind, limiting the applicability of such parties even though they can bring great scalability to these systems. Verifiable random functions [1], introduced by Micali, can be viewed as keyed hash functions where the keys used are asymmetric. They allow the oracle to prove the correctness of a defined pseudorandom function on a seed $s$ without actually making it public, thus not compromising the unpredictability of the function. Our contribution here is to provide a variant of this scheme and to prove its security against known quantum attacks and quantum oracles.
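As a concrete illustration of the VRF interface (prove, verify), the following toy sketch follows the classic RSA full-domain-hash VRF of Micali, Rabin, and Vadhan; the textbook-sized parameters and helper names are illustrative assumptions, not the scheme proposed here.

```python
import hashlib

# Toy RSA-FDH-style VRF sketch: the parameters below are textbook-sized
# and offer no real security; they only illustrate the interface.
N, E, D = 3233, 17, 2753  # n = 61 * 53, e * d = 1 mod phi(n)

def _hash_to_group(seed: bytes) -> int:
    # Full-domain hash of the seed into Z_N (toy version).
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % N

def vrf_prove(seed: bytes) -> tuple[bytes, int]:
    # The proof is an RSA "signature" on the hashed seed; the VRF
    # output is a hash of that proof.
    proof = pow(_hash_to_group(seed), D, N)
    output = hashlib.sha256(proof.to_bytes(4, "big")).digest()
    return output, proof

def vrf_verify(seed: bytes, output: bytes, proof: int) -> bool:
    # Anyone holding the public key (N, E) can check correctness
    # without learning the secret exponent D.
    if pow(proof, E, N) != _hash_to_group(seed):
        return False
    return output == hashlib.sha256(proof.to_bytes(4, "big")).digest()

out, pi = vrf_prove(b"nonce-42")
assert vrf_verify(b"nonce-42", out, pi)
```

An oracle publishing `out` alongside `pi` lets anyone audit that the "random" value was computed honestly from the agreed seed.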

Bitcoin, the first successful cryptocurrency, uses the blockchain structure and the PoW mechanism to generate blocks. PoW makes it difficult for an adversary to control the network unless she controls over 50\% of the total network hashrate. Another cryptocurrency, Ethereum, also uses this mechanism, and it has not caused problems so far. In PoW research, however, several attack strategies have been studied. In this paper, we study selfish mining in the pooled mining environment and find that pooled mining exposes information about the block the adversary is mining to arbitrary miners. Using this leaked information, other miners can exploit the selfish miner. At the same time, the adversary earns less revenue than she would by mining honestly. Because of this countermeasure, an adversary using pooled mining cannot easily perform selfish mining on Bitcoin or other blockchains using PoW.

Lattices in Euclidean spaces are important research objects in geometric number theory, and they have important applications in many areas, such as cryptology. The shortest vector problem (SVP) and the closest vector problem (CVP) are two famous computational problems about lattices. In this paper, we define so-called p-adic lattices, and consider the p-adic analogues of SVP and CVP in local fields. We find that, in contrast with lattices in Euclidean spaces, the situation is completely different and interesting. We also develop relevant algorithms, indicating that these problems are computable.

In this work, we study the problem of constructing oblivious RAM for secure multi-party computation, i.e., obliviously accessing memory at private locations during secure computation. We build on the recent two-party Floram construction that uses function secret sharing for a point function and incurs $O(\sqrt N)$ secure computation and $O(N)$ local computation per ORAM access for an $N$-element data set. Our new construction, Top ORAM, is designed for multi-party computation with $n \ge 3$ parties and uses replicated secret sharing. We reduce the secure computation component to $O(\log N)$, which has a notable effect on performance. As a result, when Top ORAM is instantiated with $n=3$ parties, it outperforms all other 2- and 3-party ORAM constructions that we tested for data sets of up to a few million elements (at which point the $O(N)$ local work becomes the bottleneck).
To accomplish the above, we design a number of secure $n$-party protocols for semi-honest adversaries in the honest-majority setting with replicated secret sharing. They can be instantiated over any finite ring, which has the advantage of permitting native hardware arithmetic with rings $\mathbb{Z}_{2^k}$ for some $k$. We also provide conversion procedures between other, more common types of secret sharing and replicated secret sharing to enable integration of Top ORAM with other secure computation frameworks.
As an additional contribution of this work, we show how our ORAM techniques can be used to realize private binary search at the cost of only a single ORAM access and $\log N$ comparisons, instead of the conventional $O(\log N)$ ORAM accesses and comparisons. Because of this property, our binary search is significantly faster than binary search using other ORAM schemes for all ranges of values that we tested.
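The 3-party replicated secret sharing underlying the construction can be illustrated with a minimal sketch over $\mathbb{Z}_{2^{32}}$; the function names and the choice of $k=32$ are illustrative assumptions:

```python
import secrets

MOD = 2**32  # ring Z_{2^32}: matches native machine arithmetic

def share(x: int) -> list[tuple[int, int]]:
    # Split x into additive shares x1 + x2 + x3 = x (mod 2^32);
    # party i holds the pair (x_i, x_{i+1}), so any two parties can
    # reconstruct while a single party learns nothing.
    x1 = secrets.randbelow(MOD)
    x2 = secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]

def add(a, b):
    # Addition of two shared values is purely local: each party adds
    # its two components, with no communication.
    return [((ai + bi) % MOD, (aj + bj) % MOD)
            for (ai, aj), (bi, bj) in zip(a, b)]

def reconstruct(shares) -> int:
    # Collect the three distinct additive shares (one per party).
    return (shares[0][0] + shares[1][0] + shares[2][0]) % MOD

a, b = share(123456789), share(987654321)
assert reconstruct(add(a, b)) == (123456789 + 987654321) % MOD
```

The replication (each additive share held by two parties) is what supports honest-majority protocols: any pair of parties jointly holds the full secret.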

Oblivious linear evaluation (OLE) is a two-party protocol that allows a receiver to compute an evaluation of a sender's private, degree-$1$ polynomial, without letting the sender learn the evaluation point. OLE is a special case of oblivious polynomial evaluation (OPE), which was first introduced by Naor and Pinkas in 1999. In this article we utilise OLE for the purpose of computing multiplication in multi-party computation (MPC).
MPC allows a set of $n$ mutually distrustful parties to privately compute any given function of their private inputs, even if up to $t<n$ of these participants are corrupted and controlled by an external adversary. In terms of efficiency and communication complexity, multiplication in MPC has always been a major bottleneck. The typical method employed by most current protocols is Beaver's method, which relies on some precomputed information. In this paper we introduce an OLE-based MPC protocol which also relies on some precomputed information.
Our proposed protocol improves on the communication complexity of Beaver's protocol by a multiplicative factor of $t$. Furthermore, to compute a share of a multiplication, a participant in our protocol needs to communicate with only one other participant, unlike Beaver's protocol, which requires a participant to contact at least $t$ other participants.
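For comparison, Beaver's precomputation-based multiplication mentioned above can be sketched with additive shares; this is the textbook baseline, not the OLE-based protocol proposed here, and the modulus and helper names are illustrative:

```python
import secrets

P = 2**61 - 1  # a prime modulus for additive secret sharing

def share(x: int, n: int = 3) -> list[int]:
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def reconstruct(shares) -> int:
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh):
    # A trusted dealer precomputes a shared triple (a, b, c = a*b).
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)
    # The parties open d = x - a and e = y - b (these leak nothing,
    # since a and b are uniform one-time masks).
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    # Locally: z_i = c_i + d*b_i + e*a_i, plus d*e added by one party,
    # so that z = c + d*b + e*a + d*e = x*y.
    z_sh = [(ci + d * bi + e * ai) % P
            for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

x_sh, y_sh = share(1234), share(5678)
assert reconstruct(beaver_multiply(x_sh, y_sh)) == (1234 * 5678) % P
```

The openings of $d$ and $e$ are where Beaver's method pays its communication cost: each party must broadcast its shares of both values, which is the step the OLE-based approach reduces.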

A typical countermeasure against side-channel attacks consists of masking intermediate values with a random number. In symmetric cryptographic algorithms, Boolean shares of the secret are typically used, whereas in asymmetric algorithms the secret exponent/scalar is typically masked using algebraic properties. This paper presents a new exponent splitting technique based on Boolean shares with minimal impact on performance. More precisely, it is shown how an exponent can be efficiently split into two shares, where the exponent is the XOR sum of the two shares, typically requiring only an extra register and a few register copies per bit. Our novel exponentiation and scalar multiplication algorithms can be randomized for every execution and combined with other blinding techniques. In this way, both the exponent and the intermediate values can be protected against various types of side-channel attacks. We perform a security evaluation of our algorithms using the mutual information framework and provide proofs that they are secure against first-order side-channel attacks. The side-channel resistance of the proposed algorithms is also practically verified with test vector leakage assessment performed on Xilinx's Zynq zc702 evaluation board.
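The core XOR-splitting identity can be illustrated with a toy square-and-multiply sketch; this only demonstrates the correctness of recombining shared exponent bits, and deliberately omits the leakage-resistant register scheduling that the technique is actually about:

```python
import secrets

def split_exponent(d: int, bits: int) -> tuple[int, int]:
    # Boolean (XOR) splitting: d = d1 XOR d2; the split can be
    # refreshed with a fresh d1 on every execution.
    d1 = secrets.randbits(bits)
    return d1, d ^ d1

def exp_from_shares(g: int, d1: int, d2: int, bits: int, mod: int) -> int:
    # Left-to-right square-and-multiply where each exponent bit is
    # recombined from the two shares (d1_i XOR d2_i) on the fly.
    # NOTE: a hardened implementation would never expose the combined
    # bit in a register this plainly; this only shows correctness.
    acc = 1
    for i in reversed(range(bits)):
        acc = (acc * acc) % mod
        if ((d1 >> i) & 1) ^ ((d2 >> i) & 1):
            acc = (acc * g) % mod
    return acc

p, g, d = 2**127 - 1, 5, 0xDEADBEEF
d1, d2 = split_exponent(d, 32)
assert exp_from_shares(g, d1, d2, 32, p) == pow(g, d, p)
```

Because the split is re-randomized per execution, the power trace associated with any fixed exponent bit changes from run to run, which is the property the masking is meant to exploit.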

We describe a hardware-software co-design for the hash-based post-quantum signature scheme XMSS on a RISC-V embedded processor. We provide software optimizations for the XMSS reference implementation for SHA-256 parameter sets and several hardware accelerators that allow balancing area consumption and performance based on individual needs. The version with the best time-area product for key generation gives a 47x speedup in wall-clock time at 5.1x larger resource requirements; the best speedup of 52x is achieved at a higher resource cost. For signing, we achieve a maximum speedup of over 23x and for verification of over 18x. We tested and measured the cycle counts of our implementation on Intel (Altera) and Xilinx FPGAs. The integration of our XMSS accelerators into an embedded RISC-V processor enables post-quantum secure signatures for a large variety of embedded applications.

Structure-Preserving Signatures (SPSs) are a useful tool for the design of modular cryptographic protocols. A recent series of works has shown that by limiting the message space of those schemes to the set of Diffie-Hellman (DH) pairs, it is possible to circumvent the known lower bounds in the Type-3 bilinear group setting, thus obtaining the shortest signatures, consisting of only 2 elements from the shorter source group.
It has been shown that such a variant yields efficiency gains for some cryptographic constructions, including attribute-based signatures and direct anonymous attestation. However, only the cases of signing a single DH pair, or a DH pair and a vector from $\mathbb{Z}_p$, have been considered. Signing a vector of group elements is required for various applications of SPSs, especially if the aim is to forgo relying on heuristic assumptions. Example applications where it is required to sign a vector of group elements include group, attribute-based and proxy signatures, and k-times anonymous authentication.
An open question is whether such an improved lower bound also applies to signing a vector of $\ell > 1$ messages. We answer this question negatively for schemes existentially unforgeable under an adaptive chosen-message attack (EUF-CMA) whereas we answer it positively for schemes
existentially unforgeable under a random-message attack (EUF-RMA) and those which are existentially unforgeable under a combined chosen-random-message attack (EUF-CMA-RMA). The latter notion lies between the two former notions: it allows the adversary to adaptively choose part of the message to be signed, whereas the remaining part of the message is chosen uniformly at random by the signer.
Another open question is whether strongly existentially unforgeable under an adaptive chosen-message attack (sEUF-CMA) schemes with
2-element signatures exist. We answer this question negatively, proving it is impossible to construct sEUF-CMA schemes with 2-element signatures even if the signature consists of elements from both source groups. On the other hand, we prove that sEUF-RMA and sEUF-CMA-RMA schemes with 2-element (unilateral) signatures are possible by giving constructions for those notions.

Code-based cryptography is one of the main techniques enabling cryptographic primitives in a post-quantum scenario. In particular, the MDPC scheme is a basic scheme from which many other schemes have been derived. These schemes rely on iterative decoding in the decryption process and thus have a certain small probability $p$ of a decryption (decoding) error.
In this paper we show a very fundamental and important property of code-based encryption schemes. Given one initial error pattern that fails to decode, the time needed to generate another message that fails to decode is much less than $1/p$. We show this by developing a method for fast generation of undecodable error patterns (error pattern chaining), which additionally proves that a measure of closeness in ciphertext space can be exploited through its strong linkage to the difficulty of decoding these messages. Furthermore, if side-channel information is also available (time to decode), then the initial error pattern no longer needs to be given, since one can easily be generated in this case.
These observations are fundamentally important because they show that a, say, 128-bit encryption scheme is not inherently safe from reaction attacks even if it employs a decoder with a failure rate of $2^{-128}$. In fact, unless explicit protective measures are taken, having a failure rate at all, of any magnitude, can pose a security problem because of the error amplification effect of our method.
A key-recovery reaction attack was recently shown on the MDPC scheme, as well as on similar schemes, taking advantage of decoding errors in order to recover the secret key. It was also shown that knowing the number of iterations in the iterative decoding step, which could be obtained through a timing attack, would also enable and enhance such an attack. In this paper we apply our error pattern chaining method to show how to improve the performance of such reaction attacks in the CPA case. We show that after identifying a single decoding error (or a decoding step taking more time than expected in a timing attack), we can adaptively create new error patterns that have a much higher decoding error probability than a random error. This leads to a significant improvement of the attack based on decoding errors in the CPA case, and it also gives the strongest known attack on MDPC-like schemes, both with and without side-channel information.
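As a rough illustration of the closeness idea, one can derive candidate error patterns near a known failing pattern. This sketch only shows neighborhood generation (swapping a single error position at a time); the paper's actual chaining strategy may differ, and all names here are illustrative:

```python
import random

def neighbor_patterns(support: set[int], n: int, count: int) -> list[set[int]]:
    # Starting from the support of a weight-t error pattern known to
    # fail decoding, derive new candidate weight-t patterns at Hamming
    # distance 2 by swapping one error position for a fresh one.  The
    # hypothesis exploited in the text is that such close patterns
    # fail to decode far more often than uniformly random ones.
    out = []
    positions = sorted(support)
    while len(out) < count:
        drop = random.choice(positions)
        add = random.randrange(n)
        if add in support:
            continue
        out.append((support - {drop}) | {add})
    return out

chain = neighbor_patterns({3, 17, 42, 99}, n=128, count=5)
assert all(len(e) == 4 for e in chain)                     # weight preserved
assert all(len(e & {3, 17, 42, 99}) == 3 for e in chain)   # distance 2
```

Each surviving failure then becomes the seed for the next round of neighbors, which is what amplifies a single observed decoding error into many.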

Token-based obfuscation (TBO) is an interactive approach to cryptographic program obfuscation that was proposed by Goldwasser et al. as a potentially more practical alternative to conventional non-interactive security models, such as Virtual Black Box (VBB) and Indistinguishability Obfuscation. We implement in PALISADE several optimized TBO constructions based on (Ring) LWE covering a relatively broad spectrum of capabilities, ranging from special-purpose linear functions to general branching programs. To the best of our knowledge, these are the first implementations of TBO constructions based on lattices.
The linear-function construction is newly proposed in this work, and can be used to efficiently obfuscate binary classifiers by utilizing the token-based model, where the number and format of queries can be restricted by the token generator. Our implementation can evaluate obfuscated binary classifiers in less than 1 millisecond and requires a program size of only 8 MB for the case of 16 2-byte features. We also present an optimized TBO implementation for conjunctions, which outperforms the recent implementation of a distributional VBB conjunction obfuscator by one order of magnitude and reduces the program size by a factor of 3. The token-based model also provides protection against the exhaustive search attacks to which the VBB implementation is prone. The last group of TBO constructions implemented in our work deals with obfuscating permutation and general branching programs.
To enable efficient implementation of all these constructions, we developed many algorithmic and code-level optimizations that can also be applied to other lattice-based cryptography primitives.

We develop a new methodology to assess cryptographic key strength using cloud computing, by calculating the true economic cost of (symmetric- or private-) key retrieval for the most common cryptographic primitives. Although the present paper gives costs for the current year (2018) as well as for 2015, 2012 and 2011, more importantly it provides the tools and infrastructure to derive new data points at any time in the future, while allowing for improvements such as new algorithmic approaches. Over time the resulting data points will provide valuable insight into the selection of cryptographic key sizes. For instance, we observe that the past clear cost advantage of total cost of ownership compared to cloud computing seems to be evaporating.

Fuchsbauer, Kiltz, and Loss~(Crypto'18) gave a simple and clean definition of an \emph{algebraic group model~(AGM)} that lies in between the standard model and the generic group model~(GGM). Specifically, an algebraic adversary is able to exploit group-specific structures as in the standard model, while the AGM still provides meaningful hardness results as the GGM does. As an application of the AGM, they show a tight computational equivalence between the computational Diffie-Hellman~(CDH) assumption and the discrete logarithm~(DL) assumption. For this purpose, they used the square Diffie-Hellman assumption as a bridge, i.e., they first proved the equivalence between the DL assumption and the square Diffie-Hellman assumption, and then used the known equivalence between the square Diffie-Hellman assumption and the CDH assumption. In this paper, we provide an alternative proof that directly shows the tight equivalence between the DL assumption and the CDH assumption. The crucial benefit of the direct reduction is that we can easily extend the approach to variants of the CDH assumption, e.g., the bilinear Diffie-Hellman assumption. Indeed, we show several tight computational equivalences and discuss the applicability of our techniques.
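For reference, the known bridge between the square Diffie-Hellman and CDH assumptions rests on a standard polarization identity: in a group of prime order $p$ with generator $g$, a square-DH oracle $\mathsf{sqDH}(g^{z}) = g^{z^{2}}$ yields CDH via

```latex
g^{xy} \;=\; \Bigl( \mathsf{sqDH}\bigl(g^{x} g^{y}\bigr) \cdot
  \mathsf{sqDH}\bigl(g^{x}\bigr)^{-1} \cdot
  \mathsf{sqDH}\bigl(g^{y}\bigr)^{-1} \Bigr)^{2^{-1} \bmod p},
\qquad\text{since}\qquad (x+y)^{2} - x^{2} - y^{2} = 2xy .
```

The direct reduction discussed in this paper avoids routing through this intermediate oracle altogether.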

In this paper we extend the work presented by Ashur and Posteuca at BalkanCryptSec 2018 by designing 0-correlation key-dependent linear trails covering more than one round of DES. First, we design a 2-round 0-correlation key-dependent linear trail, which we then connect to Matsui's original trail in order to obtain a linear approximation covering the full DES and 3DES. We show how this approximation can be used for a key-recovery attack against both ciphers. To the best of our knowledge, this paper is the first to use this kind of property to attack a symmetric-key algorithm, and our linear attack against 3DES is the first statistical attack against this cipher.

Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. We explore a new space of plausible PRF candidates that are obtained by mixing linear functions over different small moduli. Our candidates are motivated by the goals of maximizing simplicity and minimizing complexity measures that are relevant to cryptographic applications such as secure multiparty computation.
We present several concrete new PRF candidates that follow the above approach. Our main candidate is a weak PRF candidate (whose conjectured pseudorandomness only holds for uniformly random inputs) that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. This candidate can be implemented by depth-2 $ACC^0$ circuits. We also put forward a similar depth-3 strong PRF candidate. Finally, we present a different weak PRF candidate that can be viewed as a deterministic variant of ``Learning Parity with Noise'' (LPN) where the noise is obtained via a mod-3 inner product of the input and the key.
The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of depth-2 $ACC^0$ circuits and width-3 branching programs, interpolation and property testing for sparse polynomials, and natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the ``piecewise-linear'' structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs). Our advantage over competing approaches is maximized in the setting of MPC with an honest majority, or alternatively, MPC with preprocessing.
Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging error-correcting codes.
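The main weak PRF candidate described above (a secret mod-2 linear map followed by a public mod-3 linear map) admits a compact sketch; the dimensions and the choice of a plain bit-sum as the public mod-3 map are illustrative assumptions:

```python
import secrets

def keygen(m: int, n: int) -> list[int]:
    # Secret key: an m-by-n binary matrix, one row per intermediate
    # bit, stored as n-bit integers (illustrative shapes and names).
    return [secrets.randbits(n) for _ in range(m)]

def weak_prf(key: list[int], x: int) -> int:
    # Step 1 (secret, mod 2): w = K * x over GF(2); each bit of w is
    # the parity of the AND of a key row with the input.
    w = [bin(row & x).count("1") & 1 for row in key]
    # Step 2 (public, mod 3): a fixed linear map of w over Z_3; here
    # simply the sum of the bits mod 3.
    return sum(w) % 3

key = keygen(m=256, n=256)
x = secrets.randbits(256)  # weak PRF: inputs are uniformly random
assert weak_prf(key, x) in (0, 1, 2)
```

The "piecewise-linear" shape is visible directly: each step alone is linear over its own modulus, and only the mixing of moduli is conjectured to produce pseudorandomness.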

We investigate the differential properties of a construction in which a given function $F : \mathbb{F}_{2^n} \rightarrow \mathbb{F}_{2^n}$ is modified at $K \in \mathbb{N}$ points in order to obtain a new function $G$. This is motivated by the question of determining the minimum Hamming distance between two APN functions and can be seen as a generalization of a previously studied construction in which a given function is modified at a single point. We derive necessary and sufficient conditions which the derivatives of $F$ must satisfy for $G$ to be APN, and use these conditions as the basis for an efficient filtering procedure for searching for APN functions whose value differs from that of a given APN function $F$ at a given set of points. We define a quantity $m_F$ related to $F$ counting the number of derivatives of a given type, and derive a lower bound on the distance between an APN function $F$ and its closest APN neighbor in terms of $m_F$. Furthermore, the value $m_F$ is shown to be invariant under CCZ-equivalence and easier to compute in the case of quadratic functions. We give a formula for $m_F$ in the case of $F(x) = x^3$ which allows us to express a lower bound on the distance between $F(x)$ and the closest APN function in terms of the dimension $n$ of the underlying field. We observe that this distance tends to infinity with $n$. We also compute $m_F$ and the distance to the closest APN function for a representative $F$ from each of the switching classes over $\mathbb{F}_{2^n}$ for $4 \le n \le 8$.
For a given function $F$ and value $v$, we describe an efficient method for finding all sets of points $\{ u_1, u_2, \dots, u_K \}$ such that setting $G(u_i) = F(u_i) + v$ and $G(x) = F(x)$ for $x \ne u_i$ is APN.
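The APN condition used throughout can be checked by brute force over small fields. The following sketch works over GF($2^4$) with reduction polynomial $x^4 + x + 1$ (an illustrative choice), verifies that $x^3$ is APN there, and shows that a particular one-point modification destroys the property:

```python
def gf_mul(a: int, b: int, n: int = 4, poly: int = 0b10011) -> int:
    # Carry-less multiplication in GF(2^n); defaults give GF(2^4)
    # with reduction polynomial x^4 + x + 1.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def is_apn(F, n: int) -> bool:
    # F is APN iff for every nonzero a and every b, the derivative
    # equation F(x) + F(x + a) = b has at most 2 solutions
    # (addition over GF(2^n) is XOR).
    size = 1 << n
    for a in range(1, size):
        counts = {}
        for x in range(size):
            b = F(x) ^ F(x ^ a)
            counts[b] = counts.get(b, 0) + 1
            if counts[b] > 2:
                return False
    return True

cube = lambda x: gf_mul(gf_mul(x, x), x)   # F(x) = x^3 over GF(2^4)
assert is_apn(cube, 4)
# Modifying F at the single point x = 0 breaks APN-ness in this case.
tweaked = lambda x: cube(x) ^ (1 if x == 0 else 0)
assert not is_apn(tweaked, 4)
```

The filtering procedure described above avoids exactly this kind of exhaustive derivative count by pruning candidate point sets early.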

Very recently, a preprint ``Cryptanalysis of the Wave Signature Scheme'', eprint 2018/1111, appeared claiming to break Wave ``Wave: A New Code-Based Signature Scheme'', eprint 2018/996. We explain here why this claim is incorrect.

Distance-bounding (DB) protocols are being adopted in different applications, e.g., contactless payments and keyless entry systems. For DB to be application-ready, "pick-and-choose" corruption models and clear-cut security definitions in DB are needed. Yet, this is virtually impossible using the four existing formalisms for distance bounding (DB), each of which considers around five different security properties that are arguably intertwined and hard to compare with one another. In particular, terrorist-fraud resistance has been notoriously problematic to formalise in DB. Moreover, achieving this property often weakens a protocol's general security.
We demonstrate that, in fact, terrorist-fraud resistance cannot be achieved under the standard assumptions made for DB protocols. Our result wraps up terrorist-fraud resistance in provable security for DB. As a consequence of terrorist-fraud resistance being rendered irrelevant, and to address application-ready DB, we present a new provable-security model for distance bounding. It formalises fine-grained corruption modes (i.e., white-box and black-box corrupted provers), which allows for clearer security definitions driven by the separation of corruption modes. Our model also explicitly includes a security property generalising key leakage, which before this was studied only implicitly or as a by-product of other DB security properties. In all, our formalism requires only three clear-cut security definitions, which can be "picked and chosen" based on the application-driven prover-corruption modes.

Determining the security of AES is a central problem in cryptanalysis, but progress in this area has been slow and only a handful of cryptanalytic techniques have led to significant advancements. At Eurocrypt 2017, Grassi et al. presented a novel type of distinguisher for AES-like structures, but so far all the published attacks based on this distinguisher have been inferior in complexity to previously known attacks. In this paper we combine the technique of Grassi et al. with several other techniques in a novel way to obtain the best known key-recovery attack on 5-round AES in the single-key model, reducing its overall complexity from about $2^{32}$ to less than $2^{22}$. Extending our techniques to 7-round AES, we obtain the best known attacks on AES-192 which use practical amounts of data and memory, breaking the record for such attacks set in 2000 by the classical Square attack.

In the area of distributed graph algorithms, a number of network entities with local views solve some computational task by exchanging messages with their neighbors. Quite unfortunately, an inherent property of most existing distributed algorithms is that throughout the course of their execution, the nodes learn not only their own output but also quite a lot about the inputs or outputs of many other entities. This leakage of information might be a major obstacle in settings where the output (or input) of an individual network entity is private information (e.g., distributed networks of selfish agents, decentralized digital currencies such as Bitcoin).
While quite unfamiliar in the classical distributed setting, the notion of secure multi-party computation (MPC) is one of the main themes in the cryptographic community. The existing secure MPC protocols do not quite fit the framework of classical distributed models, in which only messages of bounded size are sent on graph edges in each round. In this paper, we introduce a new framework for \emph{secure distributed graph algorithms} and provide the first \emph{general compiler} that takes any "natural" non-secure distributed algorithm that runs in $r$ rounds and turns it into a secure algorithm that runs in $\widetilde{O}(r \cdot D \cdot poly(\Delta))$ rounds, where $\Delta$ is the maximum degree in the graph and $D$ is its diameter. A "natural" distributed algorithm is one where the local computation at each node can be performed in polynomial time. An interesting advantage of our approach is that it allows one to decouple the price of locality from the price of \emph{security} of a given graph function $f$. The security of the compiled algorithm is information-theoretic but holds only against a semi-honest adversary that controls a single node in the network.
This compiler is made possible by a new combinatorial structure called \emph{private neighborhood trees}: a collection of $n$ trees $T(u_1),\ldots, T(u_n)$, one for each vertex $u_i \in V(G)$, such that each tree $T(u_i)$ spans the neighbors of $u_i$ {\em without going through $u_i$}. Intuitively, each tree $T(u_i)$ allows all neighbors of $u_i$ to exchange a \emph{secret} that is hidden from $u_i$; this is the basic graph infrastructure of the compiler. In a collection of $(d,c)$-private neighborhood trees, each tree $T(u_i)$ has depth at most $d$ and each edge $e \in G$ appears in at most $c$ different trees. We show a construction of private neighborhood trees with $d=\widetilde{O}(\Delta \cdot D)$ and $c=\widetilde{O}(D)$; both of these bounds are \emph{existentially} optimal.

We propose a formal model of Bitcoin transactions, which is sufficiently abstract to enable formal reasoning, and at the same time is concrete enough to serve as an alternative documentation to Bitcoin. We use our model to formally prove some well-formedness properties of the Bitcoin blockchain, for instance that each transaction can only be spent once. We release an open-source tool through which programmers can write transactions in our abstract model, and compile them into standard Bitcoin transactions.