In this paper we revisit some of the main aspects of the DAGS Key Encapsulation Mechanism, one of the code-based candidates in NIST's standardization call for the key exchange/encryption functionalities. In particular, we modify the algorithms for key generation, encapsulation, and decapsulation to fit an alternative KEM framework, and we present a new set of parameters that uses binary codes. We discuss the advantages and disadvantages of each proposed variant.

As surveillance systems become ubiquitous, the privacy of recorded video becomes increasingly important.
On the other hand, the authenticity of video images must be guaranteed when they are used as evidence in court. Satisfying both (personal) privacy and authenticity of a video simultaneously is challenging, since privacy requires modifications (e.g., partial deletions) of the original video image, whereas authenticity forbids any modification of the original image.
This paper proposes a novel method to convert an encryption scheme into one supporting partial decryption with a constant number of keys, and constructs a privacy-aware authentication scheme by combining it with a signature scheme. The security of our proposed scheme follows from the security of the underlying encryption and signature schemes. Experimental results show that the proposed scheme can process a UHD video stream at more than 17 fps on a real embedded system, which validates its practicality.

Algorithm substitution attacks (ASAs) on signatures should be taken seriously, as the authentication services of numerous systems and applications rely on signature schemes, and compromising them has a significant impact on the security of users. We present a somewhat alarming result in this regard: a highly efficient ASA on the Digital Signature Algorithm (DSA) and its implementation. Compared with the generic ASAs on signature schemes proposed in the literature, our attack provides fast and undetectable subversion, extracting the user's private signing key from at most three signatures on arbitrary messages. Moreover, our ASA is proven to be robust against state reset.
We implemented the proposed ASA by replacing the original DSA in Libgcrypt (a popular cryptographic library used in many applications) with our subverted DSA. Experiments show that the user's private key can readily be recovered once the subverted DSA is used to sign messages. In our implementation, various measures have been taken to significantly reduce the possibility of detection by comparing the running times of the original and subverted DSA (i.e., timing analysis). To our knowledge, this is the first practical implementation of an ASA, which shows that ASAs are a real threat rather than merely theoretical speculation.
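The abstract does not describe the subversion itself, but a minimal toy sketch can illustrate why any leakage of nonce information is fatal for DSA. The following is the standard textbook attack (not the paper's ASA), with hypothetical tiny parameters: two signatures that reuse a nonce immediately reveal the private key.

```python
# Toy illustration (NOT the paper's ASA): recovering a DSA private key
# when two signatures reuse the same nonce k. Tiny hypothetical parameters
# chosen only so the arithmetic is easy to follow.
p, q, g = 607, 101, 64           # q | p-1 and g has order q mod p

def sign(x, h, k):
    """Textbook DSA signature on message hash h with nonce k."""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

x = 57                           # private signing key (to be recovered)
k = 20                           # nonce, accidentally reused
h1, h2 = 11, 83                  # hashes of two different messages
r, s1 = sign(x, h1, k)
_, s2 = sign(x, h2, k)           # same r, since k is reused

# The attacker sees (r, s1), (r, s2), h1, h2 and solves for k, then x:
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r, -1, q) % q
assert (k_rec, x_rec) == (k, x)
print("recovered private key:", x_rec)
```

A real ASA needs only to bias or exfiltrate nonce bits covertly; this sketch shows why that suffices.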

A repair of the Faure-Loidreau (FL) public-key code-based cryptosystem is proposed. The FL cryptosystem is based on the hardness of list decoding Gabidulin codes, which are special rank-metric codes. We prove that the recent structural attack on the system by Gaborit et al. is equivalent to decoding an interleaved Gabidulin code. Since all known polynomial-time decoders for these codes fail for a large constructive class of error patterns, we are able to construct public keys that resist the attack. It is also shown that all other known attacks fail for our repair and parameter choices. Compared to other code-based cryptosystems, we obtain significantly smaller key sizes for the same security level.

In this short note we give a polynomial-time quantum reduction from the vectorization problem (DLP) to the parallelization problem (CDHP) for group actions.
Combined with the trivial reduction from parallelization to vectorization, we thus prove the quantum equivalence of both problems.
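To make the two problems concrete, here is a toy sketch (hypothetical tiny parameters) using the classical exponentiation action a * x = x^a, for which vectorization is exactly the DLP and parallelization exactly the CDHP. It also illustrates the trivial direction: any vectorization solver yields a parallelization solver.

```python
# Toy illustration of vectorization and parallelization for the
# exponentiation action a * x = x^a of G = (Z/qZ)^* on the order-q
# subgroup of Z_p^*. Tiny hypothetical parameters; for cryptographic
# group actions (e.g. CSIDH) neither problem is believed to be easy.
p, q = 607, 101                  # q is prime and divides p - 1
x = 64                           # generator of the order-q subgroup

a, b = 35, 77                    # secret group elements
xa = pow(x, a, p)                # a * x  (public)
xb = pow(x, b, p)                # b * x  (public)

# Vectorization (DLP analogue): given x and a*x, recover a.
a_rec = next(e for e in range(1, q) if pow(x, e, p) == xa)  # brute force
assert a_rec == a

# Parallelization (CDHP analogue): given x, a*x, b*x, compute (ab)*x.
xab = pow(x, a * b % q, p)       # what the challenger expects
assert pow(xb, a_rec, p) == xab  # vectorization solver => parallelization
```

The note's contribution is the quantum converse: reducing vectorization to parallelization.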

The recently proposed CSIDH primitive is a promising candidate for post-quantum static-static key exchange with very small keys. However, until now the only implementation has been the variable-time proof-of-concept by Castryck, Lange, Martindale, Panny, and Renes, recently optimized by Meyer and Reith, which may leak information about the private key. Therefore, we present a constant-time implementation that samples key elements only from intervals of nonnegative numbers and uses dummy isogenies, which prevents certain kinds of side-channel attacks. We apply several optimizations, e.g. SIMBA and Elligator, to obtain a more efficient implementation.

We present an approach and a tool to address the need for effective, generic, and easily applicable protections against side-channel attacks. The protection mechanism is based on code polymorphism, so that the observable behaviour of the protected component is variable and unpredictable to the attacker. Our approach combines lightweight specialized runtime code generation with the optimization capabilities of static compilation, and is extensively configurable. Experimental results show that programs secured by our approach achieve strong security levels while meeting the performance requirements of constrained systems.

The past few years have seen the emergence of Machine Learning and Deep Learning algorithms, especially Convolutional Neural Networks (CNNs), as promising tools for profiling attacks. The latter have indeed been shown to overcome countermeasures such as de-synchronization or masking. However, CNNs are not widely used yet, and Gaussian Templates are usually preferred. Though the efficiency of Gaussian Templates is highly impacted by the countermeasures mentioned above, their relevance relies on theoretical and physical justifications widely recognized in the Side Channel community. In contrast, the efficiency of CNNs still raises a certain scepticism, as they act as a black-box tool. This scepticism is not specific to the Side Channel Analysis context: understanding why CNNs are so powerful and how they learn to recognize discriminative features for classification problems is still an open problem. Some methods have been proposed by the computer vision community, without satisfying performance in this field. However, methods based on Sensitivity Analysis particularly fit our problem. We propose to apply one of them, called Gradient Visualization, which uses the derivatives of a CNN model with respect to an input trace in order to accurately identify the temporal moments where sensitive information leaks. In this paper, we theoretically show that this method can be used to efficiently localize Points of Interest in the SCA context. The efficiency of the proposed method does not depend on the particular countermeasure applied to the measured traces, as long as the profiled CNN can still learn in the presence of such difficulties. In addition, the characterization can be made for each trace individually. We verified the soundness of our proposed method on simulated data and on experimental traces from a public Side Channel database. Finally, we empirically show that Sensitivity Analysis performs at least as well as state-of-the-art characterization methods, in the presence (or absence) of countermeasures.
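A minimal sketch of the sensitivity-analysis idea, with simulated traces and a linear classifier standing in for the CNN (all parameters hypothetical): the gradient of the model output with respect to the input trace peaks at the sample where the sensitive variable leaks, so its largest component estimates the Point of Interest.

```python
import math, random

# Simulated side-channel traces: a sensitive bit leaks only at one
# time sample (the Point of Interest), everything else is noise.
random.seed(0)
n_traces, trace_len, poi = 100, 30, 17      # hypothetical PoI at sample 17

bits = [random.randint(0, 1) for _ in range(n_traces)]
traces = [[random.gauss(0.0, 0.2) + (bits[i] if t == poi else 0.0)
           for t in range(trace_len)] for i in range(n_traces)]

# Train a logistic-regression "profile" by plain gradient descent.
w, b = [0.0] * trace_len, 0.0
for _ in range(200):
    grad_w, grad_b = [0.0] * trace_len, 0.0
    for i in range(n_traces):
        z = 1.0 / (1.0 + math.exp(-(sum(w[t] * traces[i][t]
                                        for t in range(trace_len)) + b)))
        err = (z - bits[i]) / n_traces
        for t in range(trace_len):
            grad_w[t] += err * traces[i][t]
        grad_b += err
    w = [wt - g for wt, g in zip(w, grad_w)]
    b -= grad_b

# For a linear model, the gradient of the output w.r.t. the input trace
# is w itself: the largest |w[t]| marks the estimated Point of Interest.
sensitivity = [abs(wt) for wt in w]
poi_est = max(range(trace_len), key=lambda t: sensitivity[t])
print("estimated PoI:", poi_est)
```

The paper applies the same gradient-magnitude criterion to a trained CNN, where the input gradient is no longer constant and can characterize each trace individually.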

Cryptographic implementations on embedded systems need to be protected against physical attacks. Today, this means that apart from incorporating countermeasures against side-channel analysis, implementations must also withstand fault attacks and combined attacks. Recent proposals in this area have shown that there is a big tradeoff between the implementation cost and the strength of the adversary model. In this work, we introduce a new combined countermeasure M&M that combines Masking with information-theoretic MAC tags and infective computation. It works in a stronger adversary model than the existing scheme ParTI, yet is a lot less costly to implement than the provably secure MPC-based scheme CAPA. We demonstrate M&M with a SCA- and DFA-secure implementation of the AES block cipher. We evaluate the side-channel leakage of the second-order secure design with a non-specific t-test and use simulation to validate the fault resistance.
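A much-simplified, hypothetical sketch of the underlying idea: pair a Boolean-masked value with an information-theoretic MAC tag over GF(2^8), so that a fault on a data share breaks the relation tag = alpha * x and is caught at tag-check time. (In M&M itself the tag and key also remain shared throughout the computation, and detection triggers infective countermeasures; the recombination below is for illustration only.)

```python
import secrets

def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

x = 0x3A                        # sensitive intermediate value
alpha = 0x47                    # secret MAC key (hypothetical fixed value)
mask = secrets.randbelow(256)   # fresh Boolean mask
shares = (x ^ mask, mask)       # first-order Boolean masking of x
tag = gf_mul(alpha, x)          # information-theoretic MAC tag alpha * x

def tag_check(shares, tag):
    """Recombine (for illustration only) and verify the tag."""
    return gf_mul(alpha, shares[0] ^ shares[1]) == tag

assert tag_check(shares, tag)               # fault-free run passes
faulted = (shares[0] ^ 0x01, shares[1])     # adversary flips a data bit
assert not tag_check(faulted, tag)          # tag check catches the fault
```

Since alpha is nonzero and secret, an additive fault with difference d passes the check only if alpha * d = 0, i.e. never for d != 0; to evade detection the adversary must fault the tag consistently, which requires guessing alpha.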

A set $S \subseteq \mathbb{F}_2^n$ is called degree-$d$ zero-sum if the sum $\sum_{s \in S} f(s)$ vanishes for all $n$-bit Boolean functions of algebraic degree at most $d$. Those sets correspond to the supports of the $n$-bit Boolean functions of degree at most $n-d-1$. We prove some results on the existence of degree-$d$ zero-sum sets of full rank, i.e., those that contain $n$ linearly independent elements, and show relations to degree-1 annihilator spaces of Boolean functions and semi-orthogonal matrices. We are particularly interested in the smallest of such sets and prove bounds on the minimum number of elements in a degree-$d$ zero-sum set of rank $n$.
The motivation for studying those objects comes from the fact that degree-$d$ zero-sum sets of full rank can be used to build linear mappings that preserve special kinds of \emph{nonlinear invariants}, similar to those obtained from orthogonal matrices and exploited by Todo, Leander and Sasaki for breaking the block ciphers Midori, Scream and iScream.
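Since the monomials of degree at most $d$ span the Boolean functions of degree at most $d$, the defining property can be verified by brute force for small $n$. The sketch below checks a hypothetical toy instance and illustrates the stated correspondence with supports of functions of degree at most $n-d-1$.

```python
from itertools import combinations, product

# Brute-force check: S is degree-d zero-sum iff every monomial of
# degree <= d sums to 0 over S (monomials span the degree-<=d functions).
def is_degree_d_zero_sum(S, n, d):
    for k in range(d + 1):
        for idx in combinations(range(n), k):
            if sum(all(s[i] for i in idx) for s in S) % 2:
                return False
    return True

# Example: the support of f(x) = x0*x1 on F_2^4. Since deg f = 2 <= n-d-1
# for d = 1, its support must be a degree-1 zero-sum set.
n, d = 4, 1
S = [s for s in product((0, 1), repeat=n) if s[0] and s[1]]
assert len(S) == 4
assert is_degree_d_zero_sum(S, n, d)
assert not is_degree_d_zero_sum(S, n, 2)  # fails: x2*x3 sums to 1 over S
```

The last line matches the correspondence: deg f = 2 exceeds n-2-1 = 1, so S cannot be degree-2 zero-sum.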

Seminal results by Luby and Rackoff show that the 3-round Feistel cipher is secure against chosen-plaintext attacks (CPAs), and the 4-round version is secure against chosen-ciphertext attacks (CCAs). However, the security significantly changes when we consider attacks in the quantum setting, where the adversary can make superposition queries. By using Simon's algorithm that detects a secret cycle-period in polynomial-time, Kuwakado and Morii showed that the 3-round version is insecure against quantum CPA by presenting a polynomial-time distinguisher. Since then, Simon's algorithm has been heavily used against various symmetric-key constructions. However, its applications are still not fully explored.
In this paper, based on Simon's algorithm, we first formalize a sufficient condition of a quantum distinguisher against block ciphers so that it works even if there are multiple collisions other than the real period. This distinguisher is similar to the one proposed by Santoli and Schaffner, and it does not recover the period. Instead, we focus on the dimension of the space obtained from Simon's quantum circuit. This eliminates the need to evaluate the probability of collisions, which was needed in the work by Kaplan et al. at CRYPTO 2016. Based on this, we continue the investigation of the security of Feistel ciphers in the quantum setting. We show a quantum CCA distinguisher against the 4-round Feistel cipher. This extends the result of Kuwakado and Morii by one round, and follows the intuition of the result by Luby and Rackoff where the CCA setting can extend the number of rounds by one. We also consider more practical cases where the round functions are composed of a public function and XORing the subkeys. We show the results of both distinguishing and key recovery attacks against these constructions.
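The dimension-based condition can be illustrated classically: each (simulated) run of Simon's circuit on a function with hidden period s returns a uniform y with y . s = 0, so the GF(2)-span of the samples is capped at dimension n-1, while samples from a random function reach full dimension with overwhelming probability. A hypothetical toy simulation:

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of a list of bit-vectors encoded as integers."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)        # standard xor-basis reduction
        if v:
            basis.append(v)
    return len(basis)

random.seed(1)
n = 8
s = 0b10110010                       # hidden period (hypothetical)
orthogonal = [y for y in range(2 ** n) if bin(y & s).count("1") % 2 == 0]

# Measurements on the periodic function lie in the hyperplane y.s = 0;
# measurements on a random function are uniform over all of F_2^n.
samples_periodic = [random.choice(orthogonal) for _ in range(4 * n)]
samples_random = [random.randrange(2 ** n) for _ in range(4 * n)]

assert gf2_rank(samples_periodic) <= n - 1   # dimension capped by the period
assert gf2_rank(samples_random) == n         # full dimension w.h.p.
```

Note that this test never recovers s: it only compares the dimension of the sampled span against n, which is exactly why spurious collisions besides the real period do not invalidate it.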

We describe a variation of the Schnorr-Lyubashevsky approach to devising signature schemes that is adapted to rank-based cryptography. This new approach enables us to obtain a randomization of the signature, which previously seemed difficult to achieve for code-based cryptography. We provide a detailed analysis of attacks and an EUF-CMA proof for our scheme. Our scheme relies on the security of the Ideal Rank Support Learning and Ideal Rank Syndrome problems and on a newly introduced problem, Product Spaces Subspaces Indistinguishability, for which we give a detailed analysis. Overall, the parameters we propose are efficient and comparable in signature size to the Dilithium lattice-based scheme, with a signature size of less than 4 kB for a public key of size less than 20 kB.

Decentralized cryptocurrencies based on blockchains provide attractive features, including user privacy and system transparency, but lack active control of money supply and capabilities for regulatory oversight, both existing features of modern monetary systems. These limitations are critical, especially if the cryptocurrency is to replace, or complement, existing fiat currencies. Centralized cryptocurrencies, on the other hand, provide controlled supply of money, but lack transparency and transferability. Finally, they provide only limited privacy guarantees, as they do not offer recipient anonymity or payment value secrecy.
We propose a novel digital currency, called PRCash, in which the control of money supply is centralized, money is represented as value-hiding transactions for transferability and improved privacy, and transactions are verified in a distributed manner and published to a public ledger for verifiability and transparency. Strong privacy and regulation are seemingly conflicting features, but we overcome this technical problem with a new regulation mechanism based on zero-knowledge proofs. Our implementation and evaluation show that payments are fast and large-scale deployments practical. PRCash is the first digital currency to provide control of money supply, transparency, regulation, and privacy at the same time, thus making its adoption as a fiat currency feasible.

A treasury system is a community-controlled and decentralized collaborative decision-making mechanism for sustainably funding blockchain development and maintenance. During each treasury period, project proposals are submitted, discussed, and voted on; top-ranked projects are funded from the treasury. The Dash governance system is a real-world example of such a system. In this work, we provide the first rigorous study of the treasury system. We model, design, and implement a provably secure treasury system that is compatible with most existing blockchain infrastructures, such as Bitcoin and Ethereum. More specifically, the proposed treasury system supports liquid democracy/delegative voting for better collaborative intelligence: stakeholders can either vote directly on the proposed projects or delegate their votes to experts. Its core component is a distributed, universally composable, end-to-end verifiable voting protocol. The integrity of the treasury voting decisions is guaranteed even when all the voting committee members are corrupted. To further improve efficiency, we propose the first honest-verifier zero-knowledge proof for unit vector encryption with logarithmic communication size. This building block may be of independent interest to other cryptographic protocols. A pilot system is implemented in Scala over the Scorex 2.0 framework, and its benchmark results indicate that the proposed system can support tens of thousands of treasury participants with high efficiency.

This paper was merged with a work by Jia Liu, Saqib A. Kakvi, and Bogdan Warinschi. See https://eprint.iacr.org/2015/482.

Multi-Key Homomorphic Signatures (MKHS) enable the clients of a system to sign and upload messages to an untrusted server. At any later point in time, the server can perform a computation $C$ on data provided by $t$ different clients and return the output $y$ together with a short signature $\sigma_{C,y}$ vouching for the correctness of $y$ as the output of $C$ on the signed data. Interestingly, MKHS enable verifiers to check the validity of the signature using solely the public keys of the signers whose messages were used in the computation. Moreover, the signatures $\sigma_{C,y}$ are succinct, namely their size grows at most linearly in the number of clients and only logarithmically in the total number of inputs of $C$. Existing MKHS are constructed based either on standard assumptions over lattices (Fiore et al., ASIACRYPT'16) or on non-falsifiable assumptions (SNARKs) (Lai et al., ePrint'16). In this paper, we investigate connections between single-key and multi-key homomorphic signatures. We propose a generic compiler, called Matrioska, which turns any (sufficiently expressive) single-key homomorphic signature scheme into a multi-key scheme. Matrioska establishes a formal connection between these two primitives and is the first alternative to the only known construction under standard falsifiable assumptions. Our result relies on a novel technique that exploits the homomorphic property of a single-key HS scheme to compress an arbitrary number of signatures from $t$ different users into only $t$ signatures.

In this paper, we describe and analyze the security of the AES-GCM-SIV mode of operation, as defined in the CFRG specification \cite{CFRG}. This mode differs from the original GCM-SIV mode that was designed in \cite{GL2015} in two main aspects. First, the CTR encryption uses a 127-bit pseudo-random counter instead of a 95-bit pseudo-random value concatenated with a 32-bit counter. This construction leads to improved security bounds when encrypting short messages. In addition, a new key derivation function is used for deriving a fresh set of keys for each nonce. This addition allows for encrypting up to $2^{50}$ messages with the same key, compared to the significant limitation of only $2^{32}$ messages that were allowed with GCM-SIV (which inherited this same limit from AES-GCM). As a result, the new construction is well suited for real-world applications that need a nonce-misuse resistant Authenticated Encryption scheme. We explain the limitations of GCM-SIV, which motivate the new construction, prove the security properties of AES-GCM-SIV, and show how these properties support real usages. Implementations are publicly available in \cite{ShayGit}. We remark that AES-GCM-SIV is already integrated into Google's BoringSSL library \cite{BoringSSL}, and its deployment for ticket encryption in QUIC \cite{QUIC} is underway.

Ideas from Fourier analysis have been used in cryptography for the last three decades. Akavia, Goldwasser and Safra unified some of these ideas to give a complete algorithm that finds significant Fourier coefficients of functions on any finite abelian group. Their algorithm stimulated a lot of interest in the cryptography community, especially in the context of ``bit security''. This manuscript attempts to be a friendly and comprehensive guide to the tools and results in this field. The intended readership is cryptographers who have heard about these tools and seek an understanding of their mechanics and their usefulness and limitations. A compact overview of the algorithm is presented with emphasis on the ideas behind it. We show how these ideas can be extended to a ``modulus-switching'' variant of the algorithm. We survey some applications of this algorithm, and explain that several results should be taken in the right context. In particular, we point out that some of the most important bit security problems are still open. Our original contributions include: a discussion of the limitations on the usefulness of these tools; an answer to an open question about the modular inversion hidden number problem.
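For concreteness, here is a brute-force sketch (not the sublinear Akavia-Goldwasser-Safra algorithm) of what "significant Fourier coefficients" means for a function on $\mathbb{Z}_N$, using a hypothetical test function built so that exactly one character dominates.

```python
import cmath

# Brute-force computation of all Fourier coefficients of f: Z_N -> C,
# keeping those whose squared magnitude exceeds a threshold tau.
# (The AGS algorithm finds these WITHOUT computing all N coefficients.)
N = 16

def f(x):
    # Hypothetical test function: a pure character at frequency 3 plus a
    # small perturbation at frequency N/2 = 8.
    return cmath.exp(2j * cmath.pi * 3 * x / N) + 0.1 * ((-1) ** x)

def fourier_coeff(f, alpha, N):
    return sum(f(x) * cmath.exp(-2j * cmath.pi * alpha * x / N)
               for x in range(N)) / N

tau = 0.5
significant = [a for a in range(N)
               if abs(fourier_coeff(f, a, N)) ** 2 >= tau]
print(significant)   # only the dominant character at frequency 3 survives
```

Here the coefficient at frequency 3 has squared magnitude 1 and the one at frequency 8 only 0.01, so the threshold retains exactly the significant one; the AGS contribution is doing this in time polylogarithmic in $N$.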

Tendermint-core blockchains (e.g. Cosmos) are considered today among the most viable alternatives to highly energy-consuming proof-of-work blockchains such as Bitcoin and Ethereum. Their particularity is that they aim at offering strong consistency (no forks) in an open system by combining two ingredients: (i) a set of validators that generate blocks via a variant of the Practical Byzantine Fault Tolerant (PBFT) consensus protocol, and (ii) a selection strategy that dynamically selects nodes to be validators for the next block via a proof-of-stake mechanism.
However, the exact assumptions on the system model under which Tendermint's underlying algorithms are correct, and the exact properties Tendermint verifies, have never been formally analyzed. The contribution of this paper is two-fold. First, by formalizing Tendermint's algorithms we precisely characterize the system model and the exact problem solved by Tendermint. We prove that in eventually synchronous systems a modified version of Tendermint solves (i), under additional assumptions, a variant of one-shot consensus for the validation of one single block and (ii) a variant of the repeated consensus problem for multiple blocks. These results hold even if the set of validators is hit by Byzantine failures, provided that for each one-shot consensus instance less than one third of the validators are Byzantine. Our second contribution relates to the fairness of the rewarding mechanism. It is common knowledge that in permissionless blockchain systems the main threat is the tragedy of the commons, which may cause the system to collapse if the rewarding mechanism is not adequate. At a minimum, the rewarding mechanism must be $fair$, i.e. distribute rewards in proportion to the merit of participants. We prove, for the first time in blockchain systems, that in repeated-consensus-based blockchains there exists an (eventually) fair rewarding mechanism if and only if the system is (eventually) synchronous.
We also show that the original Tendermint rewarding mechanism is not fair, but that a modification of the original protocol makes it eventually fair.

Shamir's celebrated secret sharing scheme provides an efficient method for encoding a secret of arbitrary length $\ell$ among any $N \leq 2^\ell$ players such that, for a threshold parameter $t$, (i) the knowledge of any $t$ shares does not reveal any information about the secret and (ii) any choice of $t+1$ shares fully reveals the secret. It is known that any such threshold secret sharing scheme necessarily requires shares of length $\ell$, and in this sense Shamir's scheme is optimal. The more general notion of ramp schemes requires the reconstruction of the secret from any $t+g$ shares, for a positive integer gap parameter $g$. A ramp secret sharing scheme necessarily requires shares of length $\ell/g$. Beyond the bound related to the secret length $\ell$, the share length of ramp schemes cannot go below a quantity that depends only on the gap ratio $g/N$. In this work, we study secret sharing in the extremal case of bit-long shares and arbitrarily small gap ratio $g/N$, where standard ramp secret sharing becomes impossible. We show, however, that a slightly relaxed but equally effective notion of semantic security for the secret, together with a negligible reconstruction error probability, eliminates the impossibility. Moreover, we provide explicit constructions of such schemes. One consequence of our relaxation is that, unlike standard ramp schemes with perfect secrecy, adaptive and non-adaptive adversaries need different analyses and constructions. For non-adaptive adversaries, we explicitly construct secret sharing schemes that provide secrecy against any $\tau$ fraction of observed shares and reconstruction from any $\rho$ fraction of shares, for any choices of $0 \leq \tau < \rho \leq 1$. Our construction achieves secret length $N(\rho-\tau-o(1))$, which we show to be optimal. For adaptive adversaries, we construct explicit schemes attaining secret length $\Omega(N(\rho-\tau))$.
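For reference, a minimal sketch of the baseline Shamir scheme over a prime field that the abstract generalizes (toy, hypothetical parameters; `random` is used for brevity, a real implementation would use `secrets`). In a ramp scheme the secret would instead be a block of $g$ field elements, cutting the share length to $\ell/g$.

```python
import random

# Shamir's (t, n) threshold scheme over GF(P): t shares reveal nothing,
# any t+1 shares reconstruct the secret via Lagrange interpolation at 0.
P = 2 ** 13 - 1                  # 8191, a small Mersenne prime (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from t+1 shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

random.seed(0)
s = 4242
shares = share(s, t=2, n=6)
assert reconstruct(shares[:3]) == s          # any t+1 = 3 shares suffice
assert reconstruct(shares[2:5]) == s
assert reconstruct(random.sample(shares, 3)) == s
```

Each share is a full field element, illustrating the $\ell$-bit share-length bound that motivates the paper's study of bit-long shares.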