We provide an analysis of IEEE standard P1735, which describes methods for encrypting electronic-design intellectual property (IP), as well as the management of access rights for such IP. We find a surprising number of cryptographic mistakes in the standard. In the most egregious cases, these mistakes enable attack vectors that allow us to recover the entire underlying plaintext IP. Some of these attack vectors are well known, e.g., padding-oracle attacks. Others are new, and are made possible by the need to support the typical uses of the underlying IP; in particular, the need for commercial system-on-chip (SoC) tools to synthesize multiple pieces of IP into a fully specified chip design and to report syntax errors. We exploit these mistakes in a variety of ways, leveraging a commercial SoC tool as a black-box oracle. In addition to recovering entire plaintext IP, we show how to produce standard-compliant ciphertexts of IP that have been modified to include targeted hardware Trojans, for example, IP that correctly implements the AES block cipher on all but one (arbitrary) plaintext, which instead induces the block cipher to return the secret key. We outline a number of other attacks that the standard allows, including on the cryptographic mechanism for IP licensing. Unfortunately, we show that obvious “quick fixes” to the standard (and the tools that support it) do not stop all of our attacks. This suggests that the standard requires a significant overhaul, and that IP authors using P1735 encryption should consider themselves at risk.
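The padding-oracle attacks mentioned above follow a classic, easily sketched pattern. The toy block cipher, oracle, and message below are entirely illustrative stand-ins (they do not reproduce the P1735 tooling); the point is that a single bit of padding-validity feedback suffices to recover plaintext byte by byte:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

KEY = bytes(range(16))

def toy_encrypt_block(b):            # toy invertible "block cipher", demo only
    return xor(b[::-1], KEY)

def toy_decrypt_block(c):
    return xor(c, KEY)[::-1]

def cbc_encrypt(pt, iv):
    pad = 16 - len(pt) % 16          # PKCS#7 padding
    pt += bytes([pad]) * pad
    out, prev = [], iv
    for i in range(0, len(pt), 16):
        prev = toy_encrypt_block(xor(pt[i:i + 16], prev))
        out.append(prev)
    return b"".join(out)

def make_oracle():
    def oracle(two_blocks):          # True iff the PKCS#7 padding is valid
        prev, blk = two_blocks[:16], two_blocks[16:]
        pt = xor(toy_decrypt_block(blk), prev)
        return 1 <= pt[-1] <= 16 and pt.endswith(bytes([pt[-1]]) * pt[-1])
    return oracle

def recover_block(prev_block, cipher_block, oracle):
    """Recover one plaintext block using only the padding oracle."""
    inter = bytearray(16)            # block-cipher output before the CBC XOR
    for pad in range(1, 17):
        pos = 16 - pad
        forged = bytearray(16)
        for i in range(pos + 1, 16):
            forged[i] = inter[i] ^ pad   # force tail bytes to decrypt to `pad`
        for guess in range(256):
            forged[pos] = guess
            if not oracle(bytes(forged) + cipher_block):
                continue
            if pad == 1:             # rule out a longer accidental valid padding
                forged[pos - 1] ^= 0xFF
                ok = oracle(bytes(forged) + cipher_block)
                forged[pos - 1] ^= 0xFF
                if not ok:
                    continue
            inter[pos] = guess ^ pad
            break
    return xor(bytes(inter), prev_block)

iv = b"\x00" * 16
ct = cbc_encrypt(b"ATTACK AT DAWN", iv)
padded = recover_block(iv, ct[:16], make_oracle())
recovered = padded[:-padded[-1]]     # strip the recovered padding
```

In the attacks on P1735 tools, the role of `oracle` is played by the SoC tool's observable behaviour; everything else about this sketch is generic.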

Composability and robustness against physical defaults (e.g., glitches) are two highly desirable properties for secure implementations of masking schemes. While tools exist to guarantee them separately, no current formalism enables their joint investigation. In this paper, we solve this issue by introducing a new model, the robust probing model, that is naturally suited to capture the combination of these properties. We first motivate this formalism by analyzing the excellent robustness and low randomness requirements of first-order threshold implementations, and by highlighting the difficulty of extending them to higher orders. Next, and most importantly, we use our theory to design higher-order secure, robust and composable multiplication gadgets. While admittedly inspired by existing approaches to masking (e.g., Ishai-Sahai-Wagner-like, threshold, domain-oriented), these gadgets exhibit subtle implementation differences from these state-of-the-art solutions (none of which is provably both composable and robust). Hence, our results illustrate how sound theoretical models can guide practically relevant implementations.

Deep learning has achieved state-of-the-art accuracy in a variety of AI tasks. Its popularity has drawn security researchers’ attention to privacy-preserving deep learning, in which neither the training data nor the model is expected to be exposed. Recently, federated learning has emerged as a promising approach in which multiple parties upload local gradients and a server updates model parameters with the collected gradients; its privacy issues have been discussed widely. In this paper, we explore security issues in this setting beyond privacy. First, we argue that the usual assumption of an honest-but-curious server is problematic: a malicious server may break privacy. Second, a malicious server or malicious participants may damage the correctness of training, for example through incorrect gradient collection or parameter updating. Third, we point out that federated learning lacks incentives, since privacy and financial considerations may prevent mutually distrustful parties from training collaboratively. To address these issues, we introduce a value-driven incentive mechanism based on blockchain. Adapting to this incentive setting, we mitigate the malicious threats from the server and the participants, and guarantee privacy and public auditability. We thus present DeepChain, which gives distrustful parties incentives to participate in privacy-preserving training, to share gradients and update parameters correctly, and to accomplish iterative training with a win-win result. Finally, we give a prototype implementation of DeepChain that integrates a deep-learning module with a blockchain development platform. We evaluate it in terms of encryption performance and training accuracy, which demonstrates the feasibility of DeepChain.

A proof-of-replication (PoRep) is an interactive proof system in which a prover defends a publicly verifiable claim that it is dedicating unique resources to storing one or more retrievable replicas of a data file.
In this sense a PoRep is both a proof of space (PoS) and a proof of retrievability (PoR).
This paper is a foundational study of PoReps, exploring both their capabilities and their limitations. While PoReps may unconditionally demonstrate possession of data, they fundamentally cannot guarantee that the data is stored redundantly. Furthermore, as PoReps are proofs of space, they must rely either on rational time/space tradeoffs or timing bounds on the online prover's runtime.
We introduce a rational security notion for PoReps called epsilon-rational replication based on the notion of an epsilon-Nash equilibrium, which captures the property that a server does not gain any significant advantage by storing its data in any other (non-redundant) format.
We apply our definitions to formally analyze two recently proposed PoRep constructions based on verifiable delay functions and depth robust graphs.
Lastly, we reflect on a notable application of PoReps: their unique suitability as a Nakamoto consensus mechanism that replaces proof-of-work with PoReps on real data, simultaneously incentivizing and subsidizing the cost of file storage.

Following the development of quantum computing, the demand for post-quantum alternatives to current cryptosystems has grown considerably in recent years. The main disadvantage of such schemes is the amount of resources needed to implement them in comparison to their classical counterparts. In conjunction with the growth of the Internet of Things, it is crucial to know whether post-quantum algorithms can operate in constrained environments without incurring an unacceptable performance penalty. In this paper, we propose an instantiation of a module-lattice-based KEM working over a ring of dimension 128 using a limited amount of memory at runtime. It can be seen as a lightweight version of Kyber or a module version of Frodo. We propose parameters targeting popular 8-bit AVR microcontrollers and NIST security level 1. Our implementation fits in around 2 KB of RAM while still providing reasonable efficiency and 128 bits of security, at the cost of reduced correctness.

Since the development of cryptanalysis of AES and AES-like constructions in the late 1990s, the set of inputs (or a subset of it) that differ only in one diagonal has been of special importance. It appears in various attacks, among them (truncated) differential, integral, and impossible differential attacks.
In this paper we present new techniques to analyze this special set of inputs and report on new properties. In differential cryptanalysis, statements about the probability distribution of output differences, such as its mean or variance, are of interest. So far, such statements have only been possible for up to 4 rounds of AES. In this paper we consider the probability distribution of the number of distinct pairs of corresponding ciphertexts that lie in certain subspaces after 5 rounds. We show that the following two properties hold for 5 rounds of the AES permutation, independently of any key or constant additions:
– the mean value is larger for AES than for a random permutation;
– the variance is higher for AES than for a random permutation by a factor of approximately 36.
For a large class of AES-like constructions, under an APN-like assumption on the S-Box that closely resembles the AES S-Box, we can even give rigorous proofs of these properties. The technique we developed for this may be of independent interest.
While the distinguisher based on the variance is (almost) independent of the details of the S-Box and of the MixColumns matrix, the mean value distinguisher does depend on the details of the S-Box and may give rise to a new design criterion for S-Boxes.
To the best of our knowledge, this is the first time such a precise differential analysis has been performed. Practical implementations and verification confirm our analysis.
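The statistic behind these distinguishers can be illustrated on a scaled-down toy: for a set of inputs varying only in part of the state, count the pairs of outputs whose difference lies in a target subspace, then look at the mean and variance of that count over many random permutations (the baseline against which AES is compared). Everything below is a 12-bit toy, not AES; the numbers are illustrative only:

```python
import itertools
import random
import statistics

STATE_BITS = 12                    # toy state size (AES would use 128 bits)
LOW_MASK = 0x03F                   # "diagonal": the low 6 bits of the state

def in_subspace(diff):
    """Output difference confined to the low-bit 'subspace'."""
    return diff & ~LOW_MASK & ((1 << STATE_BITS) - 1) == 0

def pair_count(perm, inputs):
    """Number of pairs of outputs whose difference lies in the subspace."""
    outs = [perm[x] for x in inputs]
    return sum(1 for u, v in itertools.combinations(outs, 2)
               if in_subspace(u ^ v))

rng = random.Random(3)
inputs = list(range(64))           # a coset: inputs varying only in the low bits
counts = []
for _ in range(100):               # sample the statistic over random permutations
    perm = list(range(1 << STATE_BITS))
    rng.shuffle(perm)
    counts.append(pair_count(perm, inputs))

mean, var = statistics.fmean(counts), statistics.pvariance(counts)
```

For a random permutation each of the 2016 pairs lands in the subspace with probability about 63/4095, so the mean hovers near 31; the paper's result is that for 5-round AES the corresponding mean is larger and the variance larger still.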

The static power consumption of modern CMOS devices has become a substantial concern in the context of the side-channel security of cryptographic hardware. Its continuous growth in nanometer-scaled technologies is not only inconvenient for effective low-power designs, but also creates a new target for power-analysis adversaries. Notably, several of the numerous sources of static power dissipation in CMOS circuits exhibit an exponential dependence on environmental factors that a classical power-analysis adversary controls, in contrast to the dynamic power consumption. These factors include the operating temperature and the supply voltage. Furthermore, if the adversary controls the clock, the measurement interval can be extended arbitrarily to enhance measurement quality. We investigate the influence of each of these factors on our ability to exploit the data-dependent leakage currents of a 150 nm CMOS ASIC prototype chip, and provide results that once again show how fatal it can be to neglect this source of information leakage. Using the signal-to-noise ratio, a common metric in side-channel analysis, we demonstrate that increasing the measurement interval exponentially decreases the noise and, even more importantly, that increasing the working temperature exponentially increases the signal. Control over the supply voltage has a far smaller, but still noticeable, positive impact on the exploitability of the leakage currents as well. In summary, a static power analysis adversary can physically force a device to leak more information by controlling its operating environment, and can furthermore measure this leakage with arbitrary precision by modifying the interval length.
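The signal-to-noise ratio used above has a standard definition: the variance of the per-class mean leakages over the mean of the per-class noise variances. The toy sketch below uses a synthetic Hamming-weight leakage model with illustrative parameters (no real measurements) to show how averaging over a longer measurement interval raises the SNR:

```python
import math
import random
import statistics

def hamming_weight(x):
    return bin(x).count("1")

def simulate(n_per_class, noise_sd, rng):
    """One leakage sample per repetition of each 8-bit value:
    Hamming weight plus Gaussian noise (an illustrative model)."""
    return {v: [hamming_weight(v) + rng.gauss(0, noise_sd)
                for _ in range(n_per_class)]
            for v in range(256)}

def snr(traces):
    """Variance of the per-class means over the mean of the per-class variances."""
    means = [statistics.fmean(xs) for xs in traces.values()]
    noise = [statistics.pvariance(xs) for xs in traces.values()]
    return statistics.pvariance(means) / statistics.fmean(noise)

rng = random.Random(1)
snr_short = snr(simulate(20, noise_sd=4.0, rng=rng))
# A 16x longer measurement interval effectively averages 16 samples per
# point, cutting the noise standard deviation by 4 and boosting the SNR.
snr_long = snr(simulate(20, noise_sd=4.0 / math.sqrt(16), rng=rng))
```

The exponential temperature and interval dependencies reported in the paper would enter this picture through the signal and noise terms, respectively.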

In a recent paper, the authors and their collaborators proposed a new hard problem, called the finite field isomorphism problem, and used it to construct a fully homomorphic encryption scheme. In this paper, we investigate how one might build a digital signature scheme from this new problem. Intuitively, the hidden field isomorphism allows us to convert short vectors in the underlying lattice of one field into generic-looking vectors in an isomorphic field.

The majority of currently deployed cryptographic public-key schemes are at risk of becoming insecure once large-scale quantum computers become practical. Therefore, substitutes resistant to quantum attacks—known as post-quantum cryptography—are required. In particular, hash-based signature schemes appear to be the most conservative choice for post-quantum digital signatures.
In this work, we mount the first practical fault attack against hash-based cryptography. The attack was originally proposed by Castelnovi et al. [9] and allows the creation of a universal signature forgery that applies to all current standardisation candidates (XMSS, LMS, SPHINCS+, and Gravity-SPHINCS). With a focus on practicality, we perform the attack on an Arduino Due board featuring an ARM Cortex-M3 microprocessor running the original stateless scheme SPHINCS. We describe how the attack is mountable with a simple voltage-glitch injection on the targeted platform, which allowed us to collect enough faulty signatures to create a universal forgery within seconds.
As the attack also applies to stateful schemes, we show how caching one-time signatures can entirely prevent the attack for stateful schemes, such as XMSS and LMS. However, we discuss how protecting stateless schemes, like SPHINCS, SPHINCS+, and Gravity-SPHINCS, is more challenging, as this countermeasure does not apply as efficiently as in stateful schemes.
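The caching countermeasure for stateful schemes can be sketched in a few lines. The `ots_sign` helper below is a hypothetical stand-in (real WOTS-style signing involves many hash-chain computations, each a potential fault target); what matters is the replay logic, which ensures the same one-time key never produces two different signatures:

```python
import hashlib

def ots_sign(sk_seed, index, message):
    """Hypothetical stand-in for a one-time signature computation;
    illustrative only, not a real WOTS implementation."""
    return hashlib.sha256(sk_seed + index.to_bytes(4, "big") + message).digest()

class CachingSigner:
    """Caches the signature produced for each one-time key (leaf index).
    A repeated request is answered from the cache, so a fault injected
    during a later recomputation can never yield a second, different
    signature under the same one-time key -- the event the forgery needs."""

    def __init__(self, sk_seed):
        self.sk_seed = sk_seed
        self._cache = {}                 # leaf index -> cached signature

    def sign(self, index, message):
        if index not in self._cache:     # first (and only) real computation
            self._cache[index] = ots_sign(self.sk_seed, index, message)
        return self._cache[index]        # all later requests are replays

signer = CachingSigner(b"toy seed")
first = signer.sign(7, b"subtree root")
# Even if a fault corrupts a recomputed input, the stored signature is
# replayed rather than recomputed:
replayed = signer.sign(7, b"faulty subtree root")
```

For stateless schemes such as SPHINCS the same leaf is addressed pseudorandomly across many signatures, which is why, as noted above, this cache does not apply as efficiently.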

Quantum computing threatens conventional public-key cryptography. In response, standards bodies such as NIST increasingly focus on post-quantum cryptography. In particular, hash-based signature schemes are notable candidates for deployment. No rigorous side-channel analysis of hash-based signature schemes has been conducted so far; this work bridges that gap. We analyse the stateful hash-based signature schemes XMSS and XMSS^MT, which are currently undergoing standardisation at IETF, as well as SPHINCS, the only practical stateless hash-based scheme. While timing and simple power analysis attacks are unpromising, we show that the differential power analysis resistance of XMSS can be reduced to the differential power analysis resistance of the underlying pseudorandom number generator. This first systematic analysis helps to further increase confidence in XMSS, supporting current standardisation efforts. Furthermore, we show that at least a 32-bit chunk of the SPHINCS secret key can be recovered using a differential power analysis attack due to its stateless construction. We present novel differential power analyses on a SHA-2-based pseudorandom number generator for XMSS and a BLAKE-256-based pseudorandom function for SPHINCS-256 in the Hamming weight model. The first attack does not threaten current versions of XMSS, unless a customised pseudorandom number generator is used. The second compromises the security of a hardware implementation of SPHINCS-256. Our analysis is supported by a power simulator implementation of SHA-2 for XMSS and a hardware implementation of BLAKE for SPHINCS. We also provide recommendations for XMSS implementers.

In this work, we consider the ring and module variants of the LWE problem and investigate cold boot attacks on cryptographic schemes based on these problems, wherein an attacker is faced with the problem of recovering a scheme's secret key from a noisy version of that key. The leakage resilience of cryptography based on the learning with errors (LWE) problem has been studied before, but there are only limited results considering the parameters observed in cold boot attack scenarios. There are two main encodings for storing ring- and module-LWE keys, and, as we show, the performance of cold boot attacks can be highly sensitive to the exact encoding used. The first encoding stores polynomial coefficients directly in memory. The second encoding performs a number theoretic transform (NTT) before storing the key, a commonly used method leading to more efficient implementations. We first give estimates for a cold boot attack complexity on the first encoding method based on standard algorithms; this analysis confirms that this encoding method is vulnerable to cold boot attacks only at very low bit-flip rates. We then show that, for the second encoding method, the structure introduced by using an NTT is exploitable in the cold boot setting: we develop a bespoke attack strategy that is much cheaper than our estimates for the first encoding when considering module-LWE keys. For example, at a \(1\%\) bit-flip rate (which corresponds roughly to what can be achieved in practice for cold boot attacks when applying cooling), a cold boot attack on Kyber KEM parameters has a cost of \(2^{43}\) operations when the second, NTT-based encoding is used for key storage, compared to \(2^{70}\) operations with the first encoding. On the other hand, in the case of the ring-LWE-based KEM, New Hope, the cold boot attack complexities are similar for both encoding methods.
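The cold boot noise model underlying these estimates is easy to sketch: each stored bit flips independently, with decays of 1s to 0s far more likely than the reverse. The sketch below simulates this model on an arbitrary packed key image (the key size and both flip rates are illustrative, not the paper's parameters):

```python
import random

def cold_boot_corrupt(stored, p10, p01, rng):
    """Flip each bit of the stored key image independently: a 1 decays to 0
    with probability p10 and a 0 flips to 1 with probability p01 (in the
    cold boot setting typically p01 << p10; both rates here are
    illustrative)."""
    out = bytearray()
    for byte in stored:
        b = 0
        for i in range(8):
            bit = (byte >> i) & 1
            if bit and rng.random() < p10:
                bit = 0                      # a 1 decayed to 0
            elif not bit and rng.random() < p01:
                bit = 1                      # a rarer 0 -> 1 flip
            b |= bit << i
        out.append(b)
    return bytes(out)

rng = random.Random(0)
key_image = bytes(rng.randrange(256) for _ in range(3072))  # illustrative size
noisy = cold_boot_corrupt(key_image, p10=0.01, p01=0.001, rng=rng)
flips = sum(bin(a ^ b).count("1") for a, b in zip(key_image, noisy))
flip_rate = flips / (8 * len(key_image))
```

Whether the image holds raw coefficients or NTT-domain values does not change this noise process; as the abstract explains, it changes how cheaply the attacker can correct the resulting errors.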

Belief propagation, or the sum-product algorithm, is a powerful and well-known method for inference on probabilistic graphical models, which Veyrat-Charvillon et al. proposed for use in side-channel analysis.
We define a novel metric to capture the importance of variable nodes in factor graphs, propose two improvements to the sum-product algorithm specific to the side-channel use case, and explicitly define and examine different ways of combining information from multiple side-channel traces. With these new considerations we systematically investigate a number of graphical models that "naturally" follow from an implementation of AES. Our results are unexpected: neither a larger graph (i.e., more side-channel information) nor more connectedness necessarily leads to significantly better attacks. In fact, our results demonstrate that in practice the (on balance) best choice is to utilise an acyclic graph in an independent graph combination setting, which gives provable convergence to the correct key distribution. We provide evidence using both extensive simulations and a final confirmatory analysis on real trace data.
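A minimal sum-product computation on the smallest factor graph that occurs in an AES-like setting may help fix ideas: a known relation z = x XOR y between three nibble variables, each with an independent noisy "leakage" prior. The graph is a tree, so the marginal below is exact, the provable-convergence case highlighted above. The Hamming-weight templates are synthetic and purely illustrative:

```python
import math

def hw(v):
    return bin(v).count("1")

def hw_prior(observed, sigma=1.0):
    """Template prior over a 4-bit value given a noisy Hamming-weight leak."""
    w = [math.exp(-((hw(v) - observed) ** 2) / (2 * sigma ** 2))
         for v in range(16)]
    s = sum(w)
    return [x / s for x in w]

def marginal_x(prior_x, prior_y, prior_z):
    """Sum-product message from the factor [z = x XOR y] into x,
    combined with x's own prior and normalised; exact on this tree."""
    post = [prior_x[x] * sum(prior_y[y] * prior_z[x ^ y] for y in range(16))
            for x in range(16)]
    s = sum(post)
    return [p / s for p in post]

# True values x=5, y=3, z = x XOR y = 6; the "traces" leak exact
# Hamming weights (all of weight 2 here).
post = marginal_x(hw_prior(hw(5)), hw_prior(hw(3)), hw_prior(hw(6)))
```

Because every prior here depends only on Hamming weight, the posterior does too, so the true value x = 5 is only ranked jointly with the other weight-2 candidates; the full graphs studied in the paper break such ties by combining many factors and traces.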

We formalize the notion of a constrained linear trapdoor as an abstract strategy for the generation of signature schemes, concrete instantiations of which can be found in MQ-based, code-based, and lattice-based cryptography. Moreover, we revisit and expand on a transformation by Szepieniec et al. to shrink the public key at the cost of a larger signature while reducing their combined size. This transformation can be used in a way that is provably secure in the random oracle model, and in a more aggressive variant whose security has remained unproven. In this paper we show that this transformation applies to any constrained linear trapdoor signature scheme, and prove the security of the first mode in the quantum random oracle model. Moreover, we identify a property of constrained linear trapdoors that is sufficient (and necessary) for the more aggressive variant to be secure in the quantum random oracle model. We apply the transformation to an MQ-based scheme, a code-based scheme and a lattice-based scheme targeting 128 bits of post-quantum security, and we show that in some cases the combined size of a signature and a public key can be reduced by more than a factor of 300.

This paper introduces a novel implementation of the elliptic curve factoring method specifically designed for medium-size integers such as those arising in the billions in the cofactorization step of the number field sieve. In this context, our algorithm requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: the use of batches of primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.
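The double-base idea can be illustrated with a short greedy decomposition: writing a scalar as a signed sum of terms 2^a·3^b, so that scalar multiplication can combine cheap point doublings and triplings. The greedy method below is a simple illustration only; it does not compute the *optimal* decompositions (or the Lucas chains) used in the paper:

```python
def nearest_2a3b(n):
    """Closest integer of the form 2^a * 3^b to n >= 1."""
    best = 1
    p3 = 1
    while p3 <= 2 * n:
        p2 = 1
        while p2 * p3 < n:          # bracket n / 3^b with powers of two
            p2 *= 2
        for cand in (p2 * p3, (p2 // 2) * p3):
            if cand >= 1 and abs(cand - n) < abs(best - n):
                best = cand
        p3 *= 3
    return best

def double_base_decompose(n):
    """Greedy signed double-base decomposition: n as a sum of +/- 2^a 3^b
    terms. Each step at least halves the remainder, so this terminates with
    O(log n) terms."""
    terms = []
    sign = 1
    while n != 0:
        t = nearest_2a3b(n)
        terms.append(sign * t)
        # Recurse on the (possibly negative) remainder, flipping the sign.
        n, sign = (t - n, -sign) if t > n else (n - t, sign)
    return terms
```

In a scalar-multiplication routine, each term 2^a·3^b of the decomposition would be realised by a doublings, b triplings, and one addition or subtraction.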

In this paper, we analyze the security of the end-to-end encryption (E2EE) scheme of LINE, a.k.a. Letter Sealing. LINE is one of the most widely deployed instant messaging applications, especially in East Asia. Through a close inspection of its protocols, we give several attacks against the message integrity of Letter Sealing. Specifically, we propose forgery and impersonation attacks on the one-to-one message encryption and the group message encryption.
All of our attacks are feasible with the help of an end-to-end (E2E) adversary who has access to the inside of the LINE server (e.g., the service provider LINE itself). We stress that the main purpose of E2EE is to provide protection against exactly such an adversary. In addition, we found some attacks that do not even need the help of an E2E adversary, which reveals a critical security flaw in the protocol. Our results show that the E2EE scheme of LINE does not sufficiently guarantee the integrity of messages compared to state-of-the-art E2EE schemes such as Signal, which is used by WhatsApp and Facebook Messenger. We also provide countermeasures against our attacks.
We shared our findings with LINE Corporation in advance. LINE Corporation has confirmed that our attacks are valid as long as the E2E adversary is involved, and has officially recognized our results as a vulnerability that breaks encryption.

In this paper, we investigate the hardware circuit complexity of the class of Boolean functions recently introduced by Tang and Maitra (IEEE-TIT 64(1): 393–402, 2018). While this class of functions has very good cryptographic properties, their exact hardware cost is an immediate concern, as noted in the paper itself. In this direction, we study different candidate circuit architectures for implementing these functions, all based on finite field arithmetic and Boolean optimization, and provide an estimate of the circuit complexity for any input size n. We also present implementations for both ASIC and FPGA, providing further analysis of the practical aspects of the functions in question and of the relation between these implementations and the theoretical bound. The practical results show that the Tang-Maitra functions are quite competitive in terms of area, while still maintaining an acceptable level of throughput for both ASIC and FPGA implementations.

In this paper we study structured linear block codes, starting from well-known examples and generalizing them to a wide class of codes that we call reproducible codes. These codes have the property that they can be entirely generated from a small number of signature vectors, and consequently admit matrices that can be described in a very compact way. We then show some cryptographic applications of this class of codes and explain why the general framework we introduce may pave the way for future developments of code-based cryptography based on structured codes.

In this paper, we investigate the security of the BLISS lattice-based signature scheme, one of the most promising candidates for post-quantum-secure signatures, against side-channel attacks. Several works have been devoted to its efficient implementation on various platforms, from desktop CPUs to micro-controllers and FPGAs, and more recent papers have also considered its security against certain types of physical attacks, notably fault injection and cache attacks. We turn to more traditional side-channel analysis, and describe several attacks that can yield a full key recovery.
We first identify a serious source of leakage in the rejection sampling algorithm used during signature generation. Existing implementations of that rejection sampling step, which is essential for security, actually leak the ``relative norm'' of the secret key. We show how an extension of an algorithm due to Howgrave-Graham and Szydlo can be used to recover the key from that relative norm, at least when the absolute norm is easy to factor (which happens for a significant fraction of secret keys). We describe how this leakage can be exploited in practice both on an embedded device (an 8-bit AVR microcontroller) using electromagnetic analysis (EMA), and on a desktop computer (a recent Intel CPU running Linux) using branch tracing. The latter attack has been mounted against the open source VPN software strongSwan.
We also show that other parts of the BLISS signing algorithm can leak secrets not just for a subset of secret keys, but for 100% of them. The BLISS Gaussian sampling algorithm in strongSwan is intrinsically variable-time. This would be hard to exploit using a noisy source of leakage like EMA, but branch tracing allows us to recover the entire randomness and hence the key: we show that a single execution of the strongSwan signature algorithm is actually sufficient for full key recovery. We also describe a more traditional side-channel attack on the sparse polynomial multiplications carried out in BLISS: classically, such multiplications can be attacked using DPA; however, our 8-bit AVR target implementation uses repeated shifted additions instead. Surprisingly, we manage to obtain a full key recovery in that setting using integer linear programming from a single EMA trace.

The notion of universal re-encryption is an established primitive used in the design of many anonymity protocols. It allows anyone to randomize a ciphertext without changing its size, without first decrypting it, and without knowing who the receiver is (i.e., without knowing the public key used to create it). By design it prevents the randomized ciphertext from being correlated with the original ciphertext. We revisit and analyze the security foundation of universal re-encryption and show a subtlety in it, namely, that it does not require the encryption function to achieve key anonymity. Recall that the encryption function is different from the re-encryption function. We demonstrate this subtlety by constructing a cryptosystem that satisfies the established definition of a universal cryptosystem but whose encryption function does not achieve key anonymity, thereby instantiating the gap in the definition of security of universal re-encryption. We note that the gap in the definition carries over to a set of applications that rely on universal re-encryption, including applications in the original paper on universal re-encryption and in follow-on work. This shows that the original definition needs to be corrected, and that it had a knock-on effect that negatively impacted security in later work. We then introduce a new definition that includes the properties needed for a re-encryption cryptosystem to achieve key anonymity in both the encryption function and the re-encryption function, building on Goldwasser and Micali's "semantic security" and the original "key anonymity" notion of Bellare, Boldyreva, Desai, and Pointcheval. Omitting any of the properties in our definition leads to a problem. We also introduce a new generalization of the Decision Diffie-Hellman (DDH) random self-reduction and use it, in turn, to prove that the original ElGamal-based universal cryptosystem of Golle et al. is secure under our revised security definition. We apply our new DDH reduction technique to give the first proof in the standard model that ElGamal-based incomparable public keys achieve key anonymity under DDH. Finally, we present a novel secure Forward-Anonymous Batch Mix as a new application.
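For orientation, the *classic* DDH random self-reduction that the paper generalizes (the generalization itself is not reproduced here) can be sketched concretely. The group below is a toy order-11 subgroup of Z_23^*, chosen deliberately small so discrete logs can be brute-forced for checking; it is of course insecure:

```python
import random

# Toy group: the order-11 subgroup of Z_23^* generated by 2 (demo only).
P, Q, G = 23, 11, 2

def rerandomize(X, Y, Z, rng):
    """Classic DDH random self-reduction: maps (g^x, g^y, g^z) to a fresh,
    re-randomized tuple that is a DDH tuple iff the input was one."""
    u = rng.randrange(1, Q)                  # u != 0, so non-DDH stays non-DDH
    v = rng.randrange(Q)
    w = rng.randrange(Q)
    X2 = (X * pow(G, w, P)) % P              # x' = x + w
    Y2 = (pow(Y, u, P) * pow(G, v, P)) % P   # y' = u*y + v
    Z2 = (pow(Z, u, P) * pow(X, v, P)) % P   # exponent x*(u*y + v) when z = x*y
    Z2 = (Z2 * pow(Y2, w, P)) % P            # ... plus w*y', giving x'*y'
    return X2, Y2, Z2

def dlog(h):
    """Brute-force discrete log; fine for the 11-element toy group."""
    return next(i for i in range(Q) if pow(G, i, P) == h)

rng = random.Random(7)
x, y = 3, 4
ddh = rerandomize(pow(G, x, P), pow(G, y, P), pow(G, x * y % Q, P), rng)
non = rerandomize(pow(G, x, P), pow(G, y, P), pow(G, (x * y + 1) % Q, P), rng)
```

A short calculation shows the output exponent differs from x'·y' by exactly u·(z − xy), so the reduction preserves DDH-ness while fully re-randomizing the tuple.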

We present a new approach to designing concretely efficient MPC protocols with semi-honest security in the dishonest majority setting. Motivated by the fact that within the dishonest majority setting the efficiency of most practical protocols \emph{does not depend on the number of honest parties}, we investigate how to construct protocols which improve in efficiency as the number of honest parties increases. Our central idea is to take a protocol which is secure for $n-1$ corruptions and modify it to use short symmetric keys, with the aim of basing security on the concatenation of all honest parties' keys. This results in a more efficient protocol tolerating fewer corruptions, whilst also introducing an LPN-style syndrome decoding assumption.
We first apply this technique to a modified version of the semi-honest GMW protocol, using OT extension with short keys, to improve the efficiency of standard GMW with fewer corruptions. We also obtain more efficient constant-round MPC, using BMR-style garbled circuits with short keys, and present an implementation of the online phase of this protocol. Our techniques start to improve upon existing protocols when there are around $n=20$ parties with $h=6$ honest parties, and as these increase we obtain up to a 13-fold reduction (for $n=400,h=120$) in communication complexity for our GMW variant, compared with the best-known GMW-based protocol modified to use the same threshold.
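The share arithmetic at the heart of GMW-style protocols is compact enough to sketch. The toy below evaluates one AND gate on XOR-shared bits using a trusted-dealer multiplication (Beaver) triple; this is a standard textbook formulation chosen for brevity, not the paper's OT-extension-with-short-keys construction:

```python
import random

rng = random.Random(42)

def share(bit):
    """XOR-share a bit between two parties."""
    s0 = rng.randrange(2)
    return s0, bit ^ s0

def deal_triple():
    """Trusted-dealer preprocessing: XOR-shared bits a, b, c with c = a & b."""
    a, b = rng.randrange(2), rng.randrange(2)
    return share(a), share(b), share(a & b)

def and_gate(x_sh, y_sh):
    """One AND gate on XOR-shared bits using one multiplication triple.
    The parties open d = x ^ a and e = y ^ b (which leak nothing about
    x and y, since a and b are uniform) and then combine shares locally."""
    (a0, a1), (b0, b1), (c0, c1) = deal_triple()
    d = x_sh[0] ^ a0 ^ x_sh[1] ^ a1          # opened masked values
    e = y_sh[0] ^ b0 ^ y_sh[1] ^ b1
    z0 = c0 ^ (d & b0) ^ (e & a0)
    z1 = c1 ^ (d & b1) ^ (e & a1) ^ (d & e)  # one party adds the d&e term
    return z0, z1
```

XOR gates are free (each party XORs shares locally); the cost of producing the AND-gate correlations is exactly where the short-key OT extension of the paper improves efficiency as the number of honest parties grows.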