Driven by the open problem raised by Hofheinz and Kiltz (Journal of Cryptology, 2012), we study the formalization of lattice-based programmable hash functions (PHFs), and give three types of concrete constructions using several techniques, such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is a collision-resistant hash function, which gives a direct application of this new primitive. We further demonstrate the power of lattice-based PHFs by giving generic constructions of signatures and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using partitioning proof techniques, but also allow us to obtain new short signature schemes and IBE schemes from (ideal) lattices. Specifically, by instantiating the generic constructions with our Type-II and Type-III PHF constructions, we immediately obtain two short signature schemes and two IBE schemes with asymptotically much shorter keys. A major downside inherited from our Type-II and Type-III PHF constructions is that we can only prove the security of the new signatures and IBEs in the bounded security model, in which the number Q of the adversary's queries must be known in advance. Another downside is that the computation time of our new signatures and IBEs is a linear function of Q, which is large for typical parameters. To overcome these limitations, we also give a refined way of using Type-II and Type-III PHFs to construct lattice-based short signatures with short verification keys in the full security model. In particular, our methods depart from the confined guessing technique of Böhl et al. (Eurocrypt'13) that was used in previous standard-model short signature schemes with short verification keys by Ducas and Micciancio (Crypto'14) and by Alperin-Sheriff (PKC'15), and allow us to achieve much tighter security from weaker hardness assumptions.
Credential tweaking attacks use breached passwords to generate semantically similar passwords and gain access to victims' services. These attacks sidestep the first generation of compromised credential checking (C3) services. The second generation of compromised credential checking services, called "Might I Get Pwned" (MIGP), is a privacy-preserving protocol that defends against credential tweaking attacks by allowing clients to query whether a password or a semantically similar variation is present in the server's compromised credentials dataset. The desired privacy requirements include not revealing the user's entered password to the server and ensuring that no compromised credentials are disclosed to the client. In this work, we formalize the cryptographic leakage of the MIGP protocol and perform a security analysis to assess its impact on the credentials held by the server. We focus on how this leakage aids breach extraction attacks, where an honest-but-curious client interacts with the server to extract information about the stored credentials. Furthermore, we discover additional leakage that arises from the implementation of Cloudflare's deployment of MIGP. We evaluate how the discovered leakage affects the guessing capability of an attacker in relation to breach extraction attacks. Finally, we propose MIGP 2.0, a new iteration of the MIGP protocol designed to minimize data leakage and prevent the introduced attacks.
Nonlinear feedback shift registers (NFSRs) are used as the main building blocks of many stream ciphers. One security criterion for the design of a stream cipher is that its keystream have a long period. To meet this criterion, the NFSR used in a stream cipher must have a long state cycle. Further, to avoid equivalent keys, the keystream's period must not be compressed relative to the length of the NFSR's state cycle; this is guaranteed if the NFSR is observable, in the sense that any two distinct initial states are distinguishable from their resulting output sequences. The cycle structure of a general NFSR remains an open hard problem. Constructing Fibonacci NFSRs with maximum state cycles has therefore attracted much attention, but so far such Fibonacci NFSRs with known feedback functions have been found only for stage numbers no greater than 33. Considering that Galois NFSRs may decrease the area and increase the throughput compared to Fibonacci NFSRs, this paper studies two types of $n$-stage Galois NFSRs whose state transition matrices are circulant matrices with only one nonzero element of 1 in each column. The cycle structure and observability of both types are disclosed using the semi-tensor-product-based Boolean network approach. In the first type, the position of the element 1 in the first column of each state transition matrix is even. Such an NFSR has the maximum state cycle for an arbitrary stage number and an explicit feedback function, and it is observable if and only if its output function depends on the first state bit. In the second type, the position of the element 1 in the first column is $2^m+1$ for some positive integer $m\leq n-1$, where $n$ is the NFSR's stage number. Such an NFSR has $2^m$ cycles of length $2^{n-m}$, and it is observable if its output function depends on all the state bits whose indices are no smaller than $n-m+1$.
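A minimal illustration of the second type's cycle count (our own sketch, not from the paper): assuming the circulant state transition matrix, whose single 1 sits at position $p$ of the first column, permutes the $2^n$ canonical state vectors as $x \mapsto x + (p-1) \bmod 2^n$, counting the orbits of this shift reproduces the claimed $2^m$ cycles of length $2^{n-m}$ when $p = 2^m+1$:

```python
def cycle_lengths(n, p):
    """Cycle structure of the permutation induced on 2^n states by a
    circulant state transition matrix whose single 1 in the first column
    sits at (1-indexed) position p, assumed to act as x -> x + (p-1) mod 2^n."""
    N, shift = 2 ** n, p - 1
    seen, lengths = [False] * N, []
    for s in range(N):
        if seen[s]:
            continue
        x, length = s, 0
        while not seen[x]:
            seen[x] = True
            x = (x + shift) % N
            length += 1
        lengths.append(length)
    return lengths

# Second type with n = 6, m = 2: position 2^m + 1 = 5 in the first column
# should give 2^m = 4 cycles, each of length 2^(n-m) = 16.
lens = cycle_lengths(6, 2 ** 2 + 1)
assert len(lens) == 4 and set(lens) == {16}
```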
Common block ciphers like AES, standardized by NIST, or KASUMI (A5/3), used in GSM, are utilized by billions of individuals globally to protect their privacy and maintain confidentiality in daily communications. However, these ciphers lack comprehensive security proofs against the vast majority of known attacks. Currently, security proofs for both AES and KASUMI are limited to differential and linear attacks. For instance, the consensus on the security of AES is not based on formal mathematical proofs but on intensive cryptanalysis of its reduced-round versions spanning several decades. In this work, we introduce new security proofs for AES against another attack method: impossible differential (ID) attacks. We classify ID attacks as reciprocal and nonreciprocal. We show that sharp and generic lower bounds can be imposed on the data complexities of reciprocal ID attacks on substitution-permutation networks. We prove that the minimum data required for a reciprocal ID attack on AES using a conventional ID characteristic is $2^{66}$ chosen plaintexts, whereas a nonreciprocal ID attack involves at least $2^{88}$ computational steps. We mount a nonreciprocal ID attack on 6-round AES for 192-bit and 256-bit keys that requires only $2^{18}$ chosen plaintexts and outperforms the data complexity of any previous attack. Given its marginal time complexity, this attack does not pose a substantial threat to the security of AES. However, we also enhance the integral attack on 6-round AES, thereby surpassing the record for the most efficient attack, which had stood for 23 years.
Anonymous credentials are cryptographic mechanisms enabling users to authenticate themselves with fine-grained control over the information they leak in the process. They have been the topic of countless papers, which have improved the performance of such mechanisms or proposed new schemes able to prove ever-more complex statements about the attributes certified by those credentials. However, whereas these papers have studied in depth the problem of the information leaked by the credential and/or the attributes, almost all of them have surprisingly overlooked the information one may infer from knowledge of the credential issuer. In this paper we address this problem by showing how one can efficiently hide the actual issuer of a credential within a set of potential issuers. The novelty of our work is that we do not resort to zero-knowledge proofs; instead, we show how one can tweak Pointcheval-Sanders signatures to achieve this issuer-hiding property at a very low cost. This results in an efficient anonymous credential system that indeed provides complete control over the information leaked in the authentication process. Our construction is moreover modular and can thus fit a wide spectrum of applications, notably Self-Sovereign Identity (SSI) systems.
We show the following unconditional results on quantum commitments in two related yet different models: 1. We revisit the notion of quantum auxiliary-input commitments introduced by Chailloux, Kerenidis, and Rosgen (Comput. Complex. 2016) where both the committer and receiver take the same quantum state, which is determined by the security parameter, as quantum auxiliary input. We show that computationally-hiding and statistically-binding quantum auxiliary-input commitments exist unconditionally, i.e., without relying on any unproven assumption, whereas Chailloux et al. relied on a complexity-theoretic assumption, ${\bf QIP}\not\subseteq{\bf QMA}$. On the other hand, we observe that achieving both statistical hiding and statistical binding at the same time is impossible even in the quantum auxiliary-input setting. To the best of our knowledge, this is the first example of unconditionally proving computational security of any form of (classical or quantum) commitments for which statistical security is impossible. As intermediate steps toward our construction, we introduce and unconditionally construct post-quantum sparse pseudorandom distributions and quantum auxiliary-input EFI pairs, which may be of independent interest. 2. We introduce a new model, which we call the common reference quantum state (CRQS) model, where both the committer and receiver take the same quantum state that is randomly sampled by an efficient setup algorithm. We unconditionally prove that there exist statistically hiding and statistically binding commitments in the CRQS model, circumventing the impossibility in the plain model. We also discuss their applications to zero-knowledge proofs, oblivious transfers, and multi-party computations.
This study focuses on detecting and stopping new types of online threats by improving the UGRansome dataset to detect unusual activity in real time. By blending different machine learning methods, such as naïve tree-based ensemble learning and recursive feature elimination (RFE), the research achieves a high accuracy rate of 97%. Naïve Bayes (NB) stands out as the most effective classifier. The proposed setup, combining gradient boosting (GB) and random forest (RF) with NB, effectively identifies and prevents unknown vulnerabilities in computer systems. UGRansome successfully blocks over 100 kilobits per second (kbps) of harmful online traffic by using features pinpointed by the RFE method, specifically uniform resource locators (URLs), outperforming existing Intrusion Detection System (IDS) datasets. It is particularly effective at stopping secure shell attacks, proving the dataset's usefulness in making networks safer. This research marks significant progress in intrusion detection. The NB model excels in accuracy, precision, and recall, especially in identifying new threats. Moreover, the proposed naïve tree-based ensemble model shows outstanding accuracy, standing out as the best-performing technique among all models studied. Applying the UGRansome properties-based rule noticeably changes how traffic is classified, decreasing unknown traffic while increasing unclassified traffic that requires further investigation.
Lido, the leading Liquidity Staking Derivative (LSD) provider on Ethereum, allows users to stake an arbitrary amount of ETH to receive stETH, which can be integrated with Decentralized Finance (DeFi) protocols such as Aave. The composability between Lido and Aave enables a novel strategy called “leverage staking”, where users stake ETH on Lido to acquire stETH, utilize stETH as collateral on Aave to borrow ETH, and then restake the borrowed ETH on Lido. Users can iteratively execute this process to optimize potential returns based on their risk profile. This paper systematically studies the opportunities and risks associated with leverage staking. We are the first to formalize the stETH-ETH leverage staking strategy within the Lido-Aave ecosystem. Our empirical study identifies 262 leverage staking positions on Ethereum, with an aggregated staking amount of 295,243 ETH (482M USD). We discover that 90.13% of leverage staking positions have achieved higher returns than conventional staking. Furthermore, we perform stress tests to evaluate the risk introduced by leverage staking under extreme conditions. We find that leverage staking significantly amplifies the risk of cascading liquidations. We hope this paper can inform and encourage the development of robust risk management approaches to protect the Lido-Aave LSD ecosystem.
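To make the looping strategy concrete, here is a hedged Python sketch (our own illustration, not from the paper): it assumes a 1:1 stETH/ETH exchange rate and a hypothetical loan-to-value parameter `ltv`, and ignores fees, interest rates, and liquidation thresholds. The total exposure approaches the geometric limit $1/(1-\mathit{ltv})$ times the initial stake.

```python
def leverage_staking(initial_eth, ltv, rounds):
    """Simulate the stETH-ETH leverage staking loop: stake ETH on Lido
    for stETH, post the stETH as collateral on Aave, borrow ltv * tranche
    worth of ETH, and restake.  `ltv` is a hypothetical loan-to-value
    parameter, not Aave's actual figure; fees and rates are omitted."""
    staked, borrowed, tranche = 0.0, 0.0, initial_eth
    for _ in range(rounds):
        staked += tranche      # stake the tranche, receive stETH 1:1
        loan = tranche * ltv   # borrow ETH against the new stETH
        borrowed += loan
        tranche = loan         # restake the borrowed ETH next round
    return staked, borrowed

# With ltv = 0.7 the total exposure approaches 1 / (1 - 0.7) ~ 3.33x.
staked, borrowed = leverage_staking(100.0, 0.7, rounds=20)
print(f"staked {staked:.1f} ETH against {borrowed:.1f} ETH of debt")
```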
The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program in a quantum state such that a user in possession of $k$ such states cannot create $k+1$ working copies. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven notoriously hard to achieve. In this work, we construct public-key encryption and functional encryption schemes whose secret keys are copy-protected against unbounded collusions in the plain model (i.e., without any idealized oracles), assuming (post-quantum) subexponentially secure $i\mathcal{O}$, one-way functions and LWE. This resolves a long-standing open question, raised by multiple previous works, of constructing fully collusion-resistant copy-protected functionalities. Prior to our work, copy-protected functionalities were known only in restricted collusion models: either an a-priori bound on the collusion size was needed, in the plain model with the same assumptions as ours (Liu, Liu, Qian, Zhandry [TCC'22]), or the adversary was only prevented from doubling their number of working programs, in a structured quantum oracle model (Aaronson [CCC'09]). We obtain our results through a novel technique which uses identity-based encryption to construct unbounded-collusion-resistant copy-protection schemes from $1\to2$ secure schemes. This is analogous to the technique of using digital signatures to construct full-fledged quantum money from single banknote schemes (Lutomirski et al. [ICS'09], Farhi et al. [ITCS'12], Aaronson and Christiano [STOC'12]). We believe our technique is of independent interest. Along the way, we also construct a puncturable functional encryption scheme whose master secret key can be punctured at all functions $f$ such that $f(m_0) \neq f(m_1)$. This might also be of independent interest.
We demonstrate how to build computationally secure commitment schemes with the aid of quantum auxiliary inputs without unproven complexity assumptions. Furthermore, the quantum auxiliary input can be prepared either (1) efficiently through a trusted setup similar to the classical common random string model, or (2) strictly between the two involved parties in uniform exponential time. Classically this remains impossible without first proving $\mathsf{P} \neq \mathsf{NP}$.
We extend the known pseudorandomness of Ring-LWE to be based on lattices that do not correspond to any ideal of any order in the underlying number field. In the earlier works of Lyubashevsky et al. (EUROCRYPT 2010) and Peikert et al. (STOC 2017), the hardness of RLWE was based on ideal lattices of the ring of integers of a number field, which is known to be a Dedekind domain. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices. In this work we show that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of ideal lattices in arbitrary orders $O$, as long as the order $O$ satisfies the property that $\frac{1}{m}\cdot O$ contains the ring of integers for some $m$ co-prime to $q$. Further, the hard lattice problems need not be given the order $O$ itself as input. The reduction requires the noise to be a factor $m$ larger than in the original Ring-LWE reduction. We also show that for power-of-two cyclotomic number fields, there exist orders with $m=4$ such that non-trivial ideals of the order which are not contained in the conductor are non-invertible. Another reduction shows that the hardness of $q$-Ring-LWE can be based on the worst-case hardness of lattices that correspond to sums of ideal lattices in arbitrary and different orders in the number field, as long as the set of orders $\{O_i\}$ satisfies the property that $\frac{1}{m}\cdot O_i$ contains the ring of integers for some $m$ co-prime to $q$. We also show that for power-of-two cyclotomic number fields, there exist orders $O_1, O_2$ with $m=8$ such that there are ideals $I_1, I_2$ of $O_1, O_2$ respectively with $I_1+I_2$ not an ideal of any order in the number field.
There appears to be a widespread belief that some processes of selecting cryptosystems are less risky than other processes. As a case study of quantifying the difference in risks, this paper compares the currently-known-failure rates of three large groups of cryptosystems: (1) the round-1 submissions to the NIST Post-Quantum Cryptography Standardization Project, (2) the round-1 submissions not broken by the end of round 1, and (3) the round-1 submissions selected by NIST for round 2 of the same project. These groups of cryptosystems turn out to have currently-known-failure rates that are strikingly high, and that include statistically significant differences across the groups, not matching the pattern of differences that one might expect. Readers are cautioned that the actual failure rates could be much higher than the currently-known-failure rates.
Panny [3] described how to forge the “tropical signatures” proposed by Chen, Grigoriev and Shpilrain [1]. (These signatures are loosely related to the NP-complete problem of factoring tropical polynomials). We describe more methods to forge these tropical signatures. We also describe some patches that thwart all but one of these forgery methods (which we summarize as re-hashing an honest signature).
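For readers unfamiliar with the tropical setting, the following sketch (ours, purely illustrative) shows tropical polynomial multiplication in the min-plus convention, the operation whose inversion, i.e., factoring, is the NP-complete problem mentioned above; it is only background algebra, not the signature or forgery algorithms themselves, and some works use the max-plus convention instead.

```python
import itertools

def tropical_poly_mul(p, q):
    """Multiply two tropical (min-plus) polynomials given as coefficient
    lists: coefficient addition becomes min, multiplication becomes +.
    Factoring the result back into p and q is NP-complete in general."""
    INF = float("inf")
    r = [INF] * (len(p) + len(q) - 1)
    for i, j in itertools.product(range(len(p)), range(len(q))):
        r[i + j] = min(r[i + j], p[i] + q[j])
    return r

# (0 (+) 2x) (*) (1 (+) 3x): r0 = 0+1, r1 = min(0+3, 2+1), r2 = 2+3.
print(tropical_poly_mul([0, 2], [1, 3]))  # [1, 3, 5]
```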
A proof of sequential work (PoSW) scheme allows a prover to convince a verifier that it computed a certain number of computational steps sequentially. Very recently, graph-labeling PoSW schemes found applications in light-client blockchain protocols, most notably bootstrapping. A bootstrapping protocol allows a light client, with minimal information about the blockchain, to hold a commitment to its stable prefix. An incremental PoSW (iPoSW) scheme allows the prover to non-trivially increment proofs: given $\chi,\pi_1$ and integers $N_1,N_2$ such that $\pi_1$ is a valid proof for $N_1$, it generates a valid proof $\pi$ for $N_1+N_2$. In this work, we construct an iPoSW scheme based on the skiplist-based PoSW scheme of Abusalah et al. and prove its security in the random oracle model by employing the powerful on-the-fly sampling technique of Döttling et al. Moreover, unlike the iPoSW scheme of Döttling et al., ours is the first iPoSW scheme suitable for constructing incremental non-interactive arguments of chain knowledge (SNACK) schemes, which are at the heart of space- and time-efficient blockchain light-client protocols. In particular, our scheme works for general weight distributions, which we characterize as incrementally sampleable distributions. Our general treatment recovers the distribution underlying the scheme of Döttling et al., as well as the distribution underlying the SNACK-enabled bootstrapping application, as special cases. In realizing our general construction, we develop a new on-the-fly sampling technique.
Vehicular Ad Hoc Networks (VANETs) play a pivotal role in intelligent transportation systems, offering dynamic communication between vehicles, Road Side Units (RSUs), and the internet. Given the open-access nature of VANETs and the associated threats, such as impersonation and privacy violations, ensuring the security of these communications is of utmost importance. This paper presents the Identity-based Cluster Authentication and Key Exchange (ID-CAKE) scheme, a new approach to address security challenges in VANETs. The ID-CAKE scheme integrates Cluster Consensus Identity-based Identification (CCIBI) with Zero-Knowledge (ZK) proofs and the Identity-based Multireceiver Key Exchange Mechanism (ID-mKEM) signature scheme. This integration provides robust authorization via CCIBI, while ID-mKEM signatures ensure message integrity and guarantee both non-repudiation and unforgeability through mKEM for message broadcasting. The scheme employs a novel three-party ZK proof for batch verification using mKEM, which significantly reduces computational burdens. Our scheme also ensures anonymity and unlinkability by introducing pseudo-identities for all users in the cluster. The rigorous security proofs provided confirm the resilience of the ID-CAKE scheme against potential attacks in different scenarios, based on the hardness of the elliptic curve computational Diffie-Hellman problem in the random oracle model. The ID-CAKE scheme establishes a robust security framework for VANETs, and its introduction highlights potential pathways for future exploration in the realm of VANET security.
In spite of being a popular technique for designing block ciphers, Lai-Massey networks have received considerably less attention from a security analysis point of view than Feistel networks and Substitution-Permutation networks. In this paper we study the beyond-birthday-bound (BBB) security of Lai-Massey networks with independent random round functions against chosen-plaintext adversaries. Concretely, we show that five rounds are necessary and sufficient to achieve BBB security.
QARMAv2 is a general-purpose and hardware-oriented family of lightweight tweakable block ciphers (TBCs) introduced in ToSC 2023. QARMAv2, as a redesign of QARMA with a longer tweak and tighter security margins, is also designed to be suitable for cryptographic memory protection and control flow integrity. The designers of QARMAv2 provided a relatively comprehensive security analysis in the design specification, e.g., some bounds on the number of attacked rounds in differential and boomerang analysis, together with some concrete impossible differential, zero-correlation, and integral distinguishers. In one of the first third-party cryptanalyses of QARMAv2, Hadipour et al. significantly improved the integral distinguishers of QARMAv2 and provided the longest concrete distinguishers of QARMAv2 to date. However, they provided no key recovery attack based on their distinguishers. This paper delves into the cryptanalysis of QARMAv2 to enhance our understanding of its security. Given that the integral distinguishers of QARMAv2 are the longest concrete distinguishers for this cipher so far, we focus on integral attacks. To this end, we first further improve the automatic tool introduced by Hadipour et al. for finding integral distinguishers of TBCs following the TWEAKEY framework. This new tool exploits the MixColumns property of QARMAv2 to find integral distinguishers more suitable for key recovery attacks. Then, we combine several techniques for integral key recovery attacks, e.g., the meet-in-the-middle and partial-sum techniques, to build a fine-grained integral key recovery attack on QARMAv2. Notably, we demonstrate how to leverage the low data complexity of the integral distinguishers of QARMAv2 to reduce the memory complexity of the meet-in-the-middle technique. As a result, we propose the first concrete key recovery attacks on reduced-round versions of QARMAv2, attacking 13 rounds of QARMAv2-64-128 with a single tweak block, 14 rounds of QARMAv2-64-128 with two independent tweak blocks, and 16 rounds of QARMAv2-128-256 with two independent tweak blocks. Our attacks do not compromise the claimed security of QARMAv2, but they shed more light on the cryptanalysis of this cipher.
The relationships between various meta-complexity problems are not well understood in the worst-case regime, including whether the search version is harder than the decision version, whether the hardness scales with the ``threshold'', and how the hardness of different meta-complexity problems relates to one another, and to the task of function inversion. In this note, we present resolutions to some of these questions with respect to the \emph{black-box} analog of these problems. In more detail, let $MK^t_MP[s]$ denote the language consisting of strings $x$ with $K_{M}^t(x) < s(|x|)$, where $K_M^t(x)$ denotes the $t$-bounded Kolmogorov complexity of $x$ with $M$ as the underlying (universal) Turing machine, and let $search\text{-}MK^t_MP[s]$ denote the search version of the same problem. We show that if for every universal Turing machine $U$ there exists a $2^{\alpha n}poly(n)$-size $U$-oracle aided circuit deciding $MK^t_UP[n-O(1)]$, then for every function $s$ and every not necessarily universal Turing machine $M$, there exists a $2^{\alpha s(n)}poly(n)$-size $M$-oracle aided circuit solving $search\text{-}MK^t_MP[s(n)]$; this in turn yields circuits of roughly the same size for both the Minimum Circuit Size Problem (MCSP) and the function inversion problem, as they can be thought of as instantiating $MK^t_MP$ with particular choices of (non-universal) TMs $M$ (the circuit emulator for the case of MCSP, and the function evaluation in the case of function inversion). As a corollary of independent interest, we get that the complexity of black-box function inversion is (roughly) the same as the complexity of black-box deciding $MK^t_UP[n-O(1)]$ for any universal TM $U$; that is, also in the worst-case regime, black-box function inversion is ``equivalent'' to black-box deciding $MK^t_UP$.
An important criterion to assess the security of a cryptographic primitive is its resistance against differential cryptanalysis. For word-oriented primitives, a common technique to determine the number of rounds required to ensure immunity against differential distinguishers is to consider truncated differential characteristics and count the number of active S-boxes. Doing so allows one to provide an upper bound on the probability of the best differential characteristic at a reduced computational cost. However, in order to design very efficient primitives, the probability may need to be evaluated more accurately. This is usually done in a second step, during which one tries to instantiate the truncated differential characteristics with actual values and computes the corresponding probability. This step is usually carried out either with ad-hoc algorithms or with CP, SAT or MILP models that are solved by generic solvers. In this paper, we present a generic tool for automatically generating these models that handles all word-oriented ciphers. Furthermore, the running times to solve these models are very competitive with those of all previous dedicated approaches.
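As a toy instance of the first (counting) step, the following sketch applies the standard wide-trail bound to AES, using the well-known constants that the AES S-box has maximum differential probability $2^{-6}$ and that any 4-round AES characteristic has at least 25 active S-boxes; the paper's tool automates the much harder second (instantiation) step.

```python
# Hedged, illustrative wide-trail bound (not the paper's tool): any
# characteristic's probability is at most (max S-box DP)^(active S-boxes).
MAX_SBOX_DP = 2.0 ** -6   # best entry of the AES S-box DDT: 4/256
MIN_ACTIVE = 25           # proven minimum over 4 AES rounds
bound = MAX_SBOX_DP ** MIN_ACTIVE
print(f"4-round characteristic probability <= 2^{-6 * MIN_ACTIVE}")
print(bound)  # 2^-150, comfortably below the 2^-128 security goal
```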
Dynamic vector commitments that enable local updates of opening proofs have applications ranging from verifiable databases with membership changes to stateless clients on blockchains. In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state. When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment. We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof. Existing vector commitment schemes require that either the information size or the runtime scales linearly in the number $k$ of updated state elements. We construct a vector commitment scheme that asymptotically achieves both length and runtime sublinear in $k$, namely $k^\nu$ and $k^{1-\nu}$ for any $\nu \in (0,1)$. We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme. For $\nu = 1/2$, our construction outperforms Verkle commitments by about a factor of $2$ in terms of both the update information size and runtime, but makes use of larger public parameters.
In May 2020, Zoom Video Communications, Inc. (Zoom) announced a multi-step plan to comprehensively support end-to-end encrypted (E2EE) group video calls and subsequently rolled out basic E2EE support to customers in October 2020. In this work we provide the first formal security analysis of Zoom's E2EE protocol, and also lay the foundation for the general problem of E2EE group video communication. We observe that the vast security literature analyzing asynchronous messaging does not translate well to synchronous video calls. Namely, while strong forms of forward secrecy and post-compromise security are less important for (typically short-lived) video calls, various liveness properties become crucial. For example, one should mandate that participants quickly learn of updates to the meeting roster and key, that displayed media streams are recent, and that banned participants promptly lose any access to the meeting. Our main results are as follows: 1. We propose a new notion of leader-based continuous group key agreement with liveness, which accurately captures the E2EE properties specific to the synchronous communication scenario. 2. We prove security of the core of Zoom's E2EE meetings protocol in the above well-defined model. 3. We propose ways to strengthen Zoom's liveness properties by simple modifications to the original protocol, which subsequently influenced updates implemented in production.
Motivated by the violation of two fundamental assumptions in secure communication - receiver-privacy and sender-freedom - by a certain entity referred to as ``the dictator'', Persiano et al. introduced the concept of Anamorphic Encryption (AME) for public key cryptosystems (EUROCRYPT 2022). Specifically, they presented receiver-AME and sender-AME, directly tailored to scenarios where the receiver-privacy and sender-freedom assumptions are compromised, respectively. In receiver-AME, entities share a double key to communicate in an anamorphic fashion, raising concerns about distributing the double key online without detection by the dictator. Sender-AME with no shared secret is a potential candidate for key distribution. However, the only known such schemes (i.e., LWE and Dual LWE encryptions) suffer from an intrinsic limitation and cannot achieve reliable distribution. Here, we reformulate sender-AME, present the notion of $\ell$-sender-AME, and formalize the properties of (strong) security and robustness. Robustness refers to the guaranteed delivery of duplicate messages to the intended receiver, ensuring that decrypting normal ciphertexts in an anamorphic way, or decrypting anamorphic ciphertexts with an incorrect duplicate secret key, results in an explicit abort signal. We first present a simple construction for pseudo-random and robust public key encryption that shares a similar idea with the public-key stegosystem of von Ahn and Hopper (EUROCRYPT 2004). Then, inspired by Chen et al.'s malicious algorithm-substitution attack (ASA) on key encapsulation mechanisms (KEM) (ASIACRYPT 2020), we give a generic construction for hybrid PKE with a special KEM that encompasses well-known schemes, including the ElGamal and Cramer-Shoup cryptosystems. The constructions of $\ell$-sender-AME motivate us to explore the relations between AME, ASAs on PKE, and public-key stegosystems. The results show that a strongly secure $\ell$-sender-AME is such a strong primitive that it implies reformulated receiver-AME, public-key stegosystems, and generalized ASAs on PKE. By expanding the scope of sender-anamorphic encryption and establishing its robustness, as well as exploring the connections among existing notions, we advance secure communication protocols under challenging conditions.
Password-Authenticated Key Exchange (PAKE) allows two parties to establish a common high-entropy secret from a possibly low-entropy pre-shared secret such as a password. In this work, we provide the first PAKE protocol with subversion resilience in the framework of universal composability (UC), where the latter roughly means that UC security still holds even if one of the two parties is malicious and the honest party's code has been subverted (in an undetectable manner). We achieve this result by sanitizing the PAKE protocol from oblivious transfer (OT) due to Canetti et al. (PKC'12) via cryptographic reverse firewalls in the UC framework (Chakraborty et al., EUROCRYPT'22). This requires new techniques, which help us uncover new cryptographic primitives with sanitation-friendly properties along the way (such as OT, dual-mode cryptosystems, and signature schemes). As an additional contribution, we delve deeper into the backbone of communication required in the subversion-resilient UC framework, extending it to the unauthenticated setting, in line with the work of Barak et al. (CRYPTO'05).
Most existing MPC protocols consider the homogeneous setting, where all the MPC players are assumed to have identical communication and computation resources. In practice, the weakest player often becomes the bottleneck of the entire MPC protocol execution. In this work, we initiate the study of so-called load-balanced MPC in the heterogeneous computing setting. A load-balanced MPC protocol can adjust the workload of each player accordingly to maximize the overall resource utilization. In particular, we propose new notions called composite circuits and composite garbling schemes, and construct two efficient server-aided protocols with malicious security and semi-honest security, respectively. Our maliciously secure protocol is over 400$\times$ faster than the authenticated garbling protocol (CCS'17); our semi-honest protocol is up to 173$\times$ faster than the optimized BMR protocol (CCS'16).
By leveraging the no-cloning principle of quantum mechanics, unclonable cryptography enables us to achieve novel cryptographic protocols that are otherwise impossible classically. The two most notable examples of unclonable cryptography are quantum copy-protection and unclonable encryption. Despite receiving a lot of attention in recent years, two important open questions remained: copy-protection for point functions in the plain model, which is usually considered a feasibility demonstration, and unclonable encryption with unclonable indistinguishability security in the plain model. In this work, by relying on previous works of Coladangelo, Liu, Liu, and Zhandry (Crypto’21) and Culf and Vidick (Quantum’22), we establish a new monogamy-of-entanglement property for subspace coset states, which allows us to obtain the following new results: • We show that copy-protection of point functions exists in the plain model, with different challenge distributions (including arguably the most natural ones). • We show, for the first time, that unclonable encryption with unclonable indistinguishability security exists in the plain model.
The Learning with Errors (LWE) problem has been widely utilized as a foundation for numerous cryptographic tools over the years. In this study, we focus on an algebraic variant of the LWE problem called Group ring LWE (GR-LWE). We select group rings (or their direct summands) that underlie specific families of finite groups constructed by taking the semi-direct product of two cyclic groups. Unlike the Ring-LWE problem of Lyubashevsky et al. (EUROCRYPT 2010), the multiplication operation in the group rings considered here is non-commutative. As an extension of Ring-LWE, it maintains computational hardness and can potentially be applied in many cryptographic scenarios. In this paper, we present two polynomial-time quantum reductions. Firstly, we provide a quantum reduction from the worst-case shortest independent vectors problem (SIVP) in ideal lattices with polynomial approximation factor to the search version of GR-LWE. This reduction requires that the underlying group ring possesses certain mild properties. Secondly, we present another quantum reduction for two types of group rings, where the worst-case SIVP problem is directly reduced to the (average-case) decision GR-LWE problem. The pseudorandomness of GR-LWE samples guaranteed by this reduction can consequently be leveraged to construct semantically secure public-key cryptosystems.
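As background for the non-commutative setting, here is a small sketch (ours, purely illustrative) of multiplication in a group ring $\mathbb{Z}_q[G]$ where $G$ is the dihedral group $D_n = \mathbb{Z}_n \rtimes \mathbb{Z}_2$, the simplest semi-direct product of two cyclic groups; the paper's concrete families need not be dihedral.

```python
from itertools import product

def dihedral_op(x, y, n):
    """Compose elements of D_n = Z_n x| Z_2, encoded as (rotation, flip):
    (r1, f1) * (r2, f2) = (r1 + (-1)^f1 * r2 mod n, f1 xor f2)."""
    r1, f1 = x
    r2, f2 = y
    return ((r1 + (r2 if f1 == 0 else -r2)) % n, f1 ^ f2)

def group_ring_mul(a, b, n, q):
    """Multiply a, b in Z_q[D_n]: a, b map group elements to coefficients;
    the product is the (non-commutative) convolution over the group."""
    c = {g: 0 for g in product(range(n), range(2))}
    for g, h in product(a, b):
        gh = dihedral_op(g, h, n)
        c[gh] = (c[gh] + a[g] * b[h]) % q
    return c

# Non-commutativity shows up already for single group elements r and s:
n, q = 4, 97
a = {(1, 0): 1}   # the rotation r
b = {(0, 1): 1}   # the reflection s
rs = group_ring_mul(a, b, n, q)
sr = group_ring_mul(b, a, n, q)
print({g: v for g, v in rs.items() if v})  # supported on r*s = (1, 1)
print({g: v for g, v in sr.items() if v})  # supported on s*r = (3, 1)
```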
In recent years, the advent of quantum computers and Shor's quantum algorithm has threatened problems long believed to be classically intractable, such as prime factorization and the discrete logarithm problem, thereby endangering current mainstream asymmetric cryptography, including RSA and Elliptic Curve Cryptography (ECC). As a result, the National Institute of Standards and Technology (NIST) in the United States has called for Post-Quantum Cryptography (PQC) methods, including lattice-based, code-based, multivariate, and hash-based cryptography, to resist quantum computing attacks. Therefore, this study proposes a PQC neural network (PQC-NN) that maps a code-based PQC method to a neural network structure and enhances the security of ciphertexts with non-linear activation functions, random perturbation of ciphertexts, and a uniform distribution of ciphertexts. The main innovations of this study include: (1) constructing a neural network structure that complies with code-based PQC, where the weight sets between the input layer and the ciphertext layer can be used as a public key for encryption, and the weight sets between the ciphertext layer and the output layer can be used as a private key for decryption; (2) adding random perturbations to the ciphertext layer, which can be removed during the decryption phase to restore the original plaintext; and (3) constraining the output values of the ciphertext layer to follow a uniform distribution by adding the cumulative distribution function (CDF) values of the chi-square distribution to the loss function, ensuring that the neural network produces a sufficiently uniform distribution for the output values of the ciphertext layer. In practical experiments, this study uses cellular network signals as a case study to demonstrate that encryption and decryption can be performed by the proposed PQC neural network with a uniform distribution of ciphertexts. In the future, the proposed PQC neural network could be applied to various applications.
VOX was submitted in June 2023 to Round 1 of the NIST call for additional post-quantum signatures. VOX is a strengthened variant of UOV which uses the Quotient-Ring (QR) setting to reduce the public-key size. At the end of August 2023, Furue and Ikematsu posted a note on the NIST mailing list indicating that the parameters of VOX can be attacked efficiently using the rectangular attack in the QR setting. In this note, we explain the attack in the specific case of VOX, detail its complexity, and show that, as Furue and Ikematsu indicated, the attack can be completely avoided by adding one more constraint on the parameter selection. Finally, we show that this constraint does not increase the sizes of the public keys or signatures.
This note presents attacks on the lightweight hash function TS-Hash proposed by Tsaban, including a polynomial-time preimage attack for short messages (at most n/2 bits), high-probability differentials, a general subexponential-time preimage attack, and linearization techniques.
Multi-signatures allow for compressing many signatures for the same message that were generated under independent keys into one small aggregated signature. This primitive is particularly useful for proof-of-stake blockchains, like Ethereum, where the same block is signed by many signers, who vouch for the block's validity. Being able to compress all signatures for the same block into a short string significantly reduces the on-chain storage costs, which is an important efficiency metric for blockchains. In this work, we consider multi-signatures in the synchronized setting, where the signing algorithm takes an additional time parameter as input and it is only required that signatures for the same time step are aggregatable. The synchronized setting is simpler than the general multi-signature setting, but is sufficient for most blockchain related applications, as signers are naturally synchronized by the length of the chain. We present Chipmunk, a concretely efficient lattice-based multi-signature scheme in the synchronized setting that allows for signing an a-priori bounded number of messages. Chipmunk allows for non-interactive aggregation of signatures and is secure against rogue-key attacks. The construction is plausibly secure against quantum adversaries as our security relies on the assumed hardness of the short integer solution problem. We significantly improve upon the previously best known construction in this setting by Fleischhacker, Simkin, and Zhang (CCS 2022). Our aggregate signature size is $5.6 \times$ smaller and for $112$ bits of security our construction allows for compressing 8192 individual signatures into a multi-signature of size around $136$ KB. We provide a full implementation of Chipmunk and provide extensive benchmarks studying our construction's efficiency.
In their seminal work, Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC'07) presented the MPC-in-the-Head paradigm, which shows how to design Zero-Knowledge Proofs (ZKPs) from secure Multi-Party Computation (MPC) protocols. This paradigm has since revolutionized and modularized the design of efficient ZKP systems, with far-reaching applications beyond ZKPs. However, to the best of our knowledge, all previous instantiations relied on fully-secure MPC protocols and have not been able to leverage the fact that the paradigm only imposes relatively weak privacy and correctness requirements on the underlying MPC. In this work, we extend the MPC-in-the-Head paradigm to game-based cryptographic primitives supporting homomorphic computations (e.g., fully-homomorphic encryption, functional encryption, randomized encodings, homomorphic secret sharing, and more). Specifically, we present a simple yet generic compiler from these primitives to ZKPs which use the underlying primitive as a black box. We also generalize our paradigm to capture commit-and-prove protocols, and use it to devise tight black-box compilers from Interactive (Oracle) Proofs to ZKPs, assuming One-Way Functions (OWFs). We use our paradigm to obtain several new ZKP constructions: 1. The first ZKPs for NP relations $\mathcal{R}$ computable in (polynomial-time uniform) $NC^1$, whose round complexity is bounded by a fixed constant (independent of the depth of $\mathcal{R}$'s verification circuit), with communication approaching witness length (specifically, $n\cdot poly\left(\kappa\right)$, where $n$ is the witness length and $\kappa$ is a security parameter), assuming DCR. Alternatively, if we allow the round complexity to scale with the depth of the verification circuit, our ZKPs can make black-box use of OWFs. 2. Constant-round ZKPs for NP relations computable in bounded polynomial space, with $O\left(n\right)+o\left(m\right)\cdot poly\left(\kappa\right)$ communication assuming OWFs, where $m$ is the instance length. This gives a black-box alternative to a recent non-black-box construction of Nassar and Rothblum (CRYPTO'22). 3. ZKPs for NP relations computable by a logspace-uniform family of depth-$d\left(m\right)$ circuits, with $n\cdot poly\left(\kappa,d\left(m\right)\right)$ communication assuming OWFs. This gives a black-box alternative to a result of Goldwasser, Kalai and Rothblum (JACM).
We build the first unleveled fully homomorphic signature scheme in the standard model. Our scheme is not constrained by any a-priori bound on the depth of the functions that can be homomorphically evaluated, and relies on subexponentially-secure indistinguishability obfuscation, fully-homomorphic encryption and a non-interactive zero-knowledge (NIZK) proof system with composable zero-knowledge. Our scheme is also the first to satisfy the strong security notion of context-hiding for an unbounded number of levels, ensuring that signatures computed homomorphically do not leak the original messages from which they were computed. All building blocks are instantiable from falsifiable assumptions in the standard model, avoiding the need for knowledge assumptions. The main difficulty we overcome stems from the fact that bootstrapping, which is a crucial tool for obtaining unleveled fully homomorphic encryption (FHE), has no equivalent for homomorphic signatures, requiring us to use novel techniques.
This paper describes a way to protect medications against falsification, a long-standing problem worldwide. We combine several existing technologies to achieve the stated goal. The building blocks used are the inherent physical randomness generated during the packaging process, artificial vision, short digital signatures, and QR codes.
Decentralized payment systems have gradually received more attention in recent years. By removing the trusted third party used for accounting ledgers, they fundamentally empower users to control their own assets. As privacy concerns grow, some cryptocurrencies have been proposed to preserve the privacy of users. However, such cryptocurrencies can also facilitate illegal transactions such as money laundering and fraudulent trading, so it is necessary to design an auditing scheme. Many privacy-preserving auditing schemes have been proposed to address this problem, but none of them effectively handles privacy preservation and auditing for both user identity and transaction content. In this paper, we propose a design for a decentralized payment system with privacy preservation and auditing. We use cryptographic accumulators based on Merkle trees for accounting, and a combination of Twisted ElGamal, NIZK (Non-Interactive Zero-Knowledge) proofs, Bulletproofs, and zk-SNARKs for privacy preservation and auditing.
The evolution of quantum algorithms threatens to break public key cryptography in polynomial time. The development of quantum-resistant algorithms for the post-quantum era has driven significant growth in the field of post-quantum cryptography (PQC). Polynomial multiplication is at the core of Ring Learning with Errors (RLWE) lattice-based cryptography (LBC), one of the most promising PQC candidates. In this work, we present the design of fast and energy-efficient pipelined Number Theoretic Transform (NTT) based polynomial multipliers and present synthesis results on a Field Programmable Gate Array (FPGA) to evaluate their efficacy. The NTT is performed using the pipelined R2SDF and R2²SDF Fast Fourier Transform (FFT) architectures. In addition, we propose an energy-efficient modified architecture (Modr2). The NTT-based polynomial multipliers employing the Modr2 architecture achieve on average 2× better performance than the R2SDF FFT and 2.4× better than the R2²SDF FFT at similar levels of energy consumption. The proposed polynomial multiplier with the Modr2 architecture achieves 12.5× the energy efficiency of the state-of-the-art convolution-based polynomial multiplier and a 4× speedup over the systolic array NTT-based polynomial multiplier for polynomial degree 1024, demonstrating its potential for practical deployment in future designs.
Attribute-Based Encryption (ABE) is widely recognized as a leap forward in the field of public key encryption, as it allows one to enforce access control on encrypted data. Decryption time in ABE schemes can be long, depending on the number of attributes and pairing operations. This drawback hinders their adoption on a broader scale. In this paper, we propose a non-monotone CP-ABE scheme that has no restrictions on the size of attribute sets and policies, allows fast decryption, and is adaptively secure under the CBDH-3 assumption. To achieve this, we approach the problem from a new angle, namely using a set membership relation for the access structure. We have implemented our scheme using the Java Pairing-Based Cryptography Library (JPBC), and the source code is available on GitHub.
Minimizing the round complexity of Byzantine broadcast is a fundamental question in distributed computing and cryptography. In this work, we present the first early-stopping Byzantine broadcast protocol that tolerates up to $t=n-1$ malicious corruptions and terminates in $O(\min\{f^2,t+1\})$ rounds for any execution with $f\leq t$ actual corruptions. Our protocol is deterministic, adaptively secure, and works assuming a plain public key infrastructure. Prior early-stopping protocols all either require an honest majority or tolerate only up to $t=(1-\epsilon)n$ malicious corruptions while requiring either trusted setup or strong number-theoretic hardness assumptions. As our key contribution, we show a novel tool called a polariser that allows us to transfer certificate-based strategies from the honest majority setting to settings with a dishonest majority.
The Number Theoretic Transform (NTT) plays a central role in efficient implementations of cryptographic primitives selected for Post-Quantum Cryptography. Although the connection certainly exists, academic papers that use the NTT typically omit how it relates to the residues of a polynomial modulo the factors of $X^{2^d} + 1$, and mention only the final expressions of what the NTT computes. This short paper establishes that connection and, in doing so, elucidates key aspects of computing the NTT. Based on this, the specific instantiations of the NTT function used in CRYSTALS-Kyber and CRYSTALS-Dilithium are derived.
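The residue view can be made concrete with a short sketch (ours, with toy parameters rather than the Kyber or Dilithium ones; requires Python 3.8+ for negative modular exponents): the forward NTT computes the residues of $a(X)$ modulo the linear factors $X - \psi^{2j+1}$ of $X^n + 1$, pointwise multiplication multiplies residues, and the inverse NTT interpolates back. Quadratic-time evaluation is used for clarity; real implementations use the $O(n \log n)$ butterfly structure.

```python
def find_psi(n, q):
    """Primitive 2n-th root of unity mod q (n a power of two, 2n | q-1):
    psi^n = -1, so the roots of X^n + 1 mod q are the odd powers of psi."""
    for x in range(2, q):
        if pow(x, n, q) == q - 1:
            return x
    raise ValueError("no primitive 2n-th root of unity")

def ntt(a, psi, q):
    """Map a(X) to its residues modulo the factors X - psi^(2j+1)."""
    n = len(a)
    return [sum(a[i] * pow(psi, (2 * j + 1) * i, q) for i in range(n)) % q
            for j in range(n)]

def intt(A, psi, q):
    """Interpolate the residues back to coefficients (CRT direction)."""
    n = len(A)
    n_inv = pow(n, -1, q)
    return [n_inv * sum(A[j] * pow(psi, -(2 * j + 1) * i, q)
                        for j in range(n)) % q for i in range(n)]

# Negacyclic product a*b mod (X^n + 1, q): pointwise multiply residues.
n, q = 8, 257   # toy parameters; 2n = 16 divides q - 1 = 256
psi = find_psi(n, q)
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
c = intt([x * y % q for x, y in zip(ntt(a, psi, q), ntt(b, psi, q))], psi, q)
print(c)  # [5, 16, 34, 52, 45, 28, 0, 0], the schoolbook product mod X^8+1
```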
The Fujisaki-Okamoto (FO) transformation is used in most proposals for post-quantum secure key encapsulation mechanisms (KEMs) like, e.g., Kyber [BDK+18]. The security analysis of FO in the presence of quantum attackers has made huge progress over the last years. Recently, [HHM22] made a particular improvement by giving a security proof that is agnostic towards how invalid ciphertexts are treated: in contrast to previous proofs, it works regardless of whether invalid ciphertexts are rejected by reporting decryption failure explicitly or implicitly (by returning pseudorandom values). The proof in [HHM22] involves a new correctness notion for the encryption scheme that is used to encapsulate the keys. This in principle allows for a smaller additive security loss related to decryption failures, but requires analyzing this new notion for the encryption scheme on which a concrete KEM at hand is based. This note offers a trade-off between [HHM22] and its predecessors: it gives a bound for both rejection variants that is mostly based on [HHM22] but uses a more established correctness notion.
Blind signatures are a useful primitive for privacy-preserving applications such as electronic payments, e-voting, anonymous credentials, and more. However, existing practical blind signature schemes based on standard assumptions require either pairings or lattices. We present the first construction of a round-optimal blind signature in the random oracle model based on standard assumptions without resorting to pairings or lattices. In particular, our construction is secure under the strong RSA assumption and DDH (in pairing-free groups). For our construction, we provide a NIZK-friendly signature based on strong RSA, and efficiently instantiate Fischlin's generic framework (CRYPTO'06). Our blind signature scheme has signatures of size 4.28 KB and communication cost 62.19 KB. Along the way, we develop techniques that might be of independent interest. In particular, we provide efficient relaxed range proofs with subversion zero-knowledge and compact commitments to elements of arbitrary groups.
Algorithm hardness can be described by five categories: hardness in computation, in sequential computation, in memory, in energy consumption (or bandwidth), and in code size. Similarly, hardness can be a concern for solving or for verifying, depending on the context, and can depend on a secret trapdoor or be universally hard. Two main lines of research have investigated such problems: cryptographic puzzles, which gained popularity thanks to blockchain consensus systems (where solving must be moderately hard, and verification either public or private), and white-box cryptography (where solving must be hard without knowledge of the secret key). In this work, we improve upon the classification framework proposed by Biryukov and Perrin at Asiacrypt 2017 and offer a unified hardness framework, PURED, that can be used for measuring all these kinds of hardness, both in solving and in verifying. We also propose three new constructions that fill gaps previously left open in the literature (namely, trapdoor proof of CMC, trapdoor proof of code, and a hard challenge in sequential time trapdoored in verification), and analyse their hardness in the PURED framework.
DCT is a beyond-birthday-bound (BBB) deterministic authenticated encryption (DAE) mode proposed by Forler et al. at ACISP 2016, ensuring integrity by redundancy. The instantiation of DCT employs the BRW polynomial, which is more efficient than the usual polynomial function in GCM as it halves the number of multiplication operations. However, we show that DCT suffers from a small-stretch problem similar to that of GCM. When the stretch length $\tau$ is small, by choosing a special $m$-block message we can reduce the number of queries required for a successful forgery to $\mathcal{O}(2^{\tau}/m)$. We emphasize that this attack efficiently balances space and time complexity but does not contradict the security bounds of DCT. Finally, we propose an improved scheme named Robust DCT (RDCT), a minor change to DCT that improves security when $\tau$ is small and resists the above attack.
Graph convolutional networks (GCNs) are gaining popularity due to their powerful modelling capabilities. However, guaranteeing privacy is an issue when evaluating them on inputs that contain users' sensitive information such as financial transactions, medical records, etc. To address such privacy concerns, we design Entrada, a framework for securely evaluating GCNs that relies on the technique of secure multiparty computation (MPC). For efficiency and accuracy reasons, Entrada builds on the MPC framework of Tetrad (NDSS'22) and enhances it by providing the necessary primitives. Moreover, Entrada leverages the GraphSC paradigm of Araki et al. (CCS'21) to further enhance efficiency. This entails designing a secure and efficient shuffle protocol specifically in the 4-party setting, which, to the best of our knowledge, is done for the first time and may be of independent interest. Through extensive experiments, we showcase that the accuracy of secure GCN evaluation via Entrada is on par with its cleartext counterpart. We also benchmark the efficiency of Entrada with respect to the included primitives as well as the framework as a whole. Finally, we showcase Entrada's practicality by benchmarking a GCN-based fraud detection application.
In ASIACRYPT 2016, Bellare et al. first demonstrated that it is impossible to achieve subversion soundness and standard zero knowledge simultaneously. Subsequently, much effort has gone into constructing zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that satisfy subversion zero knowledge (S-ZK) and standard soundness from zk-SNARKs in the common reference string (CRS) model. These constructions can be regarded as secure in the bare public key (BPK) model, because the equivalence between S-ZK in the CRS model and uniform non-black-box zero knowledge in the BPK model was proved by Abdolmaleki et al. in PKC 2020. In this study, we propose the first publicly verifiable non-uniform ZK zk-SNARK scheme in the BPK model that maintains comparable efficiency with its conventional counterpart, and that is also compatible with the well-known transformation proposed by Bitansky et al. in TCC 2013 to obtain an efficient designated-verifier zk-SNARK. We achieve this goal by adding only a constant number of elements to the CRS, and by using an unconventional but natural method to transform Groth's zk-SNARK from EUROCRYPT 2016. In addition, we propose a new speed-up technique that provides a trade-off. Specifically, if a logarithmic number of elements are added to the CRS, then, depending on the circuit, the CRS verification time in our construction can be approximately 9%-23% shorter than in the conventional counterpart.
The Privacy Pass protocol allows users to redeem anonymously issued cryptographic tokens instead of solving annoying CAPTCHAs. The issuing authority verifies the credibility of the user, who can later use the pass while browsing the web using an anonymous or virtual private network. Hendrickson et al. proposed an IETF draft (privacypass-rate-limit-tokens-00) for a rate-limiting version of the protocol, also called rate-limited Privacy Pass (RlP). This version introduces a new actor, the mediator, which makes the two versions inherently different. The mediator applies access policies to rate-limit users' access to the service while remaining oblivious to the website/origin the user is trying to access. In this paper, we formally define the rate-limited Privacy Pass protocol and propose a game-based security model to capture the informal security notions introduced by Hendrickson et al. We show a construction from simple building blocks that fulfills our security definitions and even allows for a post-quantum secure instantiation. Interestingly, the instantiation proposed in the IETF draft is a specific case of our construction. Thus, we can reuse the security arguments for the generic construction and show that the version used in practice is secure.
Authenticated PIR enables a server to initially commit to a database of $N$ items, for which a client can later privately obtain individual items with complexity sublinear in $N$, with the added guarantee that the retrieved item is consistent with the committed database. A crucial requirement is privacy with abort, i.e., the server should not learn anything about a query even if it learns whether the client aborts. This problem was recently considered by Colombo et al. (USENIX '23), who proposed solutions secure under the assumption that the database is committed to honestly. Here, we close this gap and present a solution that tolerates fully malicious servers that provide potentially malformed commitments. Our scheme has communication and client computational complexity $\mathcal{O}_{\lambda}(\sqrt{N})$, relies solely on the DDH assumption, and does not introduce heavy machinery (e.g., generic succinct proofs). Privacy with abort holds provided the server succeeds in correctly answering $\lambda$ validation queries, which, from its perspective, are computationally indistinguishable from regular PIR queries. In fact, on the server side, our scheme is exactly the DDH-based scheme of Colombo et al.
The linear layer of block ciphers plays an important role in their security. In particular, ciphers designed following the wide-trail strategy use the branch number of the linear layer to derive bounds on the probability of linear and differential trails. At FSE 2014, the LS-design construction was introduced as a simple and regular structure for designing bitsliced block ciphers. It considers the internal state as a bit matrix, and alternately applies an identical S-Box to all the columns and an identical L-Box to all the rows. Security bounds are derived from the branch number of the L-Box. In this paper, we focus on bitsliced linear layers inspired by the LS-design construction and the Spook AEAD algorithm. We study the construction of bitsliced linear transformations with efficient implementations using XORs and rotations (optimized for bitsliced ciphers implemented on 32-bit processors) and a high branch number. In order to increase the density of the activity patterns, the linear layer is designed on the whole state, rather than using multiple parallel copies of an L-Box. Our main result is a linear layer for 128-bit ciphers with branch number 21, improving upon the best previous 32-bit transformation with branch number 12 and upon that of Spook with branch number 16.
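As a point of reference for the branch-number objective, here is a minimal brute-force computation on a toy 8-bit XOR/rotation L-box of our own choosing (hypothetical, and unrelated to the paper's 128-bit layer): the differential branch number of a linear map $L$ is $\min_{x \neq 0}(\mathrm{hw}(x) + \mathrm{hw}(L(x)))$.

```python
# A minimal sketch: exhaustively compute the branch number of a toy
# 8-bit XOR/rotation L-box L(x) = x ^ rot(x,1) ^ rot(x,3).
N = 8
MASK = (1 << N) - 1

def rot(x, r):
    return ((x << r) | (x >> (N - r))) & MASK

def L(x):
    return x ^ rot(x, 1) ^ rot(x, 3)

branch = min(bin(x).count("1") + bin(L(x)).count("1")
             for x in range(1, 1 << N))
print("branch number of toy L-box:", branch)
```

For a real 128-bit layer this exhaustive search is infeasible, which is part of what makes the paper's search for high-branch-number bitsliced transformations nontrivial.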
Secure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols---in particular, when communication can be sublinear in the circuit representation size of the desired function. Significant advances have been made in affirmatively answering this question within the two-party setting, based on a variety of structures and hardness assumptions. In contrast, in the multi-party setting, only one general approach is known: using Fully Homomorphic Encryption (FHE). This remains the state of affairs even for just three parties, with two corruptions. We present a framework for achieving secure sublinear-communication $(N+1)$-party computation, building from a particular form of Function Secret Sharing for only $N$ parties. In turn, we demonstrate implications to sublinear secure computation for various function classes in the 3-party and 5-party settings based on an assortment of assumptions not known to imply FHE.
This work describes an existential signature forgery vulnerability of the current CMS and PKCS#7 signature standards. The vulnerability results from an ambiguity in how the signed message is processed during signature verification. Specifically, the absence or presence of the so-called SignedAttributes field determines whether the signature message digest receives as input the message directly or the SignedAttributes, a DER-encoded structure that contains a digest of the message. If an attacker takes a CMS or PKCS#7 signed message $M$ which was originally signed with SignedAttributes present, then he can craft a new message $M'$ that was never signed by the signer, has the DER-encoded SignedAttributes of the original message as its content, and verifies correctly against the original signature of $M$. Due to the limited flexibility of the forged message and the limited control the attacker has over it, the fraction of vulnerable systems must be assumed to be small, but due to the wide deployment of the affected protocols, such instances cannot be excluded. We propose a countermeasure based on attack detection that reliably prevents the attack.
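The following sketch models the ambiguity in a deliberately simplified way (the byte strings and the `verify` routine are hypothetical stand-ins, not real DER/CMS processing): because verification hashes the SignedAttributes when present and the raw message otherwise, presenting the encoded attributes themselves as a fresh attribute-less message $M'$ reuses the original signature.

```python
import hashlib

# Simplified model of the verification ambiguity described above.
def digest(data):
    return hashlib.sha256(data).digest()

def verify(signed_digest, message, signed_attrs):
    # Flaw: the digested input depends on whether SignedAttributes is present.
    data = signed_attrs if signed_attrs is not None else message
    return digest(data) == signed_digest

# Original message signed WITH SignedAttributes present.
attrs = b"DER(SignedAttributes including digest of original message)"
signed_digest = digest(attrs)            # what the signature actually covers

assert verify(signed_digest, b"original message", attrs)
# Forgery: present the encoded attributes as a new message M' with no
# SignedAttributes field -- the same signature still verifies.
assert verify(signed_digest, attrs, None)
```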
Fully homomorphic encryption (FHE) is an advanced cryptographic technique that allows computations (i.e., addition and multiplication) over encrypted data. After years of effort, the performance of FHE has been significantly improved and it has moved from theory to practice. The transciphering framework is another important technique in FHE to address the issue of ciphertext expansion and reduce the client-side computational overhead. Motivated by this framework, several FHE-friendly symmetric-key primitives have been proposed since the publication of LowMC at EUROCRYPT 2015. To apply the transciphering framework to the CKKS scheme, a new transciphering framework called the Real-to-Finite-Field (RtF) framework and a corresponding FHE-friendly symmetric-key primitive called HERA were proposed at ASIACRYPT 2021. Although HERA has a very similar structure to AES, it differs considerably in the following aspects: 1) the power map $x\mapsto x^3$ is used as the S-box; 2) a randomized key schedule is used; 3) it is defined over a prime field $\mathbb F_p$ with $p>2^{16}$. In this work, we perform the first third-party cryptanalysis of HERA, showing how to mount new algebraic attacks with multiple collisions in the round keys. Specifically, due to the special way the round keys are randomized in HERA, we find it possible to peel off the last nonlinear layer by using collisions in the last-round key together with a simple property of the power map. In this way, we can construct an overdefined system of equations of much lower degree in the key, and efficiently solve the system via the linearization technique. As a result, for HERA with 192 and 256 bits of security, respectively, we can break some parameters under the same assumption made by the designers, namely that the algebra constant $\omega$ for Gaussian elimination is $\omega=2$, i.e., Gaussian elimination on an $n\times n$ matrix takes $\mathcal{O}(n^{\omega})$ field operations. With more conservative choices like $\omega\in\{2.8,3\}$, our attacks can still reduce the security margins of some variants of HERA to only 1 round. However, the security of HERA with 80 and 128 bits of security is not affected by our attacks due to the high cost of finding multiple collisions. In any case, our attacks reveal a weakness of HERA caused by the randomized key schedule and its small state size.
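For readers unfamiliar with the linearization technique invoked above, the toy example below (entirely our own, with hypothetical parameters over $\mathbb{F}_7$) shows the idea: every monomial of the overdefined system is treated as a fresh linear variable, after which plain Gaussian elimination modulo $p$ recovers the assignment.

```python
p = 7
x1, x2 = 3, 5                     # hidden solution (used only to build RHS)
mons = [x1, x2, x1*x2, x1*x1, x2*x2, 1]   # linearized variables

rows = []
for a in range(1, 7):             # Vandermonde coefficient rows -> invertible
    coeffs = [pow(a, j, p) for j in range(6)]
    rhs = sum(c * m for c, m in zip(coeffs, mons)) % p
    rows.append(coeffs + [rhs])

# Gaussian elimination mod p (reduced row echelon form).
for col in range(6):
    piv = next(r for r in range(col, 6) if rows[r][col] % p)
    rows[col], rows[piv] = rows[piv], rows[col]
    inv = pow(rows[col][col], -1, p)
    rows[col] = [v * inv % p for v in rows[col]]
    for r in range(6):
        if r != col and rows[r][col]:
            f = rows[r][col]
            rows[r] = [(u - f * v) % p for u, v in zip(rows[r], rows[col])]

print("recovered (x1, x2) =", rows[0][-1], rows[1][-1])   # -> 3 5
```

The attack's cost analysis hinges on exactly this step: once enough equations are collected, the dominant cost is Gaussian elimination on the linearized system, hence the dependence on $\omega$.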
Lasso (Setty, Thaler, Wahby, ePrint 2023/1216) is a recent lookup argument that ensures that the prover cryptographically commits to only "small" values. This note describes BabySpartan, a SNARK for a large class of constraint systems that achieves the same property. The SNARK is a simple combination of SuperSpartan and Lasso. The specific class of constraint systems supported is a generalization of so-called Plonkish constraint systems (and a special case of customizable constraint systems (CCS)). Whereas a recent work called Jolt (Arun, Setty, and Thaler, ePrint 2023/1217) can be viewed as an application of Lasso to uniform computation, BabySpartan can be viewed as applying Lasso to non-uniform computation.
We present a secret-key encryption scheme based on random rank-metric ideal linear codes with a simple decryption circuit. It supports unlimited homomorphic additions and plaintext absorptions, as well as a fixed but arbitrary number of homomorphic multiplications. We study a candidate bootstrapping algorithm that requires no multiplications, only additions and plaintext absorptions. This latter operation is therefore very efficient in our scheme, whereas bootstrapping is usually the main reason penalizing the performance of other fully homomorphic encryption schemes. However, the security reduction of our scheme restricts the number of independent ciphertexts that can be published. In particular, this prevents securely evaluating the bootstrapping algorithm, as the number of ciphertexts in the key-switching material is too large. Our scheme is nonetheless the first somewhat homomorphic encryption scheme based on random ideal codes and a first step towards full homomorphism. Random ideal codes give stronger security guarantees than existing constructions based on highly structured codes. We give concrete parameters for our scheme showing that it achieves competitive sizes and performance, with a key size of 3.7 kB and a ciphertext size of 0.9 kB when a single multiplication is allowed.
A secret-shared shuffle (SSS) protocol permutes a secret-shared vector using a random secret permutation. It has found numerous applications; however, it is also an expensive operation and often a performance bottleneck. Chase et al. (Asiacrypt'20) recently proposed a highly efficient semi-honest two-party SSS protocol known as the CGP protocol. It utilizes purposely designed pseudorandom correlations that facilitate a communication-efficient online shuffle phase. That said, semi-honest security is insufficient in many real-world application scenarios, since shuffling is usually used for highly sensitive applications. Considering this, recent works (CANS'21, NDSS'22) attempted to enhance the CGP protocol with malicious security over authenticated secret sharings. However, we find that these attempts are flawed: malicious adversaries can still learn private information via malicious deviations, as demonstrated by concrete attacks proposed in this paper. The question is then how to fill the gap and design a maliciously secure CGP shuffle protocol. We answer this question by introducing a set of lightweight correlation checks and a leakage reduction mechanism. We then apply our techniques with authenticated secret sharings to achieve malicious security. Notably, our protocol, while increasing security, remains efficient. In the two-party setting, experimental results show that our maliciously secure protocol introduces an acceptable overhead compared to its semi-honest version and is more efficient than the state-of-the-art maliciously secure SSS protocol from the MP-SPDZ library.
A multisignature scheme is used to aggregate signatures by multiple parties on a common message $m$ into a single short signature on $m$. Multisignatures are used widely in practice, most notably in proof-of-stake consensus protocols. In existing multisignature schemes, the verifier needs the public keys of all the signers in order to verify a multisignature issued by some subset of signers. We construct new practical multisignature schemes with three properties: (i) the verifier only needs to store a constant-size public key in order to verify a multisignature by an arbitrary subset of parties, (ii) the signature size is constant beyond the description of the signing set, and (iii) signers generate their secret signing keys locally, that is, without a distributed key generation protocol. Existing schemes satisfy properties (ii) and (iii). The new capability is property (i), which dramatically reduces the verifier's memory requirements from linear in the number of signers to constant. We give two pairing-based constructions: one in the random oracle model and one in the plain model. We also show that by relaxing property (iii), that is, allowing for a simple distributed key generation protocol, we can further improve efficiency while continuing to satisfy properties (i) and (ii). We give a pairing-based scheme and a lattice-based scheme in this relaxed model.
Traditional key stretching lacks a strict time guarantee due to the ease of parallelized password guessing by attackers. This paper introduces Sloth, a key stretching method leveraging the Secure Element (SE) commonly found in modern smartphones to provide a strict rate limit on password guessing. While this would be straightforward with full access to the SE, Android and iOS provide only a very limited API. Sloth utilizes the existing developer SE API and novel cryptographic constructions to build an effective rate limit for password guessing on recent Android and iOS devices. Our approach ensures robust security even for short, randomly generated, six-character alphanumeric passwords against adversaries with virtually unlimited computing resources. Our solution is compatible with approximately 96% of iPhones and 45% of Android phones, and Sloth integrates seamlessly without device or OS modifications, making it immediately usable by app developers today. We formally define the security of Sloth and evaluate its performance on various devices. Finally, we present HiddenSloth, a deniable encryption scheme that leverages Sloth and the SE to withstand multi-snapshot adversaries.
We introduce a new cryptographic primitive, called Completely Anonymous Signed Encryption (CASE). CASE is a public-key authenticated encryption primitive that offers anonymity for senders as well as receivers. A "case-packet" should appear, without a (decryption) key for opening it, to be a black box that reveals no information at all about its contents. To fully decase a case-packet, so that the message is retrieved and authenticated, a verification key is also required. Defining security for this primitive is subtle. We present a relatively simple Chosen Objects Attack (COA) security definition. Validating this definition, we show that it implies a comprehensive indistinguishability-preservation definition in the real-ideal paradigm. To obtain the latter definition, we extend the Cryptographic Agents framework of [2, 3] to allow maliciously created objects. We also provide a novel and practical construction for COA-secure CASE under standard assumptions in public-key cryptography, and in the standard model. We believe CASE can be a staple in future cryptographic libraries, thanks to its robust security guarantees and efficient instantiations based on standard assumptions.
We introduce an efficient SNARK for towers of binary fields. Adapting Brakedown (CRYPTO '23), we construct a multilinear polynomial commitment scheme suitable for polynomials over tiny fields, including the field with two elements. Our commitment scheme, unlike those of previous works, treats small-field polynomials with zero embedding overhead. We further introduce binary-field adaptations of HyperPlonk's (EUROCRYPT '23) product and permutation checks, as well as of Lasso's lookup. Our scheme's binary PLONKish variant captures standard hash functions, like Keccak-256 and Grøstl, extremely efficiently. With recourse to thorough performance benchmarks, we argue that our scheme can efficiently generate precisely the Keccak-256 proofs that critically underlie modern efforts to scale Ethereum.
CRYSTALS-Kyber is a key-encapsulation mechanism whose security is based on the hardness of solving the learning-with-errors (LWE) problem over module lattices. Its specification prescribes the use of the Number Theoretic Transform (NTT) for efficient polynomial multiplication. Side-channel assisted attacks against Post-Quantum Cryptography (PQC) algorithms like Kyber remain a concern in the ongoing standardization process of quantum-computer-resistant cryptosystems. Among these attacks, correlation power analysis (CPA) is emerging as a popular option because it does not require detailed knowledge about the attacked device and can reveal the secret key even if the recorded power traces are extremely noisy. In this paper, we present a two-step attack achieving full-key recovery on lattice-based cryptosystems that utilize the NTT for efficient polynomial multiplication. First, we use CPA to recover a portion of the secret key from the power consumption of these polynomial multiplications in the decryption process. Then, using this information, we are able to fully recover the secret key by constructing an LWE problem with a smaller lattice rank and solving it with lattice reduction algorithms. Our attack can be extended to other cryptosystems using NTT-based polynomial multiplication, including Saber. It can be further parallelized, and experiments on simulated traces show that the whole process can be done within 20 minutes on a 16-core machine with 200 traces. Compared to other CPA attacks targeting the NTT in such cryptosystems, our attack achieves lower runtime in practice. Furthermore, we can theoretically decrease the number of traces needed by using lattice reduction if the same measurement is used. Our lattice attack also outperforms the state-of-the-art result on integrating side-channel hints into lattices; however, the improvement heavily depends on the implementation of the NTT chosen by the users.
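Since the attack targets NTT-based multiplication in $\mathbb{Z}_q[x]/(x^n+1)$, a toy sketch of the underlying negacyclic multiplication may help fix ideas. The parameters below ($q=17$, $n=4$, $\psi=2$ with $\psi^n \equiv -1 \bmod q$) are illustrative only; Kyber itself uses $q=3329$ and $n=256$, and a real NTT performs the cyclic convolution in $O(n \log n)$ via butterflies rather than naively.

```python
q, n, psi = 17, 4, 2                     # toy parameters; psi^n = -1 (mod q)

def schoolbook_negacyclic(a, b):
    """Multiply in Z_q[x]/(x^n + 1) directly, reducing via x^n = -1."""
    c = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            c[i + j] += a[i] * b[j]
    return [(c[i] - c[i + n]) % q for i in range(n)]

def twisted_cyclic(a, b):
    """Same product via the psi-twist underlying negacyclic NTTs:
    scale inputs by psi^i, do a plain cyclic convolution, unscale."""
    at = [a[i] * pow(psi, i, q) % q for i in range(n)]
    bt = [b[i] * pow(psi, i, q) % q for i in range(n)]
    ct = [0] * n
    for i in range(n):
        for j in range(n):
            ct[(i + j) % n] = (ct[(i + j) % n] + at[i] * bt[j]) % q
    ipsi = pow(psi, -1, q)
    return [ct[i] * pow(ipsi, i, q) % q for i in range(n)]

a, b = [1, 2, 3, 4], [5, 6, 7, 0]
assert schoolbook_negacyclic(a, b) == twisted_cyclic(a, b)
```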
Verification of program safety is often reducible to proving the unsatisfiability (i.e., validity) of a formula in Satisfiability Modulo Theories (SMT): Boolean logic combined with theories that formalize arbitrary first-order fragments. Zero-knowledge (ZK) proofs allow SMT formulas to be validated without revealing the underlying formulas or their proofs to other parties, which is a crucial building block for proving the safety of proprietary programs. Recently, Luo et al. (CCS 2022) studied the simpler problem of proving the unsatisfiability of pure Boolean formulas, but their approach does not support safety proofs generated by SMT solvers. This work presents ZKSMT, a novel framework for proving the validity of SMT formulas in ZK. We design a virtual machine (VM) tailored to efficiently represent the verification process of SMT validity proofs in ZK. Our VM can support the vast majority of popular theories used when proving program safety while being complete and sound. To demonstrate this, we instantiate the commonly used theories of equality and linear integer arithmetic in our VM with theory-specific optimizations for proving them in ZK. ZKSMT achieves high practicality even when running on realistic SMT formulas generated by Boogie, a common tool for software verification. It achieves a three-order-of-magnitude improvement compared to a baseline that executes the proof verification code in a general ZK system.
Regular access to unpredictable and bias-resistant randomness is important for applications such as blockchains, voting, and secure distributed computing. Distributed random beacon protocols address this need by distributing trust across multiple nodes, with a majority of them assumed to be honest. These protocols have found applications in blockchain technology, leading to the proposal of several distributed random beacon protocols, some of which have already been implemented. However, many current random beacon systems rely on threshold cryptographic setups or exhibit high computational costs, while others assume partially or bounded synchronous networks. To overcome these limitations, we propose HashRand, a computation- and communication-efficient asynchronous random beacon protocol that requires only a secure hash function and pairwise secure channels to generate beacons. HashRand has a per-node communication complexity of $\mathcal{O}(\lambda n \log(n))$ bits per beacon. Its computational efficiency stems from the fact that a one-way hash computation is two orders of magnitude faster than a discrete-log exponentiation. Interestingly, besides reduced overhead, HashRand achieves post-quantum security by relying on the security of the hash function against quantum adversaries, setting it apart from other random beacon protocols that use discrete-log cryptography. In a geo-distributed testbed of $n=160$ nodes, HashRand produces 1 beacon per second, a rate at least 4x higher than Spurt. We also demonstrate the practical utility of HashRand by implementing a post-quantum secure asynchronous SMR protocol, which achieves a response rate of over 122k txns per second over a WAN at $n=40$ nodes.
Given a set of matrices $\mathbf{A} := \{A_0, \dotsc, A_{k-1}\}$, and a matrix $M$ guaranteed to be the product of some ordered subset $\mathbf{L}\subset\mathbf{A}$, can $\mathbf{L}$ be efficiently recovered? We begin by observing that the answer is positive under some assumptions on $\mathbf{A}$. Noting that appropriate transformations seem to make $\mathbf{L}$'s recovery difficult, we provide the blueprint of two new public-key cryptosystems based upon this problem. We term those constructions "blueprints" because, given their novelty, we are still uncertain of their exact security. Yet, we daringly conjecture that even if attacks are found on the proposed constructions, these attacks could be thwarted by adjustments in the key generation, key size or the encryption mechanism, thereby resulting in the long run in fully-fledged public-key cryptosystems that do not seem to belong to any of the mainstream public-key encryption paradigms known to date.
The LowMC family of SPN block ciphers proposed by Albrecht et al. was designed specifically for MPC-/FHE-/ZKP-friendly use cases. It is notably used as the underlying block cipher of PICNIC, one of the alternate third-round candidate digital signature algorithms in the NIST post-quantum cryptography standardization. The security of PICNIC is closely related to the difficulty of recovering the secret key of LowMC from a given plaintext/ciphertext pair, which raises new challenges for security evaluation under extremely low data complexity. In this paper, we improve the attacks on LowMC under low data complexity, i.e., 1 or 2 chosen plaintext/ciphertext pairs. For the difference enumeration attack with 2 chosen plaintexts, we propose new algebraic methods to better exploit the nonlinear relations among the introduced variables, based on the attack framework proposed by Liu et al. at ASIACRYPT 2022. With this technique, we significantly extend the number of attacked rounds for LowMC with partial nonlinear layers and improve the success probability from around 0.5 to over 0.9. The security margin of some instances can be reduced to only 3 or 4 rounds. For the key-recovery attack using a single plaintext, we adopt a different linearization strategy to reduce the huge memory consumption caused by the polynomial methods for solving multivariate equation systems. The memory complexity drops drastically for all 5-/6-round LowMC instances with full nonlinear layers, at the cost of a small factor in time complexity. For 5-round LowMC instances with a block size of 129, the memory complexity decreases from $2^{86.46}$ bits to $2^{48.18}$ bits while the time complexity even decreases slightly. Our results indicate that the security of different LowMC instances under extremely low data complexity still needs further exploration.
Parallel repetition refers to a set of valuable techniques for reducing the soundness error of probabilistic proofs while saving on certain efficiency measures. Parallel repetition has been studied for interactive proofs (IPs) and multi-prover interactive proofs (MIPs). In this paper we initiate the study of parallel repetition for probabilistically checkable proofs (PCPs). We show that, perhaps surprisingly, parallel repetition of a PCP can increase soundness error, in fact bringing the soundness error to one as the number of repetitions tends to infinity. This "failure" of parallel repetition is common: we find that it occurs for a wide class of natural PCPs for NP-complete languages. We explain this unexpected phenomenon by providing a characterization result: the parallel repetition of a PCP brings the soundness error to zero if and only if a certain "MIP projection" of the PCP has soundness error strictly less than one. We show that our characterization is tight via a suitable example. Moreover, for those cases where parallel repetition of a PCP does bring the soundness error to zero, the aforementioned connection to MIPs offers preliminary results on the rate of decay of the soundness error. Finally, we propose a simple variant of parallel repetition, called consistent parallel repetition (CPR), which has the same randomness complexity and query complexity as the plain variant of parallel repetition. We show that CPR brings the soundness error to zero for every PCP (with non-trivial soundness error). In fact, we show that CPR decreases the soundness error at an exponential rate in the repetition parameter.
Integral, impossible-differential (ID), and zero-correlation (ZC) attacks are three of the most important attacks on block ciphers. However, manually finding these attacks can be a daunting task, which is why automated methods are becoming increasingly important. Most automatic tools for integral, ZC, and ID attacks have focused only on finding distinguishers rather than complete attacks. At EUROCRYPT~2023, Hadipour et al. proposed a generic and efficient constraint programming (CP) model based on satisfiability for finding ID, ZC, and integral distinguishers. This model can be extended to a unified CP model for finding full key-recovery attacks. However, it has limitations, including the need to determine the contradiction location beforehand and a cell-wise model unsuitable for weakly aligned ciphers like Ascon and PRESENT. The authors also deferred the development of a CP model for the partial-sum technique in key recovery to future work. In this paper, we enhance Hadipour et al.'s method in several ways. First, we remove the limitation of determining the contradiction location in advance. Second, we show how to extend the distinguisher model to a bit-wise model, considering the internal structure of S-boxes and keeping the model based on satisfiability. Third, we introduce a CP model for the partial-sum technique for the first time. To show the usefulness and versatility of our approach, we apply it to various designs, from strongly aligned ones like ForkSKINNY and QARMAv2 to weakly aligned ones such as Ascon and PRESENT, yielding significantly improved results. To mention a few of our results: we improve the integral distinguisher of QARMAv2-128 (resp. QARMAv2-64) by 7 (resp. 5) rounds, and the integral distinguisher of ForkSKINNY by 1 round, solely thanks to our cell-wise distinguisher modeling. Using our new bit-wise modeling, our tool can find a group of $2^{155}$ 5-round ID and ZC distinguishers for Ascon in a single run, taking a few minutes on a regular laptop. The new CP model for the partial-sum technique enhances integral attacks on all SKINNY variants, notably improving the best attack on SKINNY-$n$-$n$ in the single-key setting by 1 round. We also enhance ID attacks on ForkSKINNY and provide the first analysis of this cipher in a limited reduced-round setting. Our methods are generic and applicable to other block ciphers.
An $(n, t)$-Non-Interactive Verifiable Secret Sharing (NI-VSS) scheme allows a dealer to share a secret among $n$ parties, such that all the parties can verify the validity of their shares and only a set of more than $t$ of them can access the secret. In this paper, we present $\Pi$, a unified framework for building NI-VSS schemes in the honest-majority setting. Notably, $\Pi$ does not rely on homomorphic commitments; instead, it requires a Random Oracle (RO) and any commitment scheme that, in addition to its core hiding and binding properties, may also be homomorphic and/or PQ-secure.
- When employing Discrete Logarithm (DL)-based commitments, $\Pi$ enables the construction of two novel NI-VSS schemes in the RO model, named $\Pi_P$ and $\Pi_F$. In comparison to the well-known Pedersen and Feldman VSS schemes, both $\Pi_P$ and $\Pi_F$ require $O(1)$ exponentiations in the verification process, as opposed to $O(t)$, albeit at the expense of a slightly slower sharing phase and increased communication.
- By instantiating $\Pi$ with a hash-based commitment scheme, we obtain an efficient NI-VSS scheme in the quantum RO model, labeled $\Pi_{LA}$ (pronounced [paɪla]). $\Pi_{LA}$ outperforms the recent construction by Atapoor, Baghery, Cozzo, and Pedersen from Asiacrypt'23 by a constant factor in all metrics. $\Pi_{LA}$ can also be viewed as an amplified version of the $\it{simple}$ NI-VSS scheme proposed by Gennaro, Rabin, and Rabin at PODC'98.
- Building upon $\Pi_F$, we construct a Publicly VSS (PVSS) scheme, labeled $\Pi_S$, which can be seen as a new variant of Schoenmakers' scheme from Crypto'99. To this end, we first define the Polynomial Discrete Logarithm (PDL) problem as a generalization of DL, and then build a variant of the Schnorr Proof of Knowledge (PoK) scheme based on the new hardness assumption. We think the PDL relation and the associated PoK scheme can be independently interesting for Shamir-based threshold protocols.
We believe $\Pi$ is general enough to be employed in various contexts such as lattices, isogenies, and an extensive array of practical use cases.
We present the staircase attack, the first attack on the incentive mechanism of the Proof-of-Stake (PoS) protocol used in the Ethereum 2.0 beacon chain. Our attack targets the penalty mechanism that penalizes inactive participation, and can make honest validators suffer from penalties even if they strictly follow the specification of the protocol. We show both theoretically and experimentally that if the adversary controls 29.6% of the stake in a moderate-size system, the attack can be launched continuously, so that eventually all honest validators lose their incentives. In contrast, the adversarial validators can still receive incentives, and the stake owned by the adversary can eventually exceed the $1/3$ threshold (a system assumption), posing a threat to the security properties of the system. In practice, the feasibility of the attack is directly related to two parameters: the number of validators and the parameter MAX_ATTESTATION, the maximum number of attestations (i.e., votes) that can be included in each block. We further modify our attack such that, with the current system setup (850,000 validators and MAX_ATTESTATION=128), the attack can be launched continuously with probability 80.25%. As a result, the incentives any honest validator receives are only 28.9% of their fair share.
Digital signatures are a cornerstone of security and trust in cryptography, providing authenticity, integrity, and non-repudiation. Despite their benefits, traditional digital signature schemes suffer from inherent immutability, offering no provision for a signer to retract a previously issued signature. This paper introduces the concept of a withdrawable signature scheme, which allows for the retraction of a signature without revealing the signer's private key or compromising the security of other signatures the signer created beforehand. This property, defined as ``withdrawability'', is particularly relevant in decentralized systems, such as e-voting, blockchain-based smart contracts, and escrow services, where signers may wish to revoke or alter their commitment. The core idea of our construction of a withdrawable signature scheme is to ensure that the parties holding a withdrawable signature cannot be convinced whether the signer signed a specific message. This ability to generate a signature while preventing its validity from being verified is a fundamental requirement of our scheme, epitomizing the property of \textit{withdrawability}. After formally defining security notions for withdrawable signatures, we present two constructions of the scheme, one based on pairings and one on the discrete logarithm. We provide security proofs showing that both constructions are unforgeable under insider corruption and satisfy withdrawability. We anticipate our new type of signature will significantly enhance flexibility and security in digital transactions and communications.
The intuitions behind succinct proof systems are often difficult to separate from some of the deep cryptographic techniques that are used in their construction. In this paper, we show that, using some simple abstractions, a number of commonly used tools in the construction of succinct proof systems may be viewed as basic consequences of linear algebra over finite fields. We introduce notation which considerably simplifies these constructions and slowly build a toolkit of useful techniques that can be combined to create different protocols. We also show a simple 'probabilistic calculus' which specifies how to combine these tools and gives bounds on their resulting security. To show the power of these abstractions and the toolkit, we give a short proof of the security of the FRI protocol. Along the way, we discuss some natural generalizations of protocols in the literature and propose a conjecture related to proximity testing using linear error-correcting codes that is of independent interest.
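As a flavor of the "linear algebra plus probability" viewpoint, the sketch below (our illustration, not the paper's toolkit or notation) shows the archetypal probabilistic ingredient: testing whether two polynomials agree by evaluating both at a random field point, with failure probability at most degree/field-size (Schwartz-Zippel).

```python
import random

# Two degree-3 polynomials over a toy prime field F_p, differing in one
# coefficient.  A single random evaluation distinguishes them except with
# probability at most 3/p.
p = 2**61 - 1                      # a Mersenne prime, used as the field size

def eval_poly(coeffs, x):          # coeffs[i] is the coefficient of x^i
    acc = 0
    for c in reversed(coeffs):     # Horner's rule mod p
        acc = (acc * x + c) % p
    return acc

f = [1, 2, 3, 4]                   # f(x) = 1 + 2x + 3x^2 + 4x^3
g = [1, 2, 3, 5]                   # differs from f in the leading coefficient

r = random.randrange(p)
print("agree at random point?", eval_poly(f, r) == eval_poly(g, r))
# -> almost surely False; a union bound over many such checks gives the
#    kind of combined security bound the 'probabilistic calculus' tracks.
```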
Post-quantum signature schemes based on the MPC-in-the-Head (MPCitH) paradigm have recently attracted significant attention, as their security depends solely on the one-wayness of the underlying primitive, providing diversity for the hardness assumptions in post-quantum cryptography. Kim et al. proposed AIM as an MPCitH-friendly one-way function characterized by large algebraic S-boxes and a parallel design, which leads to a short signature size (CCS 2023). Recently, Liu et al. proposed a fast exhaustive search attack on AIM (ePrint 2023), which degrades the security of AIM by up to 13 bits; while communicating with us, the authors of that attack also pointed out another possible vulnerability in AIM. In this paper, we propose AIM2, which mitigates all these vulnerabilities, and analyze its security against algebraic attacks.
Recent works on lattice-based extractable polynomial commitments can be grouped into two classes: (i) non-interactive constructions that stem from the functional commitment by Albrecht, Cini, Lai, Malavolta and Thyagarajan (CRYPTO 2022), and (ii) lattice adaptations of the Bulletproofs protocol (S&P 2018). The former class enjoys security in the standard model, albeit at the cost of a knowledge assumption. In contrast, Bulletproofs-like protocols can be made secure under falsifiable assumptions, but due to technical limitations regarding subtractive sets, they only offer inverse-polynomial soundness error. This issue becomes particularly problematic when transforming these protocols to the non-interactive setting using the Fiat-Shamir paradigm. In this work, we propose the first lattice-based non-interactive extractable polynomial commitment scheme which achieves polylogarithmic proof size and verifier runtime (in the length of the committed message) under standard assumptions. At the core of our work lies a new tree-based commitment scheme, along with an efficient proof of polynomial evaluation inspired by FRI (ICALP 2018). Natively, the construction is secure under a "multi-instance version" of the Power-Ring BASIS assumption (ePrint 2023/846). We then fully reduce security to the Module-SIS assumption by introducing several re-randomisation techniques which can be of independent interest.
The Algorand consensus protocol is interesting both in theory and in practice. On the theoretical side, to achieve adaptive security, it introduces the novel idea of player replaceability, where each step of the protocol is executed by a different randomly selected committee whose members remain secret until they send their first and only message. The protocol provides consistency under arbitrary network conditions and liveness under intermittent network partitions. On the practical side, the protocol is used to secure the Algorand cryptocurrency, whose total value is approximately $850M at the time of writing. The Algorand protocol in use differs substantially from the protocols described in the published literature on Algorand. Despite its significance, it lacks a formal analysis. In this work, we describe and analyze the Algorand consensus protocol as deployed today in Algorand’s ecosystem. We show that the overall protocol framework is sound by characterizing network conditions and parameter settings under which the protocol can be proven secure.
We study the problem of committee selection in the context of proof-of-stake consensus mechanisms or distributed ledgers. These settings determine a family of participating parties---each of which has been assigned a non-negative "stake"---and are subject to an adversary that may corrupt a subset of the parties. The challenge is to select a committee of participants that accurately reflects the proportion of corrupt and honest parties, as measured by stake, in the full population. The trade-off between committee size and the probability of selecting a committee that over-represents the corrupt parties is a fundamental factor in both security and efficiency of proof-of-stake consensus, as well as committee-run layer-two protocols. We propose and analyze several new committee selection schemes that improve upon existing techniques by adopting low-variance assignment of certain committee members that hold significant stake. These schemes provide notable improvements to the size--security trade-off arising from the stake distributions of many deployed ledgers.
Contact discovery is a crucial component of social applications, facilitating interactions between registered contacts. This work introduces Arke, a novel approach to contact discovery that addresses the limitations of existing solutions in terms of privacy, scalability, and reliance on trusted third parties. Arke ensures the unlinkability of user interactions, mitigates enumeration attacks, and operates without single points of failure or trust. Notably, Arke is the first contact discovery system whose performance is independent of the total number of users and the first that can operate in a Byzantine setting. It achieves its privacy goals through an unlinkable handshake mechanism built on top of an identity-based non-interactive key exchange. By leveraging a custom distributed architecture, Arke forgoes the expense of consensus to achieve scalability while maintaining consistency in a Byzantine fault tolerant environment. Performance evaluations demonstrate that Arke can support enough throughput to operate at a planetary scale while maintaining sub-second latencies in a large geo-distributed setting.
Decentralized Finance (DeFi) is a new paradigm in the creation, distribution, and utilization of financial services via the integration of blockchain technology. Our research provides a comprehensive introduction and meticulous classification of various DeFi applications. Beyond that, we thoroughly analyze the risks of these applications from both technical and economic perspectives, spanning multiple layers. Finally, we point out research gaps and avenues, covering technical advancements, innovative economics, and sociological and ecological optimization.
We present new protocols for threshold Schnorr signatures that work in an asynchronous communication setting, providing robustness and optimal resilience. These protocols provide unprecedented performance in terms of communication and computational complexity. In terms of communication complexity, for each signature, a single party must transmit a few dozen group elements and scalars across the network (independent of the size of the signing committee). In terms of computational complexity, the amortized cost for one party to generate a signature is actually less than that of just running the standard Schnorr signing or verification algorithm (at least for moderately sized signing committees, say, up to 100). For example, we estimate that with a signing committee of 49 parties, at most 16 of which are corrupt, we can generate 50,000 Schnorr signatures per second (assuming each party can dedicate one standard CPU core and 500 Mbps of network bandwidth to signing). Importantly, this estimate includes both the cost of an offline precomputation phase (which just churns out message-independent "presignatures") and an online signature generation phase. Also, the online signing phase can generate a signature with very little network latency (just one to three rounds, depending on how throughput and latency are balanced). To achieve this result, we provide two new innovations. One is a new secret sharing protocol (again, asynchronous, robust, optimally resilient) that allows the dealer to securely distribute shares of a large batch of ephemeral secret keys, and to publish the corresponding ephemeral public keys. To achieve better performance, our protocol minimizes public-key operations and, in particular, is based on a novel technique that avoids the traditional approach based on "polynomial commitments". The second innovation is a new algorithm to efficiently combine ephemeral public keys contributed by different parties (some possibly corrupt) into a smaller number of secure ephemeral public keys. This new algorithm is based on a novel construction of a so-called "super-invertible matrix" along with a corresponding highly efficient algorithm for multiplying this matrix by a vector of group elements. As protocols for verifiably sharing a secret key with an associated public key and the technology of super-invertible matrices both play a major role in threshold cryptography and multi-party computation, our two new innovations should have applicability well beyond threshold Schnorr signatures.
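For reference, the comparison baseline mentioned above is ordinary single-party Schnorr signing and verification; a minimal sketch follows. The toy group parameters ($p=23$, $q=11$, $g=2$, where $g$ has order $q$ in $\mathbb{Z}_p^*$) are for illustration only and provide no security.

```python
import hashlib, secrets

p, q, g = 23, 11, 2                 # toy parameters; g has order q mod p

def H(R: int, m: bytes) -> int:
    h = hashlib.sha256(str(R).encode() + m).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)          # (secret key, public key)

def sign(x, m):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)                # nonce commitment
    return R, (r + H(R, m) * x) % q

def verify(y, m, sig):
    R, s = sig                      # check g^s == R * y^c (mod p)
    return pow(g, s, p) == R * pow(y, H(R, m), p) % p

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```

In the threshold protocols above, the nonce $r$ and key $x$ are secret-shared among the committee; the claim is that the per-party amortized work is below even this single-party baseline.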
Despite active research on secret-sharing schemes for arbitrary access structures for more than 35 years, we do not understand their share size: the best known upper bound for an arbitrary $n$-party access structure is $2^{O(n)}$, while the best known lower bound is $\Omega(n/\log(n))$. As far as we know, the share size can be anywhere between these bounds. To better understand this question, one can study specific families of secret-sharing schemes. For example, linear secret-sharing schemes, in which the sharing and reconstruction are computed by linear mappings, have been studied in many papers; e.g., it is known that they require shares of size at least $2^{0.5n}$. Secret-sharing schemes in which the sharing and/or reconstruction are computed by low-degree polynomials have recently been studied by Paskin-Cherniavsky and Radune [ITC 2020] and by Beimel, Othman, and Peter [CRYPTO 2021]. It was shown that secret-sharing schemes with sharing and reconstruction computed by polynomials of degree 2 are more efficient than linear schemes (i.e., schemes in which the sharing and reconstruction are computed by polynomials of degree one). Prior to our work, it was not known whether using polynomials of higher degree can reduce the share size. We show that this is indeed the case, i.e., we construct secret-sharing schemes with reconstruction by degree-$d$ polynomials, where, as the reconstruction degree $d$ increases, the share size for arbitrary access structures decreases. As a step in our construction, we construct conditional disclosure of secrets (CDS) protocols. For example, we construct 2-server CDS protocols for functions $f : [N] \times [N] \to \{0,1\}$ with reconstruction computed by degree-$d$ polynomials with message size $N^{O(\log \log d/ \log d)}$. Combining our results with a lower bound of Beimel et al. [CRYPTO 2021], we show that increasing the degree of the reconstruction function in CDS protocols provably reduces the message size. To construct our schemes, we define sparse matching vectors, show constructions of such vectors, and design CDS protocols and secret-sharing schemes with degree-$d$ reconstruction from sparse matching vectors.
Designing novel symmetric-key primitives for advanced protocols like secure multiparty computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge proof systems (ZK) has been an important research topic in recent years. Many such existing primitives adopt quite different design strategies from conventional block ciphers. Notable features include that many of these ciphers are defined over a large finite field, and that a power map is commonly used to construct the nonlinear component, due to its efficiency in these applications as well as its strong resistance against differential and linear cryptanalysis. In this paper, we target the MPC-friendly ciphers AIM and RAIN used for the post-quantum signature schemes AIMer (CCS 2023 and NIST PQC Round 1 Additional Signatures) and Rainier (CCS 2022), respectively. Specifically, we find equivalent representations of 2-round RAIN and full-round AIM that make them vulnerable to the polynomial method, the crossbred algorithm, or the fast exhaustive search attack. Consequently, we can break 2-round RAIN with the 128/192/256-bit key in only $2^{111}/2^{170}/2^{224}$ bit operations. For full-round AIM with the 128/192/256-bit key, we can break it in $2^{136.2}/2^{200.7}/2^{265}$ bit operations, which is equivalent to about $2^{115}/2^{178}/2^{241}$ calls of the underlying primitives. In particular, our analysis indicates that AIM does not reach the security levels required by the NIST competition.
We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.
In this paper, we introduce a new approach to efficiently compute TFHE bootstrapping keys for (predefined) multiple users. Hence, a fixed number of users can enjoy the same level of efficiency as in the single-key setting, while keeping their individual inputs private. Our construction relies on a novel algorithm called the homomorphic indicator, which can be of independent interest. We provide a detailed analysis of the noise growth and a set of secure parameters suitable for use in practice. Moreover, we compare the complexity of our technique with other state-of-the-art constructions and show, based on our noise analysis, which method performs better in which parameter sets. We also provide a prototype implementation of our technique. To the best of our knowledge, this is the first implementation of TFHE in the multiparty setting.
Protocols for state-machine replication (SMR) often trade off performance for resilience to network delay. In particular, protocols for asynchronous SMR tolerate arbitrary network delay but sacrifice throughput/latency when the network is fast, while partially synchronous protocols have good performance in a fast network but fail to make progress if the network experiences high delay. Existing hybrid protocols are resilient to arbitrary network delay and have good performance when the network is fast, but suffer from high overhead (``thrashing'') if the network repeatedly switches between being fast and slow (e.g., in a network that is typically fast but has intermittent message delays). We propose Abraxas, a generic approach for constructing a hybrid protocol based on any protocol $\Pi_\mathsf{fast}$ and any asynchronous protocol $\Pi_\mathsf{slow}$ to achieve (1)~security and performance equivalent to $\Pi_\mathsf{slow}$ under arbitrary network behavior; (2)~performance equivalent to $\Pi_\mathsf{fast}$ when conditions are favorable. We instantiate Abraxas with the best existing protocols for $\Pi_\mathsf{fast}$ (Jolteon) and $\Pi_\mathsf{slow}$ (2-chain VABA), and show experimentally that the resulting protocol significantly outperforms Ditto, the previous state-of-the-art hybrid protocol.
We present new protocols for *Asynchronous Verifiable Secret Sharing* for Shamir (i.e., threshold $t<n$) sharing of secrets. Our protocols:
* Use only "lightweight" cryptographic primitives, such as hash functions;
* Can share secrets over rings such as $\mathbb{Z}_{p^k}$ as well as finite fields $\mathbb{F}_q$;
* Provide *optimal resilience*, in the sense that they tolerate up to $t < n/3$ corruptions, where $n$ is the total number of parties;
* Are *complete*, in the sense that they guarantee that if any honest party receives their share then all honest parties receive their shares;
* Employ *batching* techniques, whereby a dealer shares many secrets in parallel, and achieve an amortized communication complexity that is linear in $n$, at least on the "happy path", where no party *provably* misbehaves.
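As background for the Shamir sharing referred to above, here is a toy sketch of plain (non-verifiable, non-asynchronous) Shamir sharing over a prime field; the modulus and parameters below are arbitrary illustrative choices, and the protocols above add verifiability, completeness, and batching on top of this basic template.

```python
import random

q = 2**31 - 1                          # toy prime field modulus

def share(secret, t, n):
    """Sample a degree-t polynomial with f(0) = secret; shares are (i, f(i))."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):     # Horner's rule mod q
            acc = (acc * x + c) % q
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, -1, q)) % q
    return secret

shares = share(secret=42, t=2, n=7)
assert reconstruct(shares[:3]) == 42   # any t+1 = 3 shares suffice
```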
We propose the first aggregate signature scheme such that: (1) its security is based on the standard lattice assumptions in the random oracle model; (2) the aggregate signature size is logarithmic; (3) it is not one-time; and (4) it supports non-interactive aggregation. To obtain such a scheme, we combine the most compact SNARK (Succinct Non-interactive ARgument of Knowledge) system and a SNARK-friendly signature scheme. As a result, our aggregated signature size is sufficiently compact. For example, the size required to aggregate $2^{20}$ signatures is only a few hundred kilobytes. This result shows that our scheme is superior to the existing lattice-based schemes in compressing many signatures.
Partially Oblivious Pseudorandom Functions (POPRFs) are 2-party protocols that allow a client to learn pseudorandom function (PRF) evaluations on inputs of its choice from a server. The client submits two inputs, one public and one private. The security properties ensure that the server cannot learn the private input and that the client cannot learn more than one evaluation per POPRF query. POPRFs have many applications, including password-based key exchange and privacy-preserving authentication mechanisms. However, most constructions are based on classical assumptions, and those with post-quantum security suffer from large efficiency drawbacks. In this work, we construct a novel POPRF from lattice assumptions and the "Crypto Dark Matter" PRF candidate (TCC'18) in the random oracle model. At a conceptual level, our scheme exploits the alignment of this family of PRF candidates, which rely on mixed-modulus computations, with programmable bootstrapping in the torus fully homomorphic encryption scheme (TFHE). We show that our construction achieves malicious client security based on circuit-private FHE, and client privacy from the semantic security of the FHE scheme. We further explore a heuristic approach to extend our scheme to support verifiability, based on the difficulty of computing cheating circuits in low depth. This would yield a verifiable (P)OPRF. We provide a proof-of-concept implementation and preliminary benchmarks of our construction. For the core online OPRF functionality, we require an amortised 10.0KB of communication per evaluation and a one-time per-client setup communication of 2.5MB.
The Regular Syndrome Decoding (RSD) problem, a variant of the Syndrome Decoding problem with a particular error distribution, was introduced almost 20 years ago by Augot et al. In this problem, the error vector is divided into equally sized blocks, each containing a single noisy coordinate. The last five years have seen increased interest in this assumption due to its use in MPC and ZK applications. Generally referred to as "LPN with regular noise" in this context, the assumption allows one to achieve better efficiency compared to plain LPN. We present the first attack on RSD relying on Gröbner basis techniques. After a careful theoretical analysis of the underlying polynomial system, we propose concrete attacks that are able to take advantage of the regular noise distribution. In particular, we can identify several examples of concrete parameters where our techniques outperform other algorithms.
In the recent work of Cheon and Lee (Eurocrypt'22), the concept of a degree-$D$ packing method was formally introduced, which captures the idea of embedding multiple elements of a smaller ring into a larger ring, so that element-wise multiplication in the former is somewhat "compatible" with the product in the latter. Several optimal bounds and results are presented there, and furthermore, the concept is generalized from one multiplication to degrees larger than two. These packing methods encompass several constructions seen in the literature in contexts like secure multiparty computation and fully homomorphic encryption. One such construction is the concept of reverse multiplication-friendly embeddings (RMFEs), which are essentially degree-2 packing methods. In this work we generalize the notion of RMFEs to \emph{degree-$D$ RMFEs} which, in spite of being "more algebraic" than packing methods, turn out to be essentially equivalent. Then, we present a general construction of degree-$D$ RMFEs by generalizing the algebraic-geometry ideas used to construct traditional degree-$2$ RMFEs, which, by the aforementioned equivalence, leads to explicit constructions of packing methods. Furthermore, our theory is given in a unified manner for general Galois rings, which include both rings of the form $\mathbb{Z}_{p^k}$ and fields like $\mathbb{F}_{p^k}$, which have been treated separately in prior works. We present multiple concrete sets of parameters for degree-$D$ RMFEs (including $D=2$), which can be useful for future works. Finally, we apply our RMFEs to the task of non-interactively generating high-degree correlations for secure multiparty computation protocols. This requires the use of Shamir secret sharing for a large number of parties, which is known to require large-degree Galois ring extensions. Our RMFEs enable the generation of such preprocessing data over small rings, without paying the multiplicative overhead incurred by using Galois ring extensions of large degree. Along the way, as a side contribution of potential independent interest, we also construct a pseudo-random secret-sharing solution for non-interactive generation of packed Shamir sharings over Galois rings with structured secrets, inspired by the PRSS solutions of Benhamouda et al. (TCC 2021).
We introduce a new efficient transparent interactive zero-knowledge argument system based on the input transformation concept that we introduce in this paper. The core of this concept is a mechanism that converts input parameters into a format that can be processed directly by the circuit, so that the circuit output can be verified through direct computation of the circuit. Our benchmark results show that our approach can improve verifier runtime by more than one order of magnitude over the state of the art, while keeping prover runtime and communication cost competitive with the state of the art. In addition, the direct computation mechanism in our protocol allows the prover to add specifically designed gates to optimize the evaluation process. This is possible because the circuit is verified by verifiers linearly "computing" the circuit, which also enables us to bypass the "inactive parts" of the circuit to further improve performance. Last but not least, our protocol is also memory efficient: its theoretical memory cost is just $O(n^{\frac{1}{2}})$, where $n$ is the number of gates in the circuit. Unlike memory-efficient VOLE-based protocols, our protocol offers both high verifier runtime performance and small communication cost, and it can easily be made non-interactive with the Fiat-Shamir heuristic.
Zero-knowledge elementary databases (ZK-EDBs) enable a prover to commit to a database ${D}$ of key-value $(x,v)$ pairs and later provide a convincing answer to the query ``send me the value $D(x)$ associated with $x$'' without revealing any extra knowledge (including the size of ${D}$). Since their introduction, several works have extended ZK-EDBs to support more expressive queries, but the expressiveness achieved so far remains limited: known constructions handle only relatively simple queries, namely range queries over the keys and values. In this paper we introduce a new notion called zero-knowledge functional elementary databases (ZK-FEDBs), which support the most general form of functional queries. Roughly speaking, for any Boolean circuit $f$, ZK-FEDBs allow the ZK-EDB prover to provide convincing answers to queries of the form ``send me all records ${(x,v)}$ in ${{D}}$ satisfying $f(x,v)=1$,'' without revealing any extra knowledge (including the size of ${D}$). We present a construction of ZK-FEDBs in the random oracle model and generic group model, whose proof size is linear in the length of a record and the size of the query circuit, and is independent of the size of the database $D$. Our technical contribution is two-fold. First, we introduce a new variant of zero-knowledge sets (ZKS) that supports combined operations on sets, and present a concrete construction based on groups of unknown order. Second, we develop a transformation that turns a Boolean-circuit query into a query consisting of combined operations on related sets, which may be of independent interest.
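As a rough, purely illustrative guess at what such a transformation can look like (our own sketch; the paper's actual transformation may differ substantially), one can associate to each bit position of a record the set of records in which that bit is 1, and then evaluate the Boolean circuit gate by gate with AND, OR, and NOT replaced by intersection, union, and complement:

```python
# Hypothetical toy reduction of a circuit query f(x, v) = 1 to set operations.
records = [(0b101, 0b010), (0b111, 0b001), (0b001, 0b110)]  # (key, value) pairs
universe = set(records)

def bit_set(position, field):
    """Records whose key (field=0) or value (field=1) has the given bit set."""
    return {r for r in records if (r[field] >> position) & 1}

# Query f(x, v) = x_0 AND (NOT v_1), translated gate by gate into set algebra.
result = bit_set(0, 0) & (universe - bit_set(1, 1))

# Sanity check against direct evaluation of the circuit on every record.
assert result == {(x, v) for (x, v) in records if (x & 1) and not ((v >> 1) & 1)}
```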
Critical and widely used cryptographic protocols have repeatedly been found to contain flaws in their design and their implementation. A prominent class of such vulnerabilities is logical attacks, i.e., attacks that exploit flawed protocol logic. Automated formal verification methods, based on the Dolev-Yao (DY) attacker model, formally define and excel at finding such flaws, but operate only on abstract specification models. Fully automated verification of existing protocol implementations is today still out of reach. This leaves open the question of whether such implementations are secure. Unfortunately, this blind spot hides numerous attacks, such as recent logical attacks on widely used TLS implementations caused by implementation bugs. We answer this question by proposing a novel and effective technique that we call DY model-guided fuzzing, which precludes logical attacks against protocol implementations. The main idea is to consider the set of abstract DY executions of the DY attacker as possible test cases, and to use a novel mutation-based fuzzer to explore this set. The DY fuzzer concretizes each abstract execution in order to test it on the program under test. This approach enables reasoning at a more structural and security-related level of messages represented as formal terms (e.g., decrypt a message and re-encrypt it with a different key), as opposed to random bit-level modifications that are much less likely to produce relevant logical adversarial behaviors. We implement a full-fledged and modular DY protocol fuzzer. We demonstrate its effectiveness by fuzzing three popular TLS implementations, resulting in the discovery of four novel vulnerabilities.
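To illustrate what mutating "messages represented as formal terms" can mean in practice, here is a minimal sketch of our own (hypothetical types and names; a real DY fuzzer also tracks attacker knowledge and supports many more mutations), implementing exactly the example mutation mentioned above, decrypting a payload and re-encrypting it under a different key:

```python
# Minimal Dolev-Yao term representation with one structural mutation (illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    symbol: str        # function symbol, e.g. "enc", "pair", or an atomic name
    args: tuple = ()   # subterms; empty for atoms

def atom(name):      return Term(name)
def enc(msg, key):   return Term("enc", (msg, key))

def reencrypt(term, old_key, new_key):
    """Rewrap enc(m, old_key) as enc(m, new_key); DY-derivable if the attacker knows old_key."""
    if term.symbol == "enc" and term.args[1] == old_key:
        return enc(reencrypt(term.args[0], old_key, new_key), new_key)
    return Term(term.symbol, tuple(reencrypt(a, old_key, new_key) for a in term.args))

k1, k2 = atom("k1"), atom("k2")
msg = Term("pair", (atom("nonce"), enc(atom("secret"), k1)))
assert reencrypt(msg, k1, k2) == Term("pair", (atom("nonce"), enc(atom("secret"), k2)))
```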
Micciancio and Sorrell (ICALP 2018) proposed a bootstrapping algorithm that can refresh many messages at once with sublinearly many homomorphic operations per message. However, despite the attractive asymptotic cost, it is unclear whether their algorithm could ever be practical, which reduces the impact of their results. In this work, we follow their general framework but propose an amortized bootstrapping procedure that is conceptually simpler and asymptotically cheaper. We reduce the number of homomorphic operations per refreshed message from $O(3^\rho \cdot n^{1/\rho} \cdot \log n)$ to $O(\rho \cdot n^{1/\rho})$, and the noise overhead from $\tilde{O}(n^{2 + 3 \cdot \rho})$ to $\tilde{O}(n^{1 + \rho})$. We also make the procedure more general by handling non-binary messages and by applying programmable bootstrapping. To obtain a concrete instantiation of our bootstrapping algorithm, we propose a double-CRT (aka RNS) version of the GSW scheme, including a new operation, called shrinking, used to speed up homomorphic operations by reducing the dimension and ciphertext modulus of the ciphertexts. We also provide a C++ implementation of our algorithm, thus demonstrating for the first time the practicality of amortized bootstrapping. Moreover, our implementation is competitive with existing bootstrapping algorithms, running around 3.4 times faster than an equivalent non-amortized version of our bootstrapping.
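To get a feeling for the size of the asymptotic gain, the following back-of-the-envelope comparison simply plugs sample values into the two operation-count formulas (hidden constants are ignored, so the numbers are indicative only):

```python
# Indicative ratio of homomorphic operations per refreshed message:
# prior O(3^rho * n^(1/rho) * log n)  vs.  ours O(rho * n^(1/rho)).
from math import log2

def ops_prior(n, rho): return 3 ** rho * n ** (1 / rho) * log2(n)
def ops_ours(n, rho):  return rho * n ** (1 / rho)

n = 2 ** 10  # sample lattice dimension
for rho in (2, 3, 4):
    print(f"rho={rho}: prior/ours ~ {ops_prior(n, rho) / ops_ours(n, rho):.0f}x")
# The n^(1/rho) factor cancels, so the ratio 3^rho * log(n) / rho grows quickly with rho.
```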
A Payment Channel Hub (PCH) is a promising solution to the scalability issue of first-generation blockchains or cryptocurrencies such as Bitcoin. It supports off-chain payments between a sender and a receiver through an intermediary (called the tumbler). Relationship anonymity and value privacy are desirable features of privacy-preserving PCHs, which prevent the tumbler from identifying the sender and receiver pairs as well as the payment amounts. To our knowledge, all existing Bitcoin-compatible PCH constructions that guarantee relationship anonymity allow only a (predefined) fixed payment amount; thus, to achieve payments of different amounts, they would require either multiple PCH systems or running one PCH system multiple times, neither of which is practical. In this paper, we propose the first Bitcoin-compatible PCH that achieves relationship anonymity and supports variable payment amounts. To achieve this, we develop several layers of technical constructions, each of which could be of independent interest to the community. First, we propose $\textit{BlindChannel}$, a novel bi-directional payment channel protocol for privacy-preserving payments, where one of the channel parties is unable to see the channel balances. Then, we further propose $\textit{BlindHub}$, a three-party (sender, tumbler, receiver) protocol for private conditional payments, where the tumbler pays the receiver only if the sender pays the tumbler. The appealing additional feature of BlindHub is that the tumbler cannot link the sender and the receiver while still supporting a variable payment amount. To construct BlindHub, we also introduce two new cryptographic primitives as building blocks, namely $\textit{Blind Adaptor Signature}$ (BAS) and $\textit{Flexible Blind Conditional Signature}$. BAS is an adaptor signature protocol built on top of a blind signature scheme. Flexible Blind Conditional Signature is a new cryptographic notion that enables us to provide an atomic and privacy-preserving PCH. Lastly, we instantiate both the BlindChannel and BlindHub protocols and present implementation results demonstrating their practicality.
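Since adaptor signatures are the least standard of these building blocks, a toy example may help. The sketch below shows a plain Schnorr-style adaptor signature over a tiny (insecure) Schnorr group; it is our own illustration of the generic primitive, not the blind variant (BAS) constructed in the paper. A pre-signature is tied to an adaptor point $T = g^t$; completing it requires $t$, and the completed signature reveals $t$, which is what makes conditional payments atomic.

```python
# Toy Schnorr-style adaptor signature (illustrative, insecure parameters).
import hashlib, secrets

p, q, g = 2039, 1019, 4           # small Schnorr group: g has prime order q in Z_p*

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q - 1) + 1  # signing key
y = pow(g, x, p)                  # verification key
t = secrets.randbelow(q - 1) + 1  # adaptor secret (e.g. a payment's unlocking value)
T = pow(g, t, p)                  # adaptor point

# Pre-sign: a signature that is "off" by exactly the unknown t.
k = secrets.randbelow(q - 1) + 1
R = pow(g, k, p)
e = H(R, T, "message")
s_pre = (k + e * x) % q
assert pow(g, s_pre, p) == R * pow(y, e, p) % p    # pre-signature verifies against R

# Adapt: anyone who knows t completes the signature ...
s = (s_pre + t) % q
assert pow(g, s, p) == R * T * pow(y, e, p) % p    # full signature verifies against R*T

# Extract: ... and publishing s reveals t to the holder of the pre-signature.
assert (s - s_pre) % q == t
```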
Updatable encryption (UE) enables a cloud server to update ciphertexts using client-generated tokens. There are two types of UE: ciphertext-independent (c-i) and ciphertext-dependent (c-d). In terms of construction and efficiency, c-i UE uses a single token to update all ciphertexts, and its update mechanism relies mainly on the homomorphic properties of exponentiation, which limits the efficiency of encryption and updating. Although c-d UE may seem inconvenient because it requires downloading parts of the ciphertexts during token generation, it allows for an easy implementation of the Dec-then-Enc structure, which significantly simplifies the construction of the update mechanism. Notably, the c-d UE scheme proposed by Boneh et al. (ASIACRYPT'20) has been reported to be 200 times faster than prior UE schemes based on DDH hardness, which is the case for most existing c-i UE schemes. Furthermore, c-d UE ensures a high level of security, as the token does not reveal any information about the key, which is difficult for c-i UE to achieve. However, previous security studies on c-d UE addressed only selective security; adaptive security remained an open problem. In this study, we make three contributions to ciphertext-dependent updatable encryption (c-d UE). First, we provide stronger security notions than previous work, which capture adaptive security and also consider the adversary's decryption capabilities in the adaptive corruption setting. Second, we propose a new c-d UE scheme that achieves the proposed security notions; its token generation technique differs significantly from the previous Dec-then-Enc structure while still preventing key leakage. Finally, we introduce a packing technique that enables the simultaneous encryption and updating of multiple messages within a single ciphertext, which helps reduce the cost of c-d UE by lessening the need to download partial ciphertexts during token generation.
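For readers unfamiliar with the Dec-then-Enc structure, the toy sketch below shows the key-wrapped layout that makes c-d updates cheap; this is our own illustration with a throwaway hash-based stream cipher, not the scheme from the paper. The client downloads only the small header, decrypts and re-encrypts the data key, and the bulk ciphertext on the server is never touched:

```python
# Toy ciphertext-dependent UE with a Dec-then-Enc token (illustrative only).
import hashlib, secrets

def stream_xor(key, nonce, data):
    """Throwaway stream cipher: XOR with a SHA-256-derived keystream (demo only)."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt(client_key, message):
    dek, n1, n2 = secrets.token_bytes(32), secrets.token_bytes(16), secrets.token_bytes(16)
    header = (n1, stream_xor(client_key, n1, dek))  # small: wraps only the data key
    body = (n2, stream_xor(dek, n2, message))       # large: never leaves the server
    return header, body

def make_token(old_key, new_key, header):
    """Client-side: download the header, Dec-then-Enc the wrapped data key."""
    n1, wrapped = header
    dek = stream_xor(old_key, n1, wrapped)
    n1_new = secrets.token_bytes(16)
    return (n1_new, stream_xor(new_key, n1_new, dek))  # the token is the new header

k_old, k_new = secrets.token_bytes(32), secrets.token_bytes(32)
header, body = encrypt(k_old, b"bulk data the server keeps")
header = make_token(k_old, k_new, header)              # server swaps in the new header
dek = stream_xor(k_new, header[0], header[1])
assert stream_xor(dek, body[0], body[1]) == b"bulk data the server keeps"
```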
Updatable Encryption (UE) allows rotating the encryption key in the outsourced storage setting while minimizing the bandwidth used. The server can update ciphertexts to the new key using a token provided by the client. UE schemes should provide strong confidentiality guarantees against an adversary that can corrupt keys and tokens. This paper studies the problem of building UE in the group action framework. We introduce a new notion of Mappable Effective Group Action (MEGA) and show that we can build CCA-secure UE from a MEGA by generalizing the SHINE construction of Boyd et al. at Crypto 2020. Unfortunately, we do not know how to instantiate this new construction in the post-quantum setting; doing so would solve the open problem of building a CCA-secure post-quantum UE scheme. Isogeny-based group actions are the most studied post-quantum group actions, but the resulting group actions are not mappable. We show that we can still build UE from isogenies by introducing a new algebraic structure called an Effective Triple Orbital Group Action (ETOGA). We prove that UE can be built from an ETOGA and show how to instantiate this abstract structure from isogeny-based group actions. This new construction solves two open problems in ciphertext-independent post-quantum UE. First, it is the first post-quantum UE scheme that supports an unbounded number of updates. Second, our isogeny-based UE scheme is the first post-quantum UE scheme not based on lattices. The security of this new scheme holds under an extended version of the weak pseudorandomness of the standard isogeny group action.
Quantum computing is considered among the next big leaps in computer science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of symmetric-key ciphers against a potent quantum adversary. With this in mind, our work explores key recovery attacks using Grover's search on the three variants of AES (-128, -192, -256). In total, we develop a pool of 20 implementations per AES variant (60 in total), taking the state-of-the-art advancements in the relevant fields into account. In a nutshell, we present the least Toffoli depth and full depth implementations of AES, improving on Zou et al.'s Asiacrypt'20 paper by more than 97 percent for each variant of AES; we also show that the product of qubit count and Toffoli depth is reduced from theirs by more than 86 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix the bugs (which arise from a problem in the quantum computing tool used, not from their coding), and report corrected benchmarks. To the best of our knowledge, our work improves on all previous works (including the Asiacrypt'22 paper by Huang and Sun and the Asiacrypt'23 paper by Liu et al.) in terms of various quantum circuit complexity metrics (Toffoli depth, full depth, product of Toffoli/full depth and qubit count, product of full depth and gate count, etc.). Also, our bug-fixing of Jaques et al.'s Eurocrypt'20 implementations seems to improve on the authors' own bug-fixing, thanks to our architectural considerations. Equipped with the basic AES implementations, we further investigate the prospect of Grover's search. We also propose three new implementations of the S-box and one new implementation of MixColumn, as well as five new architectures (one is motivated by the architecture of Jaques et al. in Eurocrypt'20; the other four are entirely our innovation). Under the MAXDEPTH constraint (specified by NIST), the circuit depth metrics (Toffoli depth, T-depth, and full depth) become crucial factors and parallelization often becomes necessary. We provide the least-depth implementation in this respect, which offers the best performance in terms of circuit complexity metrics (such as the product of depth squared and qubit count, and the product of depth and gate count).
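For context on why the key length and depth limits drive all of these metrics, recall that Grover's search over a $k$-bit key needs roughly $(\pi/4) \cdot 2^{k/2}$ sequential oracle iterations. The snippet below is simple arithmetic on this standard formula (not taken from the paper) and indicates why a depth cap such as NIST's MAXDEPTH forces parallelization:

```python
# Grover needs ~ (pi/4) * 2^(k/2) sequential oracle iterations for a k-bit key.
from math import pi, log2

for key_bits in (128, 192, 256):
    iters = (pi / 4) * 2 ** (key_bits / 2)
    print(f"AES-{key_bits}: ~2^{log2(iters):.1f} Grover iterations")

# Each iteration contains a full AES circuit, so circuit depth per iteration
# multiplies into the total depth. Under a cap like MAXDEPTH = 2^96, the search
# must be split across S machines: per-machine time drops by sqrt(S), but the
# total work grows by sqrt(S), which is why low-depth AES circuits matter.
```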
In this work, we study and formalize security notions for algorithm substitution attacks (ASAs) on \emph{cryptographic puzzles}. Puzzles are difficult problems that require an investment of computation, memory, or some other related resource. They are heavily used as a building block for the consensus networks used by cryptocurrencies, and include primitives such as proof-of-work, proof-of-space, and verifiable delay functions (VDFs). Due to economies of scale, these networks increasingly rely on a small number of companies to construct opaque hardware or software (e.g., GPU or FPGA images): this dependency raises concerns about cryptographic subversion. Unlike the algorithms considered by previous ASAs, cryptographic puzzles do not rely on secret keys and thus enable a very different set of attacks. We first explore the threat model for these systems and then propose concrete attacks that (1) selectively reduce a victim's solving capability (e.g., hashrate) and (2) exfiltrate puzzle solutions to an attacker. We then propose defenses, several of which can be applied to existing cryptocurrency hardware with minimal changes. We also find that mining devices for many major proof-of-work cryptocurrencies already exhibit errors exactly as a potentially subverted device would. Given that these attacks are relevant to all proof-of-work cryptocurrencies, which have a combined market capitalization of several hundred billion dollars (2022), we recommend that all vulnerable mining protocols consider making the suggested adaptations today.
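To make attack (1) concrete, here is a minimal toy sketch of our own (not an attack implementation from the paper): a subverted hash-puzzle solver silently discards a keyed fraction of valid solutions, so the victim's effective hashrate drops in a way that is indistinguishable from bad luck without the trigger key:

```python
# Toy proof-of-work solver with an algorithm-substitution backdoor (illustrative).
import hashlib

TARGET = 2 ** 240                      # toy difficulty: hash value must be below this
TRIGGER = b"attacker-subversion-key"   # secret known only to the subverter

def h(data):
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def honest_solve(block, max_nonce=1_000_000):
    return [n for n in range(max_nonce) if h(block + n.to_bytes(8, "big")) < TARGET]

def subverted_solve(block, max_nonce=1_000_000):
    # Keyed predicate drops about half of the valid solutions; without TRIGGER,
    # the output distribution looks like an ordinary run of bad luck.
    return [n for n in honest_solve(block, max_nonce)
            if h(TRIGGER + block + n.to_bytes(8, "big")) % 2 == 0]

block = b"block header bytes"
print(len(honest_solve(block)), "honest vs", len(subverted_solve(block)), "subverted")
```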
Proof-of-Stake (PoS) algorithms, implemented as foundational components of the consensus mechanisms of distributed ledgers, are defective cryptosystems by nature. This paper presents intuitive arguments for why PoS, in trying to improve on the energy efficiency of Proof-of-Work (PoW) as a Sybil control mechanism in distributed ledgers, introduces a set of significant new flaws: such systems are plutocratic, oligopolistic, and permissioned.
Zero-knowledge (ZK) applications form a large group of use cases in modern cryptography and have recently gained popularity due to novel proof systems. For many of these applications, cryptographic hash functions are used as the main building blocks, and they often dominate the overall performance and cost of these approaches. Therefore, several new hash functions, including Poseidon and Rescue among others, have been built in recent years to reduce the cost in these scenarios. These hash functions often look very different from more classical designs such as AES or SHA-2; for example, they work natively over prime fields rather than binary ones. At the same time, Poseidon and Rescue share some common features, such as being SPN schemes and instantiating the nonlinear layer with invertible power maps. While this allows the designers to provide simple and strong arguments for establishing their security, it also introduces crucial limitations in the design, which may affect performance in the target applications. In this paper, we propose the Horst construction, in which the addition in a Feistel scheme $(x, y) \mapsto (y + F(x), x)$ is extended via a multiplication, i.e., $(x, y) \mapsto (y \cdot G(x) + F(x), x)$. By carefully analyzing the performance metrics in SNARK and STARK protocols, we show how to combine an expanding Horst scheme with a Rescue-like SPN scheme in order to provide security and better efficiency in the target applications. We provide an extensive security analysis of our new design Griffin and a comparison with all current competitors.
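To see why the Horst map is still a permutation, note that it can be inverted whenever $G(x) \neq 0$, so a design must choose $G$ without roots in the field. The sketch below demonstrates the round map and its inverse over a prime field, with placeholder functions of our own choosing rather than Griffin's actual components:

```python
# Feistel vs. Horst round over F_p, and the inverse of the Horst map (illustrative).
p = 2 ** 31 - 1                        # a Mersenne prime field

def F(x): return (x ** 3 + 7) % p      # placeholder round function
def G(x): return (x * x + x + 5) % p   # placeholder; a real design picks G with no roots in F_p

def feistel(x, y): return (y + F(x)) % p, x
def horst(x, y):   return (y * G(x) + F(x)) % p, x

def horst_inv(u, v):
    # From (u, v) = (y*G(x) + F(x), x): recover x = v, then y = (u - F(v)) / G(v).
    return v, (u - F(v)) * pow(G(v), -1, p) % p

x, y = 123456789, 987654321
assert horst_inv(*horst(x, y)) == (x, y)
```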
This article proposes four optimizations of indifferentiable hashing onto (prime-order subgroups of) ordinary elliptic curves over finite fields $\mathbb{F}_{\!q}$. One of them applies to elliptic curves $E$ provided that $q \equiv 2 \ (\mathrm{mod} \ 3)$. The second deals with $q \equiv 2, 4 \ (\mathrm{mod} \ 7)$ and an elliptic curve $E_7$ of $j$-invariant $-3^3 5^3$. The corresponding section plays a rather theoretical role, because (the quadratic twist of) $E_7$ is not used in real-world cryptography. The other two optimizations take place for the subgroups $\mathbb{G}_1$, $\mathbb{G}_2$ of pairing-friendly curves. The performance gain comes from the smaller number of required exponentiations in $\mathbb{F}_{\!q}$ for hashing to $E(\mathbb{F}_{\!q})$, $E_7(\mathbb{F}_{\!q})$, and $\mathbb{G}_2$, as well as from not needing to hash directly onto $\mathbb{G}_1$ in certain settings. In particular, the new results affect the pairing-friendly curve BLS12-381 (currently the most popular in practice) and several curves from the American standard NIST SP 800-186. Among other things, a taxonomy of state-of-the-art hash functions to elliptic curves is presented. Finally, it is discussed how to hash over highly $2$-adic fields $\mathbb{F}_{\!q}$.
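The $q \equiv 2 \ (\mathrm{mod} \ 3)$ setting rests on a classical fact: cubing is then a bijection of $\mathbb{F}_{\!q}$, with the cube root computable by a single exponentiation, $a^{1/3} = a^{(2q-1)/3}$. The toy sketch below illustrates that ingredient (our own demonstration, not the paper's optimized map) by hashing bijectively onto a curve $y^2 = x^3 + b$, choosing $y$ freely and solving for $x$:

```python
# Hashing to E: y^2 = x^3 + b over F_q with q = 2 (mod 3), via the cube-root trick.
q, b = 1019, 5                   # toy parameters: 1019 is prime and 1019 % 3 == 2

def cube_root(a):
    # gcd(3, q-1) = 1, so x -> x^3 is a bijection of F_q, and
    # (x^3)^((2q-1)/3) = x^(2(q-1)) * x = x by Fermat's little theorem.
    return pow(a, (2 * q - 1) // 3, q)

def hash_to_curve(u):
    """Map u in F_q to a curve point by setting y = u and solving for x."""
    x = cube_root((u * u - b) % q)
    return x, u

for u in range(q):               # every field element lands on the curve
    x, y = hash_to_curve(u)
    assert (y * y - (x ** 3 + b)) % q == 0
```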
The Guillou-Quisquater (GQ) signature is an efficient RSA-based digital signature scheme, among the most famous Fiat-Shamir follow-ons owing to its simplicity. However, two bottlenecks hinder GQ's application in industry and academia: the RSA trapdoor $n=pq$ in the key generation phase, and its high bandwidth caused by the storage-consuming representation of RSA group elements (3072 bits per element at 128-bit security). In this paper, we first formalize the definition and security proof of the class group based GQ signature (CL-GQ), which eliminates the trapdoor in the key generation phase and improves the bandwidth efficiency of the RSA-based GQ signature. Then, we construct a trustless GQ multi-signature scheme by applying non-malleable equivocable commitments and our well-designed compact non-interactive zero-knowledge proofs (NIZKs). Our scheme has well-rounded performance compared to existing multiparty GQ, Schnorr, and ECDSA schemes, in terms of bandwidth (no range proof or multiplication-to-addition protocol required), few interactions (only 4 rounds of signing), provable security in the \textit{dishonest majority model}, and the identifiable abort property. Another interesting finding is that our NIZK is highly efficient (only one round required) thanks to the Bezout formula, and this trick can also optimize the ZK proof of Paillier ciphertexts, which greatly improves the speed of Yi's Blind ECDSA (AsiaCCS 2019).
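For reference, the plain RSA-based GQ scheme that CL-GQ transplants into class groups works as follows; the sketch uses insecure toy parameters and is our own illustration of textbook GQ, where the public key is $J = s^{-v} \bmod n$ and a signature proves knowledge of $s$:

```python
# Toy Guillou-Quisquater signature over an RSA modulus (insecure toy parameters).
import hashlib, secrets
from math import gcd

p_, q_ = 10007, 10009       # toy primes; real GQ needs an RSA-sized n, and it is
n = p_ * q_                 # exactly this trapdoor that CL-GQ eliminates
v = 65537                   # public prime exponent, coprime to (p_-1)*(q_-1)

s = 0
while gcd(s, n) != 1:
    s = secrets.randbelow(n - 2) + 2      # secret
J = pow(pow(s, -1, n), v, n)              # public: J = s^(-v) mod n

def H(T, message):
    data = str(T).encode() + b"|" + message.encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % v

def sign(message):
    r = secrets.randbelow(n - 2) + 2
    T = pow(r, v, n)
    d = H(T, message)
    return d, r * pow(s, d, n) % n        # signature (d, t)

def verify(message, sig):
    d, t = sig
    T = pow(t, v, n) * pow(J, d, n) % n   # t^v * J^d = r^v * s^(v*d) * s^(-v*d) = T
    return d == H(T, message)

assert verify("hello", sign("hello"))
```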
Fault-tolerant distributed systems move the trust in a single party to a majority of the parties participating in the protocol. This makes blockchain-based cryptocurrencies possible: they allow parties to agree on a total order of transactions without a trusted third party. To trust a distributed system, the security of the protocol and the correctness of the implementation must be indisputable. We present the first machine-checked proof that guarantees both safety and liveness for a consensus algorithm. We verify a Proof-of-Stake (PoS) Nakamoto-style blockchain (NSB) protocol using the foundational proof assistant Coq. In particular, we consider a PoS NSB in a synchronous network with a static set of corrupted parties. We define execution semantics for this setting and prove chain growth, chain quality, and common prefix, which together imply both safety and liveness.
Shor's quantum algorithm, running on quantum computers, can efficiently solve the integer factorization problem and the discrete logarithm problem in polynomial time. With the recent accelerated evolution of quantum computing, this poses an urgent and serious threat to long-term security. However, the National Institute of Standards and Technology (NIST) plans to release its post-quantum cryptography standard only between 2022 and 2024. It is therefore crucially important to propose an early solution that is likely to be secure against both quantum and classical attacks, and likely to comply with the future NIST standard. A robust combiner combines a set of two or more cryptographic primitives of the same type into a new primitive of that type, and guarantees that if any one of the ingredient primitives is secure, then the resulting primitive is secure. This work proposes the first construction of a robust combiner for Key Encapsulation Mechanisms (KEMs), with optimal amortized performance. From our robust KEM combiner, we construct an efficient stateful hybrid Key Exchange Protocol (KEP), which is better suited for two parties who communicate with each other frequently.
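A classical way to realize such a combiner is to hash both session keys together with both ciphertexts, so the combined key is pseudorandom as long as either ingredient KEM is secure. The sketch below illustrates this over two toy Diffie-Hellman KEMs (our own illustration under insecure parameters; the construction in this work additionally achieves optimal amortized performance):

```python
# Robust KEM combiner K = H(K1, K2, c1, c2), sketched over two toy DH KEMs.
import hashlib, secrets

class ToyDHKEM:
    """Toy KEM from Diffie-Hellman in a small Schnorr group (insecure parameters)."""
    def __init__(self, p, q, g):
        self.p, self.q, self.g = p, q, g
    def keygen(self):
        sk = secrets.randbelow(self.q - 1) + 1
        return sk, pow(self.g, sk, self.p)
    def encaps(self, pk):
        r = secrets.randbelow(self.q - 1) + 1
        shared = pow(pk, r, self.p)
        return pow(self.g, r, self.p), hashlib.sha256(str(shared).encode()).digest()
    def decaps(self, sk, ct):
        return hashlib.sha256(str(pow(ct, sk, self.p)).encode()).digest()

def combine(k1, k2, c1, c2):
    """Binds both session keys and both ciphertexts; secure if either ingredient is."""
    return hashlib.sha256(k1 + k2 + str((c1, c2)).encode()).digest()

# Stand-ins for, e.g., one classical and one post-quantum KEM.
kem1, kem2 = ToyDHKEM(2039, 1019, 4), ToyDHKEM(5807, 2903, 9)
(sk1, pk1), (sk2, pk2) = kem1.keygen(), kem2.keygen()
c1, k1 = kem1.encaps(pk1)
c2, k2 = kem2.encaps(pk2)
assert combine(k1, k2, c1, c2) == combine(kem1.decaps(sk1, c1), kem2.decaps(sk2, c2), c1, c2)
```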