  • Semilinear Transformations in Coding Theory: A New Technique in Code-Based Cryptography
    by Wenshuo Guo on December 7, 2021 at 1:29 am

    This paper presents a new technique for disturbing the algebraic structure of linear codes in code-based cryptography. Specifically, we introduce the so-called semilinear transformations in coding theory and then creatively apply them to the construction of code-based cryptosystems. Since $\mathbb{F}_{q^m}$ can be viewed as an $\mathbb{F}_q$-linear space of dimension $m$, a semilinear transformation $\varphi$ is defined as an $\mathbb{F}_q$-linear automorphism of $\mathbb{F}_{q^m}$. We then apply this transformation to a linear code $\mathcal{C}$ over $\mathbb{F}_{q^m}$. It is clear that $\varphi(\mathcal{C})$ forms an $\mathbb{F}_q$-linear space, but in general it no longer preserves $\mathbb{F}_{q^m}$-linearity. Inspired by this observation, we develop a new technique for masking the structure of linear codes. Meanwhile, we endow the underlying Gabidulin code with the so-called partial cyclic structure to reduce the public-key size. Compared to some other code-based cryptosystems, our proposal admits a much more compact representation of public keys. For instance, 2592 bytes are enough to achieve 256-bit security, almost 403 times smaller than that of Classic McEliece entering the third round of the NIST PQC project.
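
    For intuition, the masking idea can be summarized in two displayed facts (our own illustration in the abstract's notation, not a quotation from the paper): a semilinear transformation is additive and commutes with $\mathbb{F}_q$-scalars only,

        \varphi(x+y)=\varphi(x)+\varphi(y), \qquad
        \varphi(\lambda x)=\lambda\,\varphi(x) \quad \text{for all } \lambda \in \mathbb{F}_q,

    so for a scalar $a \in \mathbb{F}_{q^m} \setminus \mathbb{F}_q$ and a codeword $c \in \mathcal{C}$ one has, in general,

        \varphi(a\,c) \neq a\,\varphi(c),

    which is why $\varphi(\mathcal{C})$ remains an $\mathbb{F}_q$-subspace yet generally ceases to be $\mathbb{F}_{q^m}$-linear, hiding the Gabidulin structure.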

  • The Effect of False Positives: Why Fuzzy Message Detection Leads to Fuzzy Privacy Guarantees?
    by István András Seres on December 6, 2021 at 4:15 pm

    Fuzzy Message Detection (FMD) is a recent cryptographic primitive invented by Beck et al. (CCS'21) where an untrusted server performs coarse message filtering for its clients in a recipient-anonymous way. In FMD --- besides the true positive messages --- the clients download from the server their cover messages determined by their false-positive detection rates. What is more, within FMD, the server cannot distinguish between genuine and cover traffic. In this paper, we formally analyze the privacy guarantees of FMD from three different angles. First, we analyze three privacy provisions offered by FMD: recipient unlinkability, relationship anonymity, and temporal detection ambiguity. Second, we perform a differential privacy analysis and coin a relaxed definition to capture the privacy guarantees FMD yields. Finally, we simulate FMD on real-world communication data. Our theoretical and empirical results assist FMD users in adequately selecting their false-positive detection rates for various applications with given privacy requirements.
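
    As a toy illustration of the bandwidth/privacy trade-off analyzed in the paper (our own sketch with made-up numbers, not the paper's model): with false-positive detection rate $p$, each non-matching message is downloaded as cover traffic independently with probability $p$.

        import random

        def fmd_downloads(total_msgs, true_msgs, p, rng=random.Random(0)):
            """Toy FMD model: all genuine messages are flagged; every other
            message is flagged as a false positive with probability p.
            The server cannot tell the two kinds of flags apart."""
            cover = sum(rng.random() < p for _ in range(total_msgs - true_msgs))
            return true_msgs + cover

        # Example: 10000 messages on the server, 50 genuine, rate p = 2^-4:
        # the client downloads roughly 50 + 9950/16 ~ 672 messages.
        print(fmd_downloads(10000, 50, 2 ** -4))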

  • Kicking-the-Bucket: Fast Privacy-Preserving Trading Using Buckets
    by Mariana Botelho da Gama on December 6, 2021 at 12:21 pm

    We examine bucket-based and volume-based algorithms for privacy-preserving asset trading in a financial dark pool. Our bucket-based algorithm places orders in quantised buckets, whereas the volume-based algorithm allows any volume size but requires more complex validation mechanisms. In all cases, we conclude that these algorithms are highly efficient and offer a practical solution to the commercial problem of preserving privacy of order information in a dark pool trading venue.

  • Public Key Encryption with Flexible Pattern Matching
    by Elie Bouscatié on December 6, 2021 at 10:01 am

    Many interesting applications of pattern matching (e.g. deep-packet inspection or medical data analysis) target very sensitive data. In particular, spotting illegal behaviour in internet traffic conflicts with legitimate privacy requirements, which usually forces users (e.g. children, employees) to blindly trust an entity that fully decrypts their traffic in the name of security. The compromise between traffic analysis and privacy can be achieved through searchable encryption. However, as the traffic data is a stream and as the patterns to search are bound to evolve over time (e.g. new virus signatures), these applications require a kind of searchable encryption that provides more flexibility than the classical schemes. We indeed need to be able to search for patterns of variable sizes in an arbitrarily long stream that has potentially been encrypted prior to pattern identification. To stress these specificities, we call such a scheme a stream encryption supporting pattern matching. Recent papers use bilinear groups to provide public key constructions supporting these features. These solutions are lighter than more generic ones (e.g. fully homomorphic encryption) while retaining the adequate expressivity to support pattern matching without harming privacy more than needed. However, all existing solutions in this family have weaknesses with respect to efficiency and security that need to be addressed. Regarding efficiency, their public key has a size linear in the size of the alphabet, which can be quite large, in particular for applications that naturally process data as bytestrings. Regarding security, they all rely on a very strong computational assumption that is both interactive and specially tailored for this kind of scheme. In this paper, we tackle these problems by providing two new constructions using bilinear groups to support pattern matching on encrypted streams. Our first construction shares the same strong assumption but dramatically reduces the size of the public key by removing the dependency on the size of the alphabet, while nearly halving the size of the ciphertext. On a typical application with large patterns, our public key is two orders of magnitude smaller than that of previous schemes, which demonstrates the practicality of our approach. Our second construction manages to retain most of the good features of the first one while exclusively relying on a simple (static) variant of DDH, which solves the security problem of previous works.

  • GenoPPML – a framework for genomic privacy-preserving machine learning
    by Sergiu Carpov on December 6, 2021 at 9:53 am

    We present GenoPPML, a framework for privacy-preserving machine learning in the context of sensitive genomic data processing. The technology combines secure multiparty computation based on the recently proposed Manticore framework for model training with fully homomorphic encryption based on TFHE for model inference. The framework was successfully used to solve breast cancer prediction problems on gene expression datasets coming from distinct private sources while preserving their privacy; the solution won 1st place for both Tracks I and III of the iDASH'2020 genomic privacy competition. Extensive benchmarks and comparisons to existing works are performed. Our 2-party logistic regression computation is $11\times$ faster than the one in De Cock et al. on the same dataset, and it uses only a single CPU core.

  • ABBY: Automating the creation of fine-grained leakage models
    by Omid Bazangani on December 6, 2021 at 9:14 am

    Side-channel leakage simulators allow testing the resilience of cryptographic implementations to power side-channel attacks without a dedicated setup. The main challenge in their large-scale deployment is the limited support for target devices, a direct consequence of the effort required for reverse engineering microarchitecture implementations. We introduce ABBY, the first solution for the automated creation of fine-grained leakage models. The main innovation of ABBY is the training framework, which can automatically characterize the microarchitecture of the target device and is portable to other platforms. Evaluation of ABBY on real-world crypto implementations exhibits comparable performance to other state-of-the-art leakage simulators.

  • Approximate Homomorphic Encryption with Reduced Approximation Error
    by Andrey Kim on December 6, 2021 at 3:59 am

    The Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme is currently the most efficient method to perform approximate homomorphic computations over real and complex numbers. Although the CKKS scheme can already be used to achieve practical performance for many advanced applications, e.g., in machine learning, its broader use in practice is hindered by several major usability issues, most of which are brought about by relatively high approximation errors and the complexity of dealing with them. We present a reduced-error CKKS variant that removes the approximation errors due to the Learning With Errors (LWE) noise in the encryption and key switching operations. We propose and implement its Residue Number System (RNS) instantiation that has a lower error than the original CKKS scheme implementation based on multiprecision integer arithmetic. While formulating the RNS instantiation, we also develop an intermediate RNS variant that has a smaller approximation error than the prior RNS variant of CKKS. The high-level idea of our main RNS-related improvements is to remove the approximate scaling error using a novel procedure that computes level-specific scaling factors. The rescaling operations and scaling factor adjustments in our implementation are done automatically. We implement both RNS variants in PALISADE and compare their approximation error and efficiency to the prior RNS variant. Our results for uniform ternary secret key distribution, which is the most efficient setting included in the community homomorphic encryption security standard, show that the reduced-error CKKS RNS implementation typically has an approximation error that is 6 to 9 bits smaller for computations with multiplications than the prior RNS variant. The results for the sparse secret setting, which was used for the original CKKS scheme, imply that our reduced-error CKKS RNS implementation has an approximation error up to 12 bits smaller than the prior RNS variant.
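
    The flavor of the level-specific scaling factors can be seen in a tiny numeric sketch (ours, with illustrative parameters): multiplying two level-$\ell$ ciphertexts squares the scaling factor, and rescaling divides it by the RNS prime $q_\ell \approx 2^p$; since $q_\ell \neq 2^p$ exactly, tracking the factor per level instead of assuming a fixed $2^p$ is what eliminates the approximate-scaling error.

        # Toy sketch (ours): exact per-level scaling factors in RNS-CKKS style.
        # The moduli below are stand-ins near 2^40 (primes in a real
        # instantiation); a fixed global factor of 2^40 would drift, while the
        # level-specific factor Delta_{l+1} = Delta_l^2 / q_l stays exact.
        p = 40
        qs = [2**40 - 87, 2**40 + 15, 2**40 + 313]   # illustrative RNS moduli
        delta = float(2**p)                          # level-0 scaling factor
        for level, q in enumerate(qs, start=1):
            delta = delta * delta / q                # after multiply + rescale
            print(f"level {level}: Delta / 2^p = {delta / 2**p:.12f}")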

  • SHealS and HealS: isogeny-based PKEs from a key validation method for SIDH
    by Tako Boris Fouotsa on December 6, 2021 at 3:57 am

    In 2016, Galbraith et al. presented an adaptive attack on the SIDH key exchange protocol. In SIKE, one applies a variant of the Fujisaki-Okamoto transform to force Bob to reveal his encryption key to Alice, which Alice then uses to re-encrypt Bob's ciphertext and verify its validity. Therefore, Bob cannot reuse his encryption keys. There have been two other proposed countermeasures enabling static-static private keys: k-SIDH and its variant by Jao and Urbanik. These countermeasures are relatively expensive since they consist of running multiple parallel instances of SIDH. In this paper, firstly, we propose a new countermeasure to the GPST adaptive attack on SIDH. Our countermeasure does not require key disclosure as in SIKE, nor multiple parallel instances as in k-SIDH. We translate our countermeasure into a key validation method for SIDH-type schemes. Secondly, we use our key validation to design HealSIDH, an efficient interactive SIDH-type key exchange protocol with static-static keys. Thirdly, we derive a PKE scheme SHealS using HealSIDH. SHealS uses larger primes compared to SIKE and has larger keys and ciphertexts, but only $4$ isogenies are computed in a full execution of the scheme, as opposed to $5$ isogenies in SIKE. We prove that SHealS is IND-CPA secure relying on a new assumption we introduce, and we conjecture its IND-CCA security. We suggest HealS, a variant of SHealS using a smaller prime, providing smaller keys and ciphertexts. As a result, HealSIDH is a practically efficient SIDH-based (interactive) key exchange incorporating a "direct" countermeasure to the GPST adaptive attack.

  • A formula for disaster: a unified approach to elliptic curve special-point-based attacks
    by Vladimir Sedlacek on December 6, 2021 at 3:55 am

    The Refined Power Analysis, Zero-Value Point, and Exceptional Procedure attacks introduced side-channel techniques against specific cases of elliptic curve cryptography. The three attacks recover bits of a static ECDH key adaptively, collecting information on whether a certain multiple of the input point was computed. We unify and generalize these attacks in a common framework, and solve the corresponding problem for a broader class of inputs. We also introduce a version of the attack against windowed scalar multiplication methods, recovering the full scalar instead of just a part of it. Finally, we systematically analyze elliptic curve point addition formulas from the Explicit-Formulas Database, classify all non-trivial exceptional points, and find them in new formulas. These results indicate the usefulness of our tooling, which we released publicly, for unrolling formulas and finding special points, and potentially for independent future work.

  • On the Bottleneck Complexity of MPC with Correlated Randomness
    by Claudio Orlandi on December 6, 2021 at 3:54 am

    At ICALP 2018, Boyle et al. introduced the notion of the bottleneck complexity of a secure multi-party computation (MPC) protocol. This measures the maximum communication complexity of any one party in the protocol, aiming to improve load-balancing among the parties. In this work, we study the bottleneck complexity of MPC in the preprocessing model, where parties are given correlated randomness ahead of time. We present two constructions of bottleneck-efficient MPC protocols, whose bottleneck complexity is independent of the number of parties: 1. A protocol for computing abelian programs, based only on one-way functions. 2. A protocol for selection functions, based on any linearly homomorphic encryption scheme. Compared with previous bottleneck-efficient constructions, our protocols can be based on a wider range of assumptions, and avoid the use of fully homomorphic encryption.

  • Interpreting and Mitigating Leakage-abuse Attacks in Searchable Symmetric Encryption
    by Lei Xu on December 6, 2021 at 3:54 am

    Searchable symmetric encryption (SSE) enables users to make confidential queries over always-encrypted data while confining information disclosure to pre-defined leakage profiles. Despite the well-understood performance and potentially broad applications of SSE, recent leakage-abuse attacks (LAAs) are questioning its real-world security implications. They show that a passive adversary with certain prior information about a database can recover queries by exploiting the legitimately admitted leakage. While several countermeasures have been proposed, they are insufficient for either security, i.e., handling only specific leakage like query volume, or efficiency, i.e., incurring large storage and bandwidth overhead. We aim to fill this gap by advancing the understanding of LAAs from a fundamental algebraic perspective. Our investigation starts by revealing that the index matrices of a plaintext database and its encrypted image can be linked by a linear transformation. The invariant characteristics preserved under the transformation encompass and surpass the information exploited by previous LAAs. They allow one to unambiguously link encrypted queries with corresponding keywords, even with only partial knowledge of the database. Accordingly, we devise a new powerful attack and conduct a series of experiments to show its effectiveness. In response, we propose a new security notion to thwart LAAs in general, inspired by the principle of local differential privacy (LDP). Under the notion, we further develop a practical countermeasure with tunable privacy and efficiency guarantees. Experiment results on representative real-world datasets show that our countermeasure can reduce the query recovery rate of LAAs, including our own.

  • The Need for Speed: A Fast Guessing Entropy Calculation for Deep Learning-based SCA
    by Guilherme Perin on December 6, 2021 at 3:52 am

    In recent years, the adoption of deep learning drastically improved profiling side-channel attacks (SCA). Although guessing entropy is a highly informative metric for profiling SCA, it is time-consuming, especially if computed for all epochs during training. This paper shows that guessing entropy can be efficiently computed during training by reducing the number of validation traces. Our solution significantly speeds up the process, benefiting hyperparameter search and profiling attack performance. Our fast guessing entropy calculation is up to 16 times faster and enables more hyperparameter tuning experiments, allowing us to find more efficient deep learning models.
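
    For orientation, a bare-bones version of the metric being accelerated (our own sketch, not the paper's implementation; the paper's observation is that a much smaller n_traces already yields a stable estimate): guessing entropy is the average rank of the correct key when per-trace model scores are accumulated over validation traces.

        import numpy as np

        def guessing_entropy(log_probs, true_key, n_traces, n_trials=100, seed=0):
            """log_probs: array of shape (num_traces, num_keys) with the
            model's per-trace log-probabilities for each key candidate.
            Returns the average rank of true_key over n_trials random
            subsets of n_traces validation traces."""
            rng = np.random.default_rng(seed)
            ranks = []
            for _ in range(n_trials):
                idx = rng.choice(log_probs.shape[0], n_traces, replace=False)
                scores = log_probs[idx].sum(axis=0)   # accumulate evidence
                order = np.argsort(scores)[::-1]      # best candidate first
                ranks.append(int(np.where(order == true_key)[0][0]))
            return float(np.mean(ranks))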

  • Practical Asynchronous Distributed Key Generation
    by Sourav Das on December 6, 2021 at 3:52 am

    Distributed Key Generation (DKG) is a technique to bootstrap threshold cryptosystems without a central trust. DKG can be a building block for decentralized protocols such as randomness beacons, threshold signatures, and general multiparty computation. Many previous DKG protocols assume the synchronous model, and asynchronous DKG has received attention only recently. Existing asynchronous DKG protocols have either poor efficiency or limited functionality, resulting in a lack of concrete implementations. In this paper, we present a simple and concretely efficient asynchronous DKG (ADKG) protocol. In a network of $n$ nodes, our ADKG protocol can tolerate up to $t<n/3$ malicious nodes and has an expected $O(\kappa n^3)$ communication cost, where $\kappa$ is the security parameter. Our ADKG protocol produces a field element as the secret and is thus compatible with off-the-shelf threshold cryptosystems. We implement our ADKG protocol and evaluate it using a network of up to 128 nodes in geographically distributed AWS instances. Our evaluation shows that our protocol takes as little as 3 and 9.5 seconds to terminate for 32 and 64 nodes, respectively. Also, each node sends only 0.7 Megabytes and 2.9 Megabytes of data during the two experiments, respectively.

  • Garbling, Stacked and Staggered: Faster k-out-of-n Garbled Function Evaluation
    by David Heath on December 6, 2021 at 3:51 am

    Stacked Garbling (SGC) is a Garbled Circuit (GC) improvement that efficiently and securely evaluates programs with conditional branching. SGC reduces bandwidth consumption such that communication is proportional to the size of the single longest program execution path, rather than to the size of the entire program. Crucially, the parties expend increased computational effort compared to classic GC. Motivated by procuring a subset from a menu of computational services or tasks, we consider GC evaluation of k-out-of-n branches, whose indices are known (or eventually revealed) to the GC evaluator E. Our stack-and-stagger technique amortizes GC computation in this setting. We retain the communication advantage of SGC, while significantly improving computation and wall-clock time. Namely, each GC party garbles (or evaluates) a total of n branches, a significant improvement over the O(nk) garblings/evaluations needed by standard SGC. We present our construction as a garbling scheme. Our technique brings significant overall performance improvement in various settings, including those typically considered in the literature: e.g. on a 1Gbps LAN we evaluate 16-out-of-128 functions ~7.68x faster than standard stacked garbling.

  • SoK: Validating Bridges as a Scaling Solution for Blockchains
    by Patrick McCorry on December 6, 2021 at 3:50 am

    Off-chain protocols are a promising solution to the cryptocurrency scalability dilemma. They focus on moving transactions from a blockchain network like Ethereum to another, off-chain system while ensuring users can transact with assets that reside on the underlying blockchain. Several startups have collectively raised over $100m to implement off-chain systems that rely on a validating bridge smart contract to self-enforce the safety of user funds and the liveness of transaction execution. This approach promises a Coinbase-like experience: users can transact on an off-chain system while still retaining the underlying blockchain’s security for all processed transactions. Unfortunately, the literature on validating bridges is highly disparate across message boards, chat rooms and the for-profit ventures that fund their rapid development. This Systematization of Knowledge presents the emerging field in an accessible manner and brings forth the immediate research problems that must be solved before we can extend Ethereum’s security to new (and experimental) off-chain systems.

  • IRShield: A Countermeasure Against Adversarial Physical-Layer Wireless Sensing
    by Paul Staat on December 6, 2021 at 3:49 am

    Wireless radio channels are known to contain information about the surrounding propagation environment, which can be extracted using established wireless sensing methods. Thus, today's ubiquitous wireless devices are attractive targets for passive eavesdroppers to launch reconnaissance attacks. In particular, by overhearing standard communication signals, eavesdroppers obtain estimations of wireless channels which can give away sensitive information about indoor environments. For instance, by applying simple statistical methods, adversaries can infer human motion from wireless channel observations, allowing them to remotely monitor the premises of victims. In this work, building on the advent of intelligent reflecting surfaces (IRSs), we propose IRShield as a novel countermeasure against adversarial wireless sensing. IRShield is designed as a plug-and-play privacy-preserving extension to existing wireless networks. At the core of IRShield, we design an IRS configuration algorithm to obfuscate wireless channels. We validate its effectiveness with extensive experimental evaluations. In a state-of-the-art human motion detection attack using off-the-shelf Wi-Fi devices, IRShield lowered detection rates to 5% or less.

  • Low-Bandwidth Threshold ECDSA via Pseudorandom Correlation Generators
    by Damiano Abram on December 6, 2021 at 3:49 am

    Digital signature schemes are a fundamental component of secure distributed systems, and the theft of a signing key might have huge real-world repercussions, e.g., in applications such as cryptocurrencies. Threshold signature schemes mitigate this problem by distributing shares of the secret key across several servers and requiring that enough of them interact to compute a signature. In this paper, we provide a novel threshold protocol for ECDSA, arguably the most relevant signature scheme in practice. Our protocol is the first one where the communication complexity of the preprocessing phase is only logarithmic in the number of ECDSA signatures to be produced later, therefore achieving so-called silent preprocessing. Our protocol achieves active security against any number of arbitrarily corrupted parties.

  • Cryptanalysis of a Type of White-Box Implementations of the SM4 Block Cipher
    by Jiqiang Lu on December 6, 2021 at 3:48 am

    The SM4 block cipher was first released in 2006 as SMS4, used in the Chinese national standard WAPI, and became a Chinese national standard in 2016 and an ISO international standard in 2021. White-box cryptography aims primarily to protect the secret key used in a cryptographic software implementation in the white-box scenario, which assumes an attacker has full access to the execution environment and execution details of an implementation. Since white-box cryptography has many real-life applications nowadays, a few white-box implementations of the SM4 block cipher have been proposed with its increasingly wide use, among which one type of construction dominates: those that use an affine (or even purely linear) diagonal block encoding to protect the original output of an SM4 round function and use the encoding or its inverse to protect the original input of the S-box layer of the next round, such as Xiao and Lai's implementation in 2009, Shang's implementation in 2016, and Yao and Chen's and Wu et al.'s implementations in 2020. In this paper, we show that this type of white-box SM4 construction is rather insecure against collision-based attacks, by devising attacks on Xiao and Lai's, Shang's, Yao and Chen's and Wu et al.'s implementations with time complexities of about $2^{19.4}$, $2^{35.6}$, $2^{19.4}$ and $2^{17.1}$, respectively, to recover a round key; thus their security is much lower than previously published or expected. Such white-box SM4 constructions should therefore be avoided unless enhanced in some way.

  • Searchable Encryption for Conjunctive Queries with Extended Forward and Backward Privacy
    by Cong Zuo on December 6, 2021 at 3:47 am

    Recent developments in the field of Dynamic Searchable Symmetric Encryption (DSSE) with forward and backward privacy have attracted much attention from both the research and industrial communities. However, most forward and backward private DSSE schemes support single keyword queries only, which impedes their adoption in practice. Recently, Patranabis et al. (NDSS 2021) introduced a forward and backward private DSSE scheme for conjunctive queries (named ODXT) based on the Oblivious Cross-Tags (OXT) framework. Unfortunately, its security is not comprehensive for conjunctive queries, and it deploys “lazy deletion”, which incurs more communication cost. Besides, it cannot delete a file in certain circumstances. To address these problems, we introduce two forward and backward private DSSE schemes with conjunctive queries (named SDSSE-CQ and SDSSE-CQ-S). To analyze their security, we present two new levels of backward privacy (named Type-O and Type-O$^-$, where Type-O$^-$ is more secure than Type-O), which describe the leakage of conjunctive queries in the OXT framework more accurately. Finally, the security and experimental evaluation demonstrate that our proposed schemes achieve better security with a comparable increase in computation and communication in comparison with ODXT.

  • Cryptonite: A Framework for Flexible Time-Series Secure Aggregation with Online Fault Tolerance
    by Ryan Karl on December 5, 2021 at 11:49 pm

    Private stream aggregation (PSA) allows an untrusted data aggregator to compute statistics over a set of multiple participants' data while ensuring the data remains private. Existing works rely on a trusted third party to enable an aggregator to achieve fault tolerance, which requires interactive recovery; in the real world this may be neither practical nor secure. We develop a new formal framework for PSA that accounts for user faults and can support non-interactive recovery, while still supporting strong individual privacy guarantees. We must first define a new level of security in the presence of faults and malicious adversaries, because the existing definitions account for neither faults nor the security implications of the recovery. After this, we develop the first protocol that provably reaches this level of security, i.e., individual inputs are private even after the aggregator's recovery, and reaches new levels of scalability and communication efficiency over existing work seeking to support fault tolerance. The techniques we develop are general, and can be used to augment any PSA scheme to support non-interactive fault recovery.

  • A New Adaptive Attack on SIDH
    by Tako Boris Fouotsa on December 5, 2021 at 10:34 pm

    The SIDH key exchange is the main building block of SIKE, the only isogeny-based scheme involved in the NIST standardization process. In 2016, Galbraith et al. presented an adaptive attack on SIDH. In this attack, a malicious party manipulates the torsion points in his public key in order to recover an honest party's static secret key, given access to a key exchange oracle. In 2017, Petit designed a passive attack (improved by de Quehen et al. in 2020) that exploits the torsion point information available in SIDH public keys to recover the secret isogeny when the endomorphism ring of the starting curve is known. In this paper, firstly, we generalize the torsion point attacks by de Quehen et al. Secondly, we introduce a new adaptive attack vector on SIDH-type schemes. Our attack uses the access to a key exchange oracle to recover the action of the secret isogeny on larger subgroups. This leads to an unbalanced SIDH instance for which the secret isogeny can be recovered in polynomial time using the generalized torsion point attacks. Our attack is different from the GPST adaptive attack and constitutes a new cryptanalytic tool for isogeny-based cryptography. This result proves that the torsion point attacks are relevant to SIDH parameters in an adaptive attack setting. We suggest attack parameters for some SIDH primes and discuss some countermeasures.

  • Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets
    by Alex Ozdemir on December 5, 2021 at 1:09 am

    A zk-SNARK is a powerful cryptographic primitive that provides a succinct and efficiently checkable argument that the prover has a witness to a public NP statement, without revealing the witness. However, in their native form, zk-SNARKs only apply to a secret witness held by a single party. In practice, a collection of parties often need to prove a statement where the secret witness is distributed or shared among them. We implement and experiment with *collaborative zk-SNARKs*: proofs over the secrets of multiple, mutually distrusting parties. We construct these by lifting conventional zk-SNARKs into secure protocols among $N$ provers to jointly produce a single proof over the distributed witness. We optimize the proof generation algorithm in pairing-based zk-SNARKs so that algebraic techniques for multiparty computation (MPC) yield efficient proof generation protocols. For some zk-SNARKs, optimization is more challenging. This suggests MPC "friendliness" as an additional criterion for evaluating zk-SNARKs. We implement 3 collaborative proofs and evaluate the concrete cost of proof generation. We find that over a good network, security against a malicious minority of provers can be achieved with *approximately the same runtime* as a single prover. Security against $N-1$ malicious provers requires only a $2\times$ slowdown. This efficiency is unusual: most computations slow down by several orders of magnitude when securely distributed. It is also significant: most server-side applications that can tolerate the cost of a single-prover proof should also be able to tolerate the cost of a collaborative proof.

  • Autoguess: A Tool for Finding Guess-and-Determine Attacks and Key Bridges
    by Hosein Hadipour on December 4, 2021 at 2:25 pm

    The guess-and-determine technique is one of the most widely used techniques in cryptanalysis to recover unknown variables in a given system of relations. In such attacks, a subset of the unknown variables is guessed such that the remaining unknowns can be deduced using the information from the guessed variables and the given relations. This idea can be applied in various areas of cryptanalysis, such as finding the internal state of stream ciphers when a sufficient amount of output data is available, or recovering the internal state and the secret key of a block cipher from very few known plaintexts. Another important application is the key-bridging technique in key-recovery attacks on block ciphers, where the attacker aims to find the minimum number of required sub-key guesses to deduce all involved sub-keys via the key schedule. Since the complexity of the guess-and-determine technique directly depends on the number of guessed variables, it is essential to find the smallest possible guess basis, i.e., the subset of guessed variables from which the remaining variables can be deduced. In this paper, we present Autoguess, an easy-to-use general tool to search for a minimal guess basis. We propose several new modeling techniques to harness SAT/SMT, MILP, and Gröbner basis solvers. We demonstrate their usefulness in guess-and-determine attacks on stream ciphers and block ciphers, as well as in finding key bridges in key-recovery attacks on block ciphers. Moreover, by integrating our CP models for the key-bridging technique into previous CP-based frameworks for searching for distinguishers, we propose a unified and general CP model to search for key-recovery-friendly distinguishers, supporting both linear and nonlinear key schedules.
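
    To make the notion of a guess basis concrete, here is a toy sketch (ours, far simpler than the SAT/SMT/MILP/Gröbner encodings Autoguess uses): model each relation as a set of variables in which knowing all but one determines the last, and search for a smallest guess set whose deduction closure covers all variables.

        from itertools import combinations

        def closure(guessed, relations):
            """Repeatedly apply: a relation with exactly one unknown
            variable determines that variable."""
            known = set(guessed)
            changed = True
            while changed:
                changed = False
                for rel in relations:
                    unknown = rel - known
                    if len(unknown) == 1:
                        known |= unknown
                        changed = True
            return known

        def minimal_guess_basis(variables, relations):
            for k in range(len(variables) + 1):
                for basis in combinations(sorted(variables), k):
                    if closure(basis, relations) == variables:
                        return set(basis)

        # Toy system: x1 = f(x2,x3), x2 = g(x3,x4), x5 = h(x1,x4).
        relations = [{"x1", "x2", "x3"}, {"x2", "x3", "x4"}, {"x1", "x4", "x5"}]
        variables = {"x1", "x2", "x3", "x4", "x5"}
        print(minimal_guess_basis(variables, relations))
        # -> a size-2 guess basis, e.g. {'x1', 'x2'}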

  • Algebraic Adversaries in the Universal Composability Framework
    by Michel Abdalla on December 4, 2021 at 11:03 am

    The algebraic-group model (AGM), which lies between the generic group model and the standard model of computation, provides a means by which to analyze the security of cryptosystems against so-called algebraic adversaries. We formalize the AGM within the framework of universal composability, providing formal definitions for this setting and proving an appropriate composition theorem. This extends the applicability of the AGM to more-complex protocols, and lays the foundations for analyzing algebraic adversaries in a composable fashion. Our results also clarify the meaning of composing proofs in the AGM with other proofs and they highlight a natural form of independence between idealized groups that seems inherent to the AGM and has not been made formal before---these insights also apply to the composition of game-based proofs in the AGM. We show the utility of our model by proving several important protocols universally composable for algebraic adversaries, specifically: (1) the Chou-Orlandi protocol for oblivious transfer, and (2) the SPAKE2 and CPace protocols for password-based authenticated key exchange.

  • Plactic signatures
    by Daniel R. L. Brown on December 3, 2021 at 7:13 pm

    Plactic signatures use the plactic monoid (semistandard tableaux with Knuth’s associative multiplication) and full-domain hashing (SHAKE).
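
    Since the abstract is terse, it may help to recall the monoid operation it builds on; the following is our own sketch of standard Schensted row insertion, illustrating only the plactic product, not the signature scheme itself.

        def insert(tableau, x):
            """Schensted row insertion into a semistandard tableau (rows
            weakly increasing, columns strictly increasing): bump the
            leftmost entry strictly greater than x into the next row."""
            rows = [list(r) for r in tableau]
            for row in rows:
                bigger = [i for i, y in enumerate(row) if y > x]
                if not bigger:
                    row.append(x)
                    return rows
                i = bigger[0]
                row[i], x = x, row[i]        # bump the displaced entry down
            rows.append([x])
            return rows

        def product(t, u):
            """Plactic (Knuth) product: insert the row reading word of u
            (bottom row first, left to right) into t."""
            for row in reversed(u):
                for x in row:
                    t = insert(t, x)
            return t

        print(product([[1, 2]], [[1, 3]]))   # -> [[1, 1, 3], [2]]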

  • XORBoost: Tree Boosting in the Multiparty Computation Setting
    by Kevin Deforth on December 3, 2021 at 4:30 pm

    We present XORBoost, a novel protocol for both training gradient boosted tree models and using these models for inference in the multiparty computation (MPC) setting. Similarly to [AEV20], our protocol supports training on generically split datasets (vertical and horizontal splitting, or a combination of those) while keeping all the information about the features and thresholds associated with the nodes private, thus having only the depths and the number of binary trees as public parameters of the model. By using optimization techniques that reduce the number of oblivious permutation evaluations, as well as the quicksort and real number arithmetic algorithms from the recent Manticore MPC framework [CDG+21], we obtain a scalable implementation operating under an information-theoretic security model in the honest-but-curious setting with a trusted dealer. On a training dataset of 25,000 samples and 300 features in the 2-player setting, we are able to train 10 regression trees of depth 4 in less than 1.5 minutes per tree (using histograms of 128 bins).

  • Tight Security for Key-Alternating Ciphers with Correlated Sub-Keys
    by Stefano Tessaro on December 3, 2021 at 4:00 pm

    A substantial effort has been devoted to proving optimal bounds for the security of key-alternating ciphers with independent sub-keys in the random permutation model (e.g., Chen and Steinberger, EUROCRYPT '14; Hoang and Tessaro, CRYPTO '16). While common in the study of multi-round constructions, the assumption that sub-keys are truly independent is not realistic, as these are generally highly correlated and generated from shorter keys. In this paper, we show the existence of non-trivial distributions of limited independence for which a t-round key-alternating cipher achieves optimal security. Our work is a natural continuation of the work of Chen et al. (CRYPTO '14), which considered the case of t = 2 when all sub-keys are identical. Here, we show that key-alternating ciphers remain secure for a large class of (t-1)-wise and (t-2)-wise independent distributions of sub-keys. Our proofs proceed by generalizations of the so-called Sum-Capture Theorem, which we prove using Fourier-analytic techniques.
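
    For reference, the construction under study is standard: the $t$-round key-alternating cipher built from public random permutations $\pi_1,\dots,\pi_t$ and sub-keys $k_0,\dots,k_t$ is

        \mathrm{KAC}[k_0,\dots,k_t](x) \;=\; k_t \oplus \pi_t\Bigl(k_{t-1} \oplus \pi_{t-1}\bigl(\cdots \pi_1(k_0 \oplus x) \cdots\bigr)\Bigr),

    and the question here is how much correlation among $k_0,\dots,k_t$ (e.g., only $(t-1)$-wise independence) can be tolerated while retaining the known optimal security bound of roughly $2^{tn/(t+1)}$ queries.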

  • On Time-Lock Cryptographic Assumptions in Abelian Hidden-Order Groups
    by Aron van Baarsen on December 3, 2021 at 3:13 pm

    In this paper we study cryptographic finite abelian groups of unknown order and hardness assumptions in these groups. Abelian groups necessitate multiple group generators, which may be chosen at random. We formalize this setting and the hardness assumptions therein. Furthermore, we generalize the algebraic group model and strong algebraic group model from cyclic groups to arbitrary finite abelian groups of unknown order. Building on these formalizations, we present techniques to deal with this new setting, and prove new reductions. These results are relevant for class groups of imaginary quadratic number fields and time-lock cryptography built upon them.

  • RandChain: A Scalable and Fair Decentralised Randomness Beacon
    by Runchao Han on December 3, 2021 at 11:58 am

    We propose RANDCHAIN, a Decentralised Randomness Beacon (DRB) that is the first to achieve both scalability (i.e., a large number of participants can join) and fairness (i.e., each participant controls comparable power in deciding random outputs). Unlike existing DRBs where participants are collaborative, i.e., aggregating their local entropy into a single output, participants in RANDCHAIN are competitive, i.e., competing with each other to generate the next output. The competitive design reduces the communication complexity from at least $O(n^2)$ to $O(n)$ without a trusted party, breaking the scalability limit of existing DRBs. To build RANDCHAIN, we introduce Sequential Proof-of-Work (SeqPoW), a cryptographic puzzle that takes a random and unpredictable number of sequential steps to solve. We implement RANDCHAIN and evaluate its performance on up to 1024 nodes, demonstrating its superiority (1.3 seconds per output with a constant bandwidth of 200KB/s per node) compared to the state-of-the-art DRBs RandHerd (S&P’18) and HydRand (S&P’20).
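
    A toy model of the SeqPoW idea (ours; the paper's actual instantiations use sequential functions such as iterated squaring or verifiable delay functions, not plain hash chains): iterate an inherently sequential step until the state meets a difficulty threshold, so the number of steps is random and unpredictable in advance, yet the work cannot be parallelized.

        import hashlib

        def seqpow_solve(seed: bytes, difficulty_bits: int, max_steps: int = 1 << 20):
            """Toy sequential puzzle: each state depends on the previous one,
            and the number of steps until success is geometrically
            distributed, hence unpredictable."""
            state = hashlib.sha256(seed).digest()
            threshold = 1 << (256 - difficulty_bits)
            for step in range(1, max_steps + 1):
                state = hashlib.sha256(state).digest()   # sequential step
                if int.from_bytes(state, "big") < threshold:
                    return step, state                   # steps taken, solution
            return None

        # A node derives its puzzle from its identity and the last beacon output.
        print(seqpow_solve(b"node-42||prev-output", difficulty_bits=12))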

  • On Fingerprinting Attacks and Length-Hiding Encryption
    by Kai Gellert on December 3, 2021 at 11:51 am

    It is well known that even the length of encrypted messages may reveal sensitive information about the encrypted data. Fingerprinting attacks enable an adversary to determine the web pages visited by a user and even the language and phrases spoken in voice-over-IP conversations. Prior research has established the general perspective that length-hiding padding which is long enough to improve security significantly incurs an infeasibly large bandwidth overhead. We argue that this perspective is a consequence of the security models considered in prior works, which are based on classical indistinguishability of two messages, and that this does not reflect the attacker model of typical fingerprinting attacks well. Furthermore, these models restrict the attacker to choosing messages of bounded length difference, depending on a given length-hiding padding of the encryption scheme. This restriction seems difficult to enforce in practice, because application-layer protocols are typically unaware of the concrete length-hiding padding applied by an underlying encryption protocol, such as TLS. We also do not want to make application-layer messages dependent on the underlying encryption scheme, but instead want to provide length-hiding encryption that satisfies the requirements of the given application. Therefore we propose a new perspective on length-hiding encryption, which aims to capture security against fingerprinting attacks more accurately. This makes it possible to concretely quantify the security provided by length-hiding padding against fingerprinting attacks, depending on the real message distribution of an application. We find that for many real-world applications (such as webservers with static content, DNS requests, Google search terms, or Wikipedia page visits) and their specific message distributions, even length-hiding padding with a relatively small bandwidth overhead of only 2-5% can already significantly improve security against fingerprinting attacks. This gives rise to a new perspective on length-hiding encryption, which helps to understand how and under what conditions length-hiding encryption can be used to improve security.

  • Short Identity-Based Signatures with Tight Security from Lattices
    by Jiaxin Pan on December 3, 2021 at 10:52 am

    We construct a short and adaptively secure identity-based signature scheme tightly based on the well-known Short Integer Solution (SIS) assumption. Although identity-based signature schemes can be tightly constructed from either standard signature schemes secure against adaptive corruptions in the multi-user setting or a two-level hierarchical identity-based encryption scheme, neither is known with short signature size and tight security based on the SIS assumption. Here ``short'' means that the signature size is independent of the message length, in contrast to tree-based (tight) signatures. Our approach consists of two steps: Firstly, we give two generic transformations (one with random oracles and the other without) from non-adaptively secure identity-based signature schemes to adaptively secure ones, tightly. Our idea extends a similar transformation for digital signature schemes. Secondly, we construct a non-adaptively secure identity-based signature scheme based on the SIS assumption in the random oracle model.

  • ppSAT: Towards Two-Party Private SAT Solving
    by Ning Luo on December 3, 2021 at 8:00 am

    We design and implement a privacy-preserving Boolean satisfiability (ppSAT) solver, which allows mutually distrustful parties to evaluate the conjunction of their input formulas while maintaining privacy. We first define a family of security guarantees reconcilable with the (known) exponential complexity of SAT solving, and then construct an oblivious variant of the classic DPLL algorithm which can be integrated with existing secure two-party computation (2PC) techniques. We further observe that most known SAT solving heuristics are unsuitable for 2PC, as they are highly data-dependent in order to minimize the number of exploration steps. Faced with how best to trade off between the number of steps and the cost of obliviously executing each one, we design three efficient oblivious heuristics, one deterministic and two randomized. As a result of this effort we are able to evaluate our ppSAT solver on small but practical instances arising from the haplotype inference problem in bioinformatics. We conclude by looking towards future directions for making ppSAT solving more practical, most especially the integration of conflict-driven clause learning (CDCL).
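
    For orientation, here is a compact version of the classic (non-oblivious) DPLL procedure that the paper makes oblivious for 2PC; this is our own reference sketch, and the data-dependent branching in the last lines is exactly what makes the plain algorithm unsuitable for direct oblivious evaluation.

        def dpll(clauses, assignment=()):
            """Classic DPLL on CNF. clauses: list of frozensets of integer
            literals (negative = negated variable). Returns a satisfying
            tuple of literals, or None if unsatisfiable."""
            clauses = list(clauses)
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            while units:                       # unit propagation
                lit = units.pop()
                assignment += (lit,)
                new = []
                for c in clauses:
                    if lit in c:
                        continue               # clause already satisfied
                    c = c - {-lit}             # negated literal is falsified
                    if not c:
                        return None            # empty clause: conflict
                    if len(c) == 1:
                        units.append(next(iter(c)))
                    new.append(c)
                clauses = new
            if not clauses:
                return assignment              # every clause satisfied
            lit = next(iter(clauses[0]))       # branch: data-dependent choice!
            for choice in (lit, -lit):
                result = dpll(clauses + [frozenset({choice})], assignment)
                if result is not None:
                    return result
            return None

        print(dpll([frozenset({1, 2}), frozenset({-1, 3}), frozenset({-3})]))
        # -> a satisfying assignment such as (-3, -1, 2)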

  • Orientations and the supersingular endomorphism ring problem
    by Benjamin Wesolowski on December 3, 2021 at 7:59 am

    We study two important families of problems in isogeny-based cryptography and how they relate to each other: computing the endomorphism ring of supersingular elliptic curves, and inverting the action of class groups on oriented supersingular curves. We prove that these two families of problems are closely related through polynomial-time reductions, assuming the generalised Riemann hypothesis. We identify two classes of essentially equivalent problems. The first class corresponds to the problem of computing the endomorphism ring of oriented curves. The security of a large family of cryptosystems (such as CSIDH) reduces to (and sometimes from) this class, for which there are heuristic quantum algorithms running in subexponential time. The second class corresponds to computing the endomorphism ring of orientable curves. The security of essentially all isogeny-based cryptosystems reduces to (and sometimes from) this second class, for which the best known algorithms are still exponential. Some of our reductions not only generalise, but also strengthen previously known results. For instance, it was known that in the particular case of curves defined over $\mathbb F_p$, the security of CSIDH reduces to the endomorphism ring problem in subexponential time. Our reductions imply that the security of CSIDH is actually equivalent to the endomorphism ring problem, under polynomial time reductions (circumventing arguments that proved such reductions unlikely).

  • CoTree: Push the Limits of Conquerable Space in Collision-Optimized Side-Channel Attacks
    by Changhai Ou on December 3, 2021 at 7:59 am

    By introducing collision information into side-channel distinguishers, existing collision-optimized attacks exploit a collision detection algorithm to transform the original candidate space under consideration into a significantly smaller collision chain space, thus achieving more efficient key recovery. However, collision information is detected repeatedly, since collision chains are created from the same sub-chains, i.e., with the same candidates on their first several sub-keys. This worsens when more collision information is exploited. The existing collision detection algorithms try to alleviate this, but the problem remains serious. In this paper, we propose a highly efficient detection algorithm named Collision Tree (CoTree) for collision-optimized attacks. CoTree exploits a tree structure to store the chains created from the same sub-chain on the same branch. It then exploits a top-down tree building procedure and traverses each node only once when detecting its collisions with a candidate of the sub-key currently under consideration. Finally, it launches a bottom-up branch removal procedure to remove the chains that do not satisfy the collision conditions from the tree after traversing all candidates (within a given threshold) of this sub-key, thus avoiding re-traversal of the branches that satisfy the collision condition. These strategies make CoTree significantly alleviate repetitive collision detection, and our experiments verify that it significantly outperforms existing works.

  • Composable Notions for Anonymous and Authenticated Communication
    by Fabio Banfi on December 3, 2021 at 7:59 am

    The task of providing authenticated communication while retaining anonymity requires achieving two apparently conflicting goals: how can different senders authenticate their messages without revealing their identity? Despite the paradoxical nature of this problem, there exist many cryptographic schemes designed to achieve both goals simultaneously, but the security notions in the literature are mainly game-based. The goal of this paper is to provide new composable security notions for such (public-key) cryptosystems, and it can be interpreted as the dual of the work by Kohlweiss et al. (PETS 2013). We do so by defining possible ideal resources, for many senders and one receiver, which provide some trade-off between authenticity and anonymity (of the senders), and use them to define new composable security notions using the framework of constructive cryptography. We then systematically review three different protocols and identify which of these notions each satisfies. We consider protocols based on (1) a new type of scheme which we call bilateral signatures (syntactically related to designated verifier signatures), (2) partial signatures (and the related anonymous signatures), and (3) ring signatures.

  • High Order Countermeasures for Elliptic-Curve Implementations with Noisy Leakage Security
    by Sonia Belaïd on December 3, 2021 at 7:58 am

    Elliptic-curve implementations protected with state-of-the-art countermeasures against side-channel attacks might still be vulnerable to advanced attacks that recover secret information from a single leakage trace. The effectiveness of these attacks is boosted by the emergence of deep learning techniques for side-channel analysis, which relax the control or knowledge an adversary must have over the target implementation. In this paper, we provide generic countermeasures to withstand these attacks for a wide range of regular elliptic-curve implementations. We first introduce a framework to formally model a regular algebraic program, which consists of a sequence of algebraic operations indexed by key-dependent values. We then introduce a generic countermeasure to protect these types of programs against advanced single-trace side-channel attacks. Our scheme achieves provable security in the noisy leakage model under a formal assumption on the leakage of randomized variables. To demonstrate the applicability of our solution, we provide concrete examples for several widely deployed scalar multiplication algorithms and report some benchmarks for a protected implementation on a smart card.

  • Le Mans: Dynamic and Fluid MPC for Dishonest Majority
    by Rahul Rachuri on December 3, 2021 at 7:58 am

    Most MPC protocols require the set of parties to be active for the entire duration of the computation. Deploying MPC for use cases such as complex and resource-intensive scientific computations increases the barrier of entry for potential participants. The model of Fluid MPC (Crypto 2021) tackles this issue by giving parties the flexibility to participate in the protocol only when their resources are free. As such, the set of parties is dynamically changing over time. In this work, we extend Fluid MPC, which only considered an honest majority, to the setting where the majority of participants at any point in the computation may be corrupt. We do this by presenting variants of the SPDZ protocol, which support dynamic participants. Firstly, we describe a universal preprocessing for SPDZ, which allows a set of $n$ parties to compute some correlated randomness, such that later on, any subset of the parties can use this to take part in an online secure computation. We complement this with a Dynamic SPDZ online phase, designed to work with our universal preprocessing, as well as a protocol for securely realising the preprocessing. Our preprocessing protocol is designed to efficiently use pseudorandom correlation generators, thus, the parties' storage and communication costs can be almost independent of the function being evaluated. We then extend this to support a fluid online phase, where the set of parties can dynamically evolve during the online phase. Our protocol achieves maximal fluidity and security with abort, similarly to the previous, honest majority construction. Achieving this requires a careful design and techniques to guarantee a small state complexity, allowing us to switch between committees efficiently.

  • On Quantum Query Complexities of Collision-Finding in Non-Uniform Random Functions
    by Tianci Peng on December 3, 2021 at 7:58 am

    Collision resistance and collision finding are now extensively exploited in cryptography, especially in the case of quantum computing. For any function $f:[M]\to[N]$ with $f(x)$ uniformly distributed over $[N]$, Zhandry has shown that $\Theta(N^{1/3})$ queries are both necessary and sufficient for finding a collision in $f$ with constant probability. However, there is still a gap between the upper and the lower bounds of query complexity for general non-uniform distributions. In this paper, we investigate the quantum query complexity of the collision-finding problem with respect to general non-uniform distributions. Inspired by previous work, we introduce the concept of a collision domain and a new parameter $\gamma$ that heavily depends on the underlying non-uniform distribution. We then present a quantum algorithm that uses $O(\gamma^{1/6})$ quantum queries to find a collision for any non-uniform random function. By transforming a problem in the non-uniform setting into a problem in the uniform setting, we are also able to show that $\Omega(\gamma^{1/6}\log^{-1/2}\gamma)$ quantum queries are necessary for collision-finding in any non-uniform random function. Together, the upper and lower bounds in this work indicate that the proposed algorithm is nearly optimal in query complexity for the general non-uniform case.

  • SNARKBlock: Federated Anonymous Blocklisting from Hidden Common Input Aggregate Proofs
    by Michael Rosenberg on December 3, 2021 at 7:57 am

    Moderation is an essential tool to fight harassment and prevent spam. The use of strong user identities makes moderation easier, but trends towards strong identity pose serious privacy issues, especially when identities are linked across social media platforms. Zero-knowledge blocklists allow cross-platform blocking of users but, counter-intuitively, do not link users' identities inter- or intra-platform, or to the fact that they were blocked. Unfortunately, existing approaches (Tsang et al. '10) require that servers do work linear in the size of the blocklist for each verification of a non-membership proof. We design and implement SNARKBlock, a new protocol for zero-knowledge blocklisting with server-side verification that is logarithmic in the size of the blocklist. SNARKBlock is also the first approach to support ad-hoc, federated blocklisting: websites can mix and match their own blocklists with other blocklists and dynamically choose which identity providers they trust. Our core technical advance, of separate interest, is $\mathsf{HICIAP}$, a zero-knowledge proof that aggregates $n$ Groth16 proofs into one $O(\log n)$-sized proof which also shows that the input proofs share a common hidden input.

  • Shared Permutation for Syndrome Decoding: New Zero-Knowledge Protocol and Code-Based Signature
    by Thibauld Feneuil on December 3, 2021 at 7:57 am

    Zero-knowledge proofs are an important tool for many cryptographic protocols and applications. The threat of a coming quantum computer motivates the search for new zero-knowledge proof techniques for (or based on) post-quantum cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is the syndrome decoding (SD) of random linear codes. This problem is known to be NP-hard, and the cryptanalysis state of affairs has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. As a simple public-coin three-round protocol, it can be converted to a post-quantum signature scheme through the famous Fiat-Shamir transform. The main drawback of this protocol is its high soundness error of 2/3, meaning that it should be repeated $\approx 1.7\lambda$ times to reach $\lambda$-bit security. In this paper, we improve this three-decade-old state of affairs by introducing a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Our protocol achieves a soundness error of 1/n for an arbitrary n, in complexity O(n). Our construction requires the verifier to trust some of the variables sent by the prover, which can be ensured through a cut-and-choose approach. We provide an optimized version of our zero-knowledge protocol which achieves arbitrary soundness through parallel repetitions and a merged cut-and-choose phase. Turning this protocol into a signature scheme, we achieve a signature size of 17 KB for 128-bit security. This represents a significant improvement over previous constructions based on the syndrome decoding problem for random linear codes.
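
    The repetition counts above follow from simple arithmetic, worked out here for convenience: with per-round soundness error $\epsilon$, $r$ parallel repetitions leave a cheating probability of $\epsilon^r$, so

        (2/3)^r \le 2^{-\lambda} \;\iff\; r \ge \frac{\lambda}{\log_2(3/2)} \approx 1.71\,\lambda,
        \qquad\text{versus}\qquad
        (1/n)^r \le 2^{-\lambda} \;\iff\; r \ge \frac{\lambda}{\log_2 n}.

    For $\lambda = 128$ and, say, $n = 256$, that is 219 repetitions versus 16, which is where the smaller signature comes from (each repetition of the new protocol costs $O(n)$).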

  • Shorter Lattice-Based Group Signatures via ``Almost Free'' Encryption and Other Optimizations
    by Vadim Lyubashevsky on December 3, 2021 at 7:57 am

    We present an improved lattice-based group signature scheme whose parameter sizes and running times are independent of the group size. The signature length in our scheme is around $200$KB, approximately a $3$X reduction over the previously most compact such scheme based on any quantum-safe assumption, that of del Pino et al. (ACM CCS 2018). The improvement comes via several optimizations of basic cryptographic components that make up group signature schemes, and we think that they will find other applications in privacy-based lattice cryptography.

  • Ascon PRF, MAC, and Short-Input MAC
    by Christoph Dobraunig on December 3, 2021 at 7:56 am

    The cipher suite Ascon v1.2 already provides authenticated encryption schemes, hash, and extendable output functions. Furthermore, the underlying permutation is also used in two instances of Isap v2.0, an authenticated encryption scheme designed to provide enhanced robustness against side-channel and fault attacks. In this paper, we enrich the functionality one can get out of Ascon's permutation by providing efficient Pseudorandom Functions (PRFs), a Message Authentication Code (MAC) and a fast short-input PRF for messages up to 128 bits.

  • Improved Security Bound of \textsf{(E/D)WCDM}
    by Nilanjan Datta on December 3, 2021 at 7:56 am

    In CRYPTO'16, Cogliati and Seurin proposed a block-cipher-based nonce-based MAC, called {\em Encrypted Wegman-Carter with Davies-Meyer} (\textsf{EWCDM}), that gives $2n/3$-bit MAC security in the nonce-respecting setting and $n/2$-bit security in the nonce-misuse setting, where $n$ is the block size of the underlying block cipher. However, this construction requires two independent block cipher keys. In CRYPTO'18, Datta et al. came up with a single-keyed block-cipher-based nonce-based MAC, called {\em Decrypted Wegman-Carter with Davies-Meyer} (\textsf{DWCDM}), that also provides $2n/3$-bit MAC security in the nonce-respecting setting and $n/2$-bit security in the nonce-misuse setting. However, the drawback of \textsf{DWCDM} is that it takes only a $2n/3$-bit nonce. In fact, the authors have shown that \textsf{DWCDM} cannot achieve beyond-birthday-bound security with $n$-bit nonces. In this paper, we prove that \textsf{DWCDM} with $3n/4$-bit nonces provides MAC security up to $O(2^{3n/4})$ MAC queries against all nonce-respecting adversaries. We also improve the MAC bound of \textsf{EWCDM} from $2n/3$ bits to $3n/4$ bits. The backbone of these two results is a refined treatment of extended mirror theory that systematically estimates the number of solutions to a system of bivariate affine equations and non-equations, which we apply to the security proofs of the constructions to achieve $3n/4$-bit security.
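
    For reference, the two constructions combine a Davies-Meyer feed-forward of the nonce with a hash of the message (as recalled from the original papers; the abstract itself does not restate them):

        \textsf{EWCDM}_{K_1,K_2,K_h}(N,M) \;=\; E_{K_2}\bigl(E_{K_1}(N) \oplus N \oplus H_{K_h}(M)\bigr),
        \qquad
        \textsf{DWCDM}_{K,K_h}(N,M) \;=\; E^{-1}_{K}\bigl(E_{K}(N) \oplus N \oplus H_{K_h}(M)\bigr).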

  • Integral Attacks on Pyjamask-96 and Round-Reduced Pyjamask-128
    by Jiamin Cui on December 3, 2021 at 7:55 am

    In order to provide benefits in the areas of fully homomorphic encryption (FHE), multi-party computation (MPC), post-quantum signature schemes, and efficient masked implementations for side-channel resistance, reducing the number of multiplications has become a popular trend in the design of symmetric cryptographic primitives. With an aggressive design strategy exploiting an extremely simple, low-degree S-box and a low number of rounds, Pyjamask, the block cipher underlying the AEAD of the same name, has the smallest number of AND gates per bit among all existing block ciphers (except LowMC and Rasta, which work on unconventional plaintext/key sizes). Thus, although the AEAD Pyjamask did not advance beyond the second round of the NIST lightweight cryptography standardization process, the block cipher Pyjamask itself still attracts a lot of attention. Unsurprisingly, the low degree and the low number of rounds are Pyjamask's biggest weaknesses. At FSE 2020, Dobraunig et al. successfully mounted an algebraic and higher-order differential attack on full Pyjamask-96, one member of the Pyjamask block cipher family. However, the drawback of this attack is that it has to use the full codebook, which makes the attack less appealing. In this paper, we take integral attacks as our weapon, since they are also sensitive to low degree. Based on a new 11-round integral distinguisher found by state-of-the-art detection techniques, and combined with relations between round keys that reduce the number of involved key bits, we give the first key recovery attack on full Pyjamask-96 that does not require the full codebook. Furthermore, the algebraic and higher-order differential technique does not work for Pyjamask-128, the other member of the Pyjamask block cipher family. To better understand the security margin of Pyjamask-128, we present the first third-party cryptanalysis of Pyjamask-128, covering up to 11 out of 14 rounds.

  • Multicast Key Agreement, Revisited
    by Alexander Bienstock on December 3, 2021 at 7:54 am

    Multicast Key Agreement (MKA) is a long-overlooked natural primitive of large practical interest. In traditional MKA, an omniscient group manager privately distributes secrets over an untrusted network to a dynamically-changing set of group members. The group members are thus able to derive shared group secrets across time, with the main security requirement being that only current group members can derive the current group secret. There indeed exist very efficient MKA schemes in the literature that utilize symmetric-key cryptography. However, they lack formal security analyses, efficiency analyses regarding dynamically changing groups, and more modern, robust security guarantees regarding user state leakages: forward secrecy (FS) and post-compromise security (PCS). The former ensures that group secrets prior to state leakage remain secure, while the latter ensures that after such leakages, users can quickly recover security of group secrets via normal protocol operations. More modern Secure Group Messaging (SGM) protocols allow a group of users to asynchronously and securely communicate with each other, as well as add and remove each other from the group. SGM has received significant attention recently, including in an effort by the IETF Messaging Layer Security (MLS) working group to standardize an eponymous protocol. However, the group key agreement primitive at the core of SGM protocols, Continuous Group Key Agreement (CGKA), achieved by the TreeKEM protocol in MLS, suffers from bad worst-case efficiency and relies heavily on public-key cryptography, which is less efficient than symmetric-key cryptography. We thus propose that, in the special case of a group membership change policy which allows a single member to perform all group additions and removals, an upgraded version of classical Multicast Key Agreement (MKA) may serve as a more efficient substitute for CGKA in SGM. We therefore present rigorous, stronger MKA security definitions that provide increasing levels of security in the case of both user and group manager state leakage, and that are suitable for modern applications such as SGM. We then construct a formally secure MKA protocol with strong efficiency guarantees for dynamic groups. Finally, we run experiments which show that the left-balanced binary tree structure used in TreeKEM can be replaced with red-black trees in MKA for better efficiency.

  • Impeccable Circuits III
    by Shahram Rasoolzadeh on December 3, 2021 at 7:53 am

    As a recent fault-injection attack, SIFA defeats most of the known countermeasures. Although error-correcting codes have been shown to be effective against SIFA, they typically require large redundancy to correct even a few faulty bits. In this work, we propose a hybrid construction with the ability to detect and correct injected faults at the same time. We provide a general implementation methodology which guarantees the correction of up to $t_c$-bit faults and the detection of at most $t_d$ faulty bits. Exhaustive evaluation of our constructions with the open-source fault diagnostic tool VerFI indicates that our designs achieve the desired goals.
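
    As a toy illustration of simultaneous correction and detection (our own textbook example, not the paper's circuitry): a code of minimum distance $d$ can correct up to $t_c$ errors while detecting up to $t_d \geq t_c$ errors whenever $t_c + t_d < d$. The sketch below instantiates this with a length-5 repetition code.

        def decode_repetition(bits, t_c):
            # length-n repetition code (minimum distance n): decode to the
            # nearest codeword if within radius t_c, else flag a fault
            n, ones = len(bits), sum(bits)
            value, dist = (1, n - ones) if ones > n - ones else (0, ones)
            return value if dist <= t_c else None  # None = detected fault

        # n = 5, t_c = 1: corrects 1-bit faults, detects 2- and 3-bit faults
        assert decode_repetition([1, 1, 0, 1, 1], t_c=1) == 1
        assert decode_repetition([1, 1, 0, 0, 1], t_c=1) is None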

  • Taming the many EdDSAs
    by Konstantinos Chalkias on December 2, 2021 at 9:54 pm

    This paper analyses the security of concrete instantiations of EdDSA by identifying exploitable inconsistencies between standardization recommendations and Ed25519 implementations. We mainly focus on the current ambiguity regarding signature verification equations, binding and malleability guarantees, and incompatibilities between randomized batch and single verification. We give a formulation of the Ed25519 signature scheme that achieves the highest level of security, explaining how each step of the algorithm links with the formal security properties. We develop optimizations to allow for more efficient secure implementations. Finally, we designed a set of edge-case test vectors and ran them against some of the most popular Ed25519 libraries. The results allow us to assess the security level of those implementations and show that most libraries do not comply with the latest standardization recommendations. The methodology also allows testing the compatibility of different Ed25519 implementations, which is of practical importance for consensus-driven applications.
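
    The verification ambiguity at the heart of these inconsistencies is, in its standard form (cf. RFC 8032), the choice between the cofactorless and cofactored equations for a signature $(R, S)$ on a message $M$ under a public key $A$:

        $[S]B = R + [H(R \| A \| M)]A$              (cofactorless)
        $[8][S]B = [8]R + [8][H(R \| A \| M)]A$     (cofactored)

    where $B$ is the base point and $8$ is the cofactor of Curve25519. Signatures involving small-order components can pass one check and fail the other, which is exactly what breaks randomized batch versus single verification.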

  • Fully projective radical isogenies in constant-time
    by Jesús-Javier Chi-Domínguez on December 2, 2021 at 4:35 pm

    At PQCrypto-2020, Castryck and Decru proposed CSURF (CSIDH on the surface) as an improvement to the CSIDH protocol. Soon after that, at Asiacrypt-2020, together with Vercauteren, they introduced radical isogenies as a further improvement. The main advance in these works is that both CSURF and radical isogenies require only one torsion point to initiate a chain of isogenies, in comparison to Vélu isogenies, which require a torsion point per isogeny. Both works were implemented using non-constant-time techniques; however, in a realistic scenario, a constant-time implementation is necessary to mitigate the risk of timing attacks. The analysis of constant-time CSURF and radical isogenies was left as an open problem by Castryck, Decru, and Vercauteren, and we address it in this work. A straightforward constant-time implementation of CSURF and radical isogenies encounters too many issues to be cost-effective, but we resolve some of these issues with new optimization techniques. We introduce projective radical isogenies to save costly inversions and present a hybrid strategy for the integration of radical isogenies in CSIDH implementations. These improvements make radical isogenies almost twice as efficient in constant time, in terms of finite field multiplications. Using these improvements, we then measure the algorithmic performance in a benchmark of CSIDH, CSURF and CRADS (an implementation using radical isogenies) for different prime sizes. Our implementation provides a more accurate comparison between CSIDH, CSURF and CRADS than the original benchmarks, by using state-of-the-art techniques for all three implementations. Our experiments illustrate that the speed-up of constant-time CSURF-512 with radical isogenies is reduced to about 3% in comparison to the fastest state-of-the-art constant-time CSIDH-512 implementation. The performance is worse for larger primes, as radical isogenies scale worse than Vélu isogenies.

  • Digital Signatures with Memory-Tight Security in the Multi-Challenge Setting
    by Denis Diemert on December 2, 2021 at 3:45 pm

    The standard security notion for digital signatures is "single-challenge" (SC) EUF-CMA security, where the adversary outputs a single message-signature pair and "wins" if it is a forgery. Auerbach et al. (CRYPTO 2017) introduced memory-tightness of reductions and argued that the right security goal in this setting is actually a stronger "multi-challenge" (MC) definition, where an adversary may output many message-signature pairs and "wins" if at least one is a forgery. Currently, no construction from simple standard assumptions is known to achieve full tightness with respect to time, success probability, and memory simultaneously. Previous works showed that memory-tight signatures cannot be achieved via certain natural classes of reductions (Auerbach et al., CRYPTO 2017; Wang et al., EUROCRYPT 2018). These impossibility results may give the impression that constructing memory-tight signatures is difficult or even impossible. We show that this impression is false, by giving the first constructions of signature schemes with full tightness in all dimensions in the MC setting. To circumvent the known impossibility results, we first introduce the notion of canonical reductions in the SC setting. We prove a general theorem establishing that every signature scheme with a canonical reduction is already memory-tightly secure in the MC setting, provided that it is strongly unforgeable and the adversary receives only one signature per message, and assuming the existence of a tightly-secure pseudorandom function. We then achieve memory-tight many-signatures-per-message security in the MC setting by a simple additional generic transformation. This yields the first memory-tight, strongly EUF-CMA-secure signature schemes in the MC setting. Finally, we show that standard security proofs can often already be viewed as canonical reductions. Concretely, we show this for signatures from lossy identification schemes (Abdalla et al., EUROCRYPT 2012), two variants of RSA Full-Domain Hash (Bellare and Rogaway, EUROCRYPT 1996), and two variants of BLS signatures (Boneh et al., ASIACRYPT 2001).

  • Polynomial IOPs for Linear Algebra Relations
    by Alan Szepieniec on December 2, 2021 at 2:23 pm

    This paper proposes new Polynomial IOPs for arithmetic circuits. They rely on the monomial coefficient basis to represent the matrices and vectors arising from the arithmetic constraint satisfaction system, and build on new protocols for establishing the correct computation of linear algebra relations such as matrix-vector products and Hadamard products. Our protocols give rise to concrete proof systems with succinct verification when compiled down with a cryptographic compiler whose role is abstracted away in this paper. Depending only on the compiler, the resulting SNARKs are either transparent or rely on a trusted setup.

  • Compressed Permutation Oracles (And the Collision-Resistance of Sponge/SHA3)
    by Dominique Unruh on December 2, 2021 at 1:36 pm

    We generalize Zhandry's compressed oracle technique to invertible random permutations. (That is, to a quantum random oracle where the adversary has access to a random permutation and its inverse.) This enables security proofs with lazy sampling, i.e., where oracle outputs are chosen only when needed. As an application of our technique, we show the collision-resistance of the sponge construction based on invertible permutations. In particular, this shows the collision-resistance of SHA3 (in the random oracle model).

  • Some remarks on how to hash faster onto elliptic curves
    by Dmitrii Koshelev on December 2, 2021 at 1:25 pm

    In this article we propose three optimizations of indifferentiable hashing onto (prime-order subgroups of) ordinary elliptic curves over finite fields $\mathbb{F}_{\!q}$. One of them applies to elliptic curves $E$ provided that $q \equiv 11 \ (\mathrm{mod} \ 12)$. The other two optimizations take place, respectively, for the subgroups $\mathbb{G}_1$, $\mathbb{G}_2$ of some pairing-friendly curves. The performance gain comes from the smaller number of required exponentiations in $\mathbb{F}_{\!q}$ for hashing to $E(\mathbb{F}_{\!q})$ and $\mathbb{G}_2$ (resp. from avoiding the need to hash directly onto $\mathbb{G}_1$). In particular, our results affect the pairing-friendly curve BLS12-381 (currently the most popular in practice) and the (unique) French curve FRP256v1, as well as almost all Russian standardized curves and a few from the draft NIST SP 800-186.

  • An Efficient Transformation Capabilities of Single Database Private Block Retrieval
    by Radhakrishna Bhat on December 2, 2021 at 10:39 am

    Private Information Retrieval (PIR) is one of the promising techniques to preserve user privacy in the presence of trusted-but-curious servers. An information-theoretically private query construction assures the highest level of user privacy against curious and computationally unbounded servers. The need for information-theoretic private retrieval has therefore been addressed by various schemes in a variety of PIR settings. To augment previous work, we propose a combination of new bit connection methods, called rail-shape and signal-shape, and a new family of trapdoor functions based on the quadratic residuosity assumption, for generic single-database Private Block Retrieval (PBR). The main goal of this work is to show the possibility of mapping from computationally bounded privacy to information-theoretic privacy, or vice versa, in a single-database setting using the newly constructed bit connection and trapdoor function combinations. The proposed combinations achieve the following results. • Single-database information-theoretic PBR (SitPBR): The proposed combinations are used to construct SitPBR, in which user privacy is preserved through the generation of information-theoretic queries and data privacy is preserved under the quadratic residuosity assumption. • Single-database computationally bounded PBR (ScPBR): The proposed combinations are used to construct ScPBR, in which both user privacy and data privacy are preserved under the well-known quadratic residuosity intractability assumption. • Map(SitPBR)→ScPBR: The proposed combinations can be used to transform (or map) SitPBR into ScPBR by choosing appropriate function parameters. • Map(ScPBR)→SitPBR: The proposed combinations can be used to transform (or map) ScPBR into SitPBR by choosing appropriate function parameters. All the proposed schemes are single-round, memoryless, plain-database schemes (in their basic constructions).
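
    To make the quadratic residuosity mechanism concrete, here is a minimal sketch of the classical single-database idea in the Goldwasser-Micali/Kushilevitz-Ostrovsky style (ours, for retrieving a single bit; the paper's block retrieval and bit connection machinery goes well beyond this): the user sends one group element per position, a quadratic non-residue at the target index and random quadratic residues elsewhere, and reads the answer off with the trapdoor, i.e. the factorization of $n$.

        import secrets
        from math import gcd

        def rand_qr(n: int) -> int:
            # random quadratic residue modulo n
            r = secrets.randbelow(n - 2) + 2
            while gcd(r, n) != 1:
                r = secrets.randbelow(n - 2) + 2
            return r * r % n

        def is_qr(x: int, p: int, q: int) -> bool:
            # trapdoor test: x is a QR mod n = p*q iff it is a QR mod p and mod q
            return pow(x, (p - 1) // 2, p) == 1 and pow(x, (q - 1) // 2, q) == 1

        # -- user side: trapdoor keygen and query --------------------------------
        p, q = 103, 107          # toy primes (real deployments: 1024+ bits each)
        n = p * q
        # a non-residue mod both primes (Jacobi symbol +1), as QRA-privacy requires
        qnr = next(x for x in range(2, n)
                   if pow(x, (p - 1) // 2, p) == p - 1
                   and pow(x, (q - 1) // 2, q) == q - 1)

        j = 3                    # index of the bit the user wants
        db = [1, 0, 1, 1, 0, 0, 1, 0]
        query = [qnr * rand_qr(n) % n if t == j else rand_qr(n) for t in range(len(db))]

        # -- server side: oblivious answer ---------------------------------------
        z = 1
        for bit, x in zip(db, query):
            if bit:
                z = z * x % n    # z is a QR iff db[j] = 0

        # -- user side: decode with the trapdoor ---------------------------------
        assert (0 if is_qr(z, p, q) else 1) == db[j]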

  • Symmetric Key Exchange with Full Forward Security and Robust Synchronization
    by Colin Boyd on December 2, 2021 at 10:21 am

    We construct lightweight authenticated key exchange protocols based on pre-shared keys, which achieve full forward security and rely only on simple and efficient symmetric-key primitives. All of our protocols have rigorous security proofs in a strong security model, all have low communication complexity, and all are particularly suitable for resource-constrained devices. We describe three protocols that apply linear key evolution to provide different performance and security properties. Correctness in parallel and concurrent protocol sessions is difficult to achieve for linearly key-evolving protocols, emphasizing the need for assurance of availability alongside the usual confidentiality and authentication security goals. We introduce synchronization robustness as a new formal security goal, which essentially guarantees that parties can re-synchronize efficiently. All of our new protocols achieve this property. Since protocols based on linear key evolution cannot guarantee that all concurrently initiated sessions successfully derive a key, we also propose two constructions with non-linear key evolution based on puncturable PRFs. These are instantiable from standard hash functions and require $O(C \log |CTR|)$ memory, where $C$ is the number of concurrent sessions and $|CTR|$ is an upper bound on the total number of sessions per party. These are the first protocols to simultaneously achieve full forward security, synchronization robustness, and concurrent correctness.
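
    As a minimal sketch of linear key evolution (ours, not the paper's protocols): each epoch key is derived one-way from the previous one, so deleting old keys yields forward security, and the synchronization problem arises precisely because both parties must advance the chain in step.

        import hashlib

        class LinearKeyChain:
            def __init__(self, psk: bytes):
                self.key, self.epoch = psk, 0

            def step(self) -> None:
                # one-way evolution: the current key reveals nothing about past keys
                self.key = hashlib.sha256(b"evolve" + self.key).digest()
                self.epoch += 1

            def session_key(self) -> bytes:
                # per-epoch session key, domain-separated from the chaining key
                return hashlib.sha256(b"session" + self.key).digest()

            def resync(self, peer_epoch: int) -> None:
                # naive synchronization robustness: fast-forward to the peer
                while self.epoch < peer_epoch:
                    self.step()

        a, b = LinearKeyChain(b"pre-shared key"), LinearKeyChain(b"pre-shared key")
        a.step(); a.step()           # A has advanced; B lags behind
        b.resync(a.epoch)
        assert a.session_key() == b.session_key()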

  • Towards Practical and Round-Optimal Lattice-Based Threshold and Blind Signatures
    by Shweta Agrawal on December 2, 2021 at 9:00 am

    Threshold and blind signature schemes have found numerous applications in cryptocurrencies, e-cash, e-voting and other privacy-preserving technologies. In this work, we make advances in bringing lattice-based constructions for these primitives closer to practice. 1. Threshold Signatures. For round-optimal threshold signatures, we improve the only known construction, by Boneh et al. [CRYPTO'18], as follows: a. Efficiency. We reduce the amount of noise flooding from $2^{\Omega(\lambda)}$ down to $\sqrt{Q_S}$, where $Q_S$ is the bound on the number of generated signatures and $\lambda$ is the security parameter. Using lattice hardness assumptions over polynomial rings, this allows us to decrease signature bit-lengths from $\widetilde{O}(\lambda^3)$ to $\widetilde{O}(\lambda)$. b. Towards Adaptive Security. The construction of Boneh et al. satisfies only selective security, where all the corrupted parties must be announced before any signing queries are made. We improve this in two ways: in the ROM, we obtain partial adaptivity, where signing queries can be made before the corrupted parties are announced but the set of corrupted parties must be announced all at once. In the standard model, we obtain full adaptivity, where parties can be corrupted at any time, but this construction is in a weaker pre-processing model, where signers must be provided correlated randomness of length proportional to the number of signatures in an offline pre-processing phase. 2. Blind Signatures. For blind signatures, we improve the state-of-the-art lattice-based construction of Hauck et al. [CRYPTO'20] as follows: a. Round Complexity. We improve the round complexity from three to two -- this is optimal. b. Efficiency. Again, we reduce the amount of noise flooding from $2^{\Omega(\lambda)}$ down to $\sqrt{Q_S}$, where $Q_S$ is the bound on the number of signatures and $\lambda$ is the security parameter. c. Number of Signing Queries. Unlike the scheme of Hauck et al., our construction enjoys a proof that is not restricted to a polylogarithmic number of signatures. Using lattice hardness assumptions over rings, we obtain signatures of bit-length bounded by $\widetilde{O}(\lambda)$. In contrast, the signature bit-length in the scheme of Hauck et al. is $\Omega(\lambda^3 + Q_S \cdot \lambda)$. Concretely, we can obtain blind/threshold signatures of size $\approx 3$ KB using a variant of Dilithium-G with $\approx 128$-bit security, for adversaries limited to getting $256$ signatures. In contrast, the parameters provided by Hauck et al. lead to blind signatures of $\approx 7.73$ MB for adversaries limited to getting 7 signatures, while concrete parameters are not provided for the threshold signature construction of Boneh et al.

  • Towards Post-Quantum Security for Cyber-Physical Systems: Integrating PQC into Industrial M2M Communication
    by Sebastian Paul on December 2, 2021 at 8:53 am

    The threat of a cryptographically relevant quantum computer contributes to an increasing interest in the field of post-quantum cryptography (PQC). Compared to existing research efforts regarding the integration of PQC into the Transport Layer Security (TLS) protocol, industrial communication protocols have so far been neglected. Since industrial cyber-physical systems (CPS) are typically deployed for decades, protection against such long-term threats is needed. In this work, we propose two novel solutions for the integration of post-quantum (PQ) primitives (digital signatures and key establishment) into the industrial protocol Open Platform Communications Unified Architecture (OPC UA): a hybrid solution combining conventional cryptography with PQC, and a solution based solely on PQC. Both approaches provide mutual authentication between client and server and are realized with certificates fully compliant with the X.509 standard. We implement the two solutions and measure and evaluate their performance across three different security levels. All selected algorithms (Kyber, Dilithium, and Falcon) are candidates for standardization by the National Institute of Standards and Technology (NIST). We show that Falcon is a suitable option, especially when the floating-point hardware provided by our ARM-based evaluation platform can be used. Our proposed hybrid solution provides PQ security for early adopters but comes with additional performance and communication requirements. Our PQC-only solution shows superior performance across all evaluated security levels in terms of handshake duration compared to conventional OPC UA, but comes at the cost of increased handshake sizes. In addition to our performance evaluation, we provide a proof of security in the symbolic model for our two PQC-based variants of OPC UA. For this proof, we use the cryptographic protocol verifier ProVerif and formally verify the confidentiality and authentication properties of our quantum-resistant variants.

  • Polar Sampler: Discrete Gaussian Sampling over the Integers Using Polar Codes
    by Jiabo Wang on December 2, 2021 at 4:53 am

    Cryptographic constructions based on hard lattice problems have emerged as a front runner for the standardization of post-quantum public-key cryptography. As the standardization process takes place, optimizing specific parts of proposed schemes, e.g., Bernoulli sampling and integer Gaussian sampling, becomes a worthwhile endeavor. In this work, we propose a novel Bernoulli sampler based on polar codes, dubbed the ``polar sampler''. The polar sampler is information-theoretically optimal in the sense that the number of uniformly random bits it consumes approaches the entropy bound asymptotically. It also features quasi-linear complexity and a constant-time implementation. An integer Gaussian sampler is then developed using multilevel polar samplers. Our algorithm becomes effective when sufficiently many samples are required at each query to the sampler. Security analysis is given based on the Kullback-Leibler divergence and R\'enyi divergence. Experimental and asymptotic comparisons between our integer Gaussian sampler and state-of-the-art samplers verify its efficiency in terms of entropy consumption, running time and memory cost. We envisage that the proposed Bernoulli sampler can find other applications in cryptography in addition to Gaussian sampling.
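
    For contrast, the textbook entropy-efficient Bernoulli sampler (our illustration, not the paper's polar-code construction) compares uniform bits lazily against the binary expansion of $p$; it consumes 2 uniform bits per sample on average, whereas the polar sampler amortizes toward the entropy bound $H(p)$ over many samples.

        import secrets

        def bernoulli(p_bits) -> int:
            # p = sum_i p_bits[i] * 2^{-(i+1)}; returns 1 with probability p.
            # Samples a uniform u in [0,1) bit by bit and outputs [u < p],
            # stopping at the first index where the bits differ.
            i = 0
            while True:
                b = secrets.randbits(1)
                pb = p_bits[i] if i < len(p_bits) else 0
                if b != pb:
                    return int(b < pb)
                i += 1

        # p = 0.375 = (0.011)_2
        print(sum(bernoulli([0, 1, 1]) for _ in range(100000)) / 100000)  # ~0.375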

  • Structural and Statistical Analysis of Multidimensional Linear Approximations of Random Functions and Permutations
    by Tomer Ashur on December 2, 2021 at 2:42 am

    The goal of this paper is to investigate linear approximations of random functions and permutations. Our motivation is twofold. First, before the distinguishability of a practical cipher from an ideal one can be analysed, the cryptanalyst must have an accurate understanding of the statistical behaviour of the ideal cipher. Second, this issue has been neglected both in old and in more recent studies, particularly when multiple linear approximations are used simultaneously. Traditional models have been based on average behaviour and simplified using further assumptions such as independence of the linear approximations. Multidimensional cryptanalysis was introduced to avoid making artificial assumptions about the statistical independence of linear approximations. On the other hand, it has the drawback of including many trivial approximations that do not contribute to the attack but merely waste time and memory. We show for the first time in this paper that the trivial approximations reduce the degrees of freedom of the related $\chi^2$ distribution. Previously, affine linear cryptanalysis was proposed to allow removing trivial approximations while, at the same time, admitting a solid statistical model. In this paper, we identify another type of multidimensional linear approximation, called the Davies-Meyer approximation, which has similar advantages, and present full statistical models for both the affine and the Davies-Meyer type of multidimensional linear approximations. The new models given in this paper are realistic, accurate and easy to use. They are backed up by standard statistical tools such as Pearson's $\chi^2$ test and the finite population correction, and are demonstrated to work accurately on practical examples.

  • Towards Using Blockchain Technology to Prevent Diploma Fraud
    by Qiang Tang on December 2, 2021 at 2:41 am

    After its debut with Bitcoin in 2009, blockchain has attracted enormous attention and has been used in many different applications as a trusted black box. Many applications focus on exploiting blockchain-native features (e.g. trust from consensus, and smart contracts) while paying less attention to application-specific requirements. In this paper, we initiate a systematic study of applications in the education and training sector, where blockchain is leveraged to combat diploma fraud. We present a general system structure for digitized diploma management systems and identify both functional and non-functional requirements. Our analysis shows that all existing blockchain-based systems fall short of meeting these requirements. Inspired by this analysis, we propose a blockchain-facilitated solution by leveraging some basic cryptographic primitives and data structures. Follow-up analysis shows that our solution respects all the identified requirements very well.

  • Can Round-Optimal Lattice-Based Blind Signatures be Practical?
    by Shweta Agrawal on December 2, 2021 at 2:40 am

    Blind signatures have numerous applications in privacy-preserving technologies. While there exist many practical blind signatures from number-theoretic assumptions, the situation is far less satisfactory for post-quantum assumptions. In this work, we make advances towards making lattice-based blind signatures practical. We introduce two round-optimal constructions in the random oracle model, and provide guidance towards their concrete realization as well as efficiency estimates. The first scheme relies on the homomorphic evaluation of a lattice-based signature scheme. This requires an ${\sf HE}$-compatible lattice-based signature. For this purpose, we show that the rejection step in Lyubashevsky's signature is unnecessary if the working modulus grows linearly in $\sqrt{Q}$, where $Q$ is an a priori bound on the number of signature queries. Compared to the state-of-the-art scheme of Hauck et al. [CRYPTO'20], this blind signature compares very favorably in all aspects except signer cost. Compared to a lattice-based instantiation of Fischlin's generic construction, it is much less demanding on the user and verifier sides. The second scheme relies on the Gentry, Peikert and Vaikuntanathan signature [STOC'08] and non-interactive zero-knowledge proofs for linear relations with small unknowns, which are significantly more efficient than their general-purpose counterparts. Its security stems from a new and arguably natural assumption which we introduce: ${\sf one}$-${\sf more}$-${\sf ISIS}$. This assumption can be seen as a lattice analogue of the one-more-RSA assumption of Bellare et al. [JoC'03]. To gain confidence, we provide a detailed overview of diverse attack strategies. The resulting blind signature outperforms all the aforementioned schemes from most angles and achieves practical overall performance.

  • Communication-Efficient Proactive MPC for Dynamic Groups with Dishonest Majorities
    by Karim Eldefrawy on December 2, 2021 at 2:38 am

    Secure multiparty computation (MPC) has recently been increasingly adopted to secure cryptographic keys in enterprises, cloud infrastructure, and cryptocurrency and blockchain-related settings such as wallets and exchanges. Using MPC in blockchains and other distributed systems highlights the need to consider dynamic settings. In such dynamic settings, parties, and potentially even parameters of underlying secret sharing and corruption tolerance thresholds of sub-protocols, may change over the lifetime of the protocol. In particular, stronger threat models -- in which \emph{mobile} adversaries control a changing set of parties (up to $t$ out of $n$ involved parties at any instant), and may eventually corrupt \emph{all $n$ parties} over the course of a protocol's execution -- are becoming increasingly important for such real world deployments; secure protocols designed for such models are known as Proactive MPC (PMPC). In this work, we construct the first efficient PMPC protocol for \emph{dynamic} groups (where the set of parties changes over time) secure against a \emph{dishonest majority} of parties. Our PMPC protocol only requires $O(n^2)$ (amortized) communication per secret, compared to existing PMPC protocols that require $O(n^4)$ and only consider static groups with dishonest majorities. At the core of our PMPC protocol is a new efficient technique to perform multiplication of secret shared data (shared using a bivariate scheme) with $O(n \sqrt{n})$ communication with security against a dishonest majority without requiring pre-computation. We also develop a new efficient bivariate batched proactive secret sharing (PSS) protocol for dishonest majorities, which may be of independent interest. This protocol enables multiple dealers to contribute different secrets that are efficiently shared together in one batch; previous batched PSS schemes required all secrets to come from a single dealer.

  • Proof-Carrying Data without Succinct Arguments
    by Benedikt Bünz on December 1, 2021 at 9:22 pm

    Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce. Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD. Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments. Our results are supported by a modular and efficient implementation.

  • Multiradical isogenies
    by Wouter Castryck on December 1, 2021 at 1:11 pm

    We argue that for all integers $N \geq 2$ and $g \geq 1$ there exist "multiradical" isogeny formulae, that can be iteratively applied to compute $(N^k, \ldots, N^k)$-isogenies between principally polarized $g$-dimensional abelian varieties, for any value of $k \geq 2$. The formulae are complete: each iteration involves the extraction of $g(g+1)/2$ different $N$th roots, whence the epithet multiradical, and by varying which roots are chosen one computes all $N^{g(g+1)/2}$ extensions to an $(N^k, \ldots, N^k)$-isogeny of the incoming $(N^{k-1}, \ldots, N^{k-1})$-isogeny. Our group-theoretic argumentation is heuristic, but it is supported by concrete formulae for several prominent families. As our main application, we illustrate the use of multiradical isogenies by implementing a hash function from $(3,3)$-isogenies between Jacobians of superspecial genus-$2$ curves, showing that it outperforms its $(2,2)$-counterpart by an asymptotic factor $\approx 9$ in terms of speed.

  • Delegating Supersingular Isogenies over $\mathbb{F}_{p^2}$ with Cryptographic Applications
    by Robi Pedersen on December 1, 2021 at 5:53 am

    Although isogeny-based cryptographic schemes enjoy the lowest key sizes amongst current post-quantum cryptographic candidates, they unfortunately come at a high computational cost, which makes their deployment on the ever-growing number of resource-constrained devices difficult. Speeding up the expensive post-quantum cryptographic operations by delegating these computations from a weaker client to untrusted powerful external servers is a promising approach. Following this, we present in this work mechanisms allowing computationally restricted devices to securely and verifiably delegate isogeny computations to potentially untrusted third parties. In particular, we propose two algorithms that can be seamlessly integrated into existing isogeny-based protocols and which lead to a much lower cost for the delegator than the full, local computation. For example, compared to the local computation cost, at NIST security level 1 we reduce the public-key computation step of SIDH/SIKE by a factor of 5, and the prover's cost in the zero-knowledge proof of identity of Jao and De Feo by a factor of 16, while the verifier's part becomes almost free.

  • Convexity of division property transitions: theory, algorithms and compact models
    by Aleksei Udovenko on November 30, 2021 at 8:47 pm

    Integral cryptanalysis is a powerful tool for attacking symmetric primitives, and division property is a state-of-the-art framework for finding integral distinguishers. This work describes new theoretical and practical insights into traditional bit-based division property. We focus on analyzing and exploiting the monotonicity/convexity of division property and its relation to the graph indicator. In particular, our investigation leads to a new compact representation of propagation, which allows CNF/MILP modeling for larger S-boxes, such as 16-bit Super-Sboxes of lightweight block ciphers or even 32-bit random S-boxes. This solves the challenge posed by Derbez and Fouque (ToSC 2020), who questioned the possibility of SAT/SMT/MILP modeling of 16-bit Super-Sboxes. As a proof of concept, we model the Super-Sboxes of the 8-round LED by CNF formulas, which was not feasible with any previous approach. Our analysis is further supported by an elegant algorithmic framework. We describe simple algorithms for computing the division property of a set of $n$-bit vectors in time $O(n2^n)$, reducing such sets to their minimal/maximal elements in time $O(n2^n)$, and computing the division property propagation table of an $n\times m$-bit S-box and its compact representation in time $O((n+m)2^{n+m})$. In addition, we develop an advanced algorithm tailored to "heavy" bijections, allowing us to model, for example, a randomly generated 32-bit S-box.
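
    To make the order-theoretic language concrete, the snippet below (a naive quadratic-time baseline of ours, not the paper's $O(n2^n)$ algorithm) reduces a set of bit vectors to its minimal elements under the componentwise order used throughout division property.

        def minimal_elements(vecs):
            # bit vectors encoded as ints; u <= v componentwise iff u & v == u
            vs = set(vecs)
            return [v for v in vs if not any(u != v and (u & v) == u for u in vs)]

        # 0b111 dominates both 0b011 and 0b101, so it is removed
        print(sorted(bin(v) for v in minimal_elements([0b011, 0b101, 0b111])))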

  • Proximity Searchable Encryption for the Iris Biometric
    by Chloe Cachet on November 30, 2021 at 6:47 pm

    Biometric databases collect people's information and allow users to perform proximity searches (finding all records within a bounded distance of the query point) with few cryptographic protections. This work studies proximity searchable encryption applied to the iris biometric. Prior work proposed inner product functional encryption as a technique to build proximity biometric databases (Kim et al., SCN 2018), because binary Hamming distance is computable using an inner product. This work identifies and closes two gaps in using inner product encryption for biometric search: - Biometrics naturally use long vectors, often with thousands of bits. Many inner product encryption schemes generate a random matrix whose dimension scales with the vector size and have to invert this matrix. As a result, setup is not feasible on commodity hardware unless we reduce the dimension of the vectors. We explore state-of-the-art techniques for reducing the dimension of the iris biometric and show that all known techniques harm the accuracy of the resulting system: for small vector sizes, multiple unrelated biometrics are returned in the search. For length-64 vectors, at a 90% probability of the searched biometric being returned, 10% of stored records are erroneously returned on average. Rather than changing the feature extractor, we introduce a new cryptographic technique that allows one to generate several smaller matrices. For vectors of length 1024, this reduces the time to run setup from 30 days to 4 minutes. At this vector length, for the same 90% probability of the searched biometric being returned, 0.02% of stored records are erroneously returned on average. - Prior inner product approaches leak the distance between the query and all stored records; we refer to these as distance-revealing. We show a natural construction from function-hiding, secret-key, predicate, inner product encryption (Shen, Shi, and Waters, TCC 2009). Our construction only leaks access patterns and which returned records are the same distance from the query; we refer to this scheme as distance-hiding. We implement and benchmark one distance-revealing and one distance-hiding scheme. The distance-revealing scheme can search a small (hundreds of records) database in 4 minutes, while the distance-hiding scheme is not yet practical, requiring 4 hours.
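
    The identity behind the inner-product approach is elementary: for binary vectors, $HD(x,y) = wt(x) + wt(y) - 2\langle x,y\rangle$, so a scheme that reveals $\langle x,y\rangle$ (plus the public weights) reveals the Hamming distance. A two-line check (ours):

        def hamming_via_ip(x, y):
            # x, y: equal-length 0/1 lists; HD = wt(x) + wt(y) - 2 * <x, y>
            return sum(x) + sum(y) - 2 * sum(a * b for a, b in zip(x, y))

        assert hamming_via_ip([1, 0, 1, 1], [1, 1, 0, 1]) == 2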

  • Oblivious Key-Value Stores and Amplification for Private Set Intersection
    by Gayathri Garimella on November 30, 2021 at 11:01 am

    Many recent private set intersection (PSI) protocols encode input sets as polynomials. We consider the more general notion of an oblivious key-value store (OKVS), which is a data structure that compactly represents a desired mapping $k_i \mapsto v_i$. When the $v_i$ values are random, the OKVS data structure hides the $k_i$ values that were used to generate it. The simplest (and size-optimal) OKVS is a polynomial $p$ chosen by interpolation such that $p(k_i)=v_i$. We initiate the formal study of oblivious key-value stores, and show new constructions resulting in the fastest OKVS to date. Similarly to cuckoo hashing, current analysis techniques are insufficient for finding {\em concrete} parameters that guarantee a small failure probability for our OKVS constructions. Moreover, it would cost too much to run experiments to validate a small upper bound on the failure probability. We therefore show novel techniques to amplify an OKVS construction with failure probability $p$ into an OKVS with similar overhead and failure probability $p^c$. Setting $p$ to be moderately small makes it possible to validate it by running a relatively small number, $O(1/p)$, of experiments, which in turn validates a $p^c$ failure probability for the amplified OKVS. Finally, we describe how OKVS can significantly improve the state of the art of essentially all variants of PSI. This leads to the fastest two-party PSI protocols to date, for both the semi-honest and the malicious settings. Specifically, in networks with moderate bandwidth (e.g., 30 - 300 Mbps) our malicious two-party PSI protocol has 40\% less communication and is 20-40\% faster than the previous state-of-the-art protocol, even though the latter only has heuristic confidence.
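
    Here is a minimal sketch (ours) of the size-optimal polynomial OKVS described above, over a prime field: encoding interpolates $p(k_i) = v_i$ and outputs the coefficient vector; decoding evaluates the polynomial. With random $v_i$, the coefficients are distributed independently of which keys were encoded.

        P = 2**61 - 1  # a Mersenne prime, chosen here for convenience

        def poly_mul(a, b):
            # schoolbook product of coefficient lists (constant term first)
            out = [0] * (len(a) + len(b) - 1)
            for i, x in enumerate(a):
                for j, y in enumerate(b):
                    out[i + j] = (out[i + j] + x * y) % P
            return out

        def okvs_encode(pairs):
            # Lagrange interpolation, O(n^2) for illustration only
            coeffs = [0] * len(pairs)
            for i, (ki, vi) in enumerate(pairs):
                num, denom = [1], 1
                for j, (kj, _) in enumerate(pairs):
                    if j != i:
                        num = poly_mul(num, [(-kj) % P, 1])
                        denom = denom * (ki - kj) % P
                scale = vi * pow(denom, P - 2, P) % P  # vi / denom mod P
                for d, c in enumerate(num):
                    coeffs[d] = (coeffs[d] + c * scale) % P
            return coeffs

        def okvs_decode(coeffs, k):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation
                acc = (acc * k + c) % P
            return acc

        pairs = [(3, 11), (7, 5), (9, 2)]
        store = okvs_encode(pairs)
        assert all(okvs_decode(store, k) == v for k, v in pairs)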

  • Log-S-unit lattices using Explicit Stickelberger Generators to solve Approx Ideal-SVP
    by Olivier Bernard on November 29, 2021 at 11:27 pm

    In 2020, Bernard and Roux-Langlois introduced the Twisted-PHS algorithm to solve Approx-SVP for ideal lattices in any number field, based on the PHS algorithm by Pellet-Mary, Hanrot and Stehlé from 2019. They performed experiments for cyclotomic fields of prime conductor and degree at most 70, reporting the approximation factors reached in practice. The main obstacle for these experiments is the computation of a log-$\mathcal{S}$-unit lattice, which requires classical subexponential time. In this paper, our main contribution is to extend these experiments to 210 cyclotomic fields of any conductor $m$ and of degree up to $210$. Building upon new results from Bernard and Kučera on the Stickelberger ideal, we construct a maximal set of independent $\mathcal{S}$-units lifted from the maximal real subfield, using explicit Stickelberger generators obtained via Jacobi sums. Hence, we obtain full-rank log-$\mathcal{S}$-unit sublattices fulfilling the role of approximating the full Tw-PHS lattice. Notably, our obtained approximation factors match those of Bernard and Roux-Langlois using the original log-$\mathcal{S}$-unit lattice in small dimensions. As a side result, we use the knowledge of these explicit Stickelberger elements to remove almost all quantum steps in the CDW algorithm, by Cramer, Ducas and Wesolowski in 2021, under the mild restriction that the plus part of the class number satisfies $h^{+}_{m}\leq O(\sqrt{m})$.

  • On the Use of the Legendre Symbol in Symmetric Cipher Design
    by Alan Szepieniec on November 29, 2021 at 1:04 pm

    This paper proposes the use of Legendre symbols as component gates in the design of ciphers tailored for use in cryptographic proof systems. Legendre symbols correspond to high-degree maps, but can be evaluated much faster. As a result, a cipher that uses Legendre symbols can offer the same security as one that uses high-degree maps but without incurring the penalty of a comparatively slow evaluation time. After discussing the design considerations induced by the use of Legendre symbol gates, we present a concrete design that follows this strategy, along with an elaborate security analysis thereof. This cipher is called Grendel.
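
    To illustrate the speed claim (a textbook illustration of ours, not Grendel itself): by Euler's criterion the Legendre symbol equals the high-degree power map $a^{(p-1)/2} \bmod p$, yet it can be computed by a gcd-like algorithm based on quadratic reciprocity in $O(\log^2 p)$ bit operations.

        def legendre_euler(a: int, p: int) -> int:
            # Euler's criterion: the "degree (p-1)/2 map" view
            t = pow(a, (p - 1) // 2, p)
            return -1 if t == p - 1 else t

        def jacobi(a: int, n: int) -> int:
            # binary reciprocity algorithm; equals the Legendre symbol for prime n
            assert n > 0 and n % 2 == 1
            a %= n
            result = 1
            while a:
                while a % 2 == 0:
                    a //= 2
                    if n % 8 in (3, 5):
                        result = -result
                a, n = n, a
                if a % 4 == 3 and n % 4 == 3:
                    result = -result
                a %= n
            return result if n == 1 else 0

        p = 2**127 - 1  # a Mersenne prime
        assert all(legendre_euler(a, p) == jacobi(a, p) for a in (2, 3, 5, 7, 11))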

  • Concurrently Composable Non-Interactive Secure Computation
    by Andrew Morgan on November 29, 2021 at 12:26 pm

    We consider the feasibility of non-interactive secure two-party computation (NISC) in the plain model satisfying the notion of superpolynomial-time simulation (SPS). While stand-alone secure SPS-NISC protocols are known from standard assumptions (Badrinarayanan et al., Asiacrypt 2017), it has remained an open problem to construct a concurrently composable SPS-NISC. Prior to our work, the best protocols required 5 rounds (Garg et al., Eurocrypt 2017) or 3 simultaneous-message rounds (Badrinarayanan et al., TCC 2017). In this work, we demonstrate the first concurrently composable SPS-NISC. Our construction assumes the existence of a non-interactive (weakly) CCA-secure commitment and a stand-alone secure SPS-NISC with subexponential security, and it satisfies the notion of "angel-based" UC security (i.e., UC with a superpolynomial-time helper) with perfect correctness. We additionally demonstrate that both of the primitives we use (albeit only with polynomial security) are necessary for such concurrently composable SPS-NISC with perfect correctness. As such, our work identifies essentially necessary and sufficient primitives for concurrently composable SPS-NISC with perfect correctness in the plain model.

  • Quantum Time/Memory/Data Tradeoff Attacks
    by Orr Dunkelman on November 29, 2021 at 12:26 pm

    One of the most celebrated and useful cryptanalytic algorithms is Hellman's time/memory tradeoff (and its Rainbow Table variant), which can be used to invert random-looking functions on $N$ possible values with time and space complexities satisfying $TM^2=N^2$. In this paper we develop new upper bounds on their performance in the quantum setting. As a search problem, one can always apply to it the standard Grover's algorithm, but this algorithm does not benefit from the possible availability of a large memory in which one can store auxiliary advice obtained during a free preprocessing stage. In fact, at FOCS'20 it was rigorously shown that for memory size bounded by $M \leq O(\sqrt{N})$, even quantum advice cannot yield an attack which is better than Grover's algorithm. Our main result complements this lower bound by showing that in the standard Quantum Accessible Classical Memory (QACM) model of computation, we can improve Hellman's tradeoff curve to $T^{4/3}M^2=N^2$. When we generalize the cryptanalytic problem to a time/memory/data tradeoff attack (in which one has to invert $f$ for at least one of $D$ given values), we get the generalized curve $T^{4/3}M^2D^2=N^2$. A typical point on this curve is $D=N^{0.2}$, $M=N^{0.6}$, and $T=N^{0.3}$, whose time is strictly lower than both Grover's algorithm (which requires $T=N^{0.4}$ in this generalized search variant) and the classical Hellman algorithm (which requires $T=N^{0.4}$ for these $D$ and $M$).
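
    It is easy to check that the quoted example point lies on the generalized curve:

        $T^{4/3} M^2 D^2 = (N^{0.3})^{4/3} \cdot (N^{0.6})^2 \cdot (N^{0.2})^2 = N^{0.4} \cdot N^{1.2} \cdot N^{0.4} = N^2.$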

  • SAND: an AND-RX Feistel lightweight block cipher supporting S-box-based security evaluations
    by Shiyao Chen on November 29, 2021 at 12:25 pm

    We revisit the design of AND-RX block ciphers, that is, designs assembled from the most fundamental binary operations---AND, rotation and XOR---that do not rely on existing units. Arguably, the most popular representative is the NSA cipher \texttt{SIMON}, which remains one of the most efficient designs but suffers from difficulty in security evaluation. As our main contribution, we propose \texttt{SAND}, a new family of lightweight AND-RX block ciphers. To overcome the difficulty regarding security evaluation, \texttt{SAND} follows a novel design approach, the core idea of which is to restrain the AND-RX operations to be within nibbles. By this, \texttt{SAND} admits an equivalent representation based on a $4\times8$ \textit{synthetic S-box} ($SSb$), which enables the use of classical S-box-based security evaluation approaches. Consequently, for all versions of \texttt{SAND}, (a) we evaluated security bounds with respect to differential and linear attacks, in both single-key and related-key scenarios; (b) we also evaluated security against impossible differential and zero-correlation linear attacks. This better understanding of the security enables the use of a relatively simple key schedule, which makes the ASIC round-based hardware implementation of \texttt{SAND} one of the state-of-the-art lightweight Feistel ciphers. As to software performance, due to its natural bitslice structure, \texttt{SAND} reaches the same level of performance as \texttt{SIMON} and is among the most software-efficient block ciphers.

  • Facial Template Protection via Lattice-based Fuzzy Extractors
    by Kaiyi Zhang on November 29, 2021 at 12:25 pm

    With the growing adoption of facial recognition worldwide as a popular authentication method, there is increasing concern about the invasion of personal privacy due to the lifetime irrevocability of facial features. In principle, {\it fuzzy extractors} enable biometric-based authentication while preserving the privacy of biometric templates. Nevertheless, to the best of our knowledge, most existing fuzzy extractors handle binary vectors with Hamming distance, and no explicit construction is known for facial recognition applications, where the $\ell_2$-distance of real vectors is considered. In this paper, we utilize the dense packing of certain lattices (e.g., $\rm E_8$ and Leech) to design a family of {\it lattice-based} fuzzy extractors that integrates well with existing neural-network-based biometric identification schemes. We instantiate and implement the generic construction and conduct experiments on publicly available datasets. Our results confirm the feasibility of facial template protection via fuzzy extractors.

  • RSA Key Recovery from Digit Equivalence Information
    by Chitchanok Chuengsatiansup on November 29, 2021 at 12:24 pm

    The seminal work of Heninger and Shacham (Crypto 2009) demonstrated a method for reconstructing secret RSA keys from partial information about the key components. In this paper we further investigate this approach, applying it to a different context that arises in some side-channel attacks. We assume a fixed-window exponentiation algorithm that leaks the equivalence between digits, without leaking the values of the digits themselves. We explain how to exploit this side-channel information with the Heninger-Shacham algorithm. To analyse the complexity of the approach, we model the attack as a Markov process and experimentally validate the accuracy of the model. Our model shows that the attack is feasible in the commonly used case where the window size is 5.
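
    For intuition, here is a minimal branch-and-prune sketch in the original Heninger-Shacham setting (random known bits of $p$ and $q$; our simplification, not the digit-equivalence variant studied in the paper): candidate low bits of $p$ and $q$ are extended one position at a time and pruned with the congruence $pq \equiv N \pmod{2^{i+1}}$, which every correct partial pair must satisfy.

        def branch_and_prune(N, nbits, known_p, known_q):
            # known_p/known_q: dicts {bit_index: leaked bit}; p and q are odd
            cands = [(1, 1)]
            for i in range(1, nbits):
                mod, nxt = 1 << (i + 1), []
                for p, q in cands:
                    for pb in ((known_p[i],) if i in known_p else (0, 1)):
                        for qb in ((known_q[i],) if i in known_q else (0, 1)):
                            p2, q2 = p | (pb << i), q | (qb << i)
                            if (p2 * q2 - N) % mod == 0:  # prune wrong branches
                                nxt.append((p2, q2))
                cands = nxt
            return [(p, q) for p, q in cands if p * q == N]

        # toy run: N = 11 * 13, leaking bit 2 of each factor
        print(branch_and_prune(143, 4, {2: 0}, {2: 1}))  # -> [(11, 13)]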

  • Performance bounds for QC-MDPC codes decoders
    by Marco Baldi on November 29, 2021 at 12:24 pm

    Quasi-Cyclic Moderate-Density Parity-Check (QC-MDPC) codes are receiving increasing attention for their advantages in the context of post-quantum code-based asymmetric cryptography. However, a fundamentally open question concerns modeling the performance of their decoders in the region of a low decoding failure rate (DFR). We provide two approaches for bounding the performance of these decoders and study their asymptotic behavior. We first consider the well-known Maximum Likelihood (ML) decoder, which achieves optimal performance and thus provides a lower bound on the performance of any sub-optimal decoder. We provide lower and upper bounds on the performance of ML decoding of QC-MDPC codes and show that the DFR of the ML decoder decays polynomially in the QC-MDPC code length when all other parameters are fixed. Secondly, we analyze some hard-to-decode error patterns for Bit-Flipping (BF) decoding algorithms, from which we derive lower bounds on the DFR of BF decoders applied to QC-MDPC codes.

  • Diving Deep into the Weak Keys of Round Reduced Ascon
    by Raghvendra Rohit on November 29, 2021 at 12:23 pm

    At ToSC 2021, Rohit \textit{et al.} presented the first distinguishing and key recovery attacks on 7-round Ascon that do not violate the designers' security claims of the nonce-respecting setting and a data limit of $2^{64}$ blocks per key. So far, these are the best attacks on 7-round Ascon. However, the distinguishers require an impractical $2^{60}$ data complexity, while the data complexity of the key recovery attacks is exactly $2^{64}$. Whether there are practical distinguishers and key recovery attacks (with data complexity below $2^{64}$) on 7-round Ascon has remained an open problem. In this work, we give positive answers to these questions by providing a comprehensive security analysis of Ascon in the weak-key setting. Our first major result is 7-round cube distinguishers with complexities $2^{46}$ and $2^{33}$, which work for $2^{82}$ and $2^{63}$ keys, respectively. Notably, we show that such weak keys exist for any choice (out of 64) of 46 and 33 specifically chosen nonce variables. In addition, we improve the data complexities of existing distinguishers for 5, 6 and 7 rounds by factors of $2^{8}$, $2^{16}$ and $2^{27}$, respectively. Our second contribution is a new theoretical framework for weak keys of Ascon which is based solely on the algebraic degree. Based on our construction, we identify $2^{127.99}$, $2^{127.97}$ and $2^{116.34}$ weak keys (out of $2^{128}$) for 5, 6 and 7 rounds, respectively. Next, we present two key recovery attacks on 7 rounds with different attack complexities. The best attack can recover the secret key with $2^{63}$ data, $2^{69}$ bits of memory and $2^{115.2}$ time. Our attacks are far from threatening the security of the full 12-round Ascon, but we expect that they provide new insights into Ascon's security.

  • Accelerator for Computing on Encrypted Data
    by Sujoy Sinha Roy on November 29, 2021 at 12:23 pm

    Fully homomorphic encryption enables computation on encrypted data, and hence it has great potential in privacy-preserving outsourcing of computations. In this paper, we present a complete instruction-set processor architecture, 'Medha', for accelerating the cloud-side operations of an RNS variant of the HEAAN homomorphic encryption scheme. Medha has been designed following a modular hardware design approach to attain a fast computation time for computationally expensive homomorphic operations on encrypted data. At every level of the implementation hierarchy, we explore possibilities for parallel processing. Starting from hardware-friendly parallel algorithms for the basic building blocks, we gradually build heavily parallel RNS polynomial arithmetic units. Next, many of these parallel units are interconnected elegantly so that their interconnections require the minimum number of nets, making the overall architecture placement-friendly on the implementation platform. As homomorphic encryption is computation- as well as data-centric, the speed of homomorphic evaluations depends greatly on the way the data variables are handled. For Medha, we take a memory-conservative design approach and eliminate all off-chip memory accesses during homomorphic evaluations. Our instruction-set accelerator Medha is programmable and supports all homomorphic evaluation routines of the leveled fully RNS-HEAAN scheme. For a reasonably large parameter set with polynomial ring dimension $2^{14}$ and a 438-bit ciphertext coefficient modulus (corresponding to 128-bit security), we implemented Medha on a Xilinx Alveo U250 card. Medha achieves the fastest computation latency to date and is almost 2.4× faster in latency, and also somewhat smaller in area, than a state-of-the-art reconfigurable hardware accelerator for the same parameters.

  • How to Claim a Computational Feat
    by Clémence Chevignard on November 29, 2021 at 12:22 pm

    Consider some user buying software or hardware from a provider. The provider claims to have subjected this product to a number of tests, ensuring that the system operates nominally. How can the user check this claim without running all the tests anew? The problem is similar to checking a mathematical conjecture. Many authors report having checked a conjecture $C(x)=\mbox{True}$ for all $x$ in some large set or interval $U$. How can mathematicians challenge this claim without performing all the expensive computations again? This article describes a non-interactive protocol in which the prover provides (a digest of) the computational trace resulting from processing $x$, for randomly chosen $x \in U$. With appropriate care, this information can be used by the verifier to determine how likely it is that the prover actually checked $C(x)$ over $U$. Unlike ``traditional'' interactive proof and probabilistically-checkable proof systems, the protocol is not limited to restricted complexity classes, nor does it require an expensive transformation of programs being executed into circuits or ad-hoc languages. The flip side is that it is restricted to checking assertions that we dub ``\emph{refutation-precious}'': expected to always hold true, and such that the benefit resulting from reporting a counterexample far outweighs the cost of computing $C(x)$ over all of $U$.

  • Performance Evaluation of Post-Quantum TLS 1.3 on Embedded Systems
    by Tasopoulos George on November 29, 2021 at 12:21 pm

    Transport Layer Security (TLS) constitutes one of the most widely used protocols for securing Internet communication and has found broad acceptance in the Internet of Things (IoT) domain as well. As we progress towards a security environment resistant to quantum computer attacks, TLS needs to be transformed to support post-quantum cryptography schemes. However, post-quantum TLS is still not standardized, and its overall performance, especially on resource-constrained, IoT-capable embedded devices, is not well understood. In this paper, we evaluate the time, memory, and energy requirements of a post-quantum variant of TLS version 1.3 (PQ TLS 1.3) by integrating the pqm4 library implementations of the NIST round 3 post-quantum algorithms Kyber, Saber, Dilithium, and Falcon into the popular wolfSSL TLS 1.3 library. In particular, our experiments focus on low-end, resource-constrained embedded devices, manifested in the ARM Cortex-M4 embedded platforms NUCLEO-F439ZI (with hardware cryptographic accelerator) and NUCLEO-F429ZI (without hardware cryptographic accelerator). These two boards provide only a $180$ MHz clock rate, $2$ MB of Flash memory, and $256$ KB of SRAM. To the authors' knowledge, this is the first thorough evaluation of the time delay, memory usage, and energy consumption of PQ TLS 1.3 using the NIST round 3 finalist algorithms on resource-constrained embedded systems with and without cryptographic hardware acceleration. The paper's results show that the post-quantum signatures Dilithium and Falcon and the post-quantum KEMs Kyber and Saber generally perform well in TLS 1.3 on embedded devices, in terms of both TLS handshake time and energy consumption. There is no significant difference between the TLS handshake times of Kyber and Saber; however, the handshake time with Falcon is much lower than that with Dilithium. In addition, the hardware cryptographic accelerator for symmetric-key primitives improves the TLS handshake time by about 6% on the client side, and by as much as 19% on the server side, at high security levels.

  • Time-memory Trade-offs for Saber+ on Memory-constrained RISC-V
    by Jipeng Zhang on November 29, 2021 at 12:20 pm

    Saber is a module-lattice-based key encapsulation scheme that has been selected as a finalist in the NIST Post-Quantum Cryptography Standardization Project. As Saber computes on considerably large matrices and vectors of polynomials, its efficient implementation on memory-constrained IoT devices is very challenging. In this paper, we present an implementation of Saber with a minor tweak to the original Saber protocol for achieving reduced memory consumption and better performance. We call this tweaked implementation `Saber+', and the difference compared to Saber is that we use different generation methods for the public matrix \(\boldsymbol{A}\) and the secret vector \(\boldsymbol{s}\) for memory optimization. Our highly optimized software implementation of Saber+ on a memory-constrained RISC-V platform achieves a 48\% performance improvement compared with the best state-of-the-art memory-optimized implementation of original Saber. Specifically, we present various memory and performance optimizations for Saber+ on a memory-constrained RISC-V microcontroller with merely 16KB of memory available. We utilize the Number Theoretic Transform (NTT) to speed up the polynomial multiplication in Saber+. For optimizing cycle counts and memory consumption during the NTT, we carefully compare the efficiency of complete and incomplete NTTs with platform-specific optimization, and we implement 4-layer merging in the complete NTT and 3-layer merging in the 6-layer incomplete NTT. An improved on-the-fly generation strategy for the public matrix and secret vector in Saber+ results in a low memory footprint. Furthermore, by combining different optimization strategies, various time-memory trade-offs are explored. Our software implementation of Saber+ on the selected RISC-V core takes just 3,809K, 3,594K, and 3,193K clock cycles for key generation, encapsulation, and decapsulation, respectively, while consuming only 4.8KB of stack at most.
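    The on-the-fly generation idea for the public matrix can be illustrated in a few lines: instead of storing the whole matrix, each entry is expanded from a seed exactly when the accumulator needs it and discarded afterwards. The sketch below uses toy parameters and SHAKE-128 as the XOF; it illustrates the memory-saving principle only, not Saber's actual specification.

    ```python
    import hashlib

    L, N, Q = 3, 256, 8192   # hypothetical module rank, degree, modulus

    def expand_poly(seed, i, j):
        # derive one polynomial of A from the seed via an XOF (SHAKE-128)
        out = hashlib.shake_128(seed + bytes([i, j])).digest(2 * N)
        return [int.from_bytes(out[2*k:2*k+2], "little") % Q for k in range(N)]

    def polymul(a, b):
        # schoolbook multiplication in Z_Q[x]/(x^N + 1) (negacyclic wrap)
        c = [0] * N
        for i, ai in enumerate(a):
            if ai == 0:
                continue
            for j, bj in enumerate(b):
                k = i + j
                if k < N:
                    c[k] = (c[k] + ai * bj) % Q
                else:
                    c[k - N] = (c[k - N] - ai * bj) % Q
        return c

    def matvec_on_the_fly(seed, s):
        # compute b = A*s while holding only one polynomial of A at a time
        b = []
        for i in range(L):
            acc = [0] * N
            for j in range(L):
                a_ij = expand_poly(seed, i, j)      # generated, used, dropped
                acc = [(x + y) % Q for x, y in zip(acc, polymul(a_ij, s[j]))]
            b.append(acc)
        return b

    s = [[1 if k % 7 == 0 else 0 for k in range(N)] for _ in range(L)]  # toy secret
    b = matvec_on_the_fly(b"public-seed", s)
    ```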

  • Blockchain for IoT: A Critical Analysis Concerning Performance and Scalability
    by Ziaur Rahman on November 29, 2021 at 12:20 pm

    The world has been experiencing a mind-blowing expansion of blockchain technology since it was first introduced with the emerging cryptocurrency Bitcoin. Blockchain is now regarded as a pervasive frame of reference across almost all research domains, ranging from virtual cash and supply chains to agriculture and the Internet of Things. The ability to have a self-administering register with legitimate immutability makes blockchain appealing for the Internet of Things (IoT). As billions of IoT devices are now online in a distributed fashion, huge challenges and questions need to be addressed in pursuit of urgently needed solutions. The present paper has been motivated by the aim of facilitating such efforts. The contribution of this work is to identify the trade-offs the IoT ecosystem usually encounters because of the wrong choice of blockchain technology. Unlike a survey or review, the critical findings of this paper target specific security challenges of blockchain-IoT infrastructure. The contribution also includes guidance for developers and researchers in this domain towards sound combinations of blockchain-enabled IoT applications. In addition, the paper provides a deep insight into Ethereum, Hyperledger and IOTA to show their limitations and prospects in terms of performance and scalability.

  • Chaos and Logistic Map based Key Generation Technique for AES-driven IoT Security
    by Ziaur Rahman and Ibrahim Khalil on November 29, 2021 at 12:20 pm

    Several efforts have claimed lightweight block ciphers to be suitable substitutes for securing the Internet of Things, and such ciphers are now envisaged almost everywhere in the privacy preservation of smart, sensor-oriented appliances. Many approaches, however, trade security for simplicity and ease of processing. Strengthening the well-known symmetric-key block cipher AES using either a chaos-motivated logistic map or an elliptic curve has shown far-reaching potential for secure real-time communication. The popular features of logistic maps, such as unpredictability and randomness, can be exploited for dynamic key propagation, in sync with chaos-based scheduling techniques, towards data integrity. Since a single-bit alteration in the key produces an outsized deviation in the ciphertext, data confidentiality is also strengthened. The price is time consumption, which challenges instant data exchange between participating node entities. In consideration of the latency required for both secure encryption and decryption, the proposed approach suggests a modification of the key-origination matrix along with the S-box. Our observations suggest that the time required is proportional to the length of the plaintext sent rather than to the letter distribution of the message. In line with this, the paper shows how the resulting chaos accelerates the desired key initiation before message transmission.
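    For illustration, the following sketch shows the generic logistic-map key-derivation idea the abstract builds on (not the paper's exact key-origination matrix or S-box modification): iterate $x_{n+1} = r\,x_n(1-x_n)$ in the chaotic regime, discard a transient, and quantize the iterates into key bytes. A $10^{-9}$ change in the seed yields a completely different key, which is the sensitivity property exploited for dynamic key propagation.

    ```python
    def logistic_key(seed: float, r: float = 3.99, nbytes: int = 16) -> bytes:
        x = seed
        for _ in range(1000):          # discard the transient
            x = r * x * (1.0 - x)
        out = bytearray()
        for _ in range(nbytes):
            x = r * x * (1.0 - x)
            out.append(int(x * 256) % 256)   # quantize each iterate to a byte
        return bytes(out)

    k1 = logistic_key(0.123456789)
    k2 = logistic_key(0.123456790)     # seed differs by 1e-9
    print(k1.hex())
    print(k2.hex())                    # diverges completely from k1
    ```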

  • Just how hard are rotations of $\mathbb{Z}^n$? Algorithms and cryptography with the simplest lattice
    by Huck Bennett on November 29, 2021 at 12:19 pm

    We study the computational problem of finding a shortest non-zero vector in a rotation of $\mathbb{Z}^n$, which we call $\mathbb{Z}$SVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for $\mathbb{Z}$SVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain special cases. However, despite all of this work, the fastest known algorithm that is proven to solve $\mathbb{Z}$SVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in $2^{n + o(n)}$ time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for $\mathbb{Z}$SVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, what consequences would follow if $\mathbb{Z}$SVP actually is hard? Our results are as follows. 1) We show that $\mathbb{Z}$SVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce $\mathbb{Z}$SVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of $\mathbb{Z}$SVP from SVP. As a consequence of this reduction, we obtain a $2^{0.802n}$-time algorithm for $\mathbb{Z}$SVP, i.e., a non-trivial speedup over the best known algorithm for SVP on general lattices. 2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) $\mathbb{Z}$SVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of $\mathbb{Z}^n$ from either a lattice with all non-zero vectors longer than $\sqrt{n/\log n}$ or a lattice with smoothing parameter significantly smaller than the smoothing parameter of $\mathbb{Z}^n$. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that ``$\mathbb{Z}^n$ has the largest smoothing parameter.'' 3) We show a distribution of bases $B$ for rotations of $\mathbb{Z}^n$ such that, if $\mathbb{Z}$SVP is hard for any input basis, then $\mathbb{Z}$SVP is hard on input $B$. This gives a satisfying theoretical resolution to the problem of sampling hard bases for $\mathbb{Z}^n$, which was studied by Blanks and Miller (PQCrypto, 2021). This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices (ia.cr/2021/1332), and they also used this to analyze the security of a public-key encryption scheme.) 4) We perform experiments to determine how practical basis reduction performs on different bases of $\mathbb{Z}^n$. These experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of reduction algorithms (i.e., larger block sizes) and study the ``provably hard'' distribution of bases described above. We also observe a threshold phenomenon in which ``basis reduction algorithms on $\mathbb{Z}^n$ nearly always find a shortest non-zero vector once they have found a vector with length less than $\sqrt{n}/2$,'' and we explore this further.
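    As a toy illustration of the basis-sampling question in item (3), and emphatically not the paper's provably hard distribution, one can scramble the identity basis of $\mathbb{Z}^n$ with random elementary integer row operations: the result is unimodular, hence a basis of exactly $\mathbb{Z}^n$, yet it looks structureless. The paper's experiments ask when reduction algorithms can undo such scrambling.

    ```python
    import random

    def random_unimodular(n, steps=30, bound=2):
        B = [[int(i == j) for j in range(n)] for i in range(n)]   # identity basis
        for _ in range(steps):
            i, j = random.sample(range(n), 2)
            c = random.randint(-bound, bound)
            for k in range(n):                                    # row_i += c * row_j
                B[i][k] += c * B[j][k]
        return B                                                  # det(B) = +-1 always

    for row in random_unimodular(8):
        print(row)
    ```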

  • SoK: Plausibly Deniable Storage
    by Chen Chen on November 29, 2021 at 12:18 pm

    Data privacy is critical in instilling trust and empowering the societal pacts of modern technology-driven democracies. Unfortunately, it is under continuous attack by overreaching or outright oppressive governments, including some of the world's oldest democracies. Increasingly-intrusive anti-encryption laws severely limit the ability of standard encryption to protect privacy. New defense mechanisms are needed. Plausible deniability (PD) is a powerful property, enabling users to hide the existence of sensitive information in a system under direct inspection by adversaries. Popular encrypted storage systems such as TrueCrypt and other research efforts have attempted to also provide plausible deniability. Unfortunately, these efforts have often operated under less well-defined assumptions and adversarial models. Careful analyses often uncover not only high overheads but also outright security compromise. Further, our understanding of adversaries, the underlying storage technologies, as well as the available plausibly deniable solutions has evolved dramatically in the past two decades. The main goal of this work is to systematize this knowledge. It aims to: (i) identify key PD properties, requirements, and approaches; (ii) present a direly-needed unified framework for evaluating security and performance; (iii) explore the challenges arising from the critical interplay between PD and modern system layered stacks; and (iv) propose a new "trace-oriented" PD paradigm, able to decouple security guarantees from the underlying systems and thus ensure a higher level of flexibility and security independent of the technology stack. This work is also meant as a trusted guide for system and security practitioners around the major challenges in understanding, designing, and implementing plausible deniability into new or existing systems.

  • Improving Deep Learning Networks for Profiled Side-Channel Analysis Using Performance Improvement Techniques
    by Damien Robissout on November 29, 2021 at 12:17 pm

    The use of deep learning techniques to perform side-channel analysis has attracted the attention of many researchers, as they have obtained good performance with them. Unfortunately, the understanding of the neural networks used to perform side-channel attacks is not very advanced yet. In this paper, we propose to contribute to this direction by studying the impact of some particular deep learning techniques for tackling side-channel attack problems. More precisely, we propose to focus on three existing techniques: batch normalization, dropout and weight decay, not yet used in the side-channel context. By combining these techniques adequately for our problem, we show that it is possible to improve the attack performance, i.e. the number of traces needed to recover the secret, by more than 55%. Additionally, they allow us to gain more than 34% in terms of training time. We also show that an architecture trained with such techniques is able to perform attacks efficiently even in the context of desynchronized traces.
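    As a concrete reference for the three techniques named above, here is a minimal PyTorch sketch (trace length, class count, and hyperparameters are made up, not the paper's architecture): batch normalization and dropout enter as layers, while weight decay is a parameter of the optimizer.

    ```python
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(700, 512),     # 700-sample power traces (hypothetical length)
        nn.BatchNorm1d(512),     # batch normalization
        nn.ReLU(),
        nn.Dropout(p=0.5),       # dropout
        nn.Linear(512, 256),     # 256 classes, e.g. one per S-box output value
    )
    # weight decay (L2 regularization) is configured on the optimizer
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

    traces = torch.randn(64, 700)             # a dummy batch of traces
    labels = torch.randint(0, 256, (64,))
    loss = nn.CrossEntropyLoss()(model(traces), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    ```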

  • Securing Proof-of-Stake Nakamoto Consensus Under Bandwidth Constraint
    by Joachim Neu on November 29, 2021 at 12:14 pm

    Satoshi Nakamoto's Proof-of-Work (PoW) longest chain (LC) protocol was a breakthrough for Internet-scale open-participation consensus. Many Proof-of-Stake (PoS) variants of Nakamoto's protocol such as Ouroboros or Snow White aim to preserve the advantages of LC by mimicking PoW LC closely, while mitigating downsides of PoW by using PoS for Sybil resistance. Previous works have proven these PoS LC protocols secure assuming all network messages are delivered within a bounded delay. However, this assumption is not compatible with PoS when considering bandwidth constraints in the underlying communication network. This is because PoS enables the adversary to reuse block production opportunities and spam the network with equivocating blocks, which is impossible in PoW. The bandwidth constraint necessitates that nodes choose carefully which blocks to spend their limited download budget on. We show that 'download along the longest header chain', a natural download rule for PoW LC, emulated by PoS variants, is insecure for PoS LC. Instead, we propose 'download towards the freshest block' and prove that PoS LC with this download rule is secure in bandwidth constrained networks. Our result can be viewed as a first step towards the co-design of consensus and network layer protocols.

  • Information Dispersal with Provable Retrievability for Rollups
    by Kamilla Nazirkhanova on November 29, 2021 at 12:14 pm

    The ability to verifiably retrieve transaction or state data stored off-chain is crucial to blockchain scaling techniques such as rollups or sharding. We formalize the problem and design a storage- and communication-efficient protocol using linear erasure-correcting codes and homomorphic vector commitments. Motivated by application requirements for rollups, our solution departs from earlier Verifiable Information Dispersal schemes in that we require neither comprehensive termination properties nor retrievability from arbitrary sets of storage nodes, but only from some known, sufficiently large set. Compared to Data Availability Oracles, under no circumstance do we fall back to returning empty blocks. Distributing a file of 28.8 MB among 900 storage nodes (up to 300 of which may be adversarial) requires in total approx. 95 MB of communication and storage and approx. 30 seconds of cryptographic computation on a single-threaded consumer-grade laptop computer. Our solution requires no modification to on-chain contracts of Validium rollups such as StarkWare's StarkEx. Additionally, it provides privacy of the dispersed data against honest-but-curious storage nodes.
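    The coding layer of such a scheme can be sketched compactly (the commitments and the dispersal protocol itself are omitted; the field size and parameters are illustrative): the file is interpreted as a degree-$(k-1)$ polynomial, each node stores one evaluation, and any $k$ evaluations reconstruct the file by Lagrange interpolation.

    ```python
    P = 2**127 - 1   # a prime field (Mersenne prime), large enough for chunks

    def eval_at(shares, x):
        # Lagrange interpolation over GF(P): evaluate the unique polynomial of
        # degree < len(shares) through the points `shares` at position x
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def disperse(file_chunks, n):
        # systematic encoding: shares 1..k are the chunks, k+1..n are parity
        k = len(file_chunks)
        base = list(zip(range(1, k + 1), file_chunks))
        return base + [(x, eval_at(base, x)) for x in range(k + 1, n + 1)]

    def reconstruct(any_k_shares, k):
        return [eval_at(any_k_shares, x) for x in range(1, k + 1)]

    file = [11, 22, 33]                      # k = 3 chunks
    shares = disperse(file, n=9)             # any 3 of 9 nodes suffice
    assert reconstruct(shares[4:7], 3) == file
    ```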

  • Computing Square Roots Faster than the Tonelli-Shanks/Bernstein Algorithm
    by Palash Sarkar on November 29, 2021 at 9:48 am

    Let $p$ be a prime such that $p=1+2^nm$, where $n\geq 1$ and $m$ is odd. Given a square $u$ in $\mathbb{Z}_p$ and a non-square $z$ in $\mathbb{Z}_p$, we describe an algorithm to compute a square root of $u$ which requires $\mathfrak{T}+O(n^{3/2})$ operations (i.e., squarings and multiplications), where $\mathfrak{T}$ is the number of operations required to exponentiate an element of $\mathbb{Z}_p$ to the power $(m-1)/2$. This improves upon the Tonelli-Shanks (TS) algorithm which requires $\mathfrak{T}+O(n^{2})$ operations. Bernstein had proposed a table look-up based variant of the TS algorithm which requires $\mathfrak{T}+O((n/w)^{2})$ operations and $O(2^wn/w)$ storage, where $w$ is a parameter. A table look-up variant of the new algorithm requires $\mathfrak{T}+O((n/w)^{3/2})$ operations and the same storage. In concrete terms, the new algorithm is shown to require significantly fewer operations for particular values of $n$.
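    For reference, here is the textbook Tonelli-Shanks baseline the abstract improves upon, in Python: $\mathfrak{T}$ corresponds to the large exponentiations by $(m\pm 1)/2$, and the final loop over the 2-part of $p-1$ is the $O(n^2)$ portion that the new algorithm reduces to $O(n^{3/2})$. The modulus below is the prime $15\cdot 2^{27}+1$, chosen for its large $n$.

    ```python
    def tonelli_shanks(u, p):
        assert pow(u, (p - 1) // 2, p) == 1        # u must be a square mod p
        m, n = p - 1, 0
        while m % 2 == 0:
            m //= 2; n += 1                        # p - 1 = 2^n * m, m odd
        z = 2
        while pow(z, (p - 1) // 2, p) != p - 1:    # find a non-square z
            z += 1
        c = pow(z, m, p)                           # generator of the 2-Sylow subgroup
        t = pow(u, m, p)
        r = pow(u, (m + 1) // 2, p)                # candidate square root
        while t != 1:
            i, t2 = 0, t                           # least i with t^(2^i) = 1
            while t2 != 1:
                t2 = t2 * t2 % p; i += 1
            b = pow(c, 1 << (n - i - 1), p)
            r = r * b % p                          # correct the candidate root
            t = t * b % p * b % p
            c = b * b % p
            n = i
        return r

    p = 15 * 2**27 + 1                             # prime with n = 27, m = 15
    u = pow(123456789, 2, p)
    assert tonelli_shanks(u, p) ** 2 % p == u
    ```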

  • Three Input Exclusive-OR Gate Support For Boyar-Peralta's Algorithm (Extended Version)
    by Anubhab Baksi on November 28, 2021 at 5:09 pm

    The linear layer, which is basically a binary non-singular matrix, is an integral part of cipher construction in a lot of private key ciphers. As a result, optimising the linear layer for device implementation has been an important research direction for about two decades. The Boyar-Peralta's algorithm (SEA'10) is one such common algorithm, which offers significant improvement compared to the straightforward implementation. This algorithm only returns implementation with XOR2 gates, and is deterministic. Over the last couple of years, some improvements over this algorithm has been proposed, so as to make support for XOR3 gates as well as make it randomised. In this work, we take an already existing improvement (Tan and Peyrin, TCHES'20) that allows randomised execution and extend it to support three input XOR gates. This complements the other work done in this direction (Banik et al., IWSEC'19) that also supports XOR3 gates with randomised execution. Further, noting from another work (Maximov, Eprint'19), we include one additional tie-breaker condition in the original Boyar-Peralta's algorithm. Our work thus collates and extends the state-of-the-art, at the same time offers a simpler interface. We show several results that improve from the lastly best-known results.

  • On One-way Functions from NP-Complete Problems
    by Yanyi Liu on November 28, 2021 at 9:15 am

    We present the first natural $\mathsf{NP}$-complete problem whose average-case hardness w.r.t. the uniform distribution over instances is \emph{equivalent} to the existence of one-way functions (OWFs). The problem, which originated in the 1960s, is the \emph{Conditional Time-Bounded Kolmogorov Complexity Problem}: let $K^t(x \mid z)$ be the length of the shortest ``program'' that, given the ``auxiliary input'' $z$, outputs the string $x$ within time $t(|x|)$, and let $\mathsf{McKTP}[\zeta]$ be the set of strings $(x,z,k)$ where $|z| = \zeta(|x|)$, $|k| = \log |x|$ and $K^t(x \mid z)< k$, where, for our purposes, a ``program'' is defined as a RAM machine. Our main result shows that for every polynomial $t(n)\geq n^2$, there exists some polynomial $\zeta$ such that $\mathsf{McKTP}[\zeta]$ is $\mathsf{NP}$-complete. We additionally extend the result of Liu-Pass (FOCS'20) to show that for every polynomial $t(n)\geq 1.1n$, and every polynomial $\zeta(\cdot)$, mild average-case hardness of $\mathsf{McKTP}[\zeta]$ is equivalent to the existence of OWFs. Taken together, these results provide the following crisp characterization of what is required to base OWFs on $\mathsf{NP} \not\subseteq \mathsf{BPP}$: \emph{There exist concrete polynomials $t,\zeta$ such that ``Basing OWFs on $\mathsf{NP} \not\subseteq \mathsf{BPP}$'' is equivalent to providing a ``worst-case to (mild) average-case reduction for $\mathsf{McKTP}[\zeta]$''.} In other words, the ``holy grail'' of Cryptography (i.e., basing OWFs on $\mathsf{NP} \not\subseteq \mathsf{BPP}$) is equivalent to a basic question in algorithmic information theory. As an independent contribution, we show that our $\mathsf{NP}$-completeness result can be used to shed new light on the feasibility of the \emph{polynomial-time bounded symmetry of information} assertion (Kolmogorov'68).

  • Multiparty Generation of an RSA Modulus
    by Megan Chen on November 27, 2021 at 4:18 pm

    We present a new multiparty protocol for the distributed generation of biprime RSA moduli, with security against any subset of maliciously colluding parties assuming oblivious transfer and the hardness of factoring. Our protocol is highly modular, and its uppermost layer can be viewed as a template that generalizes the structure of prior works and leads to a simpler security proof. We introduce a combined sampling-and-sieving technique that eliminates both the inherent leakage in the approach of Frederiksen et al. (Crypto'18), and the dependence upon additively homomorphic encryption in the approach of Hazay et al. (JCrypt'19). We combine this technique with an efficient, privacy-free check to detect malicious behavior retroactively when a sampled candidate is not a biprime, and thereby overcome covert rejection-sampling attacks and achieve both asymptotic and concrete efficiency improvements over the previous state of the art.

  • Practical complexities of probabilistic algorithms for solving Boolean polynomial systems
    by Stefano Barbero on November 27, 2021 at 12:22 am

    Solving a polynomial system over a finite field is an NP-complete problem of fundamental importance in both pure and applied mathematics. In particular, the security of the so-called multivariate public-key cryptosystems, such as HFE of Patarin and UOV of Kipnis et al., is based on the postulated hardness of solving quadratic polynomial systems over a finite field. Lokshtanov et al. (2017) were the first to introduce a probabilistic algorithm that, in the worst case, solves a Boolean polynomial system in time $O^{*}(2^{\delta n})$, for some $\delta \in (0, 1)$ depending only on the degree of the system, thus beating the brute-force complexity $O^{*}(2^n)$. Later, Björklund et al. (2019) and then Dinur (2021) improved this method and devised probabilistic algorithms with a smaller exponent coefficient $\delta$. We survey the theory behind these probabilistic algorithms, and we illustrate the results that we obtained by implementing them in C. In particular, for random quadratic Boolean systems, we estimate the practical complexities of the algorithms and their probabilities of success as their parameters change.
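    For scale, the brute-force $O^{*}(2^n)$ baseline that these probabilistic algorithms beat is a few lines of code; the sketch below (random system generation included, parameters illustrative) simply enumerates all assignments of a random quadratic Boolean system, with polynomials stored as sets of monomials over GF(2).

    ```python
    import itertools, random

    def random_quadratic_poly(n):
        # each monomial (pair, singleton, or constant) is included w.p. 1/2
        mons = [frozenset(c) for c in itertools.combinations(range(n), 2)]
        mons += [frozenset([i]) for i in range(n)] + [frozenset()]
        return {m for m in mons if random.random() < 0.5}

    def evaluate(poly, x):
        # sum over GF(2) of the monomials satisfied by assignment x
        return sum(all(x[i] for i in mon) for mon in poly) % 2

    def brute_force(polys, n):
        for bits in itertools.product([0, 1], repeat=n):   # all 2^n assignments
            if all(evaluate(p, bits) == 0 for p in polys):
                yield bits

    random.seed(1)
    n = 12
    system = [random_quadratic_poly(n) for _ in range(n)]  # n equations, n unknowns
    print(next(brute_force(system, n), "no solution"))
    ```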

  • The classification of quadratic APN functions in 7 variables
    by Konstantin Kalgin on November 26, 2021 at 7:20 pm

    Almost perfect nonlinear (APN) functions possess optimal resistance to differential cryptanalysis and are widely studied. Most known APN functions are obtained as functions over finite fields $GF(2^n)$, and very little is known about combinatorial constructions of them in $\mathbb{F}_2^n$. In this work we propose two approaches for obtaining quadratic APN functions in $\mathbb{F}_2^n$. The first approach exploits a secondary construction idea: it considers how to obtain a quadratic APN function in $n+1$ variables from a given quadratic APN function in $n$ variables using special restrictions on new terms. The second approach searches for quadratic APN functions whose matrix form is partially filled with standard basis vectors in a cyclic manner. This approach allowed us to find a new APN function in 7 variables. We proved that the updated list of quadratic APN functions in dimension 7 is complete up to CCZ-equivalence.
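    The APN property itself is easy to test exhaustively in dimension 7: $F$ is APN iff every nonzero derivative $D_aF(x)=F(x)\oplus F(x\oplus a)$ takes each value at most twice. The sketch below checks this for the classical Gold function $F(x)=x^3$ over $GF(2^7)$, a known quadratic APN function used here only as a stand-in for the paper's new function.

    ```python
    N = 7
    MOD = 0b10000011            # x^7 + x + 1, irreducible over GF(2)

    def gf_mul(a, b):
        # carry-less multiplication in GF(2^7) with reduction by MOD
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a >> N:
                a ^= MOD
            b >>= 1
        return r

    F = [gf_mul(gf_mul(x, x), x) for x in range(1 << N)]    # F(x) = x^3

    def is_apn(F):
        for a in range(1, 1 << N):
            counts = {}
            for x in range(1 << N):
                d = F[x] ^ F[x ^ a]                 # derivative D_a F(x)
                counts[d] = counts.get(d, 0) + 1
                if counts[d] > 2:
                    return False
        return True

    print(is_apn(F))   # True: x^3 is APN in dimension 7
    ```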

  • Efficient Lattice Gadget Decomposition Algorithm with Bounded Uniform Distribution
    by Sohyun Jeon on November 26, 2021 at 7:42 am

    A gadget decomposition algorithm is commonly used in many advanced lattice cryptography applications that support homomorphic operations over ciphertexts, in order to control the noise growth. For the special structure of a gadget, the algorithm is digit decomposition. If such an algorithm samples from a subgaussian distribution, that is, if its output is randomized, the output quality improves. One important advantage is Pythagorean additivity, which makes the noise contained in a resulting ciphertext grow much less than under naive digit decomposition; the error analysis therefore becomes cleaner and tighter than with other measures like $\ell_2$ and $\ell_\infty$. Even though the same advantage can be achieved by discrete Gaussian sampling, that route is less attractive in practice due to a larger factor in the resulting noise and the complex computation of the exponential function, whereas only a more relaxed probability condition is required for a subgaussian distribution. Nevertheless, subgaussian sampling had barely received any attention, and no practical algorithm had been implemented until an efficient one was recently presented by Genise et al. In this paper, we present a practically efficient gadget decomposition algorithm whose output follows a subgaussian distribution. We parallelize the existing practical subgaussian gadget decomposition algorithm using a bounded uniform distribution. Our algorithm is divided into two independent subalgorithms, and only one of them depends on the input; the other can therefore be treated as precomputation. As an experimental result, our algorithm performs over 50\% better than the existing algorithm.
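    The randomized-rounding idea behind subgaussian digit decomposition can be sketched in a few lines (this illustrates the principle only, not the paper's parallel algorithm): each residue is rounded down or up with complementary probabilities, so every digit is zero-mean while the carries keep the decomposition exact.

    ```python
    import random

    def randomized_decompose(x, B):
        # digits d_i in (-B, B) with sum d_i * B^i == x; each digit is zero-mean:
        # round the residue r down with probability (B - r)/B, up with r/B
        digits = []
        while x:
            r = x % B
            d = r if random.random() < (B - r) / B else r - B
            digits.append(d)
            x = (x - d) // B          # exact carry: (x - d) is divisible by B
        return digits

    B = 4
    x = 51966
    ds = randomized_decompose(x, B)
    assert sum(d * B**i for i, d in enumerate(ds)) == x
    print(ds)        # a fresh random signed-digit decomposition on every run
    ```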

  • On Cryptocurrency Wallet Design
    by Ittay Eyal on November 25, 2021 at 6:19 pm

    The security of cryptocurrency and decentralized blockchain-maintained assets relies on their owners safeguarding secrets, typically cryptographic keys. This applies equally to individuals keeping daily-spending amounts and to large asset management companies. Loss of keys and attackers gaining control of keys have resulted in numerous losses of funds. The security of individual keys has been widely studied, with practical solutions available, from mnemonic phrases to dedicated hardware. There are also techniques for securing funds by requiring combinations of multiple keys. However, to the best of our knowledge, a crucial question has never been addressed: How is wallet security affected by the number of keys, their types, and how they are combined? This is the focus of this work. We present a model where each key has certain probabilities for being safe, lost, leaked, or stolen (available only to an attacker). The number of possible wallets for a given number of keys is the Dedekind number, prohibiting an exhaustive search with many keys. Nonetheless, we bound optimal-wallet failure probabilities with an evolutionary algorithm. We evaluate the security (complement of failure probability) of wallets based on the number and types of keys used. Our analysis covers a wide range of settings and reveals several surprises. The general trend is that the failure probability drops exponentially with the number of keys, but it depends strongly on the parity of that number. In many cases, but not always, heterogeneous keys (not all with the same fault probabilities) allow for better wallets than homogeneous keys do. Nonetheless, in the case of 3 keys, the common practice of requiring any pair is optimal in many settings. Our formulation of the problem and initial results reveal several open questions, from user studies of key fault probabilities to finding optimal wallets with very large numbers of keys. But they also have an immediate practical outcome, informing cryptocurrency users on optimal wallet design.
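    The model is easy to instantiate for one small wallet. The sketch below (with made-up per-key probabilities) enumerates all $4^3$ key-state combinations for the common 2-of-3 multisig and sums the probability of the two failure events the abstract describes: theft (the attacker gathers enough keys) and loss (the owner no longer can).

    ```python
    import itertools

    # hypothetical per-key fault probabilities (inputs to the model)
    P = {"safe": 0.90, "lost": 0.04, "leaked": 0.03, "stolen": 0.03}

    def fails(states, threshold=2):
        owner    = sum(s in ("safe", "leaked") for s in states)    # owner holds key
        attacker = sum(s in ("stolen", "leaked") for s in states)  # attacker holds key
        return attacker >= threshold or owner < threshold

    failure = sum(
        P[a] * P[b] * P[c]
        for a, b, c in itertools.product(P, repeat=3)
        if fails((a, b, c))
    )
    print(f"2-of-3 wallet failure probability: {failure:.6f}")
    ```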

  • Programmable Bootstrapping Enables Efficient Homomorphic Inference of Deep Neural Networks
    by Ilaria Chillotti on November 25, 2021 at 2:07 pm

    In many cases, machine learning and privacy are perceived to be at odds. Privacy concerns are especially relevant when the involved data are sensitive. This paper deals with the privacy-preserving inference of deep neural networks. We report on first experiments with a new library implementing a variant of the TFHE fully homomorphic encryption scheme. The key underlying technology is programmable bootstrapping, which enables the homomorphic evaluation of any function of a ciphertext, with a controlled level of noise. Our results indicate for the first time that deep neural networks are now within the reach of fully homomorphic encryption. Importantly, in contrast to prior works, our framework does not necessitate re-training the model.

  • Evolving Secret Sharing in Almost Semi-honest Model
    by Jyotirmoy Pramanik on November 25, 2021 at 11:53 am

    Evolving secret sharing is a special kind of secret sharing where the number of shareholders is not known beforehand, i.e., at time t = 0. In classical secret sharing such a restriction was assumed inherently, i.e., the number of shareholders was given to the dealer's algorithm as an input. Evolving secret sharing relaxes this condition. Pramanik and Adhikari left an open problem regarding malicious shareholders in the evolving setup, which we answer in this paper. We introduce a new cheating model, called the almost semi-honest model, where a shareholder who joins later can check the authenticity of the shares of previous ones. We use a collision-resistant hash function to construct such a secret sharing scheme with malicious node identification. Moreover, our scheme preserves the share size of Komargodski et al. (TCC 2016).

  • A Side-Channel Resistant Implementation of SABER
    by Michiel Van Beirendonck on November 25, 2021 at 9:55 am

    The candidates for the NIST Post-Quantum Cryptography standardization have undergone extensive studies on efficiency and theoretical security, but research on their side-channel security is largely lacking. This remains a considerable obstacle for their real-world deployment, where side-channel security can be a critical requirement. This work describes a side-channel resistant instance of Saber, one of the lattice-based candidates, using masking as a countermeasure. Saber proves to be very efficient to mask due to two specific design choices: power-of-two moduli, and limited noise sampling of learning with rounding. A major challenge in masking lattice-based cryptosystems is the integration of bit-wise operations with arithmetic masking, requiring algorithms to securely convert between masked representations. The described design includes a novel primitive for masked logical shifting on arithmetic shares and adapts an existing masked binomial sampler for Saber. An implementation is provided for an ARM Cortex-M4 microcontroller, and its side-channel resistance is experimentally demonstrated. The masked implementation features a 2.5x overhead factor, significantly lower than the 5.7x previously reported for a masked variant of NewHope. Masked key decapsulation requires less than 3,000,000 cycles on the Cortex-M4 and consumes less than 12kB of dynamic memory, making it suitable for deployment in embedded platforms. We have made our implementation available at https://github.com/KULeuven-COSIC/SABER-masking.
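    The arithmetic-masking representation at the heart of such an implementation is simple to illustrate (first-order, two shares; the genuinely hard part the paper addresses, securely shifting and converting such shares, is not shown): linear operations act share-wise and the secret only reappears on recombination.

    ```python
    import secrets

    Q = 2**13                                   # a power-of-two modulus, as in Saber

    def mask(s):
        # split s into two shares whose sum mod Q is s
        r = secrets.randbelow(Q)
        return (r, (s - r) % Q)

    def add_public(shares, c):                  # s + c: touch one share only
        return ((shares[0] + c) % Q, shares[1])

    def mul_public(shares, c):                  # s * c: act share-wise
        return ((shares[0] * c) % Q, (shares[1] * c) % Q)

    def unmask(shares):
        return (shares[0] + shares[1]) % Q

    s = 1234
    sh = mul_public(add_public(mask(s), 5), 3)
    assert unmask(sh) == (s + 5) * 3 % Q
    ```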

  • Sublattice Attack on Poly-LWE with Wide Error Distributions
    by Hao Chen on November 25, 2021 at 1:43 am

    The fundamental problem in lattice-based cryptography is the hardness of the Ring-LWE, which has been based on the conjectured hardness of approximating ideal-SIVP or ideal-SVP. Though both are now widely conjectured to be hard in the classical and quantum computation models, few substantial attacks have been proposed and analyzed. In this paper we propose the subset quadruple attack on general structured LWE problems over any ring endowed with a positive definite inner product and an error distribution. Hence, from the view of subset quadruple attacks, the error distributions of feasible non-negligible subset quadruples should be calculated to test the hardness. The sublattice pair with an ideal attack is a special case of the subset quadruple attack. A lower bound for the Gaussian error distribution is proved to construct suitable feasible non-negligible sublattices. From the sublattice pair with an ideal attack we prove that the decision Poly-LWE over ${\bf Z}[x]/(x^n-p_n)$ with certain special inner products and arbitrary polynomially bounded widths of Gaussian error distributions can be solved in polynomial time for sufficiently large polynomially bounded modulus parameters $p_n$. Keywords: Poly-LWE, Ring-LWE, wide error distribution, subset quadruple attack, sublattice pair with an ideal.

  • The Direction of Updatable Encryption Does Matter
    by Ryo Nishimaki on November 25, 2021 at 12:19 am

    We introduce a new definition for key updates, called backward-leak uni-directional key updates, in updatable encryption (UE). This notion is a variant of uni-directional key updates for UE. We show that existing secure UE schemes in the bi-directional key updates setting are not secure in the backward-leak uni-directional key updates setting. Thus, security in the backward-leak uni-directional key updates setting is strictly stronger than security in the bi-directional key updates setting. This result is in sharp contrast to the equivalence theorem by Jiang (Asiacrypt 2020), which says security in the bi-directional key updates setting is equivalent to security in the existing uni-directional key updates setting. We call the existing uni-directional key updates ``forward-leak uni-directional'' key updates to distinguish two types of uni-directional key updates in this paper. We also present two UE schemes with the following features. - The first scheme is post-quantum secure in the backward-leak uni-directional key updates setting under the learning with errors assumption. - The second scheme is secure in the no-directional key updates setting and based on indistinguishability obfuscation and one-way functions. This result solves the open problem left by Jiang (Asiacrypt 2020).

  • Improved Programmable Bootstrapping with Larger Precision and Efficient Arithmetic Circuits for TFHE
    by Ilaria Chillotti on November 24, 2021 at 8:59 pm

    Fully Homomorphic Encryption (FHE) schemes enable computation over encrypted data. Among them, TFHE [CGGI17] has the great advantage of offering an efficient method for bootstrapping noisy ciphertexts, i.e., reducing the noise. Indeed, homomorphic computation increases the noise in ciphertexts and might compromise the encrypted message. TFHE bootstrapping, in addition to reducing the noise, also evaluates (for free) univariate functions expressed as look-up tables. It requires, however, the most significant bit of the plaintext to be known a priori, resulting in the loss of one bit of space to store messages. Furthermore, it represents a non-negligible computational overhead in many use cases. In this paper, we propose a solution to overcome this limitation, which we call Programmable Bootstrapping Without Padding (WoP-PBS). This approach relies on two building blocks. The first one is the multiplication à la BFV [FV12] that we incorporate into TFHE. This is possible thanks to a thorough noise analysis showing that correct multiplications can be computed using practical TFHE parameters. The second building block is the generalization of TFHE bootstrapping introduced in this paper. It offers the flexibility to select any chunk of bits in an encrypted plaintext during a bootstrap. It also enables evaluating many LUTs at the same time when working with small enough precision. All these improvements are particularly helpful in some applications such as the evaluation of Boolean circuits (where a bootstrap is no longer required in each evaluated gate) and, more generally, in the efficient evaluation of arithmetic circuits even with large integers. Those results improve TFHE circuit bootstrapping as well. Moreover, we show that bootstrapping large precision integers is now possible using much smaller parameters than those obtained by scaling TFHE ones.
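    As a plain-domain analogy of what programmable bootstrapping provides (no encryption is involved here; the slot count and noise bounds are made up): a noisy encoding of a small message is rounded back to its slot while simultaneously being pushed through an arbitrary look-up table, which also makes visible why the number of representable slots, i.e. the precision, is the limiting factor the paper attacks.

    ```python
    import random

    SLOTS = 16                                  # message space {0, ..., 15}
    SCALE = 2**32 // SLOTS                      # slot width in a 32-bit "torus"

    def encode(m):
        # scaled message plus small noise, as homomorphic operations would leave it
        noise = random.randint(-SCALE // 8, SCALE // 8)
        return (m * SCALE + noise) % 2**32

    def bootstrap(ct, lut):
        m = ((ct + SCALE // 2) // SCALE) % SLOTS   # round to the nearest slot
        return encode(lut[m])                      # fresh small noise, LUT applied

    square_lut = [(m * m) % SLOTS for m in range(SLOTS)]
    ct = encode(7)
    out = bootstrap(ct, square_lut)
    assert (out + SCALE // 2) // SCALE % SLOTS == 49 % SLOTS
    ```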

  • Leaking Arbitrarily Many Secrets: Any-out-of-Many Proofs and Applications to RingCT Protocols
    by Tianyu Zheng on November 24, 2021 at 4:29 am

    In this paper, we propose any-out-of-many proofs, a logarithmic zero-knowledge scheme for proving knowledge of arbitrarily many secrets out of a public list. Unlike existing $k$-out-of-$N$ proofs [S\&P'21, CRYPTO'21], our approach also hides the exact number of secrets $k$, which can be used to achieve a higher anonymity level. Furthermore, we enhance the efficiency of our scheme through a transformation that can adopt the improved inner product argument in Bulletproofs [S\&P'18]; only $2 \cdot \lceil \log_2(N) \rceil + 13$ elements need to be sent in a non-interactive proof. We further use our proof scheme to implement both multiple ring signature schemes and RingCT protocols. For multiple ring signatures, we need to add a boundary constraint for the number $k$ to avoid the proof of an empty secret set. Thus, an improved version called the bounded any-out-of-many proof is presented, which preserves all the nice features of the original protocol, such as high anonymity and logarithmic size. As for the RingCT, both the original and bounded proofs can be used safely. The result of the performance evaluation indicates that our RingCT protocol is more efficient and secure than existing ones. We also believe our techniques are applicable in other privacy-preserving scenarios.

  • Flexible Anonymous Transactions (FLAX): Towards Privacy-Preserving and Composable Decentralized Finance
    by Wei Dai on November 24, 2021 at 12:43 am

    Decentralized finance (DeFi) refers to interoperable smart contracts running on distributed ledgers offering financial services beyond payments. Recently, there has been an explosion of DeFi applications centered on Ethereum, with close to a hundred billion USD in total assets deposited as of September 2021. These applications provide financial services such as asset management, trading, and lending. The wide adoption of DeFi has raised important concerns, and among them is the key issue of privacy---DeFi applications store account balances in the clear, exposing financial positions to public scrutiny. In this work, we propose a framework of anonymous and composable DeFi on public-state smart contract platforms. First, we define a cryptographic primitive called a flexible anonymous transaction (FLAX) system with two distinctive features: (1) transactions authenticate additional information known as ``associated data'' and (2) transactions can be applied flexibly via a parameter that is determined at processing time, e.g. during the execution time of smart contracts. Second, we design an anonymous token standard (extending ERC20), which admits composable usage of anonymous funds by other contracts. Third, we demonstrate how the FLAX token standard can realize privacy-preserving variants of the Ethereum DeFi ecosystem of today---we show contract designs for asset pools, decentralized exchanges, and lending, covering the largest DeFi projects to date including Curve, Uniswap, Dai stablecoin, Aave, Compound, and Yearn. Lastly, we provide formal security definitions for FLAX and describe instantiations from existing designs of anonymous payments such as Zerocash, RingCT, Quisquis, and Zether.

  • Revisiting the Security of COMET Authenticated Encryption Scheme
    by Shay Gueron on November 23, 2021 at 7:03 pm

    COMETv1, by Gueron, Jha and Nandi, is a mode of operation for nonce-based authenticated encryption with associated data functionality. It was one of the second-round candidates in the ongoing NIST Lightweight Cryptography Standardization Process. In this paper, we study a generalized version of COMETv1, which we call gCOMET, from a provable security perspective. First, we present a comprehensive and complete security proof for gCOMET in the ideal cipher model. Second, we view COMET, the underlying mode of operation in COMETv1, as an instantiation of gCOMET, and derive its concrete security bounds. Finally, we propose another instantiation of gCOMET, dubbed COMETv2, and show that this version achieves better security guarantees as well as more memory-efficient implementations compared to COMETv1.

  • Post-Quantum Zero Knowledge, Revisited (or: How to do Quantum Rewinding Undetectably)
    by Alex Lombardi on November 23, 2021 at 2:27 pm

    A major difficulty in quantum rewinding is the fact that measurement is destructive: extracting information from a quantum state irreversibly changes it. This is especially problematic in the context of zero-knowledge simulation, where preserving the adversary's state is essential. In this work, we develop new techniques for quantum rewinding in the context of extraction and zero-knowledge simulation: (1) We show how to extract information from a quantum adversary by rewinding it without disturbing its internal state. We use this technique to prove that important interactive protocols, such as the Goldreich-Micali-Wigderson protocol for graph non-isomorphism and the Feige-Shamir protocol for NP, are zero-knowledge against quantum adversaries. (2) We prove that the Goldreich-Kahan protocol for NP is post-quantum zero knowledge using a simulator that can be seen as a natural quantum extension of the classical simulator. Our results achieve (constant-round) black-box zero-knowledge with negligible simulation error, appearing to contradict a recent impossibility result due to Chia-Chung-Liu-Yamakawa (FOCS 2021). This brings us to our final contribution: (3) We introduce coherent-runtime expected quantum polynomial time, a computational model that (a) captures all of our zero-knowledge simulators, (b) cannot break any polynomial hardness assumptions, and (c) is not subject to the CCLY impossibility. In light of our positive results and the CCLY negative results, we propose coherent-runtime simulation to be the right quantum analogue of classical expected polynomial-time simulation.

  • An End-to-End Bitstream Tamper Attack Against Flip-Chip FPGAs
    by Fahim Rahman on November 23, 2021 at 2:26 pm

    FPGA bitstream encryption and authentication can be defeated by various techniques and it is critical to understand how these vulnerabilities enable extraction and tampering of commercial FPGA bitstreams. We exploit the physical vulnerability of bitstream encryption keys to readout using failure analysis equipment and conduct an end-to-end bitstream tamper attack. Our work underscores the feasibility of supply chain bitstream tampering and the necessity of guarding against such attacks in critical systems.

  • Lightweight Swarm Authentication
    by George Teseleanu on November 23, 2021 at 2:25 pm

    In this paper we describe a provably secure authentication protocol for resource limited devices. The proposed algorithm performs whole-network authentication using very few rounds and in a time logarithmic in the number of nodes. Compared to one-to-one node authentication and previous proposals, our protocol is more efficient: it requires less communication and computation and, in turn, lower energy consumption.

  • Escaping from Consensus: Instantly Redactable Blockchain Protocols in Permissionless Setting
    by Xinyu Li on November 23, 2021 at 10:59 am

    Blockchain technologies have drawn a lot of attention, and their immutability is paramount to applications requiring persistent records. However, numerous real-world incidents have exposed the harm of strict immutability, such as the illicit data stored on Bitcoin and the loss of millions of dollars in vulnerable smart contracts. Moreover, the "Right to be Forgotten" has been imposed by the European Union's new General Data Protection Regulation (GDPR), and it is incompatible with blockchain's immutability. Therefore, it is imperative to design efficient redactable blockchains that operate in a controlled way. In this paper, we present a generic design of a redactable blockchain protocol in the permissionless setting, applicable to both proof-of-stake and proof-of-work blockchains. Our protocol can (1) maintain the same adversary bound requirement as the underlying blockchain, (2) support various network environments, (3) offer public verifiability for any redaction, and (4) achieve instant redaction, even within a single slot in the best case, which is desirable for redacting harmful data. Furthermore, we define the first ideal functionality of a redactable blockchain and conduct a security analysis in the language of universal composability. Finally, we develop a proof-of-concept implementation showing that the overhead remains minimal for both online and re-spawning nodes, which demonstrates the high efficiency of our design.

  • Public-key Authenticated Encryption with Keyword Search: Cryptanalysis, Enhanced Security, and Quantum-resistant Instantiation
    by Zi-Yuan Liu on November 23, 2021 at 10:02 am

    With the rapid development of cloud computing, an increasing number of companies are adopting cloud storage technology to reduce overhead. However, to ensure the privacy of sensitive data, the uploaded data need to be encrypted before being outsourced to the cloud. The concept of public-key encryption with keyword search (PEKS) was introduced by Boneh \textit{et al.} to provide flexible usage of the encrypted data. Unfortunately, most PEKS schemes are not secure against inside keyword guessing attacks (IKGA), so the keyword information of the trapdoor may be leaked to the adversary. To solve this issue, Huang and Li presented public-key authenticated encryption with keyword search (PAEKS), in which the trapdoor generated by the receiver is only valid for authenticated ciphertexts. Following their seminal work, many PAEKS schemes have been introduced to enhance the security of PAEKS. Some of them further consider upcoming quantum attacks. However, our cryptanalysis indicates that, in fact, these schemes cannot withstand IKGA. To fight against attacks from quantum adversaries and support privacy-preserving search functionality, we first introduce a novel generic PAEKS construction in this work. We then present the first quantum-resistant PAEKS instantiation based on lattices. The security proofs show that our instantiation not only satisfies the basic requirements but also achieves enhanced security models, namely multi-ciphertext indistinguishability and multi-trapdoor privacy. Furthermore, the comparative results indicate that, with only some additional expenditure, the proposed instantiation provides stronger security properties, making it suitable for more diverse application environments.

  • Vectorial Decoding Algorithm for Fast Correlation Attack and Its Applications to Stream Cipher Grain-128a
    by ZhaoCun Zhou on November 22, 2021 at 2:52 pm

    Fast correlation attacks (FCA), pioneered by Meier and Staffelbach, are an important cryptanalysis tool for LFSR-based stream ciphers: they exploit the correlation between the LFSR state and the key stream, and aim at recovering the initial state of the LFSR via a decoding algorithm. In this paper, we develop a vectorial decoding algorithm for fast correlation attacks, which is a natural generalization of the original binary approach. Our approach benefits from the contributions of all correlations in a subspace. We propose two novel criteria to improve the iterative decoding algorithm. We also give some cryptographic properties of the new FCA which allow us to estimate the efficiency and complexity bounds. Furthermore, we apply this technique to the well-analyzed stream cipher Grain-128a. Based on a hypothesis, an interesting result for its security bound is deduced from the perspective of iterative decoding. Our analysis reveals potential vulnerabilities for LFSRs over generic linear groups and also for nonlinear functions with high-SEI multidimensional linear approximations such as those of Grain-128a.
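    The correlation principle underlying all FCAs can be demonstrated at toy scale (the exhaustive search below is what iterative decoding in a real FCA replaces; the LFSR taps and noise rate are arbitrary): generate a keystream as an LFSR sequence flipped through a binary symmetric channel, then rank candidate initial states by their agreement with the observed keystream.

    ```python
    import random

    TAPS = (0, 3, 5, 11)                           # feedback taps of a 12-bit LFSR

    def lfsr(state, n=128, nbits=12):
        out = []
        for _ in range(n):
            out.append(state & 1)
            fb = 0
            for t in TAPS:
                fb ^= (state >> t) & 1
            state = (state >> 1) | (fb << (nbits - 1))
        return out

    random.seed(7)
    true_state = 0xABC
    # observed keystream: LFSR output flipped with probability 0.25 (the BSC)
    keystream = [b ^ (random.random() < 0.25) for b in lfsr(true_state)]

    # exhaustive correlation: pick the state whose output agrees most often
    best = max(range(1, 1 << 12),
               key=lambda s: sum(a == b for a, b in zip(lfsr(s), keystream)))
    print(hex(best), best == true_state)   # recovers the state w.h.p.
    ```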

  • New Attacks on LowMC instances with a Single Plaintext/Ciphertext pair
    by Subhadeep Banik on November 22, 2021 at 1:02 pm

    Cryptanalysis of the LowMC block cipher when the attacker has access to a single known plaintext/ciphertext pair is a mathematically challenging problem. This is because the attacker is unable to employ most of the standard techniques in symmetric cryptography like linear and differential cryptanalysis. This scenario is particularly relevant while arguing the security of the Picnic digital signature scheme, in which the plaintext/ciphertext pair generated by the LowMC block cipher serves as the public (verification) key and the corresponding LowMC encryption key also serves as the secret (signing) key of the signature scheme. In the paper by Banik et al. (IACR ToSC 2020:4), the authors used a linearization technique of the LowMC S-box to mount attacks on some instances of the block cipher. In this paper, we first make a more precise complexity analysis of the linearization attack. Then, we show how to perform a 2-stage MITM attack on LowMC. The first stage reduces the number of key candidates for a fraction of the key bits of the master key. The second MITM stage, between this reduced candidate set and the remaining fraction of key bits, successfully recovers the master key. We show that the combined computational complexity of both these stages is significantly lower than the complexities reported in the ToSC paper by Banik et al.

  • Perfect Trees: Designing Energy-Optimal Symmetric Encryption Primitives
    by Andrea Caforio on November 22, 2021 at 12:46 pm

    Energy efficiency is critical in battery-driven devices, and designing energy-optimal symmetric-key ciphers is one of the goals for the use of ciphers in such environments. In the paper by Banik et al. (IACR ToSC 2018), stream ciphers were identified as ideal candidates for low-energy solutions. One of the main conclusions of this paper was that Trivium, when implemented in an unrolled fashion, was by far the most energy-efficient way of encrypting larger quantities of data. In fact, it was shown that as soon as the number of data bits to be encrypted exceeded 320 bits, Trivium consumed the least amount of energy on STM 90 nm ASIC circuits and outperformed the Midori family of block ciphers even in the least energy-hungry ECB mode (Midori was designed specifically for energy efficiency). In this work, we devise the first heuristic energy model in the realm of stream ciphers that links the underlying algebraic topology of the state update function to the consumptive behaviour. The model is then used to derive a metric that exhibits a heavy negative correlation with the energy consumption of a broad range of stream cipher architectures, i.e., the families of Trivium-like, Grain-like and Subterranean-like constructions. We demonstrate that this correlation is especially pronounced for Trivium-like ciphers, which leads us to establish a link between the energy consumption and the security guarantees that makes it possible to find several alternative energy-optimal versions of Trivium that meet the requirements but consume less energy. We present two such designs, Trivium-LE(F) and Trivium-LE(S), that consume around 15% and 25% less energy, respectively, making them the most energy-efficient encryption primitives to date. They inherit the same security level as Trivium, i.e., 80-bit security. We further present Triad-LE as an energy-efficient variant satisfying a higher security level. The simplicity and wide applicability of our model have direct consequences for the conception of future hardware-targeted stream ciphers, as for the first time it is possible to optimize for energy during the design phase. Moreover, we extend the reach of our model beyond plain encryption primitives and propose a novel energy-efficient message authentication code Trivium-LE-MAC.

  • An Alternative Approach for Computing Discrete Logarithms in Compressed SIDH
    by Kaizhan Lin, Weize Wang, and Lin Wang on November 22, 2021 at 12:28 pm

    Currently, public-key compression of supersingular isogeny Diffie-Hellman (SIDH) and its variant, supersingular isogeny key encapsulation (SIKE), involves pairing computation and discrete logarithm computation. In this paper, we propose novel methods to compute only 3 discrete logarithms instead of 4, in exchange for computing a lookup table efficiently. The algorithms also allow us to make a trade-off between memory and efficiency. Our implementation shows that the efficiency of our algorithms is close to that of the previous work, and our algorithms perform better in some special cases.

  • Route Discovery in Private Payment Channel Networks
    by Zeta Avarikioti on November 22, 2021 at 11:36 am

    In this work, we are the first to explore route discovery in private channel networks. We first determine what ``ideal'' privacy for a routing protocol means in this setting. We observe that protocols achieving this strong privacy definition exist by leveraging (topology hiding) Multi-Party Computation, but they are (inherently) inefficient as route discovery must involve the entire network. We then present protocols with weaker privacy guarantees but much better efficiency. In particular, route discovery typically involves only a small fraction of the nodes, but some information on the topology and balances -- beyond what is necessary for performing the transaction -- is leaked. The core idea is that both sender and receiver gossip a message which then slowly propagates through the network, and the moment any node in the network receives both messages, a path is found. In our first protocol the message is always sent to all neighbouring nodes with a delay proportional to the fees of that edge. In our second protocol the message is only sent to one neighbour chosen randomly with a probability proportional to its degree. While the first instantiation always finds the cheapest path, the second might not, but it involves a smaller fraction of the network. We also discuss some extensions to further improve privacy by employing bilinear maps, so that the gossiped messages can be re-randomized, which makes them unlinkable. Simulations of our protocols on the Lightning network topology (for random transactions and uniform fees) show that our first protocol (which finds the cheapest path) typically involves around 12\% of the 6376 nodes, while the second only touches around 18 nodes $(<0.3\%)$, and the cost of the path that is found is around twice the cost of the optimal one.
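    The first protocol's core mechanism, flooding with a per-edge delay proportional to the fee, behaves like bidirectional Dijkstra, as the toy simulation below shows (the graph and fees are made up, and the real protocol stops as soon as the two floods meet instead of computing full distance tables). Every node on a cheapest path minimizes the sum of the two arrival times.

    ```python
    import heapq

    def flood(graph, src):
        # arrival time of the gossip at each node = fee-weighted shortest distance
        dist, pq = {src: 0}, [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, fee in graph[u]:
                if d + fee < dist.get(v, float("inf")):
                    dist[v] = d + fee
                    heapq.heappush(pq, (d + fee, v))
        return dist

    graph = {                              # toy channel graph: node -> [(peer, fee)]
        "S": [("A", 1), ("B", 6)], "A": [("S", 1), ("C", 2)],
        "B": [("S", 6), ("R", 1)], "C": [("A", 2), ("R", 2)],
        "R": [("B", 1), ("C", 2)],
    }
    ds, dr = flood(graph, "S"), flood(graph, "R")   # sender and receiver floods
    meet = min(graph, key=lambda v: ds[v] + dr[v])
    print(f"cheapest S->R cost: {ds[meet] + dr[meet]} (floods meet at {meet!r})")
    ```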

  • SIMC: ML Inference Secure Against Malicious Clients at Semi-Honest Cost
    by Nishanth Chandran on November 22, 2021 at 11:32 am

    Secure inference allows a model owner (or, the server) and the input owner (or, the client) to perform inference on a machine learning model without revealing their private information to each other. A large body of work has shown efficient cryptographic solutions to this problem through secure 2-party computation. However, they assume that both parties are semi-honest, i.e., follow the protocol specification. Recently, Lehmkuhl et al. showed that malicious clients can extract the whole model of the server using novel model-extraction attacks. To remedy the situation, they introduced the client-malicious threat model and built a secure inference system, MUSE, that provides security guarantees even when the client is malicious. In this work, we design and build SIMC, a new cryptographic system for secure inference in the client-malicious threat model. On secure inference benchmarks considered by MUSE, SIMC has 23-29× less communication and is up to 11.4× faster than MUSE. SIMC obtains these improvements using a novel protocol for non-linear activation functions (such as ReLU) that has more than 28× less communication and is up to 43× more performant than MUSE. In fact, SIMC's performance beats the state-of-the-art semi-honest secure inference system! Finally, similar to MUSE, we show how to push the majority of the cryptographic cost of SIMC to an input-independent preprocessing phase. While the cost of the online phase of this protocol, SIMC++, is the same as that of MUSE, the overall improvements of SIMC translate to similar improvements to the preprocessing phase of MUSE.

  • PNB-based Differential Cryptanalysis of ChaCha Stream Cipher
    by Shotaro Miyashita on November 22, 2021 at 11:32 am

    In this study, we focus on the differential cryptanalysis of the ChaCha stream cipher. In the conventional approach, an adversary first searches for the input/output differential pair with the best differential bias and then analyzes the probabilistic neutral bits (PNB) in detail based on the obtained input/output differential pair. However, although the time and data complexities for the attack can be estimated from the differential bias and PNB obtained in this approach, their combination does not always represent the best choice. In addition, a comprehensive analysis of the PNB was not provided in existing studies; they did not clarify the upper bound on the number of rounds for which a PNB-based differential attack can succeed. To solve these problems, we propose a PNB-based differential attack on reduced-round ChaCha by first comprehensively analyzing the PNB at all output differential bit positions and then searching for the input/output differential pair with the best differential bias based on the obtained PNB. By comprehensively analyzing the PNB, we clarify that an upper bound on the number of rounds for which the PNB-based differential attack can succeed is 7.25 rounds. As a result, the proposed attack works on 7.25-round ChaCha with time and data complexities of \(2^{255.62}\) and \(2^{37.49}\), respectively. Further, using the existing differential bias presented by Coutinho and Neto at EUROCRYPT 2021, we improve the attack on 7.25-round ChaCha further, achieving time and data complexities of \(2^{244.22}\) and \(2^{69.14}\), respectively. The best existing attack on ChaCha, proposed by Coutinho and Neto at EUROCRYPT 2021, works on up to 7 rounds with time and data complexities of \(2^{228.51}\) and \(2^{80.51}\), respectively. Therefore, we improve upon the best existing attack on reduced-round ChaCha. We believe that this study will be the first step towards an attack on more rounds of ChaCha, e.g., the 8-round ChaCha.
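    For readers who want to experiment, the sketch below implements the ChaCha quarter-round and estimates one single-bit differential bias empirically (iterating a lone quarter-round as a stand-in for full ChaCha rounds; the bit positions are arbitrary, whereas the paper analyzes all positions systematically).

    ```python
    import random

    M = 0xFFFFFFFF
    rotl = lambda x, n: ((x << n) | (x >> (32 - n))) & M

    def quarter_round(a, b, c, d):
        # the ChaCha quarter-round with rotation amounts 16, 12, 8, 7
        a = (a + b) & M; d = rotl(d ^ a, 16)
        c = (c + d) & M; b = rotl(b ^ c, 12)
        a = (a + b) & M; d = rotl(d ^ a, 8)
        c = (c + d) & M; b = rotl(b ^ c, 7)
        return a, b, c, d

    def output_diff_bit(x, in_bit=13, out_bit=0, rounds=2):
        y = (x[0], x[1], x[2], x[3] ^ (1 << in_bit))        # inject input difference
        for _ in range(rounds):
            x, y = quarter_round(*x), quarter_round(*y)
        return ((x[0] ^ y[0]) >> out_bit) & 1

    trials = 1 << 16
    zeros = sum(
        output_diff_bit(tuple(random.getrandbits(32) for _ in range(4))) == 0
        for _ in range(trials)
    )
    print(f"Pr[output difference bit = 0] ~ {zeros / trials:.4f}  (0.5 = no bias)")
    ```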

  • SoK: Tokenization on Blockchain
    by Gang Wang on November 22, 2021 at 11:31 am

    Blockchain, a potentially disruptive technology, advances many different applications, e.g., crypto-currencies, supply chains, and the Internet of Things. Under the hood, a blockchain must handle many kinds of digital assets and data. The next-generation blockchain ecosystem is expected to consist of numerous applications, and each application may have a distinct representation of digital assets. However, digital assets cannot be directly recorded on the blockchain; a tokenization process is required to format these assets. Tokenization on blockchain will inevitably require proper standards to enable advanced functionality and interoperability for future applications. However, due to the specific features of digital assets, it is hard to obtain a single standard token form that represents all kinds of assets. For example, when considering fungibility, some assets are divisible and identical, commonly referred to as fungible assets, while others that are not fungible are widely referred to as non-fungible assets. Tokenizing these assets requires different tokenization processes. How to effectively tokenize assets is thus essential, and it confronts various unprecedented challenges. This paper provides a systematic and comprehensive study of the current progress of tokenization on blockchain. First, we explore general principles and practical schemes for tokenizing digital assets on blockchain and classify digitized tokens into three categories: fungible, non-fungible, and semi-fungible. We then focus on the well-known Ethereum standards for non-fungible tokens. Finally, we discuss several critical challenges and some potential research directions to advance the research on the tokenization process on blockchain. To the best of our knowledge, this is the first systematic study of tokenization on blockchain.
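
    As a minimal illustration of the fungible vs. non-fungible distinction discussed above, the sketch below models the two asset classes as plain Python ledgers; it is a conceptual toy of our own, not an implementation of the ERC-20/721/1155 standards.

        # Toy data model for the fungible / non-fungible distinction.
        class FungibleToken:
            def __init__(self):
                self.balances = {}           # identical, divisible units
            def mint(self, owner, amount):
                self.balances[owner] = self.balances.get(owner, 0) + amount
            def transfer(self, src, dst, amount):
                assert self.balances.get(src, 0) >= amount
                self.balances[src] -= amount
                self.balances[dst] = self.balances.get(dst, 0) + amount

        class NonFungibleToken:
            def __init__(self):
                self.owner_of = {}           # each token id is unique
            def mint(self, owner, token_id, metadata):
                assert token_id not in self.owner_of
                self.owner_of[token_id] = (owner, metadata)
            def transfer(self, src, dst, token_id):
                owner, meta = self.owner_of[token_id]
                assert owner == src
                self.owner_of[token_id] = (dst, meta)

        gold = FungibleToken(); gold.mint("alice", 100); gold.transfer("alice", "bob", 30)
        deed = NonFungibleToken(); deed.mint("alice", 1, "plot #42"); deed.transfer("alice", "bob", 1)
        print(gold.balances, deed.owner_of)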

  • Light-OCB: Parallel Lightweight Authenticated Cipher with Full Security
    by Avik Chakraborti on November 22, 2021 at 11:31 am

    This paper proposes a lightweight authenticated encryption (AE) scheme, called Light-OCB, which can be viewed as a lighter variant of the CAESAR winner OCB as well as a faster variant of the high-profile NIST LWC competition submission LOCUS-AEAD. Light-OCB is structurally similar to LOCUS-AEAD: it uses a nonce-based derived key that provides optimal security, and a short-tweak tweakable blockcipher (tBC) for efficient domain separation. Light-OCB improves over LOCUS-AEAD by reducing the number of primitive calls, thereby significantly improving the throughput. To establish our claim, we provide FPGA hardware implementation details and benchmarks for Light-OCB against LOCUS-AEAD and several other well-known AEs. The implementation results show that, when instantiated with the tBC TweGIFT64, Light-OCB achieves an extremely low hardware footprint, consuming only around 1128 LUTs and 307 slices (significantly lower than LOCUS-AEAD), while maintaining a throughput of 880 Mbps, almost twice that of LOCUS-AEAD. To the best of our knowledge, this figure is significantly better than all known implementation results for other lightweight ciphers with parallel structures.
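
    The sketch below illustrates the generic OCB-style structure that makes such designs parallel: every message block is processed by an independent, offset-masked blockcipher call, and a checksum produces the tag. It is a structure-only toy under our own assumptions (a 4-round Feistel stand-in for the blockcipher, hash-derived offsets), not Light-OCB or TweGIFT64.

        # Structure-only sketch of an OCB-style parallel AE mode.
        import hashlib

        def F(k, r, x):                    # toy round function
            return int.from_bytes(hashlib.sha256(b"%d|%d|%d" % (k, r, x)).digest()[:4], "big")

        def E(k, block):                   # toy 64-bit blockcipher (invertible Feistel)
            L, R = block >> 32, block & 0xFFFFFFFF
            for r in range(4):
                L, R = R, L ^ F(k, r, R)
            return (L << 32) | R

        def offset(k, nonce, i):           # per-block offset Delta_i
            return int.from_bytes(hashlib.sha256(b"%d|%d|%d" % (k, nonce, i)).digest()[:8], "big")

        def encrypt(k, nonce, blocks):
            cts, checksum = [], 0
            for i, m in enumerate(blocks):          # independent => parallelizable
                d = offset(k, nonce, i)
                cts.append(E(k, m ^ d) ^ d)         # C_i = E(M_i xor D_i) xor D_i
                checksum ^= m
            tag = E(k, checksum ^ offset(k, nonce, -1))
            return cts, tag

        cts, tag = encrypt(k=42, nonce=7, blocks=[0xDEADBEEF, 0x01234567, 0x89ABCDEF])
        print([hex(c) for c in cts], hex(tag))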

  • An Optimized GHV-Type HE Scheme: Simpler, Faster, and More Versatile
    by Liang Zhao on November 22, 2021 at 11:30 am

    In this paper, we present an optimized variant of the Homomorphic Encryption (HE) scheme of Gentry, Halevi and Vaikuntanathan (GHV, EUROCRYPT'10). Our scheme is appreciably more efficient than the original GHV scheme while retaining its merits: the (multi-key) homomorphic property and the matrix encryption property. We first measure the density of the trapdoor pairs created by Alwen and Peikert's trapdoor generation algorithm and by Micciancio and Peikert's trapdoor generation algorithm, respectively, and use the measurements to precisely discuss the time and space complexity of the corresponding GHV instantiations. We then propose a generic GHV-type construction with several optimizations that improve the time and space efficiency over the original GHV scheme. In particular, our new scheme achieves asymptotically optimal time complexity and avoids generating and storing the inverse of the used trapdoor. Finally, we present an instantiation that, by using a new set of (lower) bound parameters, has smaller key and ciphertext sizes than the original GHV scheme.
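
    For orientation, here is the original GHV scheme in brief (our notation, following the EUROCRYPT'10 paper); the decryption step shows the trapdoor inverse $T^{-1}$ whose generation and storage the optimized variant avoids:

        \mathbf{KeyGen:}\quad A \in \mathbb{Z}_q^{m \times n},\ \text{trapdoor } T \in \mathbb{Z}^{m \times m} \text{ with small entries},\quad T A \equiv 0 \pmod{q}
        \mathbf{Enc}(M \in \{0,1\}^{m \times m}):\quad C = A S + 2X + M \bmod q,\qquad S \leftarrow \mathbb{Z}_q^{n \times m},\ X \text{ small error}
        \mathbf{Dec}(C):\quad E = T C T^{\mathsf{T}} \bmod q,\qquad M = T^{-1} E \,(T^{\mathsf{T}})^{-1} \bmod 2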

  • The Legendre Symbol and the Modulo-2 Operator in Symmetric Schemes over (F_p)^n
    by Lorenzo Grassi on November 22, 2021 at 11:30 am

    Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions that are easy to evaluate in the target application. A recent example of the latter is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol. In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over (F_p)^n. Our focus is on the security analysis rather than on efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation. Even though these functions often provide strong statistical properties and reach a high degree after a few rounds, their main weakness is the small number of possible outputs: only three for the Legendre symbol and only two for the modulo-2 operation. By guessing these outputs, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
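
    A minimal sketch of the Legendre symbol as a nonlinear building block, with a Grendel-style map $x \mapsto x^d \cdot \mathcal{L}(x)$; the prime and the exponent are toy values of our choosing.

        # The Legendre symbol via Euler's criterion, and a Grendel-style map.
        p = 101  # small prime for illustration

        def legendre(x, p):
            # x^((p-1)/2) mod p equals 1, p-1, or 0.
            t = pow(x, (p - 1) // 2, p)
            return -1 if t == p - 1 else t

        def sbox(x, d=5):
            # x -> x^d * legendre(x) mod p  (toy exponent d = 5)
            return (pow(x, d, p) * legendre(x, p)) % p

        outs = {legendre(x, p) for x in range(p)}
        print("possible Legendre outputs:", sorted(outs))   # only three values
        print("sbox(7) =", sbox(7))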

  • On the Download Rate of Homomorphic Secret Sharing
    by Ingerid Fosli on November 22, 2021 at 11:30 am

    A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports evaluating functions on shared secrets by means of a local mapping from input shares to output shares. We initiate the study of the download rate of HSS, namely, the achievable ratio between the length of the output shares and the output length when amortized over $\ell$ function evaluations. We obtain the following results. * In the case of linear information-theoretic HSS schemes for degree-$d$ multivariate polynomials, we characterize the optimal download rate in terms of the optimal minimum distance of a linear code with related parameters. We further show that for sufficiently large $\ell$ (polynomial in all problem parameters), the optimal rate can be realized using Shamir's scheme, even with secrets over $\mathbb{F}_2$. * We present a general rate-amplification technique for HSS that improves the download rate at the cost of requiring more shares. As a corollary, we get high-rate variants of computationally secure HSS schemes and efficient private information retrieval protocols from the literature. * We show that, in some cases, one can beat the best download rate of linear HSS by allowing nonlinear output reconstruction and $2^{-\Omega(\ell)}$ error probability.
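
    For readers new to the HSS syntax, the toy below (our own illustration) shows the simplest linear information-theoretic instance: Shamir-share two inputs, let each party locally multiply its shares into an output share, and reconstruct the degree-2 output by interpolation. Rate questions arise when many such outputs are amortized, which the toy does not show.

        # Shamir-based local evaluation of a degree-2 polynomial (x*y).
        import random

        P = 2_147_483_647  # prime field (2^31 - 1)

        def share(secret, t, n):
            coeffs = [secret] + [random.randrange(P) for _ in range(t)]
            return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                    for i in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation at x = 0.
            total = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        t, n = 1, 3
        x, y = 12345, 67890
        sx, sy = share(x, t, n), share(y, t, n)
        # Local evaluation: party i's output share is the product of its input shares.
        out = [(i, (a * b) % P) for (i, a), (_, b) in zip(sx, sy)]
        assert reconstruct(out) == (x * y) % P   # degree 2t = 2 needs all n = 3 shares
        print("recovered x*y:", reconstruct(out))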

  • Squint Hard Enough: Evaluating Perceptual Hashing with Machine Learning
    by Jonathan Prokos on November 22, 2021 at 11:29 am

    Many online communications systems use perceptual hash matching systems to detect illicit files in user content. These systems employ specialized perceptual hash functions such as Microsoft's PhotoDNA or Facebook's PDQ to produce a compact digest of an image file that can be approximately compared to a database of known illicit-content digests. Recently, several proposals have suggested that hash-based matching systems be incorporated into client-side and end-to-end encrypted (E2EE) systems: in these designs, files that register as illicit content will be reported to the provider, while the remaining content will be sent confidentially. By using perceptual hashing to determine confidentiality guarantees, this new setting significantly changes the function of existing perceptual hashing -- thus motivating the need to evaluate these functions from an adversarial perspective, using their perceptual capabilities against them. For example, an attacker may attempt to trigger a match on innocuous but politically charged content in an attempt to stifle speech. In this work we develop threat models for perceptual hashing algorithms in an adversarial setting, and present attacks against the two most widely deployed algorithms: PhotoDNA and PDQ. Our results show that it is possible to efficiently generate targeted second-preimage attacks in which an attacker creates a variant of some source image that matches some target digest. As a complement to this main result, we also further investigate the production of images that facilitate detection avoidance attacks, continuing a recent investigation of Jain et al. Our work shows that existing perceptual hash functions are likely insufficiently robust to survive attacks in this new setting.
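
    The flavor of a targeted second-preimage attack can be seen on a toy "average hash" (a stand-in of our own; PhotoDNA and PDQ are far more elaborate): hill-climb pixel perturbations of a source image until its digest matches the target's.

        # Hill-climbing second preimages against a toy 8x8 average hash.
        import random

        def ahash(img):   # 1 bit per pixel vs. the global mean
            avg = sum(sum(row) for row in img) / 64
            return [1 if px > avg else 0 for row in img for px in row]

        def hamming(h1, h2):
            return sum(a != b for a, b in zip(h1, h2))

        random.seed(1)
        rand_img = lambda: [[random.randrange(256) for _ in range(8)] for _ in range(8)]
        target, source = rand_img(), rand_img()
        goal = ahash(target)

        img = [row[:] for row in source]
        dist, steps = hamming(ahash(img), goal), 0
        while dist > 0 and steps < 100_000:
            i, j = random.randrange(8), random.randrange(8)
            old = img[i][j]
            img[i][j] = max(0, min(255, old + random.choice((-16, 16))))
            nd = hamming(ahash(img), goal)
            if nd <= dist:
                dist = nd        # keep helpful (or neutral) perturbations
            else:
                img[i][j] = old  # revert harmful ones
            steps += 1
        print("hash match:", dist == 0, "after", steps, "pixel tweaks")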

  • CoHA-NTT: A Configurable Hardware Accelerator for NTT-based Polynomial Multiplication
    by Kemal Derya on November 22, 2021 at 11:27 am

    In this paper, we introduce a configurable hardware architecture that can be used to generate unified and parametric NTT-based polynomial multipliers supporting a wide range of parameters of lattice-based cryptographic schemes proposed for post-quantum cryptography. Both NTT and inverse NTT operations can be performed using the unified butterfly unit of our architecture, which constitutes the core building block of NTT operations. The number of these units plays an essential role in achieving the performance goals of a specific application area or platform. To this end, the architecture takes the number of butterfly units as input and generates an efficient NTT-based polynomial multiplier hardware to achieve the desired throughput and area requirements. More specifically, the proposed hardware architecture provides run-time configurability for the scheme parameters and compile-time configurability for the throughput and area requirements. To the best of our knowledge, this work presents the first architecture with both run-time and compile-time configurability for NTT-based polynomial multiplication. The implementation results indicate that the advanced configurability has a negligible impact on the time and area of the proposed architecture and that its performance is on par with the state-of-the-art implementations in the literature, if not better. The proposed architecture comprises various sub-blocks, such as the modular multiplier and butterfly units, each of which can be of interest on its own for accelerating lattice-based cryptography. Thus, we provide the design rationale of each sub-block and compare it with those in the literature, including our earlier works, in terms of configurability and performance.
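
    As background for the butterfly unit, the sketch below implements a recursive Cooley-Tukey NTT over Z_q with toy parameters of our choosing (q = 17, n = 8, omega = 2) and uses it for polynomial multiplication mod x^8 - 1; hardware multipliers like the one in the paper realize exactly this butterfly, typically in negacyclic variants with much larger parameters.

        # Recursive Cooley-Tukey NTT and cyclic polynomial multiplication.
        q, n, w = 17, 8, 2        # 2 is a primitive 8th root of unity mod 17

        def ntt(a, w):
            if len(a) == 1:
                return a
            even, odd = ntt(a[0::2], w * w % q), ntt(a[1::2], w * w % q)
            out, t, h = [0] * len(a), 1, len(a) // 2
            for k in range(h):
                u, v = even[k], t * odd[k] % q   # the butterfly unit:
                out[k], out[k + h] = (u + v) % q, (u - v) % q
                t = t * w % q
            return out

        def intt(a, w):
            n_inv = pow(len(a), q - 2, q)
            return [x * n_inv % q for x in ntt(a, pow(w, q - 2, q))]

        a = [1, 2, 3, 4, 0, 0, 0, 0]
        b = [5, 6, 7, 0, 0, 0, 0, 0]
        c = intt([x * y % q for x, y in zip(ntt(a, w), ntt(b, w))], w)

        # Check against schoolbook multiplication mod (x^8 - 1, 17).
        ref = [0] * n
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                ref[(i + j) % n] = (ref[(i + j) % n] + ai * bj) % q
        assert c == ref
        print("NTT product:", c)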

  • A Performance Evaluation of Pairing-Based Broadcast Encryption Systems
    by Arush Chhatrapati on November 22, 2021 at 11:26 am

    In a broadcast encryption system, a sender can encrypt a message for any subset of users who are listening on a broadcast channel. The goal of broadcast encryption is to leverage the broadcasting structure to achieve better efficiency than individually encrypting to each user; in particular, reducing the bandwidth (i.e., ciphertext size) required to transmit securely, although other factors such as public and private key size and the time to execute setup, encryption and decryption are also important. In this work, we conduct a detailed performance evaluation of eleven public-key, pairing-based broadcast encryption schemes offering different features and security guarantees, including public-key, identity-based, traitor-tracing, private linear and augmented systems. We implemented each system using the MCL Java pairings library, reworking some of the constructions to achieve better efficiency. We tested their performance on a variety of parameter choices, resulting in hundreds of data points to compare, with some interesting results from the classic Boneh-Gentry-Waters scheme (CRYPTO 2005) to Zhandry's recent generalized scheme (CRYPTO 2020), and more. We combine this performance data and knowledge of the systems' features with data we collected on practical usage scenarios to determine which schemes are likely to perform best for certain applications, such as video streaming services, online gaming, live sports betting and smartphone streaming. This work can inform both practitioners and future cryptographic designs in this area.

  • Amortizing Rate-1 OT and Applications to PIR and PSI
    by Melissa Chase on November 22, 2021 at 11:26 am

    Recent constructions of rate-1 OT [Döttling, Garg, Ishai, Malavolta, Mour, and Ostrovsky, CRYPTO 2019] have brought this primitive into the spotlight, and the techniques have led to new feasibility results for private information retrieval and homomorphic encryption for branching programs. The receiver communication of this construction consists of a quadratic (in the sender's input size) number of group elements for a single instance of rate-1 OT. Recently, [Garg, Hajiabadi, Ostrovsky, TCC 2020] improved the receiver communication to a linear number of group elements for a single string-OT. However, most applications of rate-1 OT require executing it multiple times, resulting in large communication costs for the receiver. In this work, we introduce a new technique for amortizing the cost of multiple rate-1 OTs. Specifically, based on standard pairing assumptions, we obtain a two-message rate-1 OT protocol for which the amortized cost per string-OT is asymptotically reduced to only four group elements. Our results lead to significant communication improvements in PSI and PIR, special cases of SFE for branching programs. - PIR: We obtain a rate-1 PIR scheme with client communication cost of $O(\lambda\cdot\log N)$ group elements for security parameter $\lambda$ and database size $N$. Notably, after a one-time setup (or one PIR instance), any subsequent PIR instance requires only $O(\log N)$ group elements of communication. - PSI with unbalanced inputs: We apply our techniques to private set intersection with unbalanced set sizes (where the receiver has a smaller set) and achieve receiver communication of $O((m+\lambda) \log N)$ group elements, where $m, N$ are the sizes of the receiver and sender sets, respectively. Similarly, after a one-time setup (or one PSI instance), any subsequent PSI instance requires only $O(m \cdot \log N)$ group elements of communication. All previous sublinear-communication non-FHE-based PSI protocols for the above unbalanced setting were also based on rate-1 OT, but incurred at least $O(\lambda^2 m \log N)$ group elements.

  • An Improved Range Proof with Base-3 Construction
    by Esra Günsay on November 22, 2021 at 11:26 am

    Zero-knowledge protocols (ZKPs) allow a party to prove the validity of secret information to some other party without revealing any information about the secret itself. Appropriate, effective, and efficient use of cryptographic ZKPs contributes to many novel advances in real-world privacy-preserving frameworks. One of the most important types of cryptographic ZKPs is the zero-knowledge range proof (ZKRP). Such proofs have a wide range of applications, such as anonymous credentials, cryptocurrencies, e-cash schemes, etc. In many ZKRPs the secret is represented in binary and then committed to via a suitable commitment scheme. Though different base approaches exist for bilinear-pairing-based and RSA-like constructions, to our knowledge there is no study investigating discrete-logarithm-based constructions. In this study, we focus on a range proof construction produced by Mao in 1998, which contains a bit commitment scheme with an OR-construction. We investigate the effect of different bases on Mao's range proof and compare the efficiency of these base approaches. To this end, we extend Mao's range proof to base 3 with a modified OR-proof. We derive the number of computations in modulo exponentiations and the cost of the number of integers exchanged between parties, and then generalize these costs to the base-u construction. We show that, compared with other bases, the base-3 approach consistently provides approximately a 12% improvement in computation cost and a 10% improvement in communication cost. We implemented the base-3 protocol and demonstrated that the results are consistent with our theoretical computations.
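
    A back-of-the-envelope model (ours, much cruder than the paper's exact operation counts) already hints at why base 3 wins: a range of N values needs about log_u N digits, and each digit takes a u-branch OR-proof costing roughly u exponentiations, so the total behaves like u / ln u, which is minimized over integers at u = 3.

        # Crude cost model for base-u range proofs over a 64-bit range.
        def digits_needed(u, N=2**64):
            d, cap = 0, 1
            while cap < N:
                cap *= u
                d += 1
            return d

        def crude_cost(u):
            return digits_needed(u) * u   # ~u exponentiations per OR-proof

        for u in range(2, 8):
            print(f"base {u}: ~{crude_cost(u)} exponentiations")
        # Base 3 gives the minimum (123 vs. 128 for base 2 under this crude
        # model; the paper's exact counting yields the ~12%/10% figures above).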

  • Security evaluation against side-channel analysis at compilation time
    by Nicolas Bruneau on November 22, 2021 at 11:24 am

    The masking countermeasure is implemented to thwart side-channel attacks. The maturity of high-order masking schemes has reached the level where the concepts are sound and proven. For instance, Rivain and Prouff proposed a full-fledged AES at CHES 2010 (some non-trivial fixes regarding refresh functions were needed, though). Now, industry is adopting such solutions, and for the sake of both quality and certification requirements, masked cryptographic code shall be checked for correctness using the same model as that of the theoretical protection rationale (for instance, the probing leakage model). Seminal work on automated verification at higher orders on concrete implementations was initiated by Barthe et al. at EUROCRYPT 2015. In this paper, we build on this work to perform verification from within a compiler, so as to enable timely feedback to the developer. Precisely, our methodology provides the actual security order of the code at the intermediate representation (IR) level, thereby identifying possible flaws (owing either to source code errors or to compiler optimizations). Second, our methodology allows for an exploitability analysis of the analysed IR code. In this respect, we formally handle all the symbolic expressions in the static single assignment (SSA) representation to build the optimal distinguisher function. This enables evaluating the most powerful attack, which is a function not only of the masking order $d$ but also of the number of leaking samples and of the expressions (e.g., linear vs. non-linear leakages). This scheme allows one to evaluate the correctness of a masked cryptographic code, and also its actual security in terms of the number of traces in a given deployment context (characterized by a leakage model of the target CPU and the signal-to-noise ratio of the platform).
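
    The core check that such a verifier performs can be prototyped in a few lines: for every probe-able expression in a first-order masked snippet, enumerate all masks and test whether the probe's distribution depends on the secret. This toy of our own (single-bit values) flags a recombined expression of the kind a compiler optimization could introduce.

        # Exhaustive first-order probing check on single-bit expressions.
        from itertools import product

        def distribution(expr, secret, n_masks=2):
            counts = {}
            for masks in product((0, 1), repeat=n_masks):
                v = expr(secret, masks)
                counts[v] = counts.get(v, 0) + 1
            return counts

        def first_order_secure(expr):
            return distribution(expr, 0) == distribution(expr, 1)

        # Sharing: x = x0 ^ r0, refreshed with r1.
        probes = {
            "x0 = x ^ r0":       lambda x, m: x ^ m[0],
            "r0":                lambda x, m: m[0],
            "refreshed x0 ^ r1": lambda x, m: (x ^ m[0]) ^ m[1],
            "flaw: x0 ^ r0":     lambda x, m: (x ^ m[0]) ^ m[0],  # recombines!
        }
        for name, e in probes.items():
            print(f"{name:20s} secure: {first_order_secure(e)}")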

  • Ark of the ECC: An open-source ECDSA power analysis attack on a FPGA based Curve P-256 implementation
    by Jean-Pierre Thibault on November 22, 2021 at 11:24 am

    Power analysis attacks on ECC have been presented since almost the very beginning of DPA itself, even before the standardization of AES. Given that power analysis attacks against AES are well known and have a large body of practical artifacts demonstrating attacks on both software and hardware implementations, it is surprising that such artifacts are generally lacking for ECC. In this work we begin to remedy this by providing a complete open-source ECDSA attack artifact, based on a high-quality hardware ECDSA core from the CrypTech project. We demonstrate an effective power analysis attack against an FPGA implementation of this core. As many recent secure boot solutions use ECDSA, efforts to build open-source artifacts for evaluating attacks on ECDSA are highly relevant to ongoing academic and industrial research programs. To demonstrate the value of this evaluation platform, we implement several countermeasures and show that evaluating leakage on hardware is critical to understanding the effectiveness of a countermeasure.
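
    The algebra that makes ECDSA so fragile under power analysis is worth recalling: one signature with a known (leaked) nonce reveals the key. The sketch below checks this with the P-256 group order but otherwise toy values; in particular, r is a random stand-in rather than a real curve x-coordinate, since the key-recovery algebra does not depend on it.

        # ECDSA key recovery from a single leaked nonce: d = r^{-1}(s*k - h) mod n.
        import random

        n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 order
        d = random.randrange(1, n)           # victim's private key
        k = random.randrange(1, n)           # nonce (recovered via SCA in an attack)
        h = random.randrange(n)              # message hash
        r = random.randrange(1, n)           # stand-in for the x-coord of k*G mod n
        s = pow(k, -1, n) * (h + r * d) % n  # the ECDSA signing equation

        d_rec = pow(r, -1, n) * (s * k - h) % n
        assert d_rec == d
        print("private key recovered from a single leaked nonce")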

  • Safe-Error Attacks on SIKE and CSIDH
    by Fabio Campos on November 22, 2021 at 8:04 am

    The isogeny-based post-quantum schemes SIKE (NIST PQC round 3 alternate candidate) and CSIDH (Asiacrypt 2018) have received little attention with respect to their fault-attack resilience so far. We aim to fill this gap and provide a better understanding of their vulnerability by analyzing their resistance to safe-error attacks. We present four safe-error attacks, two against SIKE and two against a constant-time implementation of CSIDH that uses dummy isogenies. The attacks use targeted bit flips during the respective isogeny-graph traversals, and all four lead to full key recovery. Using voltage and clock glitching, we physically carried out two of the attacks, one against each scheme, thus demonstrating that full key recovery is also possible in practice.
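
    The attack class is easy to demonstrate on a generic dummy-operation ladder (our simplification; the paper targets dummy isogenies rather than dummy additions): fault one operation per run, and an unchanged output means the faulted operation was a dummy, revealing the key bit.

        # Toy safe-error attack on a double-and-add-always ladder.
        def scalar_mul(key_bits, base, fault_at=None):
            acc, dummy = 0, 0
            for i, bit in enumerate(key_bits):
                acc = 2 * acc                  # "double"
                add = acc + base
                if fault_at == i:
                    add += 1                   # injected fault into the addition
                if bit:
                    acc = add                  # real addition
                else:
                    dummy = add                # dummy addition absorbs the result
            return acc

        key = [1, 0, 1, 1, 0, 1, 0, 0]
        good = scalar_mul(key, base=7)
        # Fault each position: an unchanged output means the addition was a
        # dummy, i.e., the key bit is 0 (a "safe error").
        recovered = [0 if scalar_mul(key, 7, fault_at=i) == good else 1
                     for i in range(len(key))]
        assert recovered == key
        print("recovered key bits:", recovered)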

  • Fuzzy Message Detection
    by Gabrielle Beck on November 22, 2021 at 2:36 am

    Many privacy-preserving protocols employ a primitive that allows a sender to "flag" a message to a recipient's public key, such that only the recipient (who possesses the corresponding secret key) can detect that the message is intended for their use. Examples of such protocols include anonymous messaging, privacy-preserving payments, and anonymous tracing. A limitation of the existing techniques is that recipients cannot easily outsource the detection of messages to a remote server, without revealing to the server the exact set of matching messages. In this work we propose a new class of cryptographic primitives called fuzzy message detection schemes. These schemes allow a recipient to derive a specialized message detection key that can identify correct messages, while also incorrectly identifying non-matching messages with a specific and chosen false positive rate $p$. This allows recipients to outsource detection work to an untrustworthy server, without revealing precisely which messages belong to the receiver. We show how to construct these schemes under a variety of assumptions; describe several applications of the new technique; and show that our schemes are efficient enough to use in real applications.
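
    The false-positive mechanism at the heart of FMD can be isolated in a toy of our own making (the public-key layer is collapsed to a shared secret here, which real FMD of course does not do): a flag carries gamma pseudorandom bits, and the detection key checks only k of them, so unrelated flags match with probability $2^{-k}$.

        # Toy of the FMD false-positive mechanism only (no public-key crypto).
        import hashlib, os

        GAMMA = 16

        def flag(secret):   # gamma bits tied to the recipient secret
            return [hashlib.sha256(secret + bytes([i])).digest()[0] & 1
                    for i in range(GAMMA)]

        def detection_key(secret, k):   # recipient reveals only k positions
            fl = flag(secret)
            return [(i, fl[i]) for i in range(k)]

        def test(dk, fl):
            return all(fl[i] == b for i, b in dk)

        secret = os.urandom(16)
        dk = detection_key(secret, k=3)     # false-positive rate p = 2^-3
        assert test(dk, flag(secret))       # true messages always match
        fps = sum(test(dk, flag(os.urandom(16))) for _ in range(10000))
        print("false-positive rate ~", fps / 10000, "(expected 0.125)")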

  • Efficient Zero-Knowledge Argument in Discrete Logarithm Setting: Sublogarithmic Proof or Sublinear Verifier
    by Hyeonbum Lee on November 21, 2021 at 3:27 pm

    We propose two zero-knowledge arguments for arithmetic circuits with fan-in 2 gates in the uniform random string model. Our first protocol features $O(\sqrt{\log_2 N})$ communication and round complexities and $O(N)$ computational complexity for the verifier, where $N$ is the size of the circuit. Our second protocol features $O(\log_2 N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. We prove the soundness of our arguments under the discrete logarithm assumption or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption. The main ingredient of our arguments is two different generalizations of Bünz et al.'s Bulletproofs inner-product argument (IEEE S&P 2018) that convinces a verifier of knowledge of two vectors satisfying an inner-product relation. For a protocol with sublogarithmic communication, we devise a novel method to aggregate multiple arguments for bilinear operations such as multi-exponentiations, which is essential for reducing communication overheads. For a protocol with a sublinear verifier, we develop a generalization of the discrete logarithm relation assumption, which is essential for reducing verification overhead while keeping the soundness proof solely relying on the discrete logarithm assumption. These techniques are of independent interest.
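
    The recursion underlying such protocols is the Bulletproofs folding step. A field-only sketch (commitments and the paper's aggregation omitted) verifies the identity that a single challenge preserves while halving the vectors:

        # Bulletproofs-style inner-product folding, field arithmetic only.
        p = 2**61 - 1   # toy prime field
        inner = lambda a, b: sum(x * y for x, y in zip(a, b)) % p

        a = [3, 1, 4, 1]; b = [5, 9, 2, 6]
        h = len(a) // 2
        aL, aR, bL, bR = a[:h], a[h:], b[:h], b[h:]

        # Prover sends cross terms L, R before seeing the challenge x.
        L, R = inner(aL, bR), inner(aR, bL)
        x = 0xC0FFEE                 # verifier challenge (toy)
        x_inv = pow(x, p - 2, p)

        a_folded = [(l + x * r) % p for l, r in zip(aL, aR)]
        b_folded = [(l + x_inv * r) % p for l, r in zip(bL, bR)]

        # <a', b'> = <a, b> + x^{-1} * L + x * R : the claim shrinks by half.
        assert inner(a_folded, b_folded) == (inner(a, b) + x_inv * L + x * R) % p
        print("folding identity holds")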

  • Magnifying Side-Channel Leakage of Lattice-Based Cryptosystems with Chosen Ciphertexts: The Case Study of Kyber
    by Zhuang Xu on November 21, 2021 at 1:00 pm

    Lattice-based cryptography, as an active branch of post-quantum cryptography (PQC), has drawn great attention from side-channel analysis researchers in recent years. Despite the various side-channel targets examined in previous studies, how to reveal secret-dependent information efficiently has been less studied. In this paper, we propose adaptive EM side-channel attacks with carefully constructed ciphertexts on Kyber, a finalist in the NIST PQC standardization project. We demonstrate that specially chosen ciphertexts allow an adversary to modulate the leakage of a target device and enable full key extraction with a small number of traces through simple power analysis. Compared to prior research, our techniques require fewer traces and avoid building complex templates. We practically evaluate our methods using both a reference implementation and the ARM-specific implementation in the pqm4 library. For the reference implementation, we target the leakage of the output of the inverse NTT computation and recover the full key with only four traces. For the pqm4 implementation, we develop a message-recovery attack that leads to extraction of the full secret key with between eight and 960 traces, depending on the compiler optimization level. We discuss the relevance of our findings to other lattice-based schemes and explore potential countermeasures.
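
    A caricature of the chosen-ciphertext idea on a single secret coefficient (a toy threshold model of our own, not Kyber's actual decryption nor the paper's EM attack): crafted pairs (u, v) turn the decrypted bit into a threshold test on the secret, so a few observations of that bit pin it down exactly.

        # Chosen "ciphertexts" turn decryption into threshold tests on s.
        q = 3329
        CANDIDATES = (-2, -1, 0, 1, 2)   # small secrets, as in Kyber's eta range

        def dec_bit(s, u, v):            # m = 1 iff (v - s*u) mod q lies mid-range
            t = (v - s * u) % q
            return 1 if q // 4 <= t < 3 * q // 4 else 0

        queries = [(832, 832), (416, 832), (832, 1664)]
        signature = {s: tuple(dec_bit(s, u, v) for u, v in queries)
                     for s in CANDIDATES}
        assert len(set(signature.values())) == len(CANDIDATES)  # all s separated

        secret = -1
        observed = tuple(dec_bit(secret, u, v) for u, v in queries)
        recovered = next(s for s, sig in signature.items() if sig == observed)
        print("recovered coefficient:", recovered)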

  • On the Timing Leakage of the Deterministic Re-encryption in HQC KEM
    by Clemens Hlauschek on November 21, 2021 at 10:09 am

    Well before large-scale quantum computers are available, traditional cryptosystems must be transitioned to post-quantum secure schemes. The NIST PQC competition aims to standardize suitable cryptographic schemes. Candidates are evaluated not only on their formal security strengths, but are also judged on the security of their optimized implementations, for example, with regard to resistance against side-channel attacks. HQC is a promising code-based key encapsulation scheme, selected as an alternate candidate in the third round of the competition, which puts it on track to be standardized separately from the finalists, in a fourth round. Despite HQC having already received heavy scrutiny with regard to side-channel attacks, in this paper we show a novel timing vulnerability in the optimized implementations of HQC, leading to full secret-key recovery. The attack is both practical, requiring only approx. 866,000 idealized decapsulation timing oracle queries in the 128-bit security setting, and structurally different from previously identified attacks on HQC: previously, exploitable side-channel leakages were identified in the BCH decoder of a previously submitted version, in the ciphertext check, and in the PRF of the Fujisaki-Okamoto (FO) transformation employed by several NIST PQC KEM candidates. In contrast, our attack uses the fact that the rejection sampling routine invoked during the deterministic re-encryption of the KEM decapsulation leaks secret-dependent timing information. These timing leaks can be efficiently exploited to recover the secret key when HQC is instantiated with the (now constant-time) BCH decoder, as well as with the RMRS decoder of the current submission. Besides a detailed analysis of the new attack, we discuss possible countermeasures and their limits.
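
    The leak class is easy to reproduce in miniature: a rejection-sampling loop whose iteration count depends on a secret-derived seed takes a measurably different number of draws per secret. The sketch below (our toy, not HQC's actual sampler) counts the draws an attacker would observe through timing.

        # Seed-dependent rejection sampling: iteration counts leak via timing.
        import hashlib

        def sample_fixed_weight(seed, n=64, w=8):
            """Rejection-sample w distinct positions in [0, n); also count draws."""
            pos, draws, ctr = set(), 0, 0
            while len(pos) < w:
                block = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
                ctr += 1
                for byte in block:
                    if len(pos) == w:
                        break
                    draws += 1
                    if byte < n and byte not in pos:   # reject out-of-range/duplicates
                        pos.add(byte)
            return sorted(pos), draws

        counts = {s: sample_fixed_weight(bytes([s]) * 16)[1] for s in range(5)}
        print("draw counts by seed:", counts)   # secret-dependent "timing"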

  • Output Prediction Attacks on Block Ciphers using Deep Learning
    by Hayato Kimura on November 21, 2021 at 5:32 am

    Cryptanalysis of symmetric-key ciphers, e.g., linear/differential cryptanalysis, requires an adversary to know the internal structures of the target ciphers. On the other hand, deep-learning-based cryptanalysis has attracted significant attention because the adversary is not assumed to have any knowledge of the target ciphers beyond their algorithm interfaces. Such cryptanalysis in a black-box setting is extremely strong; thus, we must design symmetric-key ciphers that are secure against deep-learning-based cryptanalysis. However, almost all previous attacks do not clarify what features or internal structures affect the success probabilities. Although Benamira et al. (Eurocrypt 2021) and Chen et al. (ePrint 2021) analyzed Gohr's results (CRYPTO 2019), they did not find any deep-learning-specific characteristic that affects the success probability of deep-learning-based attacks but not that of linear/differential cryptanalysis. Therefore, it is difficult to employ the results of such cryptanalysis to design deep-learning-resistant symmetric-key ciphers. In this paper, we focus on two toy SPN block ciphers (small PRESENT and small AES) and one toy Feistel block cipher (small TWINE) and propose deep-learning-based output prediction attacks. Due to their small internal structures, we can construct deep learning models using the maximum number of plaintext/ciphertext pairs, and we can precisely calculate the rounds in which full diffusion occurs. Specifically for the SPN block ciphers, we demonstrate the following: (1) our attacks work against a similar number of rounds as linear/differential cryptanalysis; (2) our attacks realize output predictions (precisely, ciphertext prediction and plaintext recovery) that are much stronger than distinguishing attacks; and (3) swapping the order of components or replacing components affects the success probabilities of the proposed attacks. It is particularly worth noting that this is a deep-learning-specific characteristic, because swapping/replacement does not affect the success probabilities of linear/differential cryptanalysis. We also confirm whether the proposed attacks work on the Feistel block cipher. We expect that our results will be an important stepping stone in the design of deep-learning-resistant symmetric-key ciphers.

  • Practical Garbled RAM: GRAM with $O(\log^2 n)$ Overhead
    by David Heath on November 20, 2021 at 11:04 pm

    Garbled RAM (GRAM) is a powerful technique introduced by Lu and Ostrovsky that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While multiple GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling. We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as $512$ $128$-bit elements.

  • Revisiting Mutual Information Analysis: Multidimensionality, Neural Estimation and Optimality Proofs
    by Valence Cristiani on November 20, 2021 at 11:00 pm

    Recent works showed how Mutual Information Neural Estimation (MINE) could be applied to side-channel analysis in order to evaluate the amount of leakage of an electronic device. One of the main advantages of MINE over classical estimation techniques is that it enables computing mutual information between high-dimensional traces and a secret, which is relevant for leakage assessment. However, optimally exploiting this information in an attack context in order to retrieve a secret remains a non-trivial task, especially when a profiling phase of the target is not allowed. Within this context, the purpose of this paper is to address this problem based on a simple idea: there are multiple leakage sources in side-channel traces, and optimal attacks should necessarily exploit most/all of them. To this aim, we propose a new mathematical framework designed to bridge classical Mutual Information Analysis (MIA) and the multidimensional aspect of neural-based estimators. One of its goals is to provide rigorous proofs consolidating the mathematical basis behind MIA, thus alleviating inconsistencies found in the state of the art. This framework allows us to derive a new attack called Neural Estimated Mutual Information Analysis (NEMIA). To the best of our knowledge, it is the first unsupervised attack able to benefit from both the power of deep learning techniques and the valuable theoretical properties of MI. Simulations and experiments show that NEMIA outperforms classical side-channel attacks, especially in low-information contexts.

  • HOLMES: A Platform for Detecting Malicious Inputs in Secure Collaborative Computation
    by Weikeng Chen on November 20, 2021 at 10:57 pm

    Though maliciously secure multiparty computation (SMPC) ensures confidentiality and integrity of the computation from malicious parties, malicious parties can still provide malformed inputs. As a result, when using SMPC for collaborative computation, input can be manipulated to perform biasing and poisoning attacks. Parties may defend against many of these attacks by performing statistical tests over one another’s input, before the actual computation. We present HOLMES, a platform for expressing and performing statistical tests securely and efficiently. Using HOLMES, parties can perform well-known statistical tests or define new tests. For efficiency, instead of performing such tests naively in SMPC, HOLMES blends together zero-knowledge proofs (ZK) and SMPC protocols, based on the insight that most computation for statistical tests is local to the party who provides the data. High-dimensional tests are critical for detecting malicious inputs but are prohibitively expensive in secure computation. To reduce this cost, HOLMES provides a new secure dimensionality reduction procedure tailored for high-dimensional statistical tests. This new procedure leverages recent development of algebraic pseudorandom functions. Our evaluation shows that, for a variety of statistical tests, HOLMES is 18x to 40x more efficient than naively implementing the statistical tests in a generic SMPC framework.

  • Post-Quantum Simulatable Extraction with Minimal Assumptions: Black-Box and Constant-Round
    by Nai-Hui Chia on November 20, 2021 at 10:55 pm

    From the minimal assumption of post-quantum semi-honest oblivious transfers, we build the first $\epsilon$-simulatable two-party computation (2PC) against quantum polynomial-time (QPT) adversaries that is both constant-round and black-box (for both the construction and security reduction). A recent work by Chia, Chung, Liu, and Yamakawa (FOCS'21) shows that post-quantum 2PC with standard simulation-based security is impossible in constant rounds, unless either $NP \subseteq BQP$ or relying on non-black-box simulation. The $\epsilon$-simulatability we target is a relaxation of the standard simulation-based security that allows for an arbitrarily small noticeable simulation error $\epsilon$. Moreover, when quantum communication is allowed, we can further weaken the assumption to post-quantum secure one-way functions (PQ-OWFs), while maintaining the constant-round and black-box property. Our techniques also yield the following set of constant-round and black-box two-party protocols secure against QPT adversaries, only assuming black-box access to PQ-OWFs: - extractable commitments for which the extractor is also an $\epsilon$-simulator; - $\epsilon$-zero-knowledge commit-and-prove whose commit stage is extractable with $\epsilon$-simulation; - $\epsilon$-simulatable coin-flipping; - $\epsilon$-zero-knowledge arguments of knowledge for $NP$ for which the knowledge extractor is also an $\epsilon$-simulator; - $\epsilon$-zero-knowledge arguments for $QMA$. At the heart of the above results is a black-box extraction lemma showing how to efficiently extract secrets from QPT adversaries while disturbing their quantum state in a controllable manner, i.e., achieving $\epsilon$-simulatability of the after-extraction state of the adversary.

  • Blockchain-based Security Framework for Critical Industry 4.0 Cyber-physical System
    by Ziaur Rahman on November 20, 2021 at 10:55 pm

    There has been intense interest in security alternatives because of the recent rise of cyber attacks, mainly targeting critical systems such as industry, medical, or energy ecosystems. Although the latest industrial infrastructures largely depend on AI-driven maintenance, predictions based on corrupted data can result in loss of life and capital. Admittedly, an inadequate data-protection mechanism can readily challenge the security and reliability of the network. The shortcomings of conventional cloud-based or trusted-certificate-driven techniques have motivated us to exhibit a unique Blockchain-based framework for a secure and efficient Industry 4.0 system. The demonstrated framework obviates the long-established certificate authority by enhancing a consortium Blockchain, which reduces the data-processing delay and increases cost-effective throughput. Nonetheless, the distributed Industry 4.0 security model calls for cooperative trust rather than dependence on a single party, which in essence entails the costs and threat of a single point of failure. Therefore, the multi-signature technique of the proposed framework accomplishes multi-party authentication, which confirms its applicability to real-time and collaborative cyber-physical systems.

  • Non Atomic Payment Splitting in Channel Networks
    by Stefan Dziembowski on November 20, 2021 at 11:41 am

    Off-chain channel networks are one of the most promising technologies for dealing with blockchain scalability and delayed finality issues. Parties that are connected within such networks can send coins to each other without interacting with the blockchain. Moreover, these payments can be ``routed'' over the network. Thanks to this, even parties that do not have a channel in common can perform payments with the help of intermediaries. In this paper, we introduce a new notion that we call Non-Atomic Payment Splitting (NAPS) protocols, which allow the intermediaries in the network to split payments recursively into several subpayments in such a way that a payment can succeed ``partially'' (i.e., not all of the requested amount may be transferred). This is in contrast with existing splitting techniques, which are ``atomic'' in the sense that they do not allow such partial payments (we compare the ``atomic'' and ``non-atomic'' approaches in the paper). We define NAPS formally and then present a protocol, called ``EthNA'', that satisfies this definition. EthNA is based on very simple and efficient cryptographic tools; in particular, it does not use any expensive cryptographic primitives. We implement a simple variant of EthNA in Solidity and provide some benchmarks. We also report on some experiments with routing using EthNA.
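
    A toy of the non-atomic splitting idea (our simplification, not the EthNA protocol): an intermediary splits a payment across its outgoing channels, and whatever fraction succeeds downstream is what settles; the remainder simply fails, with no atomic rollback.

        # Recursive, non-atomic payment splitting over a toy channel graph.
        def route(network, node, receiver, amount):
            """Try to push `amount` from node to receiver; return amount delivered."""
            if node == receiver:
                return amount
            delivered = 0
            for nxt, capacity in network.get(node, {}).items():
                if delivered == amount:
                    break
                sub = min(capacity, amount - delivered)   # split into a subpayment
                delivered += route(network, nxt, receiver, sub)
            return delivered   # may be partial: non-atomicity

        # Channel graph: capacities on directed edges (toy, acyclic).
        network = {
            "A": {"B": 6, "C": 5},
            "B": {"D": 4},
            "C": {"D": 3},
        }
        print("delivered:", route(network, "A", "D", 10), "of 10")  # partial: 7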

  • Libra: Succinct Zero-Knowledge Proofs with Optimal Prover Computation
    by Tiancheng Xie on November 20, 2021 at 7:54 am

    We present Libra, the first zero-knowledge proof system that has both optimal prover time and succinct proof size/verification time. In particular, if C is the size of the circuit being proved, (i) the prover time is O(C) irrespective of the circuit type; (ii) the proof size and verification time are both O(d log C) for d-depth log-space uniform circuits (such as RAM programs). In addition, Libra features a one-time trusted setup that depends only on the size of the input to the circuit and not on the circuit logic. Underlying Libra is a new linear-time algorithm for the prover of the interactive proof protocol by Goldwasser, Kalai and Rothblum (also known as the GKR protocol), as well as an efficient approach to turning the GKR protocol zero-knowledge using small masking polynomials. Not only does Libra have excellent asymptotics, but it is also efficient in practice. For example, our implementation shows that it takes 200 seconds to generate a proof for constructing a SHA2-based Merkle tree root on 256 leaves, outperforming all existing zero-knowledge proof systems. The proof size and verification time of Libra are also competitive.
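
    Libra's linear-time prover concerns the GKR protocol, whose engine is the sumcheck protocol. A self-contained toy sumcheck (honest prover, multilinear polynomial, mod a prime; all of our own choosing) shows the round structure:

        # Sumcheck: prove sum of f over {0,1}^n, one variable per round.
        import random
        from itertools import product

        p = 2**61 - 1
        f = lambda x: (2 * x[0] * x[1] + x[1] * x[2] + x[0]) % p  # multilinear, n = 3
        n = 3

        claim = sum(f(x) for x in product((0, 1), repeat=n)) % p
        prefix = []                              # challenges fixed so far
        for i in range(n):
            # Prover: round polynomial g(X) = sum over remaining boolean suffixes.
            def g(X):
                rest = product((0, 1), repeat=n - i - 1)
                return sum(f(prefix + [X] + list(s)) for s in rest) % p
            g0, g1 = g(0), g(1)
            assert (g0 + g1) % p == claim        # verifier's round check
            r = random.randrange(p)              # verifier's challenge
            claim = (g0 + r * (g1 - g0)) % p     # g(r) for a degree-1 g
            prefix.append(r)

        assert f(prefix) == claim                # final oracle check
        print("sumcheck verified")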

  • Scalable Ciphertext Compression Techniques for Post-Quantum KEMs and their Applications
    by Shuichi Katsumata on November 20, 2021 at 4:39 am

    A $\mathit{multi\text{-}recipient}$ key encapsulation mechanism, or $\mathsf{mKEM}$, provides a scalable solution to securely communicating to a large group, and offers savings in both bandwidth and computational cost compared to the trivial solution of communicating with each member individually. All prior works on $\mathsf{mKEM}$ are limited to classical assumptions and, although some generic constructions are known, they all require specific properties that are not shared by most post-quantum schemes. In this work, we first provide a simple and efficient generic construction of $\mathsf{mKEM}$ that can be instantiated from versatile assumptions, including post-quantum ones. We then study these $\mathsf{mKEM}$ instantiations at a practical level using 8 post-quantum $\mathsf{mKEM}$s (which are lattice- and isogeny-based NIST candidates) and CSIDH, and show that, compared to the trivial solution, our $\mathsf{mKEM}$ offers savings of at least one order of magnitude in bandwidth and makes encryption times shorter by a factor ranging from 1.92 to 35. Additionally, we show that by combining $\mathsf{mKEM}$ with the TreeKEM protocol used by MLS $-$ an IETF draft for secure group messaging $-$ we obtain significant bandwidth savings.
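
    The bandwidth saving of an mKEM is easy to see in a classical Diffie-Hellman toy (ours; the paper's constructions are post-quantum): a single ephemeral value is amortized over all recipients, so the ciphertext is one group element plus one short mask per recipient.

        # DH-flavored mKEM toy: one ephemeral g^r shared by all recipients.
        import hashlib, random

        p = 2**127 - 1   # toy modulus; a real scheme uses a proper prime-order group
        g = 5

        def H(x):
            return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:16], "big")

        sks = [random.randrange(2, p) for _ in range(4)]   # recipients' secrets x_i
        pks = [pow(g, x, p) for x in sks]                  # pk_i = g^x_i

        K = random.randrange(2**128)     # session key for the whole group
        r = random.randrange(2, p)
        eph = pow(g, r, p)               # ONE ephemeral, amortized over recipients
        cts = [H(pow(pk, r, p)) ^ K for pk in pks]         # short per-recipient masks

        for x, ct in zip(sks, cts):
            assert H(pow(eph, x, p)) ^ ct == K             # pk_i^r == eph^x_i
        print("all recipients recover K; total: 1 group element +", len(cts), "masks")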

  • VASA: Vector AES Instructions for Security Applications
    by Jean-Pierre Münch on November 20, 2021 at 2:23 am

    Due to standardization, AES is today's most widely used block cipher. Its security is well studied, and hardware acceleration is available on a variety of platforms. Following the success of the Intel AES New Instructions (AES-NI), support for Vectorized AES (VAES) was added in 2018 and has already been shown to be useful for accelerating many implementations of AES-based algorithms where the order of AES evaluations is fixed a priori. In our work, we focus on using VAES to accelerate the computation in secure multi-party computation protocols and applications. For some MPC building blocks, such as OT extension, the AES operations are independent and known a priori and hence can be easily parallelized, similar to the original paper on VAES by Drucker et al. (ITNG'19). We evaluate the performance impact of using VAES in the AES-CTR implementations used in Microsoft CrypTFlow2 and in the EMP-OT library, which we accelerate by up to 24%. The more complex case, which we study for the first time in our paper, is dependent AES calls that are not fixed in advance and hence cannot be parallelized manually. This is the case for garbling schemes. To get optimal efficiency from the hardware, enough independent calls need to be combined for each batch of AES executions. We identify such batches using a deferred execution technique paired with early execution to reduce non-locality issues, as well as more static techniques using circuit depth and explicit gate independence. We present a performance-focused and a modularity-focused technique to compute the AES operations efficiently while also immediately using the results and preparing the inputs. Using these techniques, we achieve a performance improvement via VAES of up to 244% for the ABY framework and of up to 28% for the EMP-AGMPC framework. By implementing several garbling schemes from the literature using VAES acceleration, we obtain 171% better performance for ABY.

  • Binary Search in Secure Computation
    by Marina Blanton on November 20, 2021 at 2:03 am

    Binary search is one of the most popular algorithms in computer science. Realizing it in the context of secure multiparty computation, which demands data-oblivious execution, however, is extremely non-trivial. It has previously been implemented only using oblivious RAM (ORAM) for secure computation, and in this work we initiate the study of this topic using conventional secure computation techniques based on secret sharing. We develop a suite of protocols with different properties and of different structure for searching a private dataset of $m$ elements by a private numeric key. Our protocols result in $O(m)$ and $O(\sqrt{m})$ communication using only standard and readily available operations based on secret sharing. We further extend our protocols to support write operations, namely, binary search that obliviously updates the selected element, and realize two variants: updating non-key fields and updating the key field. Our implementation results indicate that even after applying known and our own optimizations to the fastest ORAM constructions, our solutions are faster than optimized ORAM schemes for datasets of up to $2^{30}$ elements and by up to two orders of magnitude. We hope that this work will prompt further interest in seeking efficient realizations of this important problem.
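
    The data-obliviousness constraint that makes this hard is captured by the baseline O(m) approach: touch every element identically and select arithmetically. The cleartext sketch below shows the access pattern; the paper's protocols realize this, and the O(sqrt(m)) binary-search variants, over secret shares rather than in the clear.

        # Oblivious linear-scan lookup: access pattern independent of the key.
        def oblivious_search(keys, values, secret_key):
            """Return the value for secret_key while touching all m elements."""
            result = 0
            for k, v in zip(keys, values):
                eq = 1 if k == secret_key else 0   # in MPC: a secure comparison
                result += eq * v                   # in MPC: multiply + add on shares
            return result

        keys = [2, 3, 5, 7, 11, 13]
        values = [20, 30, 50, 70, 110, 130]
        print(oblivious_search(keys, values, 7))   # 70, after scanning all six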

  • Massive Superpoly Recovery with Nested Monomial Predictions
    by Kai Hu on November 19, 2021 at 12:47 pm

    Determining the exact algebraic structure or some partial information of the superpoly for a given cube is a necessary step in the cube attack, a generic cryptanalytic technique for symmetric-key primitives with some secret and public tweakable inputs. Currently, the division-property-based approach is the most powerful tool for exact superpoly recovery. However, as the algebraic normal form (ANF) of the targeted output bit gets increasingly complicated as the number of rounds grows, existing methods for superpoly recovery quickly hit their bottlenecks. For example, previous methods were stuck at rounds 842, 190, and 892 for Trivium, Grain-128AEAD, and Kreyvium, respectively. In this paper, we propose a new framework for recovering the exact ANFs of massive superpolies based on the monomial prediction technique (ASIACRYPT 2020), an alternative language for the division property. In this framework, the targeted output bit is first expressed as a polynomial of the bits of some intermediate states. For each term appearing in the polynomial, the monomial prediction technique is applied to determine its superpoly if the corresponding MILP model can be solved within a preset time limit. Terms unresolved within the time limit are further expanded as polynomials of the bits of some deeper intermediate states with symbolic computation, whose terms are again processed with monomial predictions. The above procedure is iterated until all terms are resolved. Finally, all the sub-superpolies are collected and assembled into the superpoly of the targeted bit. We apply the new framework to Trivium, Grain-128AEAD, and Kreyvium. As a result, the exact ANFs of the superpolies for 843-, 844- and 845-round Trivium, 191-round Grain-128AEAD and 894-round Kreyvium are recovered. Moreover, with the help of the Möbius transform, we present a novel key-recovery technique based on superpolies involving all key bits by exploiting the sparse structures, which leads to the best key-recovery attacks on the targets considered.
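
    For readers unfamiliar with superpolies, the toy below shows the mechanism on a hand-made Boolean function (ours, unrelated to Trivium): XOR-summing the output over a cube of public bits cancels everything except the cube's superpoly in the key bits.

        # Cube summation isolates the superpoly of the chosen cube.
        from itertools import product

        def f(x, k):   # x: 3 public bits, k: 3 key bits, all GF(2)
            return (x[0] & x[1] & k[0]) ^ (x[0] & x[1] & x[2]) ^ (x[0] & k[1]) ^ k[2]

        def cube_sum(cube_idx, k, n_pub=3):
            acc = 0
            for bits in product((0, 1), repeat=len(cube_idx)):
                x = [0] * n_pub                 # non-cube public bits fixed to 0
                for i, b in zip(cube_idx, bits):
                    x[i] = b
                acc ^= f(x, k)
            return acc

        # Summing over cube {x0, x1} recovers the superpoly k0: every term
        # without the full factor x0*x1 is XORed an even number of times.
        for k in product((0, 1), repeat=3):
            assert cube_sum([0, 1], list(k)) == k[0]
        print("superpoly of cube {x0,x1} is k0: one key bit per cube sum")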

  • Cryptanalysis of an oblivious PRF from supersingular isogenies
    by Andrea Basso on November 19, 2021 at 12:16 pm

    We cryptanalyse the SIDH-based oblivious pseudorandom function from supersingular isogenies proposed at Asiacrypt'20 by Boneh, Kogan and Woo. To this end, we give an attack on an assumption, the auxiliary one-more assumption, that was introduced by Boneh et al. and we show that this leads to an attack on the oblivious PRF itself. The attack breaks the pseudorandomness as it allows adversaries to evaluate the OPRF without further interactions with the server after some initial OPRF evaluations and some offline computations. More specifically, we first propose a polynomial-time attack. Then, we argue it is easy to change the OPRF protocol to include some countermeasures, and present a second subexponential attack that succeeds in the presence of said countermeasures. Both attacks break the security parameters suggested by Boneh et al. Furthermore, we provide a proof of concept implementation as well as some timings of our attack. Finally, we examine the generation of one of the OPRF parameters and argue that a trusted third party is needed to guarantee provable security.

  • Trail Search with CRHS Equations
    by John Petter Indrøy on November 19, 2021 at 6:23 am

    Evaluating a block cipher’s strength against differential or linear cryptanalysis can be a difficult task. Several approaches for finding the best differential or linear trails in a cipher have been proposed, such as using mixed integer linear programming or SAT solvers. Recently a different approach was suggested, modelling the problem as a staged, acyclic graph and exploiting the large number of paths the graph contains. This paper follows up on the graph-based approach and models the problem via compressed right-hand side equations. The graph we build contains paths which represent differential or linear trails in a cipher with few active S-boxes. Our method incorporates control over the memory usage, and the time complexity scales linearly with the number of rounds of the cipher being analysed. The proposed method is made available as a tool, and using it we are able to find differential trails for the Klein and Prince ciphers with higher probabilities than previously published.

  • Analysis of CryptoNote Transaction Graphs using the Dulmage-Mendelsohn Decomposition
    by Saravanan Vijayakumaran on November 19, 2021 at 2:36 am

    Transactions in CryptoNote blockchains use linkable ring signatures to prevent double spending. Each transaction ring is associated with a key image, which is a collision-resistant one-way function of the spent output's secret key. Several techniques have been proposed to trace CryptoNote transactions, i.e. identify the actual output associated with a key image, by using the transaction history. In this paper, we show that the Dulmage-Mendelsohn (DM) decomposition of bipartite graphs can be used to trace CryptoNote transactions. The DM decomposition technique is optimal in the sense that it eliminates every output-key image association which is incompatible with the transaction history. We used the Monero transaction history for performance comparison. For pre-RingCT outputs in Monero, the DM decomposition technique performs better than existing techniques. For RingCT outputs in Monero, the DM decomposition technique has the same performance as existing techniques, with only five out of approximately 29 million outputs being identified as spent. To study the effect of hard forks on Monero RingCT output traceability, we used information from four Monero hard forks. The DM decomposition technique is able to trace only 62,809 out of approximately 26 million RingCT transaction rings. Our results are further evidence supporting the claim that Monero RingCT transactions are mostly immune to traceability attacks.

  • Dory: Efficient, Transparent arguments for Generalised Inner Products and Polynomial Commitments
    by Jonathan Lee on November 18, 2021 at 4:13 pm

    This paper presents Dory, a transparent-setup, public-coin interactive argument for proving correctness of an inner-pairing product between committed vectors of elements of the two source groups. For an inner product of length $n$, proofs are $6 \log n$ target group elements, $1$ element of each source group and $3$ scalars. Verifier work is dominated by an $O(\log n)$ multi-exponentiation in the target group. Security is reduced to the symmetric external Diffie-Hellman assumption in the standard model. We also show an argument reducing a batch of two such instances to one, requiring $O(n^{1/2})$ work on the Prover and $O(1)$ communication. We apply Dory to build a multivariate polynomial commitment scheme via the Fiat-Shamir transform. For $n$ the product of one plus the degree in each variable, Prover work to compute a commitment is dominated by a multi-exponentiation in one source group of size $n$. Prover work to show that a commitment to an evaluation is correct is $O(n^{\log 8 / \log 25})$ in general and $O(n^{1/2})$ for univariate or multilinear polynomials, whilst communication complexity and Verifier work are both $O(\log n)$. Using batching, the Verifier can validate $\ell$ polynomial evaluations for polynomials of size at most $n$ with $O(\ell + \log n)$ group operations and $O(\ell \log n)$ field operations.

  • Non-Malleable Codes for Bounded Polynomial-Depth Tampering
    by Dana Dachman-Soled on November 18, 2021 at 12:52 pm

    Non-malleable codes allow one to encode data in such a way that, after tampering, the modified codeword is guaranteed to decode to either the original message, or a completely unrelated one. Since the introduction of the notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), a large body of work has focused on realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, non-malleable codes in the plain model (i.e., no trusted setup) secure against $\textit{bounded}$ polynomial size tampering are not known and obtaining such a code has been a major open problem. We present the first construction of a non-malleable code secure against $\textit{all}$ polynomial size tampering functions that have $\textit{bounded polynomial depth}$. This is an even larger class than all bounded polynomial $\textit{size}$ functions and, in particular, we capture all functions in non-uniform $\mathbf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions and we obtain sub-exponentially small error.

  • Bracing A Transaction DAG with A Backbone Chain
    by Shuyang Tang on November 18, 2021 at 11:50 am

    Directed Acyclic Graphs (DAGs) are becoming an intriguing direction for distributed ledger structures due to their great potential in improving the scalability of distributed ledger systems. Among existing DAG-based ledgers, one promising category is the transaction DAG, namely, treating each transaction as a graph vertex. In this paper, we propose Haootia, a novel two-layer framework of consensus, with a ledger in the form of a transaction DAG built on top of a delicately designed PoW-based backbone chain. By carefully devising the principle of transaction linearization, we achieve a secure and scalable DAG-based consensus. By implementing Haootia, we conclude that, with a rotating committee of size 46 and a confirmation latency of around 20 seconds, Haootia achieves a throughput of around 7500 TPS, which is overwhelming compared with all formally analyzed DAG-based consensus schemes to date and, to our knowledge, all existing non-DAG-based ones.

  • Private Liquidity Matching using MPC
    by Shahla Atapoor on November 18, 2021 at 8:27 am

    Many central banks, as well as blockchain systems, are looking into distributed versions of interbank payment systems, in particular the netting procedure. When executed in a distributed manner this presents a number of privacy problems. This paper studies a privacy-preserving netting protocol to solve the gridlock resolution problem in such Real Time Gross Settlement systems. Our solution utilizes Multi-party Computation and is implemented in the SCALE-MAMBA system, using the Shamir secret-sharing scheme over three parties in an actively secure manner. Our experiments show that, even for large throughput systems, such a privacy-preserving operation is often feasible.
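
    The cleartext computation that such an MPC protocol evaluates privately is the classic gridlock resolution loop: tentatively settle all queued payments, then repeatedly drop a payment of any overdrawn bank until every net position is covered. In the toy below (ours), three circular payments settle even though no bank could pay first.

        # Gridlock resolution: maximal set of queued payments that can settle.
        def gridlock_resolve(balances, queue):
            """queue: list of (sender, receiver, amount), FIFO per sender."""
            selected = list(queue)
            while True:
                net = dict.fromkeys(balances, 0)
                for s, r, a in selected:
                    net[s] -= a
                    net[r] += a
                overdrawn = [b for b in balances if balances[b] + net[b] < 0]
                if not overdrawn:
                    return selected
                # Drop the most recent queued payment of an overdrawn bank, retry.
                for i in range(len(selected) - 1, -1, -1):
                    if selected[i][0] == overdrawn[0]:
                        selected.pop(i)
                        break

        balances = {"A": 2, "B": 0, "C": 1}
        queue = [("A", "B", 5), ("B", "C", 4), ("C", "A", 4)]
        print(gridlock_resolve(balances, queue))   # all three settle via netting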

  • Gambling for Success: The Lottery Ticket Hypothesis in Deep Learning-based SCA
    by Guilherme Perin on November 17, 2021 at 12:30 pm

    Deep learning-based side-channel analysis (SCA) represents a strong approach for profiling attacks. Still, this does not mean it is trivial to find neural networks that perform well for any setting. Based on the developed neural network architectures, we can distinguish between small neural networks that are easier to tune and less prone to overfitting but could have insufficient capacity to model the data, and large neural networks that have sufficient capacity but can overfit and are more difficult to tune. This brings an interesting trade-off between simplicity and performance. This work proposes to use a pruning strategy and the recently proposed Lottery Ticket Hypothesis (LTH) as efficient methods to tune deep neural networks for profiling SCA. Pruning provides a regularization effect on deep neural networks and reduces the overfitting posed by overparameterized models. We demonstrate that we can find pruned neural networks that perform on the level of larger networks, where we manage to reduce the number of weights by more than 90% on average. This way, pruning and LTH approaches become alternatives to costly and difficult hyperparameter tuning in profiling SCA. Our analysis is conducted over different masked AES datasets and for different neural network topologies. Our results indicate that pruning, and more specifically LTH, can result in competitive deep learning models.

  • Actively Secure Setup for SPDZ
    by Dragos Rotaru on November 17, 2021 at 8:08 am

    We present an actively secure, practical protocol to generate the distributed secret keys needed in the SPDZ offline protocol. The resulting distribution of the public and secret keys is such that the associated SHE `noise' analysis is the same as if the distributed keys were generated by a trusted setup. We implemented the protocol for distributed BGV key generation within the SCALE-MAMBA framework. Our protocol makes use of a new technique for creating doubly (or even more highly) authenticated bits in different MPC engines, which has applications in other areas of MPC-based secure computation. We were able to generate keys for two parties and a plaintext size of 64 bits in around five minutes, and in approximately eighteen minutes for a 128-bit prime.

  • A Practical Adaptive Key Recovery Attack on the LGM (GSW-like) Cryptosystem
    by Prastudy Fauzi on November 16, 2021 at 6:41 pm

    We present an adaptive key recovery attack on the leveled homomorphic encryption scheme suggested by Li, Galbraith, and Ma (ProvSec 2016), which is itself a modification of the GSW cryptosystem designed to resist key recovery attacks by using a different linear combination of secret keys for each decryption. We were able to efficiently recover the secret key for a realistic choice of parameters using a statistical attack. In particular, this means that the Li–Galbraith–Ma strategy does not prevent adaptive key recovery attacks.

  • FAST: Fair Auctions via Secret Transactions
    by Bernardo David on November 16, 2021 at 5:08 pm

    Sealed-bid auctions are a common way of allocating an asset among a set of parties but require trusting an auctioneer who analyses the bids and determines the winner. Many privacy-preserving computation protocols for auctions have been proposed to eliminate the need for a trusted third party. However, they lack fairness, meaning that the adversary learns the outcome of the auction before honest parties do and may choose to make the protocol fail without suffering any consequences. In this work, we propose efficient protocols for both first- and second-price sealed-bid auctions with fairness against rational adversaries, leveraging secret cryptocurrency transactions and public smart contracts. In our approach, the bidders jointly compute the winner of the auction while preserving the privacy of losing bids and ensuring that cheaters are financially punished by losing a secret collateral deposit. We guarantee that it is never profitable for rational adversaries to cheat by making the deposit equal to the bid plus the cost of running the protocol, i.e., once a party commits to a bid, it is guaranteed to have the funds and cannot walk away from the protocol without forfeiting the bid. Moreover, our protocols ensure that the winner is determined and the auction payments are completed even if the adversary misbehaves, so that it cannot force the protocol to fail and then rejoin the auction with an adjusted bid. In comparison to the state of the art, our constructions are more efficient and, moreover, achieve stronger security properties, namely fairness. Interestingly, we show how the second price can be computed with only a minimal increase in complexity over the simpler first-price case. Moreover, if no cheating occurs, only the collateral deposit and refund transactions must be sent to the smart contract, significantly saving on-chain storage.
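
    As a toy rendering of the collateral arithmetic (our own simplification, not the paper's smart contract), the deposit equals the bid plus the protocol cost, so forfeiting it always outweighs whatever a cheater could gain:

    ```python
    def required_deposit(bid, protocol_cost):
        """Collateral that makes cheating unprofitable: it covers the
        committed bid plus the cost of running the protocol."""
        return bid + protocol_cost

    def settle(cheated, deposit, bid, is_winner):
        """Refund rule sketch: honest losers get a full refund, the honest
        winner pays its bid, and a cheater forfeits the whole deposit."""
        if cheated:
            return 0
        return deposit - (bid if is_winner else 0)

    d = required_deposit(bid=10_000, protocol_cost=300)   # amounts in cents
    print(settle(False, d, 10_000, is_winner=True))    # 300: winner paid bid
    print(settle(False, d, 10_000, is_winner=False))   # 10300: full refund
    print(settle(True,  d, 10_000, is_winner=False))   # 0: collateral lost
    ```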

  • Intelligent Composed Algorithms
    by Frank Byszio on November 16, 2021 at 2:23 pm

    Intelligent Composed Algorithms (ICAs) have been developed as a mechanism for introducing new cryptographic algorithms into applications and PKIs. Using ICAs, known cryptographic algorithms (Component-Algorithms) can be combined to obtain a stronger mix of cryptographic algorithms or primitives. ICAs also make it possible to use known Component-Algorithms as mutual alternatives. Furthermore, the combined and alternative use of Component-Algorithms as ICAs enables agile use of cryptographic algorithms without having to change standards such as X.509 or CMS. An Intelligent Composed Algorithm is a flexible group of cryptographic algorithms together with the corresponding rules for their combination. The rules for combining Component-Algorithms are themselves defined as algorithms (Controlling-Algorithms). In applications, ICAs are used like conventional algorithms, described by an algorithm identifier (an OID) and matching parameters. The chosen Component-Algorithms are defined by parameters of the Controlling-Algorithm. The use of ICAs imposes no need to modify higher-order standards for applications and protocols, such as X.509, RFC 5280, RFC 6960, RFC 2986, RFC 4210, and RFC 5652.
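
    A hypothetical miniature of the idea (not the actual ICA specification, which identifies compositions by OIDs with ASN.1 parameters): a Controlling-Algorithm rule that either combines two hash Component-Algorithms, so the composition survives a break of either one, or accepts them as mutual alternatives to ease migration:

    ```python
    import hashlib

    def composed_hash(data, components=("sha256", "sha3_256")):
        """Combined use: concatenate the component digests, so the
        composition is at least as strong as the strongest component."""
        return b"".join(hashlib.new(n, data).digest() for n in components)

    def verify_alternatives(data, digest, components=("sha256", "sha3_256")):
        """Alternative use: accept if any one component reproduces the
        stored digest, allowing algorithm migration."""
        return any(hashlib.new(n, data).digest() == digest for n in components)

    msg = b"hello"
    print(composed_hash(msg).hex()[:16], "...")
    print(verify_alternatives(msg, hashlib.sha3_256(msg).digest()))  # True
    ```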

  • Neon NTT: Faster Dilithium, Kyber, and Saber on Cortex-A72 and Apple M1
    by Hanno Becker on November 16, 2021 at 12:51 pm

    We present new speed records on the Armv8-A architecture for the lattice-based schemes Dilithium, Kyber, and Saber. The core novelty in this paper is the combination of Montgomery multiplication and Barrett reduction resulting in “Barrett multiplication” which allows particularly efficient modular one-known-factor multiplication using the Armv8-A Neon vector instructions. These novel techniques combined with fast two-unknown-factor Montgomery multiplication, Barrett reduction sequences, and interleaved multi-stage butterflies result in significantly faster code. We also introduce “asymmetric multiplication” which is an improved technique for caching the results of the incomplete NTT, used e.g. for matrix-to-vector polynomial multiplication. Our implementations target the Arm Cortex-A72 CPU, on which our speed is 1.7× that of the state-of-the-art matrix-to-vector polynomial multiplication in Kyber [Nguyen–Gaj 2021]. For Saber, NTTs are far superior to Toom–Cook multiplication on the Armv8-A architecture, outrunning the matrix-to-vector polynomial multiplication by 2.1×. On the Apple M1, our matrix-vector products run 2.1× and 1.9× faster for Kyber and Saber respectively.
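
    For intuition, here is a scalar Python sketch of multiplying by a known constant using a precomputed Barrett-style quotient hint, which replaces the division at multiplication time with a shift; the paper's Barrett multiplication works on Neon vector registers with carefully chosen roundings, so treat this only as an approximation of the idea:

    ```python
    def barrett_hint(b, q, k=16):
        """Precompute the quotient hint for the known factor b mod q."""
        return round(b * 2**k / q)

    def barrett_mul(a, b, hint, q, k=16):
        """Return a*b minus an approximate multiple of q: the quotient
        a*b // q is estimated as (a*hint) >> k, so no division is needed
        at multiplication time and the result stays short."""
        t = (a * hint) >> k
        return a * b - t * q      # congruent to a*b mod q, |result| < 2q

    q, b = 3329, 17               # Kyber's modulus and a known twiddle factor
    hint = barrett_hint(b, q)
    for a in range(-2**11, 2**11):
        r = barrett_mul(a, b, hint, q)
        assert (r - a * b) % q == 0 and abs(r) < 2 * q
    print("all residues congruent and short")
    ```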

  • Probabilistic micropayments with transferability
    by Taisei Takahashi on November 16, 2021 at 4:21 am

    Micropayments are one of the challenges in cryptocurrencies. The obstacles to realizing micropayments on a blockchain are its low throughput and high transaction fees. As a solution, decentralized probabilistic micropayments have been proposed: the winning amount is registered on the blockchain, and tickets are issued that win with probability $p$, which allows us to aggregate approximately $\frac{1}{p}$ transactions into one. Unfortunately, existing solutions do not allow tickets to be transferred, and the smaller $p$ is, the more difficult they are to use in the real world. We propose a novel Transferable Scheme for decentralized probabilistic micropayments, which allows tickets to be transferred among users; by making tickets transferable, we can make $p$ smaller. We also propose a novel Proportional Fee Scheme, in which a portion of the blockchain transaction fee is charged each time a ticket is transferred. With the Proportional Fee Scheme, users can send money at a smaller fee than they would normally pay on the blockchain; for example, sending one dollar incurs a fee of only ten cents.
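
    A small simulation of the basic mechanism (our own illustration with arbitrary numbers) shows that a ticket worth amount/p winning with probability p preserves the expected payment while only about a fraction p of tickets ever settle on chain:

    ```python
    import random, statistics

    def pay_with_ticket(amount, p):
        """Issue a ticket worth amount/p that wins with probability p;
        expected value equals `amount`, but only winning tickets need
        an on-chain transaction."""
        return amount / p if random.random() < p else 0.0

    random.seed(1)
    payouts = [pay_with_ticket(1.0, p=0.01) for _ in range(100_000)]
    print(f"mean payout ≈ {statistics.fmean(payouts):.3f}")        # ≈ 1.0
    print(f"on-chain settlements: {sum(x > 0 for x in payouts)}")  # ≈ 1000
    ```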