• Taming the many EdDSAs
    by Konstantinos Chalkias on October 21, 2020 at 12:25 pm

    This paper analyses the security of concrete instantiations of EdDSA by identifying exploitable inconsistencies between standardization recommendations and Ed25519 implementations. We mainly focus on the current ambiguity regarding signature verification equations, binding and malleability guarantees, and incompatibilities between randomized batch and single verification. We give a formulation of the Ed25519 signature scheme that achieves the highest level of security, explaining how each step of the algorithm links to the formal security properties. We develop optimizations that allow for more efficient secure implementations. Finally, we designed a set of edge-case test vectors and ran them against some of the most popular Ed25519 libraries. The results allowed us to assess the security level of those implementations and showed that most libraries do not comply with the latest standardization recommendations. The methodology allows testing the compatibility of different Ed25519 implementations, which is of practical importance for consensus-driven applications.
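
    The verification ambiguity at the heart of the paper is whether an implementation checks the cofactorless equation [S]B = R + [k]A or the cofactored one [8][S]B = [8]R + [8][k]A. The toy Python sketch below is a hedged illustration, not RFC 8032: the hash-to-scalar and nonce derivation are simplified stand-ins, and only the group equations match the real scheme. It shows the two checks disagreeing once a signer sneaks a small-order component into the commitment R.

```python
# Toy model of Ed25519 verification (NOT RFC 8032: simplified encodings).
import hashlib

p = 2**255 - 19
L = 2**252 + 27742317777372353535851937790883648493   # prime group order
d = (-121665 * pow(121666, p - 2, p)) % p

def add(P, Q):  # complete twisted Edwards addition, affine coordinates
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    return ((x1 * y2 + x2 * y1) * pow(1 + t, p - 2, p) % p,
            (y1 * y2 + x1 * x2) * pow(1 - t, p - 2, p) % p)

def mul(k, P):
    R = (0, 1)                                         # neutral element
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

B = (15112221349535400772501151409588531511454012693041857206046113283949847762202,
     46316835694926478169428394003475163141307993866256225615783033603165251855960)

def H(*args):  # illustrative hash-to-scalar, not the standard encoding
    return int.from_bytes(hashlib.sha512(repr(args).encode()).digest(), "little") % L

def sign(sk, msg, torsion=None):
    A, r = mul(sk, B), H("nonce", sk, msg)
    R = mul(r, B)
    if torsion:                  # optionally hide a small-order point in R
        R = add(R, torsion)
    return A, R, (r + H(R, A, msg) * sk) % L

def verify(A, R, S, msg, cofactored):
    k = H(R, A, msg)
    lhs, rhs = mul(S, B), add(R, mul(k, A))
    if cofactored:               # [8][S]B == [8]R + [8][k]A
        lhs, rhs = mul(8, lhs), mul(8, rhs)
    return lhs == rhs

sk, T = 123456789, (0, p - 1)    # T is a point of order 2
A, R, S = sign(sk, "msg", torsion=T)
print(verify(A, R, S, "msg", cofactored=False))  # False
print(verify(A, R, S, "msg", cofactored=True))   # True: the equations disagree
```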

  • Masking in Fine-Grained Leakage Models: Construction, Implementation and Verification
    by Gilles Barthe on October 21, 2020 at 11:24 am

    We propose a new approach for building efficient, provably secure, and practically hardened implementations of masked algorithms. Our approach is based on a Domain Specific Language in which users can write efficient assembly implementations and fine-grained leakage models. The latter are then used as a basis for formal verification, allowing for the first time formal guarantees for a broad range of device-specific leakage effects not addressed by prior work. The practical benefits of our approach are demonstrated through a case study of the PRESENT S-Box: we develop a highly optimized and provably secure masked implementation, and show through practical evaluation based on TVLA that our implementation is practically resilient. Our approach significantly narrows the gap between formal verification of masking and practical security.

  • Winkle: Foiling Long-Range Attacks in Proof-of-Stake Systems
    by Sarah Azouvi on October 21, 2020 at 11:22 am

    Winkle protects any validator-based Byzantine fault tolerant consensus mechanism, such as those used in modern Proof-of-Stake blockchains, against long-range attacks in which old validators’ signature keys get compromised. Winkle is a decentralized secondary layer of client-based validation, where a client includes a single additional field in a transaction that they sign: a hash of the previously sequenced block. A block that gets a threshold of signatures (confirmations), weighted by clients’ coins, is called a “confirmed” checkpoint. We show that under plausible and flexible security assumptions about clients, confirmed checkpoints cannot be equivocated. We discuss how client key rotation increases security, how to accommodate the minting of coins, and how delegation allows for faster checkpoints. We evaluate checkpoint latency experimentally using Bitcoin and Ethereum transaction graphs, with and without delegation of stake.
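
    A minimal sketch of the two client-side ingredients the abstract describes, under assumed data shapes (the field layout and threshold rule are illustrative, not Winkle's specification): each signed payload carries a hash of the last sequenced block, and a checkpoint counts as confirmed once the coin-weighted set of signers crosses a threshold.

```python
import hashlib

def make_signed_payload(payload: bytes, last_block: bytes) -> bytes:
    # the one extra field Winkle adds: a hash of the previously
    # sequenced block, folded into the message the client signs
    checkpoint = hashlib.sha256(last_block).digest()
    return payload + b"|checkpoint:" + checkpoint

def is_confirmed(signers: set, balances: dict, threshold: int) -> bool:
    # confirmations are weighted by the signing clients' coins
    return sum(balances[c] for c in signers) >= threshold

balances = {"alice": 60, "bob": 30, "carol": 10}
print(is_confirmed({"alice", "bob"}, balances, threshold=75))  # True (90 >= 75)
```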

  • Compressing Proofs of $k$-Out-Of-$n$ Partial Knowledge
    by Thomas Attema on October 21, 2020 at 9:18 am

    In an (honest-verifier) zero-knowledge proof of partial knowledge, introduced by Cramer, Damg{\aa}rd and Schoenmakers (CRYPTO 1994), a prover knowing witnesses for some $k$-subset of $n$ given public statements can convince the verifier of this claim without revealing which $k$-subset. Their solution cleverly combines $\Sigma$-protocol theory and arithmetic secret sharing, and achieves linear communication complexity for general $k,n$. Especially the ``one-out-of-$n$'' case $k=1$ has seen myriad applications during the last decades, e.g., in electronic voting, ring signatures, and confidential transaction systems in general. In this paper we focus on the discrete logarithm (DL) setting, where the prover claims knowledge of DLs of $k$-out-of-$n$ given elements. Groth and Kohlweiss (EUROCRYPT 2015) have shown how to solve the special case $k=1$, for arbitrary $n$, with {\em logarithmic} (in $n$) communication, instead of linear as in prior work. However, their method takes explicit advantage of $k=1$ and does not generalize to $k>1$ without losing all advantage over prior work. Alternatively, an {\em indirect} approach to the considered problem is to translate the $k$-out-of-$n$ relation into a circuit and then apply recent advances in communication-efficient circuit ZK. Indeed, for the $k=1$ case this approach has been highly optimized, e.g., in ZCash. Our main contribution is a new, simple honest-verifier zero-knowledge proof protocol for proving knowledge of $k$ out of $n$ DLs with {\em logarithmic} communication and {\em for general $k$ and $n$}, without requiring any generic circuit ZK machinery. Our approach deploys a novel twist on {\em compressed} $\Sigma$-protocol theory (CRYPTO 2020) that we then utilize to compress a carefully chosen adaptation of the CRYPTO 1994 approach down to logarithmic size. Interestingly, {\em even for $k=1$ and general $n$}, our approach improves upon prior {\em direct} approaches, as it reduces prover complexity without increasing the communication complexity. Besides the conceptual simplicity, we also identify regimes of practical relevance where our approach achieves asymptotic and concrete improvements, e.g., in proof size and prover complexity, over the generic approach based on circuit ZK. Finally, we show various extensions and generalizations of our core result. For instance, we extend our protocol to proofs of partial knowledge of Pedersen (vector) commitment openings, and/or to include a proof that the witness satisfies some additional constraint, and we show how to extend our results to non-threshold access structures.

  • Proof of a shuffle for lattice-based cryptography (Full version)
    by Núria Costa on October 21, 2020 at 6:34 am

    In this paper we present the first proof of a shuffle for lattice-based cryptography, which can be used to build a universally verifiable mix-net capable of mixing votes encrypted with a post-quantum algorithm, thus achieving long-term privacy. Universal verifiability is achieved by means of the publication of a non-interactive zero-knowledge proof of a shuffle generated by each mix-node, which can be verified by any observer. This published data guarantees long-term privacy since its security is based on perfectly hiding commitments and on the hardness of the Ring Learning With Errors (RLWE) problem, which is widely believed to be quantum resistant.

  • Tiramisu: Black-Box Simulation Extractable NIZKs in the Updatable CRS Model
    by Karim Baghery on October 21, 2020 at 6:26 am

    In CRYPTO'18, Groth et al. introduced the $\textit{updatable}$ CRS model that allows bypassing the trust in the setup of NIZK arguments. Zk-SNARKs are a well-known family of NIZK arguments that are ubiquitously deployed in practice. In applications that require $\textit{universal composability}$, e.g. Hawk [S&P'16], Gyges [CCS'16], and Ouroboros Crypsinous [S&P'19], the underlying SNARK is lifted by the $\texttt{COCO}$ framework [Kosba et al., 2015] to achieve Black-Box Simulation Extractability (BB-SE). The $\texttt{COCO}$ framework is designed in the standard CRS model; consequently, the BB-SE NIZK arguments built with it need a trusted setup phase. In a promising research direction, subversion-resistant and updatable SNARKs have recently been proposed that can eliminate or bypass the needed trust in such schemes. However, none of the available subversion-resistant/updatable schemes can achieve BB-SE, as Bellare et al.'s result from ASIACRYPT'16 shows that simultaneously achieving Sub-ZK (ZK without trusting a third party) and BB extractability is impossible. In this paper, we propose $\texttt{Tiramisu}$, a construction for building BB-SE NIZK arguments in the $\textit{updatable}$ CRS model. Similar to $\texttt{COCO}$, $\texttt{Tiramisu}$ is suitable for modular use in larger cryptographic systems and allows building BB-SE NIZK arguments, but with $\textit{updatable}$ parameters. At the cost of a one-time CRS update, $\texttt{Tiramisu}$ gets around the aforementioned impossibility result by Bellare et al. Namely, by updating the CRS once, the parties eliminate the trust in a third party, and the protocol satisfies ZK and BB-SE in the $\textit{updatable}$ CRS model. Meanwhile, we define a variant of public-key cryptosystems with updatable keys, suitable for the updatable CRS model, and present an efficient construction based on the El-Gamal cryptosystem which can be of independent interest. We instantiate $\texttt{Tiramisu}$ and present efficient BB-SE zk-SNARKs with updatable parameters that can be used in protocols like Hawk, Gyges, and Ouroboros Crypsinous, while allowing the end-users to update the parameters and eliminate the needed trust.
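
    The abstract does not spell out the updatable-key cryptosystem, but one natural shape for El-Gamal-style updatable keys is a multiplicative public-key update matched by an additive secret-key update. A toy sketch under that assumption (group size, and the update-verification and proof machinery of the actual paper, are omitted):

```python
import random

p, q, g = 1019, 509, 4   # toy group: order-q subgroup of Z_p*, q = (p-1)/2

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def update_key(sk, pk):
    # in the updatable model the updater publishes g^u plus a proof;
    # here both halves are updated in one place for brevity
    u = random.randrange(1, q)
    return (sk + u) % q, pk * pow(g, u, p) % p

def encrypt(pk, m):
    r = random.randrange(1, q)
    return pow(g, r, p), m * pow(pk, r, p) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), -1, p) % p

sk, pk = keygen()
sk, pk = update_key(sk, pk)          # keys updated, old pk obsolete
print(decrypt(sk, encrypt(pk, 42)))  # 42
```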

  • Lattice-based proof of a shuffle
    by Núria Costa on October 21, 2020 at 3:22 am

    In this paper we present the first fully post-quantum proof of a shuffle for RLWE encryption schemes. Shuffles are commonly used to construct mixing networks (mix-nets), a key element to ensure anonymity in many applications such as electronic voting systems. They should preserve anonymity even against an attack using quantum computers in order to guarantee long-term privacy. The proof presented in this paper is built over RLWE commitments which are perfectly binding and computationally hiding under the RLWE assumption, thus achieving security in a post-quantum scenario. Furthermore, we provide a new definition for a secure mixing node (mix-node) and prove that our construction satisfies this definition.

  • Quantum-resistant Public-key Authenticated Encryption with Keyword Search for Industrial Internet of Things
    by Zi-Yuan Liu on October 21, 2020 at 2:19 am

    The industrial Internet of Things (IIoT) integrates sensors, instruments, equipment, and industrial applications, enabling traditional industries to automate and intelligently process data. To reduce the cost of and demand for required service equipment, IIoT relies on cloud computing to further process and store data. However, the means for ensuring the privacy and confidentiality of the outsourced data while maintaining flexibility in its use remain unclear. Public-key authenticated encryption with keyword search (PAEKS) is a variant of public-key encryption with keyword search that not only allows users to search encrypted data by specifying keywords but also prevents insider keyword guessing attacks (IKGAs). However, all current PAEKS schemes are based on the discrete logarithm assumption and are therefore vulnerable to quantum attacks. Additionally, the security of these schemes is proven only in the random oracle model, which is considered insufficient. In this study, we first introduce a generic PAEKS construction that is secure against IKGAs in the standard model. Based on this framework, we propose a novel instantiation of quantum-resistant PAEKS based on the NTRU assumption. Compared with its state-of-the-art counterparts, our instantiation is more efficient and secure.

  • Multiparty Cardinality Testing for Threshold Private Set Intersection
    by Pedro Branco on October 21, 2020 at 2:19 am

    Threshold Private Set Intersection (PSI) allows multiple parties to compute the intersection of their input sets if and only if the intersection is larger than $n-t$, where $n$ is the size of the sets and $t$ is some threshold. The main appeal of this primitive is that, in contrast to standard PSI, known upper bounds on the communication complexity depend only on the threshold $t$ and not on the sizes of the input sets. Current Threshold PSI protocols consist of two components: a Cardinality Testing phase, where the parties decide if the intersection is larger than some threshold, and a PSI phase, where the intersection is computed. The former is the main source of inefficiency in Threshold PSI. In this work, we present a new Cardinality Testing protocol that allows $N$ parties to check if the intersection of their input sets is larger than $n-t$. The protocol incurs $\tilde{\mathcal{O}}(Nt^2)$ communication complexity. We thus obtain a Threshold PSI scheme for $N$ parties with communication complexity $\tilde{\mathcal{O}}(Nt^2)$.
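
    For reference, here is the (non-private) functionality being emulated, as a plain Python sketch in the abstract's notation: $N$ parties with size-$n$ sets learn the intersection only when it is larger than $n-t$, which is exactly the predicate the Cardinality Testing phase evaluates on secret inputs.

```python
def threshold_psi(sets, t):
    # ideal functionality, no privacy: reveal the intersection only if
    # it is larger than n - t, otherwise reveal nothing
    n = len(sets[0])
    inter = set.intersection(*sets)
    return inter if len(inter) > n - t else None

parties = [{1, 2, 3, 4}, {2, 3, 4, 5}, {2, 3, 4, 9}]
print(threshold_psi(parties, t=3))  # {2, 3, 4}: size 3 > n - t = 1
print(threshold_psi(parties, t=1))  # None: size 3 is not > n - t = 3
```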

  • Individual Simulations
    by Yi Deng on October 20, 2020 at 6:39 pm

    We develop an individual simulation technique that explicitly makes use of particular properties/structures of a given adversary's functionality. Using this simulation technique, we obtain the following results. 1. We construct the first protocols that \emph{break previous black-box barriers} of [Xiao, TCC'11 and Alwen et al., Crypto'05] under the standard hardness of factoring, both of which are polynomial time simulatable against all a-priori bounded polynomial size distinguishers: -- Two-round selective opening secure commitment scheme. -- Three-round concurrent zero knowledge and concurrent witness hiding argument for NP in the bare public-key model. 2. We present a simpler two-round weak zero knowledge and witness hiding argument for NP in the plain model under the sub-exponential hardness of factoring. Our technique also yields a significantly simpler proof that existing distinguisher-dependent simulatable zero knowledge protocols are also polynomial time simulatable against all distinguishers of a-priori bounded polynomial size. The core conceptual idea underlying our individual simulation technique is an observation of the existence of nearly optimal extractors for all hard distributions: For any NP-instance(s) sampling algorithm, there exists a polynomial-size witness extractor (depending on the sampler's functionality) that almost outperforms any circuit of a-priori bounded polynomial size in terms of the success probability.

  • RSA for poor men: a cryptosystem based on probable primes to base 2 numbers
    by Marek Wójtowicz on October 20, 2020 at 3:58 pm

    We show it is possible to build an RSA-type cryptosystem by utilizing \textit{probable primes to base 2}. Our modulus $N$ is the product $n\cdot m$ of such numbers (so both primes and some composites, e.g. Carmichael or Fermat numbers, are acceptable here) instead of prime numbers. Moreover, we only require $n$ and $m$ to be coprime, and so we do not have to worry about whether either of the numbers $n, m$ is composite. The encryption and decryption processes are similar to those of RSA. Hence, in this cryptosystem we may use numbers of the above kind of arbitrary length and still be sure that the system works. The price for this is not so high: the size of a message $M$ (as a number) permitted by the new system must be smaller than $\log_2(n\cdot m)$. The proposed cryptosystem can be applied when the numbers $n,m$ are 'sufficiently large' for a user, as a complement to classical RSA when $m,n$ are probable primes but possibly not prime, or in a 'secret sharing'-type cryptosystem. The numbers $n,m$ can also be taken from a narrower class of probable primes to base 2, e.g., Euler, strong, or Baillie-PSW probable primes.
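
    The abstract leaves the message encoding implicit, but the restriction $M < \log_2(n\cdot m)$ suggests one consistent reading: the plaintext $M$ is carried as the power $2^M$, since $2^{n-1} \equiv 1 \pmod{n}$ holds for every base-2 probable prime $n$, prime or not. A sketch under that assumption (exponent choice and padding of a real scheme are omitted):

```python
from math import gcd

def is_base2_probable_prime(n):
    return n > 2 and pow(2, n - 1, n) == 1

def keygen(n, m, e):
    assert is_base2_probable_prime(n) and is_base2_probable_prime(m)
    assert gcd(n, m) == 1
    lam = (n - 1) * (m - 1) // gcd(n - 1, m - 1)   # lcm(n-1, m-1)
    return n * m, e, pow(e, -1, lam)               # requires gcd(e, lam) = 1

def encrypt(M, N, e):
    assert 0 <= M < N.bit_length()  # M smaller than log2(N), so 2^M < N
    return pow(pow(2, M, N), e, N)

def decrypt(c, N, d):
    x = pow(c, d, N)                # 2^(M*e*d) mod N, which equals 2^M exactly
    return x.bit_length() - 1

# 341 = 11*31 is the smallest base-2 Fermat pseudoprime; 127 is prime
N, e, d = keygen(341, 127, e=11)
print(decrypt(encrypt(10, N, e), N, d))  # 10
```

    Decryption recovers $2^M$ because $2^{\mathrm{lcm}(n-1,\,m-1)} \equiv 1$ holds modulo both $n$ and $m$, hence modulo $N$ by the Chinese Remainder Theorem.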

  • Hashing to elliptic curves of $j=0$ and quadratic imaginary orders of class number $2$
    by Dmitrii Koshelev on October 20, 2020 at 12:04 pm

    Let $\mathbb{F}_{\!p}$ be a prime finite field ($p > 5$) and $E_b\!: y_0^2 = x_0^3 + b$ be an elliptic $\mathbb{F}_{\!p}$-curve of $j$-invariant $0$. In this article we produce the simplified SWU hashing to curves $E_b$ having an $\mathbb{F}_{\!p^2}$-isogeny of degree $5$. This condition is fulfilled for some Barreto--Naehrig curves, including BN512 from the standard ISO/IEC 15946-5. Moreover, we show how to implement the simplified SWU hashing in constant time (for any $j$-invariant), namely without quadratic residuosity tests and inversions in $\mathbb{F}_{\!p}$. Thus, in addition to the protection against timing attacks, the new hashing $h\!: \mathbb{F}_{\!p} \to E_b(\mathbb{F}_{\!p})$ turns out to be much more efficient than the (universal) SWU hashing, which generally requires performing two quadratic residuosity tests.

  • Hashing to elliptic curves of $j$-invariant $1728$
    by Dmitrii Koshelev on October 20, 2020 at 12:01 pm

    This article generalizes the simplified Shallue--van de Woestijne--Ulas (SWU) method of a deterministic finite field mapping $h\!: \mathbb{F}_{\!q} \to E_a(\mathbb{F}_{\!q})$ to the case of any elliptic $\mathbb{F}_{\!q}$-curve $E_a\!: y^2 = x^3 - ax$ of $j$-invariant $1728$. In comparison with the (classical) SWU method, the simplified SWU method avoids one quadratic residuosity test in the field $\mathbb{F}_{\!q}$, which is a quite painful operation in cryptography with regard to timing attacks. More precisely, in order to derive $h$ we obtain a rational $\mathbb{F}_{\!q}$-curve $C$ (and its explicit, quite simple, proper $\mathbb{F}_{\!q}$-parametrization) on the Kummer surface $K^\prime$ associated with the direct product $E_a \!\times\! E_a^\prime$, where $E_a^\prime$ is the quadratic $\mathbb{F}_{\!q}$-twist of $E_a$. Our approach to finding $C$ is based on the fact that every curve $E_a$ has a vertical $\mathbb{F}_{\!q^2}$-isogeny of degree $2$.

  • Semi-Quantum Money
    by Roy Radian on October 20, 2020 at 11:47 am

    Quantum money allows a bank to mint quantum money states that can later be verified and cannot be forged. Usually, this requires a quantum communication infrastructure to transfer quantum states between the user and the bank. Gavinsky (CCC 2012) introduced the notion of classically verifiable quantum money, which allows verification through classical communication. In this work we introduce the notion of classical minting, and combine it with classical verification to introduce semi-quantum money. Semi-quantum money is the first type of quantum money to allow transactions with completely classical communication and an entirely classical bank. This work features constructions for both a public memory-dependent semi-quantum money scheme and a private memoryless semi-quantum money scheme. The public construction is based on the works of Zhandry and Coladangelo, and the private construction is based on the notion of Noisy Trapdoor Claw Free Functions (NTCF) introduced by Brakerski et al. (FOCS 2018). In terms of technique, our main contribution is a perfect parallel repetition theorem for NTCF.

  • Cryptanalysis of Feistel-Based Format-Preserving Encryption
    by Orr Dunkelman on October 20, 2020 at 9:00 am

    Format-Preserving Encryption (FPE) is a method for encrypting non-standard domains, allowing one to securely encrypt not only binary strings but also members of special domains, e.g., social security numbers into social security numbers. The need for such schemes has resulted in several standardized constructions, such as the NIST standards FF1 and FF3-1 and the Korean standards FEA-1 and FEA-2. Moreover, there are currently efforts in both ANSI and ISO to include such block ciphers in standards (e.g., ANSI X9.124 on encryption for financial services). Most of the proposed FPE schemes, including all of the above standards, are based on a Feistel construction with pseudo-random round functions. Moreover, to mitigate enumeration attacks against the possibly small domains, they all employ tweaks, which enrich the actual domain sizes. In this paper we present distinguishing attacks against Feistel-based FPEs. We show a distinguishing attack against the full FF1 with data complexity of $2^{60}$ 20-bit plaintexts, and against the full FF3-1 with data complexity of $2^{40}$ 20-bit plaintexts. For FEA-1 with 128-bit, 192-bit and 256-bit keys, the data complexity of the distinguishing attack is $2^{32}$, $2^{40}$, and $2^{48}$ 8-bit plaintexts, respectively. The data complexity of the distinguishing attack against the full FEA-2 with 128-bit, 192-bit and 256-bit keys is $2^{56}$, $2^{68}$, and $2^{80}$ 8-bit plaintexts, respectively. Moreover, we show how to extend the distinguishing attacks on FEA-1 and FEA-2 with 192-bit and 256-bit keys into key recovery attacks with time complexity $2^{136}$ (for both attacks).
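
    All of the attacked schemes share the tweaked-Feistel shape sketched below. This toy 20-bit, two-branch cipher uses a keyed-hash stand-in for the round function and parameters chosen purely for illustration (nothing here is taken from the standards); it is only meant to fix the structure the distinguishers exploit.

```python
import hashlib

def F(key, tweak, rnd, half):
    # stand-in PRF round function; FF1/FF3-1 build theirs from AES
    h = hashlib.sha256(b"%b|%b|%d|%d" % (key, tweak, rnd, half)).digest()
    return int.from_bytes(h[:2], "big") % 1024

def fpe_encrypt(key, tweak, pt, rounds=8):
    L, R = pt >> 10, pt & 1023            # two 10-bit branches
    for r in range(rounds):
        L, R = R, (L + F(key, tweak, r, R)) % 1024  # tweak enters each round
    return (L << 10) | R

def fpe_decrypt(key, tweak, ct, rounds=8):
    L, R = ct >> 10, ct & 1023
    for r in reversed(range(rounds)):
        L, R = (R - F(key, tweak, r, L)) % 1024, L
    return (L << 10) | R

c = fpe_encrypt(b"key", b"tweak", 123456)
assert fpe_decrypt(b"key", b"tweak", c) == 123456
```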

  • A note on the low order assumption in class groups of imaginary quadratic number fields
    by Karim Belabas on October 20, 2020 at 8:59 am

    In this short note we analyze the low order assumption in class groups of imaginary quadratic number fields. We show how this assumption is broken for Mersenne primes. We also describe how one might attack this assumption for other classes of prime numbers, leveraging new mathematical tools coming from higher (cubic) number fields.

  • Security and Privacy of Decentralized Cryptographic Contact Tracing
    by Noel Danz on October 20, 2020 at 8:58 am

    Automated contact tracing leverages the ubiquity of smartphones to warn users about an increased exposure risk to COVID-19. In the course of only a few weeks, several cryptographic protocols have been proposed that aim to achieve such contact tracing in a decentralized and privacy-preserving way. Roughly, they let users' phones exchange random-looking pseudonyms that are derived from locally stored keys. If a user is diagnosed, her phone uploads the keys, which allows other users to check for any contact matches. Ultimately this line of work led to Google and Apple including a variant of these protocols into their phones, which is currently used by millions of users. Due to the obvious urgency, these schemes were pushed to deployment without a formal analysis of the achieved security and privacy features. In this work we address this gap and provide the first formal treatment of such decentralized cryptographic contact tracing. We formally define three main properties in a game-based manner: pseudonym and trace unlinkability, to guarantee the privacy of users during healthy and infectious periods, and integrity, ensuring that triggering false-positive alarms is infeasible. A particular focus of our work is on the timed aspects of these schemes, as both keys and pseudonyms are rotated regularly, and we specify different variants of the aforementioned properties depending on the time granularity for which they hold. We analyze a selection of practical protocols (DP-3T, TCN, GAEN) and prove their security under well-defined assumptions.
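
    The key and pseudonym rotation being formalized looks, in the DP-3T low-cost design, roughly like the sketch below; the labels, epoch count, and the SHAKE expander are simplifications of ours (the deployed design uses AES in counter mode as the PRG).

```python
import hashlib, hmac

def next_day_key(sk: bytes) -> bytes:
    return hashlib.sha256(sk).digest()        # daily ratchet SK_t = H(SK_{t-1})

def ephemeral_ids(sk_day: bytes, epochs: int = 96):
    # per-day broadcast key, expanded into one 16-byte pseudonym per epoch
    bk = hmac.new(sk_day, b"broadcast key", hashlib.sha256).digest()
    stream = hashlib.shake_128(bk).digest(16 * epochs)
    return [stream[16 * i:16 * (i + 1)] for i in range(epochs)]

sk = bytes(32)
for day in range(3):
    print(day, ephemeral_ids(sk)[0].hex())    # unlinkable across days
    sk = next_day_key(sk)
# a diagnosed user uploads one daily key; anyone can re-derive that day's
# (and later days') pseudonyms and check them against recorded contacts
```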

  • Privately Connecting Mobility to Infectious Diseases via Applied Cryptography
    by Alessandro Bruni on October 20, 2020 at 8:08 am

    Human mobility is undisputedly one of the critical factors in infectious disease dynamics. Until a few years ago, researchers had to rely on static data to model human mobility, which was then combined with a transmission model of a particular disease, resulting in an epidemiological model. Recent works have consistently shown that substituting the static mobility data with mobile phone data leads to significantly more accurate models. While prior studies have exclusively relied on a mobile operator’s aggregated subscriber data, it may be preferable to contemplate aggregated mobility data of infected individuals only. Clearly, naively linking mobile phone data with infected individuals would massively intrude on privacy. This research aims to develop a solution that reports the aggregated mobile phone location data of infected individuals while still maintaining compliance with privacy expectations. To achieve privacy, we use homomorphic encryption, zero-knowledge proof techniques, and differential privacy. Our protocol’s open-source implementation can process eight million subscribers in one hour. Additionally, we provide a legal analysis of our solution with regard to the General Data Protection Regulation.
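
    The homomorphic-encryption ingredient can be pictured with additively homomorphic Paillier aggregation: per-location counts are encrypted, combined by multiplying ciphertexts, and only the aggregate is ever decrypted. A toy-parameter sketch (the primes below are far too small for security, and the paper's actual protocol layers ZK proofs and differential-privacy noise on top):

```python
import random

p, q = 2063, 2087                    # toy primes; real keys are much larger
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2    # Paillier with g = 1+n

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

counts = [17, 5, 9]                  # one location bucket, several batches
agg = 1
for m in counts:
    agg = agg * enc(m) % n2          # homomorphic addition = ciphertext product
print(dec(agg))                      # 31: only the sum is revealed
```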

  • Poseidon: A New Hash Function for Zero-Knowledge Proof Systems
    by Lorenzo Grassi on October 20, 2020 at 7:06 am

    The area of practical computational integrity proof systems, such as SNARKs, STARKs, and Bulletproofs, is seeing very dynamic development, with several constructions having appeared recently with improved properties and relaxed setup requirements. Many use cases of such systems involve, often as their most expensive part, proving the knowledge of a preimage under a certain cryptographic hash function, which is expressed as a circuit over a large prime field. A notable example is a zero-knowledge proof of coin ownership in the Zcash cryptocurrency, where the inadequacy of the SHA-256 hash function for such a circuit caused a huge computational penalty. In this paper, we present a modular framework and concrete instances of cryptographic hash functions which work natively with GF(p) objects. Our hash function Poseidon uses up to 8x fewer constraints per message bit than Pedersen Hash. Our construction is not only expressed compactly as a circuit, but can also be tailored for various proof systems using specially crafted polynomials, thus bringing another boost in performance. We demonstrate this by implementing a 1-out-of-a-billion membership proof with Merkle trees in less than a second by using Bulletproofs.
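
    The "native GF(p)" design can be pictured as a permutation built from rounds of add-round-constant, a low-degree power S-box, and an MDS matrix, all over a prime field. The sketch below uses a toy 31-bit prime, placeholder constants, and full rounds only; it shows the shape, not Poseidon's audited parameter set.

```python
p = 2**31 - 1            # toy prime with gcd(5, p-1) = 1, so x^5 permutes GF(p)
t, ROUNDS = 3, 8         # state width and round count, illustrative only

RC = [[(7 * r + i + 1) ** 3 % p for i in range(t)] for r in range(ROUNDS)]
MDS = [[pow(i + j + 1, p - 2, p) for j in range(t)]   # Cauchy matrix 1/(i+j+1)
       for i in range(t)]

def permutation(state):
    for r in range(ROUNDS):
        state = [(s + RC[r][i]) % p for i, s in enumerate(state)]   # constants
        state = [pow(s, 5, p) for s in state]                       # S-box x^5
        state = [sum(MDS[i][j] * state[j] for j in range(t)) % p    # MDS mix
                 for i in range(t)]
    return state

def hash2to1(a, b):
    # 2-to-1 compression of the kind used at each Merkle tree node
    return permutation([a % p, b % p, 0])[0]

print(hash2to1(1, 2))
```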

  • A White-Box Masking Scheme Resisting Computational and Algebraic Attacks
    by Okan Seker on October 20, 2020 at 5:13 am

    White-box cryptography attempts to protect cryptographic secrets in pure software implementations. Due to their high utility, white-box cryptosystems (WBC) are deployed by the industry even though the security of these constructions is not well defined. A major breakthrough in generic cryptanalysis of WBC was Differential Computation Analysis (DCA), which requires minimal knowledge of the underlying white-box protection and also thwarts many obfuscation methods. To avert DCA, classic masking countermeasures, originally intended to protect against the highly related class of side-channel attacks, have been proposed for use in WBC. However, due to the controlled environment of WBCs, new algebraic attacks against classic masking schemes have quickly been found. These algebraic DCA attacks break all classic masking countermeasures efficiently, as they are independent of the masking order. In this work, we propose a novel generic masking scheme that can resist both DCA and algebraic DCA attacks. The proposed scheme extends the seminal work by Ishai et al., which is probing-secure and thus resists DCA, to also resist algebraic attacks. To prove the security of our scheme, we demonstrate the connection between two main security notions in white-box cryptography: probing security and prediction security. Resistance of our masking scheme to DCA is proven for an arbitrary order of protection, using the well-known strong non-interference notion by Barthe et al. Our masking scheme also resists algebraic attacks, which we show concretely for first- and second-order algebraic protection. Moreover, we present an extensive performance analysis and quantify the overhead of our scheme, for a proof-of-concept protection of an AES implementation.
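
    For readers unfamiliar with the Ishai et al. baseline being extended, here is first-order Boolean masking with an ISW-style masked AND in miniature (two shares, single bits); the paper's contribution is precisely what this classic gadget lacks, namely resistance to algebraic DCA.

```python
import random

def share(x):                 # split bit x into shares with x0 ^ x1 = x
    r = random.getrandbits(1)
    return [r, x ^ r]

def unshare(s):
    return s[0] ^ s[1]

def masked_and(xs, ys):
    # ISW multiplication: fresh randomness r keeps every intermediate
    # value statistically independent of the secrets x and y
    r = random.getrandbits(1)
    z0 = (xs[0] & ys[0]) ^ r
    z1 = (xs[1] & ys[1]) ^ ((xs[0] & ys[1]) ^ r) ^ (xs[1] & ys[0])
    return [z0, z1]

for x in (0, 1):
    for y in (0, 1):
        assert unshare(masked_and(share(x), share(y))) == x & y
```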

  • TRVote: A New, Trustworthy and Robust Electronic Voting System
    by Fatih Tiryakioglu on October 20, 2020 at 4:42 am

    We propose a new Direct-Recording Electronic (DRE)-based voting system that we call TRVote. The reliability of TRVote is ensured during the vote generation phase, where the random challenges are generated by the voters instead of by the random number generator of the machine. Namely, the voters' challenges are utilized to prevent and detect malicious behavior of a corrupted voting machine. Due to the unpredictability of the challenges, the voting machine cannot cheat voters without being detected. TRVote provides two kinds of verification: "cast-as-intended" is ensured during the vote generation phase, and "recorded-as-cast" is ensured through a secure Web Bulletin Board (WBB). More concretely, voters can verify that their votes are cast as intended via a zero-knowledge proof on a printed receipt using QR codes. After the election, the central server broadcasts all receipts on a secure WBB where the voters (or, perhaps, proxies) can check whether their receipts appear correctly. In order to implement the voting process, the proposed voting machine has simple components such as mechanical switches, a touchscreen, and a printer. In this system, each candidate is represented by a mechanical switch installed in the voting machine. The machine has a flexible structure in the sense that the mechanical switches can be placed and removed as plug-ins depending on the number of candidates, which allows it to support an arbitrary number of candidates. We show that the proposed system is robust and guarantees the privacy of voters. We further show that it is universally verifiable and secure against coercion.

  • Modified Secure Hashing Algorithm (MSHA-512)
    by Ashoka SB on October 20, 2020 at 3:12 am

    In recent years, security has come to play an important role in the fields of defense, business, medicine, and industry. Different types of cryptographic algorithms have been implemented in order to provide security with high performance. A hash function is a cryptographic algorithm without a key, such as MD5 or the SHA family; the Secure Hash Algorithm is standardized by NIST as secure hashing in FIPS. In this paper we mainly focus on code optimization to increase the performance of SHA-512. To this end we have modified the algorithm and propose the Modified Secure Hash Algorithm (MSHA-512). It is a one-way hash function that can process a message to produce a condensed representation called a message digest. In the proposed algorithm, within a limited number of rounds (40 rounds instead of 80), we can obtain exactly the same result as the traditional SHA-512 algorithm. Using MSHA-512 we can reduce the time by up to 50% for the same process, which increases the performance of the algorithm and helps to increase the flexibility of servers in processing streams. MSHA-512 generates the same hash code as SHA-512 for inputs of all different sizes. MSHA-512 is designed to overcome the time complexity of SHA-512 and increase its performance by reducing the rounds.

  • Keep the Dirt: Tainted TreeKEM, Adaptively and Actively Secure Continuous Group Key Agreement
    by Joël Alwen on October 20, 2020 at 12:43 am

    While messaging systems with strong security guarantees are widely used in practice, designing a protocol that scales efficiently to large groups and enjoys similar security guarantees remains largely open. The two existing proposals to date are ART (Cohn-Gordon et al., CCS'18) and TreeKEM (IETF, The Messaging Layer Security Protocol, draft). TreeKEM is the candidate currently considered by the IETF MLS working group, but dynamic group operations (i.e. adding and removing users) can cause efficiency issues. In this paper we formalize and analyze a variant of TreeKEM which we term Tainted TreeKEM (TTKEM for short). The basic idea underlying TTKEM was suggested by Millican (MLS mailing list, February 2018). This version is more efficient than TreeKEM for some natural distributions of group operations; we quantify this through simulations. Our second contribution is two security proofs for TTKEM which establish post-compromise security and forward secrecy even against adaptive attackers. If $n$ is the group size and $Q$ the number of operations, the security loss (to the underlying PKE) in the Random Oracle Model is a polynomial factor $(Qn)^2$, and in the Standard Model a quasipolynomial $Q^{\log(n)}$. Our proofs can be adapted to TreeKEM as well. Before our work, no security proof for any TreeKEM-like protocol establishing tight security against an adversary who can adaptively choose the sequence of operations was known. We are also the first to prove (or even formalize) active security, where the server can arbitrarily deviate from the protocol specification. Proving fully active security, where the users can also arbitrarily deviate, remains open.

  • On the Success Probability of Solving Unique SVP via BKZ
    by Eamonn W. Postlethwaite on October 20, 2020 at 12:25 am

    As lattice-based key encapsulation, digital signature, and fully homomorphic encryption schemes near standardisation, ever more focus is being directed to the precise estimation of the security of these schemes. The primal attack reduces key recovery against such schemes to instances of the unique Shortest Vector Problem (uSVP). Dachman-Soled et al. (Crypto 2020) recently proposed a new approach for fine-grained estimation of the cost of the primal attack when using Progressive BKZ for the lattice reduction step. In this paper we review and extend their technique to BKZ 2.0 and provide extensive experimental evidence of its accuracy. Using this technique we also explain results from previous primal attack experiments by Albrecht et al. (Asiacrypt 2017) where attacks succeeded with smaller than expected block sizes. Finally, we use our simulators to reestimate the cost of attacking the three lattice finalists of the NIST Post Quantum Standardisation Process.

  • Simulation Extractable Versions of Groth's zk-SNARK Revisited
    by Karim Baghery on October 20, 2020 at 12:24 am

    Among various NIZK arguments, zk-SNARKs are the most efficient constructions in terms of proof size and verification, which are two critical criteria for large-scale applications. Currently, Groth’s construction, $\mathsf{Groth16}$, from Eurocrypt’16 is the most efficient and widely deployed one. However, it is proven to achieve only knowledge soundness, which does not prevent attacks by adversaries who have seen simulated proofs. There has been considerable progress in modifying $\mathsf{Groth16}$ to achieve simulation extractability, which guarantees the non-malleability of proofs. We revise the Simulation Extractable (SE) version of $\mathsf{Groth16}$ proposed by Bowe and Gabizon, which has the most efficient prover and smallest $\mathsf{crs}$ size among the candidates, although it adds a Random Oracle (RO) to the original construction. We present a new version which requires 4 pairings in the verification, instead of 5. We also get rid of the RO at the cost of a collision-resistant hash function and a single new element in the $\mathsf{crs}$. Our construction is proven in the $\textit{generic group model}$ and seems to result in the most efficient SE variant of $\mathsf{Groth16}$ in most dimensions.

  • On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work
    by Kai-Min Chung on October 20, 2020 at 12:23 am

    We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). This technique has proven to be very powerful for reproving known lower bound results, but also for proving new results that seemed to be out of reach before. Despite being very useful, it is however still quite cumbersome to actually employ the compressed oracle technique. To start off with, we offer a concise yet mathematically rigorous exposition of the compressed oracle technique. We adopt a more abstract view than other descriptions found in the literature, which allows us to keep the focus on the relevant aspects. Our exposition easily extends to the parallel-query QROM, where in each query-round the considered quantum oracle algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis of quantum oracle algorithms. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, we show that, for typical examples, the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main application is to prove hardness of finding a $q$-chain, i.e., a sequence $x_0,x_1,\ldots,x_q$ with the property that $x_i = H(x_{i-1})$ for all $1 \leq i \leq q$, with fewer than $q$ parallel queries. The above problem of producing a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete application of our new bound, we prove that the ``Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remain secure against quantum attacks. Such a proof is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack, and substantial additional work is necessary. Thanks to our framework, this can now be done with purely classical reasoning.
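
    The central object in the sequential-work application is the $q$-chain the abstract defines. As a plain computation it is simply the following, and the paper's bound says that producing such a chain requires $q$ sequential rounds of queries even on a quantum computer:

```python
import hashlib

def hash_chain(x0: bytes, q: int):
    # x_i = H(x_{i-1}) for i = 1..q: inherently one query per round
    xs = [x0]
    for _ in range(q):
        xs.append(hashlib.sha256(xs[-1]).digest())
    return xs

chain = hash_chain(b"seed", 1000)
assert all(chain[i] == hashlib.sha256(chain[i - 1]).digest()
           for i in range(1, len(chain)))
```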

  • QCB: Efficient Quantum-secure Authenticated Encryption
    by Ritam Bhaumik on October 20, 2020 at 12:21 am

    It was long thought that symmetric cryptography was only mildly affected by quantum attacks, and that doubling the key length was sufficient to restore security. However, recent works have shown that Simon's quantum period finding algorithm breaks a large number of MAC and authenticated encryption algorithms when the adversary can query the MAC/encryption oracle with a quantum superposition of messages. In particular, the OCB authenticated encryption mode is broken in this setting, and no quantum-secure mode is known with the same efficiency (rate-one and parallelizable). In this paper we generalize the previous attacks, show that a large class of OCB-like schemes is unsafe against superposition queries, and discuss the quantum security notions for authenticated encryption modes. We propose a new rate-one parallelizable mode named QCB inspired by TAE and OCB and prove its security against quantum superposition queries.

  • Expected Constant Round Byzantine Broadcast under Dishonest Majority
    by Jun Wan on October 19, 2020 at 2:00 pm

    Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it is long known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes $n$. However, whether we can match the expected constant round complexity in the corrupt majority setting --- or more precisely, when $f \geq n/2 + \omega(1)$ --- remains unknown, where $f$ denotes the number of corrupt nodes. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected $O((n/(n-f))^2)$ rounds. Our results hold under both a static adversary and a weakly adaptive adversary who cannot perform ``after-the-fact removal'' of messages already sent by a node before it becomes corrupt.

  • Subsampling and Knowledge Distillation On Adversarial Examples: New Techniques for Deep Learning Based Side Channel Evaluations
    by Aron Gohr on October 19, 2020 at 1:08 pm

    This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge in the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint by the current authors (e-print report 2019/094) and later as a contribution to a preprint write-up of the solutions by the three winning teams (e-print report 2019/860). Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge (with an expected success rate of roughly 70 percent if about 100 CPU hours are allowed for the equation solving stage of the attack). This solution significantly improves upon all previously published solutions of the AES challenge, including our baseline linear solution. Third, we consider the question of leakage attribution for both the classifier we used in the challenge and for our deep neural network. Direct inspection of the weight vector of our machine learning model yields a lot of information on the implementation for our linear classifier. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model. Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent work on useful features in adversarial examples in our application domain, we find that a linear classifier generalizing to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side-channel model while also yielding insights on adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist. The experiments described in this paper can be reproduced using code available at https://github.com/agohr/ches2018.

  • Robust Property-Preserving Hash Functions for Hamming Distance and More
    by Nils Fleischhacker on October 19, 2020 at 10:36 am

    Robust property-preserving hash (PPH) functions, recently introduced by Boyle, Lavigne, and Vaikuntanathan [ITCS 2019], compress large inputs $x$ and $y$ into short digests $h(x)$ and $h(y)$ in a manner that allows for computing a predicate $P$ on $x$ and $y$ while only having access to the corresponding hash values. In contrast to locality-sensitive hash functions, a robust PPH function guarantees to correctly evaluate a predicate on $h(x)$ and $h(y)$ even if $x$ and $y$ are chosen adversarially after seeing $h$. Our main result is a robust PPH function for the exact Hamming distance predicate $$\mathsf{HAM}^t(x, y) = \begin{cases} 1 & \text{if } d(x, y) \geq t \\ 0 & \text{otherwise} \end{cases}$$ where $d(x, y)$ is the Hamming distance between $x$ and $y$. Our PPH function compresses $n$-bit strings into $\mathcal{O}(t \lambda)$-bit digests, where $\lambda$ is the security parameter. The construction is based on the $q$-strong bilinear discrete logarithm assumption. Along the way, we construct a robust PPH function for the set intersection predicate $$\mathsf{INT}^t(X, Y) = \begin{cases} 1 & \text{if } \vert X \cap Y \vert > n - t \\ 0 & \text{otherwise} \end{cases}$$ which compresses sets $X$ and $Y$ of size $n$ with elements from some arbitrary universe $U$ into $\mathcal{O}(t\lambda)$-bit long digests. This PPH function may be of independent interest. We present an almost matching lower bound of $\Omega(t \log t)$ on the digest size of any PPH function for the intersection predicate, which indicates that our compression rate is close to optimal. Finally, we also show how to extend our PPH function for the intersection predicate to more than two inputs.
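
    Stated as plain functions, the two predicates are as below; the point of the construction is to evaluate them from the $\mathcal{O}(t\lambda)$-bit digests alone rather than from $x$, $y$, $X$, $Y$ themselves.

```python
def ham_predicate(x: str, y: str, t: int) -> int:
    d = sum(a != b for a, b in zip(x, y))     # Hamming distance d(x, y)
    return 1 if d >= t else 0

def int_predicate(X: set, Y: set, n: int, t: int) -> int:
    return 1 if len(X & Y) > n - t else 0

print(ham_predicate("10110", "10011", t=2))            # d = 2      -> 1
print(int_predicate({1, 2, 3}, {2, 3, 4}, n=3, t=2))   # |X∩Y| = 2  -> 1
```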

  • Semi-Commutative Masking: A Framework for Isogeny-based Protocols, with an Application to Fully Secure Two-Round Isogeny-based OT
    by Cyprien Delpech de Saint Guilhem on October 19, 2020 at 7:23 am

    We define semi-commutative invertible masking structures which aim to capture the methodology of exponentiation-only protocol design (such as discrete logarithm and isogeny-based cryptography). We discuss two instantiations: the first is based on commutative group actions and captures both the action of exponentiation in the discrete logarithm setting and the action of the class group of commutative endomorphism rings of elliptic curves, in the style of the CSIDH key-exchange protocol; the second is based on the semi-commutative action of isogenies of supersingular elliptic curves, in the style of the SIDH key-exchange protocol. We then construct two oblivious transfer protocols using this new structure and prove that these UC-securely realise the oblivious transfer functionality in the random-oracle-hybrid model against passive adversaries with static corruptions. Moreover, by starting from one of these two protocols and using the compiler introduced by Döttling et al. (Eurocrypt 2020), we achieve the first fully UC-secure two-round OT protocol based on supersingular isogenies.

  • Polynomial Multiplication with Contemporary Co-Processors: Beyond Kronecker, Schönhage-Strassen & Nussbaumer
    by Joppe W. Bos on October 19, 2020 at 1:43 am

    We discuss how to efficiently utilize contemporary co-processors used for public-key cryptography for polynomial multiplication in lattice-based cryptography. More precisely, we focus on polynomial multiplication in rings of the form $Z[X]/(X^n + 1)$. We make use of the roots of unity in this ring and construct the Kronecker+ algorithm, a variant of Nussbaumer designed to combine especially well with the Kronecker substitution method. This is a symbolic NTT of variable depth that can be considered a generalization of Harvey’s multipoint Kronecker substitution. Compared to straightforwardly combining Kronecker substitution with the state-of-the-art symbolic NTT algorithms Nussbaumer or Schönhage-Strassen, we halve the number or the bit size of the multiplied integers, respectively. Kronecker+ applies directly to the polynomial multiplication operations in the lattice-based cryptographic schemes Kyber and Saber, and we provide a detailed implementation analysis for the latter. This analysis highlights the potential of the Kronecker+ technique for commonly available multiplier lengths on contemporary co-processors.
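
    The baseline that Kronecker+ improves on is plain Kronecker substitution: pack each polynomial's (here nonnegative) coefficients into one big integer at $2^b$ spacing, do a single integer multiplication, unpack, and reduce modulo $X^n + 1$. A minimal sketch, with a toy packing width $b$ that must exceed the bit size of any product coefficient:

```python
def kronecker_negacyclic(f, g, n, b=64):
    # evaluate f and g at X = 2^b, then multiply as integers
    F = sum(c << (b * i) for i, c in enumerate(f))
    G = sum(c << (b * i) for i, c in enumerate(g))
    P = F * G
    # read the 2n-1 product coefficients back out of the bit string
    coeffs = [(P >> (b * i)) & ((1 << b) - 1) for i in range(2 * n - 1)]
    # negacyclic reduction: X^(n+i) = -X^i in Z[X]/(X^n + 1)
    return [coeffs[i] - (coeffs[n + i] if n + i < 2 * n - 1 else 0)
            for i in range(n)]

# (1 + 2X)(3 + 4X) = 3 + 10X + 8X^2  ->  (3 - 8) + 10X mod (X^2 + 1)
print(kronecker_negacyclic([1, 2], [3, 4], n=2))  # [-5, 10]
```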

  • TMVP-based Multiplication for Polynomial Quotient Rings and Application to Saber on ARM Cortex-M4
    by İrem Keskinkurt Paksoy on October 19, 2020 at 1:42 am

    Lattice-based NIST PQC finalists need efficient multiplication in $\mathbb{Z}_q[x]/(f(x))$. Multiplication in this ring can be performed very efficiently via the number theoretic transform (NTT), as done in CRYSTALS-Kyber, if the parameters of the scheme allow it. If NTT is not supported, other multiplication algorithms must be employed. For example, if the modulus $q$ of the scheme is a power of two, as in Saber and NTRU, then NTT cannot be used directly. In this case, Karatsuba and Toom-Cook methods together with modular reduction are commonly used for multiplication in this ring. In this paper, we show that the Toeplitz matrix-vector product (TMVP) representation of modular polynomial multiplication yields better results than Karatsuba and Toom-Cook methods. We present three- and four-way TMVP formulas that we derive from three- and four-way Toom-Cook algorithms, respectively. We use the four-way TMVP formula to develop an algorithm for multiplication in the ring $\mathbb{Z}_{2^m}[x]/(x^{256}+1)$. We implement the proposed algorithm on the ARM Cortex-M4 microcontroller and apply it to Saber, which is one of the lattice-based finalists of the NIST PQC competition. We compare the results to previous implementations. The TMVP-based multiplication algorithm we propose is $20.83\%$ faster than the previous algorithm that uses a combination of Toom-Cook, Karatsuba, and schoolbook methods. Our algorithm also speeds up key generation, encapsulation, and decapsulation algorithms of all variants of Saber. The speedups vary between $4.3\%$ and $39.8\%$. Moreover, our algorithm requires less memory than the others, except for the memory-optimized implementation of Saber.
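
    Concretely, multiplication in $\mathbb{Z}_q[x]/(x^n+1)$ can be written as a Toeplitz matrix-vector product, which is the representation the paper's three- and four-way formulas decompose recursively. A direct schoolbook rendering for illustration (the paper's contribution is the efficient recursive evaluation, not this naive loop):

```python
def tmvp_negacyclic(a, b, q):
    # c = a*b mod (x^n + 1, q) as T(a) @ b, with T Toeplitz:
    # entry (i, j) is a[i-j] for i >= j and -a[n+i-j] otherwise,
    # so it is constant along every diagonal
    n = len(a)
    T = [[a[i - j] % q if i >= j else -a[n + i - j] % q for j in range(n)]
         for i in range(n)]
    return [sum(T[i][j] * b[j] for j in range(n)) % q for i in range(n)]

# (1 + 2x)(3 + 4x) mod (x^2 + 1), q = 2^13:  -5 + 10x
print(tmvp_negacyclic([1, 2], [3, 4], q=8192))  # [8187, 10]
```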

  • Byzantine Ordered Consensus without Byzantine Oligarchy
    by Yunhao Zhang on October 19, 2020 at 1:36 am

    The specific order of commands agreed upon when running state machine replication (SMR) is immaterial to fault-tolerance: all that is required is for all correct deterministic replicas to follow it. In the permissioned blockchains that rely on Byzantine fault tolerant (BFT) SMR, however, nodes have a stake in the specific sequence that the ledger records, as well as in preventing other parties from manipulating the sequencing to their advantage. Yet the traditional specification of SMR correctness has no language to express these concerns. This paper introduces Byzantine ordered consensus, a new primitive that augments the correctness specification of BFT SMR to include specific guarantees on the total orders it produces; and a new architecture for BFT SMR that, by factoring out ordering from consensus, can enforce these guarantees and prevent Byzantine nodes from controlling ordering decisions (a Byzantine oligarchy). These contributions are instantiated in Pompe, a BFT SMR protocol that is guaranteed to order commands in a way that respects a natural extension of linearizability.

  • Unbounded Key-Policy Attribute-based Encryption with Black-Box Traceability
    by Yunxiu Ye on October 19, 2020 at 1:35 am

    Attribute-based encryption received widespread attention as soon as it was proposed. However, due to its specific characteristics, some restrictions on the attribute set in attribute-based encryption are not flexible enough in practice. In addition, since access authority is determined according to users' attributes, users sharing the same attributes are difficult to distinguish. Once a malicious user makes illicit gains from their decryption authority, it is difficult to track down the specific user. Following practical demands, this paper proposes a more flexible key-policy attribute-based encryption scheme with black-box traceability. The scheme has constant-size public parameters which can be used to construct attribute-related parameters flexibly, and the traitor-tracing method from broadcast encryption is introduced to achieve effective tracing of malicious users. In addition, security and feasibility are established through the security proofs and performance evaluation in this paper.

  • Is Real-time Phishing Eliminated with FIDO? Social Engineering Downgrade Attacks against FIDO Protocols
    by Enis Ulqinaku on October 19, 2020 at 1:35 am

    FIDO's Universal 2nd Factor (U2F) is a web-authentication mechanism designed to provide resilience to real-time phishing, a class of attacks that undermines multi-factor authentication by allowing an attacker to relay second-factor one-time tokens from the victim user to the legitimate website in real time. A U2F dongle is simple to use, and is designed to ensure users have complete mental models of proper usage. We show that social engineering attacks allow an adversary to downgrade FIDO’s U2F to alternative authentication mechanisms. Websites allow such alternatives to handle dongle malfunction or loss. All FIDO-supporting websites in Alexa's top 100 allow choosing alternatives to FIDO, and are thus vulnerable to real-time phishing attacks. We crafted a phishing website that mimics Google’s login page and implements a FIDO-downgrade attack. We then ran a carefully designed user study to test the effect on users. We found that, although they had registered FIDO as their second authentication factor, 55% of participants fell for real-time phishing, and another 35% would potentially be susceptible to the attack in practice.

  • On the Effect of the (Micro)Architecture on the Development of Side-Channel Resistant Software
    by Lauren De Meyer on October 19, 2020 at 1:34 am

    There are many examples of how to assess the side-channel resistance of a hardware implementation for a given order, where one has to take into account all transitions and glitches produced by a given design. However, microprocessors do not conform with the ideal circuit model which is typically used to gain confidence in the security of masking against side-channel attacks. As a result, masked software implementations in practice do not exhibit the security one would expect in theory. In this paper, we generalize and extend work by Papagiannopoulos and Veshchikov to describe the ways in which a microprocessor may leak. We show that the sources of leakage are far more numerous than previously considered and highly dependent on the platform. We further describe how to write high-level code in the C programming language that allows one to work around common micro-architectural features. In particular, we introduce implementation techniques to reduce sensitive combinations made by the CPU and which are devised so as to be preserved through the optimizations made by the compiler. However, these techniques cannot be proven to be secure. In this paper, we seek to highlight leakage not considered in current models used in proofs and describe some potential solutions. We apply our techniques to two case studies (DES and AES) and show that they are able to provide a modest level of security on several platforms.

  • Concrete quantum cryptanalysis of binary elliptic curves
    by Gustavo Banegas on October 19, 2020 at 1:34 am

    This paper analyzes and optimizes quantum circuits for computing discrete logarithms on binary elliptic curves, including reversible circuits for fixed-base-point scalar multiplication and the full stack of relevant subroutines. The main optimization target is the size of the quantum computer, i.e., the number of logical qubits required, as this appears to be the main obstacle to implementing Shor's polynomial-time discrete-logarithm algorithm. The secondary optimization target is the number of logical Toffoli gates. For an elliptic curve over a field of $2^n$ elements, this paper reduces the number of qubits to $7n + \lfloor \log_2(n) \rfloor + 9$. At the same time this paper reduces the number of Toffoli gates to $48n^3 + 8n^{\log_2(3)+1} + 352n^2 \log_2(n) + 512n^2 + O(n^{\log_2(3)})$ with double-and-add scalar multiplication, and a logarithmic factor smaller with fixed-window scalar multiplication. The number of CNOT gates is also $O(n^3)$. Exact gate counts are given for various sizes of elliptic curves currently used for cryptography.

  • Optimized Software Implementations for the Lightweight Encryption Scheme ForkAE
    by Arne Deprez on October 19, 2020 at 1:33 am

    In this work we develop optimized software implementations for ForkAE, a second-round candidate in the ongoing NIST lightweight cryptography standardization process. Moreover, we analyze the performance and efficiency of different ForkAE implementations on two embedded platforms: ARM Cortex-A9 and ARM Cortex-M0. First, we study portable ForkAE implementations. We apply a decryption optimization technique which allows us to accelerate decryption by up to 35%. Second, we go on to explore platform-specific software optimizations. In platforms where cache-timing attacks are not a risk, we present a novel table-based approach to compute the SKINNY round function. Compared to the existing portable implementations, this technique speeds up encryption and decryption by 20% and 25%, respectively. Additionally, we propose a set of platform-specific optimizations for processors with parallel hardware extensions such as ARM NEON. Without relying on parallelism provided by long messages (cf. bit-sliced implementations), we focus on the primitive-level ForkSkinny parallelism provided by ForkAE to reduce encryption and decryption latency by up to 30%. We benchmark the performance of our implementations on the ARM Cortex-M0 and ARM Cortex-A9 processors and give a comparison with the other SKINNY-based schemes in the NIST lightweight competition, SKINNY-AEAD and Romulus.

  • Coco: Co-Design and Co-Verification of Masked Software Implementations on CPUs
    by Barbara Gigerl on October 19, 2020 at 1:33 am

    The protection of cryptographic implementations against power analysis attacks is of critical importance for many applications in embedded systems. The typical approach of protecting against these attacks is to implement algorithmic countermeasures, like masking. However, implementing these countermeasures in a secure and correct manner is challenging. Masking schemes require the independent processing of secret shares, which is a property that is often violated by CPU microarchitectures in practice. In order to write leakage-free code, the typical approach in practice is to iteratively explore instruction sequences and to empirically verify whether there is leakage caused by the hardware for this instruction sequence or not. Clearly, this approach is neither efficient, nor does it lead to rigorous security statements. In this paper, we overcome the current situation and present the first approach for co-design and co-verification of masked software implementations on CPUs. First, we present Coco, a tool that allows us to provide security proofs at the gate-level for the execution of a masked software implementation on a concrete CPU. Using Coco, we analyze the popular 32-bit RISC-V Ibex core, identify all design aspects that violate the security of our tested masked software implementations, and perform corrections, mostly in hardware. The resulting secured Ibex core has an area overhead of around 10%, the runtime of software on this core is largely unaffected, and the formal verification with Coco of, e.g., a first-order masked Keccak S-box running on the secured Ibex core takes around 156 seconds. To demonstrate the effectiveness of our suggested design modifications, we perform practical leakage assessments using an FPGA evaluation board.

  • I Choose You: Automated Hyperparameter Tuning for Deep Learning-based Side-channel Analysis
    by Lichao Wu on October 19, 2020 at 1:33 am

    Deep learning-based SCA represents a powerful option for profiling side-channel analysis. Numerous results in the last few years indicate that neural networks can break targets protected with countermeasures even with a relatively small number of attack traces. Intuitively, the more powerful the neural network architecture we require, the more effort we need to spend on its hyperparameter tuning. Current results commonly use random search and reach good performance. Yet, the question remains how good such architectures are compared with architectures that are carefully designed by following a specific methodology. Unfortunately, the works considering methodologies are sparse, and those that exist are difficult to use without prior knowledge about the target. This work proposes an automated way for deep learning hyperparameter tuning that is based on Bayesian Optimization. We build a custom framework denoted as AutoSCA that supports both machine learning and side-channel metrics. Our experimental analysis shows that Bayesian optimization performs well regardless of the dataset, leakage model, or neural network type. What is more, we find a number of neural network architectures outperforming state-of-the-art attacks. Finally, we note that random search, despite being considered not particularly powerful, manages to reach top performance for a number of considered settings. We postulate this happens since the datasets are relatively easy to break, and there are many neural network architectures reaching top performance.
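
    As a rough illustration of the Bayesian-optimization approach to hyperparameter tuning (a hedged sketch, not the AutoSCA framework itself), one can hand a scoring function to scikit-optimize's gp_minimize; the objective below is a synthetic stand-in for training a profiling model and returning a metric to minimize, and all names are hypothetical:

        import math
        from skopt import gp_minimize
        from skopt.space import Integer, Real

        def objective(params):
            # Stand-in for: train a network with these hyperparameters and
            # return a validation metric to minimize (e.g., guessing entropy).
            n_layers, lr = params
            return (n_layers - 4) ** 2 + (math.log10(lr) + 3) ** 2

        space = [Integer(1, 8, name="n_layers"),
                 Real(1e-5, 1e-2, prior="log-uniform", name="lr")]

        res = gp_minimize(objective, space, n_calls=30, random_state=0)
        print("best hyperparameters:", res.x, "best loss:", res.fun)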

  • Optimal Oblivious Parallel RAM
    by Gilad Asharov on October 19, 2020 at 1:32 am

    An oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for hiding a RAM's access pattern. That is, for every input the distribution of the observed locations accessed by the machine is essentially independent of the machine's secret inputs. Recent progress culminated in a work of Asharov et al. (EUROCRYPT '20), obtaining an ORAM with (amortized) logarithmic overhead in total work, which is known to be optimal. Oblivious Parallel RAM (OPRAM) is a natural extension of ORAM to the (more realistic) parallel setting where several processors make concurrent accesses to a shared memory. It is known that any OPRAM must incur a logarithmic work overhead and, for highly parallel RAMs, a logarithmic depth blowup (in the balls and bins model). Despite the significant recent advances, there is still a large gap: all existing OPRAM schemes incur a poly-logarithmic overhead either in total work or in depth. Our main result closes the aforementioned gap and provides an essentially optimal OPRAM scheme. Specifically, assuming one-way functions, we show that any Parallel RAM with memory capacity $N$ can be obliviously simulated in space $O(N)$, incurring only $O(\log N)$ blowup in (amortized) total work as well as in depth. Our transformation supports all PRAMs in the CRCW mode and the resulting simulation is in the CRCW mode as well.

  • On metric regularity of Reed-Muller codes
    by Alexey Oblaukhov on October 19, 2020 at 12:46 am

    In this work we study metric properties of the well-known family of binary Reed-Muller codes. Let $A$ be an arbitrary subset of the Boolean cube, and $\widehat{A}$ be the metric complement of $A$ --- the set of all vectors of the Boolean cube at the maximal possible distance from $A$. If the metric complement of $\widehat{A}$ coincides with $A$, then the set $A$ is called a metrically regular set. The problem of investigating metrically regular sets appeared when studying bent functions, which have important applications in cryptography and coding theory and are also one of the earliest examples of a metrically regular set. In this work we describe metric complements and establish the metric regularity of the codes $\mathcal{RM}(0,m)$ and $\mathcal{RM}(k,m)$ for $k \geqslant m-3$. Additionally, the metric regularity of the codes $\mathcal{RM}(1,5)$ and $\mathcal{RM}(2,6)$ is proved. Combined with previous results by Tokareva N. (2012) concerning duality of affine and bent functions, this establishes the metric regularity of most Reed-Muller codes with known covering radius. It is conjectured that all Reed-Muller codes are metrically regular.
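
    Since the definitions above are purely combinatorial, they can be checked by brute force in tiny dimensions. A minimal sketch (ours, exponential in the cube dimension, so illustrative only):

        from itertools import product

        def dist(u, v):
            return sum(a != b for a, b in zip(u, v))

        def metric_complement(A, n):
            # All vectors of the n-dimensional Boolean cube at the maximal
            # possible distance from the set A.
            cube = list(product((0, 1), repeat=n))
            d = {v: min(dist(v, a) for a in A) for v in cube}
            r = max(d.values())  # covering radius of A
            return {v for v in cube if d[v] == r}

        def is_metrically_regular(A, n):
            A = set(A)
            return metric_complement(metric_complement(A, n), n) == A

        # RM(0,2), the length-4 repetition code: its metric complement is
        # the set of weight-2 vectors, whose complement is again the code.
        A = {(0, 0, 0, 0), (1, 1, 1, 1)}
        print(is_metrically_regular(A, 4))  # True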

  • Metal: A Metadata-Hiding File-Sharing System
    by Weikeng Chen on October 18, 2020 at 11:36 pm

    File-sharing systems like Dropbox offer insufficient privacy because a compromised server can see the file contents in the clear. Although encryption can hide such contents from the servers, metadata leakage remains significant. The goal of our work is to develop a file-sharing system that hides metadata---including user identities and file access patterns. Metal is the first file-sharing system that hides such metadata from malicious users and that has a latency of only a few seconds. The core of Metal consists of a new two-server multi-user oblivious RAM (ORAM) scheme, which is secure against malicious users, a metadata-hiding access control protocol, and a capability sharing protocol. Compared with the state-of-the-art malicious-user file-sharing scheme PIR-MCORAM (Maffei et al. '17), which does not hide user identities, Metal hides the user identities and is 500x faster (in terms of amortized latency) or 10^5x faster (in terms of worst-case latency).

  • Low-gate Quantum Golden Collision Finding
    by Samuel Jaques on October 18, 2020 at 5:03 am

    The golden collision problem asks us to find a single, special collision among the outputs of a pseudorandom function. This generalizes meet-in-the-middle problems, and is thus applicable in many contexts, such as cryptanalysis of the NIST post-quantum candidate SIKE. The main quantum algorithms for this problem are memory-intensive, and the costs of quantum memory may be very high. The quantum circuit model implies a linear cost for random access, which annihilates the exponential advantage of the previous quantum collision-finding algorithms over Grover's algorithm or classical van Oorschot-Wiener. Assuming that quantum memory is costly to access but free to maintain, we provide new quantum algorithms for the golden collision problem with high memory requirements but low gate costs. Under the assumption of a two-dimensional connectivity layout, we provide better quantum parallelization methods for generic and golden collision finding. This lowers the quantum security of the golden collision and meet-in-the-middle problems, including SIKE.

  • Direct Anonymous Attestation with Optimal TPM Signing Efficiency
    by Kang Yang on October 17, 2020 at 8:09 pm

    Direct Anonymous Attestation (DAA) is an anonymous signature scheme, which allows the Trusted Platform Module (TPM), a small chip embedded in a host computer, to attest to the state of the host system, while preserving the privacy of the user. DAA provides two signature modes: fully anonymous signatures and pseudonymous signatures. One main goal of designing DAA schemes is to reduce the TPM signing workload as much as possible, as the TPM has only limited resources. In an optimal DAA scheme, the signing workload on the TPM will be no more than that required for a normal signature like ECSchnorr. To date, no scheme has achieved the optimal signing efficiency for both signature modes. In this paper, we propose the first DAA scheme which achieves the optimal TPM signing efficiency for both signature modes. In this scheme, the TPM takes only a single exponentiation to generate a signature, and this single exponentiation can be pre-computed. Our scheme can be implemented using the existing TPM 2.0 commands, and thus is compatible with the TPM 2.0 specification. We benchmarked the TPM 2.0 commands needed for three DAA use cases on an Infineon TPM 2.0 chip, and also implemented the host signing and verification algorithm for our scheme on a laptop with a 1.80GHz Intel Core i7-8550U CPU. Our experimental results show that our DAA scheme obtains a total signing time of about 144 ms for either signature mode (compared to an online signing time of about 65 ms). Based on our benchmark results for the pseudonymous signature mode, our scheme is roughly 2x (resp., 5x) faster than the existing DAA schemes supported by TPM 2.0 in terms of total (resp., online) signing efficiency. In addition, our DAA scheme supports selective attribute disclosure, which can satisfy more application requirements. We also extend our DAA scheme to support signature-based revocation and to guarantee privacy against subverted TPMs. The two extended DAA schemes keep the TPM signing efficiency optimal for both signature modes, and outperform existing related schemes in terms of signing performance.

  • Kachina - Foundations of Private Smart Contracts
    by Thomas Kerber on October 17, 2020 at 5:50 am

    Smart contracts present a uniform approach for deploying distributed computation and have become a popular means to develop security critical applications. A major barrier to adoption for many applications is the public nature of existing systems, such as Ethereum. Several systems satisfying various definitions of privacy and requiring various trust assumptions have been proposed; however, none achieved the universality and uniformity that Ethereum achieved for non-private contracts: One unified method to construct most contracts. We provide a unified security model for private smart contracts which is based on the Universal Composition (UC) model and propose a novel core protocol, Kachina, for deploying privacy-preserving smart contracts, which encompasses previous systems. We demonstrate the Kachina method of smart contract development, using it to construct a contract that implements privacy-preserving payments, along the lines of Zerocash, which is provably secure in the UC setting and facilitates concurrency.

  • When Theory Meets Practice: A Framework for Robust Profiled Side-channel Analysis
    by Stjepan Picek on October 17, 2020 at 2:40 am

    Profiling side-channel attacks are considered the most potent form of side-channel attacks. They consist of two steps. First, the adversary builds a leakage model using a device similar to the target one. This leakage model is then exploited to extract the secret information from the victim's device. These attacks can be seen as a classification problem, where the adversary needs to decide to what class (and consequently, the secret key) the traces collected from the victim's device belong. The research community has investigated profiling attacks in depth, mostly using an empirical approach. As a result, a theoretical framework for comprehensively analyzing profiling side-channel attacks is still missing. In this paper, we propose a theory-grounded framework capable of modeling and evaluating profiling side-channel analysis. The framework is based on the expectation estimation problem, which has strong theoretical foundations. We quantify the effects of perturbations injected at different points in our framework through a robustness analysis, where the perturbations represent sources of uncertainty associated with measurements, non-optimal classifiers, and countermeasures. Finally, we use our framework to evaluate the performance of different classifiers using publicly available traces.

  • An Instruction Set Extension to Support Software-Based Masking
    by Si Gao on October 16, 2020 at 10:07 am

    In both hardware and software, masking can represent an effective means of hardening an implementation against side channel attack vectors such as Differential Power Analysis (DPA). Focusing on software, however, the use of masking can present various challenges: specifically, it often 1) requires significant effort to translate any theoretical security properties into practice, and, even then, 2) imposes a significant overhead in terms of efficiency. To address both challenges, this paper explores use of an Instruction Set Extension (ISE) to support masking in software-based implementations of a range of (symmetric) cryptographic kernels including AES: we design, implement, and evaluate such an ISE, using RISC-V as the base ISA. Our ISE-supported first-order masked implementation of AES, for example, is an order of magnitude more efficient than a software-only alternative w.r.t. both execution latency and memory footprint; this renders it comparable to an unmasked implementation using the same metrics, while also being first-order secure.

  • DORY: An Encrypted Search System with Distributed Trust
    by Emma Dauterman on October 16, 2020 at 10:07 am

    Efficient, leakage-free search on encrypted data has remained an unsolved problem for the last two decades; efficient schemes are vulnerable to leakage-abuse attacks, and schemes that eliminate leakage are impractical to deploy. To overcome this tradeoff, we reexamine the system model. We surveyed five companies providing end-to-end encrypted filesharing to better understand what they require from an encrypted search system. Based on our findings, we design and build DORY, an encrypted search system that addresses real-world requirements and protects search access patterns; namely, when a user searches for a keyword over the files within a folder, the server learns only that a search happens in that folder, but does not learn which documents match the search, the number of documents that match, or other information about the keyword. DORY splits trust between multiple servers to protect against a malicious attacker who controls all but one of the servers. We develop new cryptographic and systems techniques to meet the efficiency and trust model requirements outlined by the companies we surveyed. We implement DORY and show that it performs orders of magnitude better than a baseline built on ORAM. Parallelized across 8 servers, each with 16 CPUs, DORY takes 116ms to search roughly 50K documents and 862ms to search over 1M documents.

  • Post-Quantum Secure Architectures for Automotive Hardware Secure Modules
    by Wen Wang on October 16, 2020 at 10:00 am

    The rapid development of information technology in the automotive industry has driven increasing requirements on incorporating security functionalities in the in-vehicle architecture, which is usually realized by adding a Hardware Secure Module (HSM) to the Electronic Control Unit (ECU). Secure communications can therefore be enforced by carrying out secret cryptographic computations within the HSM using the embedded hardware accelerators. However, there is no common standard for designing the architecture of an automotive HSM. The automotive industry desires a future common automotive HSM design that not only meets the increasing performance demands, but also defends against future attacks by adversaries exploiting large-scale quantum computers. The arrival of such quantum computers motivates the investigation into post-quantum cryptography (PQC), which will retain the security of an HSM in the future. We analyzed the candidates in NIST's PQC standardization process, and proposed new sets of hardware accelerators for the future generation of automotive HSMs. Our evaluation results show that building a post-quantum secure automotive HSM is feasible and can meet the hard requirements imposed by a modern vehicle ECU.

  • Strength in Numbers: Improving Generalization with Ensembles in Profiled Side-channel Analysis
    by Guilherme Perin on October 16, 2020 at 8:59 am

    The adoption of deep neural networks for profiled side-channel attacks provides powerful options for leakage detection and key retrieval of secure products. When training a neural network for side-channel analysis, it is expected that the trained model can implement an approximation function that can detect leaking side-channel samples and, at the same time, be insensitive to noisy (or non-leaking) samples. This amounts to a generalization problem, where the model must recognize in a separate test set the main representations learned from the training set. In this paper, we first discuss how output class probabilities represent a strong metric when conducting side-channel analysis. Further, we observe that these output probabilities are sensitive to small changes, like the selection of specific test traces or the weight initialization of a neural network. Next, we discuss hyper-parameter tuning, where one commonly keeps only a single model out of dozens of trained ones, each of which yields different output probabilities. We show how ensembles of machine learning models based on averaged class probabilities can improve generalization. Our results emphasize that ensembles increase the performance of a profiled side-channel attack and reduce the variance of results stemming from different groups of hyper-parameters, regardless of the selected dataset or leakage model.
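
    A minimal sketch of the core ensemble step (our illustration, not the paper's code): average the output class probabilities of several independently trained models before ranking key candidates.

        import numpy as np

        def ensemble_predict(prob_list):
            # prob_list: one array of shape (n_traces, n_classes) per model.
            # Returns the averaged class probabilities, same shape.
            return np.mean(np.stack(prob_list), axis=0)

        # Hypothetical example: three models, 5 attack traces, 256 classes.
        rng = np.random.default_rng(0)
        models = [rng.dirichlet(np.ones(256), size=5) for _ in range(3)]
        avg = ensemble_predict(models)
        print(avg.shape, avg.sum(axis=1))  # (5, 256); rows still sum to 1

    In an attack, one would then, e.g., accumulate the log-probabilities of each key hypothesis over traces using these averaged probabilities.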

  • Keep it Unsupervised: Horizontal Attacks Meet Deep Learning
    by Guilherme Perin on October 16, 2020 at 8:51 am

    To mitigate side-channel attacks, real-world implementations of public-key cryptosystems adopt state-of-the-art countermeasures based on randomization of the private or ephemeral keys. Usually, for each private key operation, a "scalar blinding" is performed using 32 or 64 randomly generated bits. Nevertheless, horizontal attacks based on a single trace still pose serious threats to protected ECC or RSA implementations. If the secrets learned through a single-trace attack contain too many wrong (or noisy) bits, the cryptanalysis methods for recovering the remaining bits become impractical due to time and computational constraints. This paper proposes a deep learning-based framework to iteratively correct partially correct secret keys resulting from a clustering-based horizontal attack. By testing the trained network on scalar multiplication (or exponentiation) traces, we demonstrate that a deep neural network can significantly reduce the number of error bits in randomized scalars (or exponents). When a simple horizontal attack can recover around 52% of the private key bits, the proposed iterative framework improves the private key correctness to 100%. Our attack model remains fully unsupervised and does not require knowing where the erroneous or noisy bits are located in each separate randomized private key.
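
    For concreteness, the blinding countermeasure the abstract refers to typically has the following shape (a sketch of Coron-style scalar blinding, not the attacked implementations):

        import secrets

        def blind_scalar(k: int, group_order: int, blind_bits: int = 32) -> int:
            # k' = k + r*order for a random r of blind_bits bits. Since
            # k' = k (mod order), [k']P = [k]P, but the bit pattern that the
            # scalar-multiplication loop processes changes on every call.
            r = secrets.randbits(blind_bits)
            return k + r * group_order

        order = 101  # hypothetical toy group order
        print(blind_scalar(57, order) % order == 57)  # True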

  • A Side-Channel Resistant Implementation of SABER
    by Michiel Van Beirendonck on October 16, 2020 at 5:55 am

    The candidates for the NIST Post-Quantum Cryptography standardization have undergone extensive studies on efficiency and theoretical security, but research on their side-channel security is largely lacking. This remains a considerable obstacle for their real-world deployment, where side-channel security can be a critical requirement. This work describes a side-channel resistant instance of Saber, one of the lattice-based candidates, using masking as a countermeasure. Saber proves to be very efficient to mask due to two specific design choices: power-of-two moduli, and limited noise sampling of learning with rounding. A major challenge in masking lattice-based cryptosystems is the integration of bit-wise operations with arithmetic masking, requiring algorithms to securely convert between masked representations. The described design includes a novel primitive for masked logical shifting on arithmetic shares, and adapts an existing masked binomial sampler to Saber. An implementation is provided for an ARM Cortex-M4 microcontroller, and its side-channel resistance is experimentally demonstrated. The masked implementation features a 2.5x overhead factor, significantly lower than the 5.7x previously reported for a masked variant of NewHope. Masked key decapsulation requires less than 3,000,000 cycles on the Cortex-M4 and consumes less than 12kB of dynamic memory, making it suitable for deployment in embedded platforms.

  • Efficient Composable Oblivious Transfer from CDH in the Global Random Oracle Model
    by Bernardo David on October 16, 2020 at 12:50 am

    Oblivious Transfer (OT) is a fundamental cryptographic protocol that finds a number of applications, in particular, as an essential building block for two-party and multi-party computation. We construct the first universally composable (UC) protocol for oblivious transfer secure against active static adversaries based on the Computational Diffie-Hellman (CDH) assumption. Our protocol is proven secure in the observable Global Random Oracle model. We start by constructing a protocol that realizes an OT functionality with a selective failure issue, but one that is shown to be sufficient for instantiating efficient OT extension protocols. In terms of complexity, this protocol only requires the computation of 6 modular exponentiations and the communication of 5 group elements, five binary strings of security parameter length, and two binary strings of message length. Finally, we lift this weak construction to obtain a protocol that realizes the standard OT functionality (without any selective failures) at an additional cost of computing 9 modular exponentiations and communicating 4 group elements, four binary strings of security parameter length and two binary strings of message length. As an intermediate step before constructing our CDH based protocols, we design generic OT protocols from any OW-CPA secure public-key encryption scheme with certain properties, which could potentially be instantiated from assumptions other than CDH.

  • FORTIS: FORgeable TImeStamps Thwart Selfish Mining
    by Osman Biçer on October 16, 2020 at 12:50 am

    The selfish mining (SM) attack (Eyal and Sirer, CACM ’13) endangers Proof-of-Work blockchains by allowing a rational mining pool with a hash power (α) much less than 50% of the whole network to deviate from the honest mining algorithm and to steal from the fair shares of honest miners. Since then, the attack has been studied extensively in various settings, for understanding its interesting dynamics, optimizing it, and mitigating it. In this context, first, we propose generalized formulas for the calculation of revenue and profitability from SM-type attacks. Second, we propose two different SM-type attacks on the state-of-the-art mitigation algorithm “Freshness Preferred” (Heilman, FC ’14). Our Oracle mining attack works in the setting with forgeable timestamps (i.e., if timestamps are not generated by an authority) and our Bold mining attack works in the setting with either forgeable or unforgeable timestamps (i.e., even if an authority issues timestamps). Although the use of timestamps would be promising for selfish mining mitigation, the analyses of our attacks show that Freshness Preferred is quite vulnerable in the presence of rational miners, as any rational miner with α > 0 can directly benefit from our attacks. Third, we propose an SM mitigation algorithm Fortis with forgeable timestamps, which protects the honest miners’ shares from any attacker with α < 27.0% under all known SM-type attacks.

  • Sword: An Opaque Blockchain Protocol
    by Farid Elwailly on October 16, 2020 at 12:49 am

    I describe a blockchain design that hides the transaction graph from Blockchain Analyzers. The design is based on the realization that today the miner creating a block needs enough information to verify the validity of transactions, which makes details about the transactions public and thus allows blockchain analysis. Some protocols, such as Mimblewimble, obscure the transaction amounts but not the source of the funds, which is enough to allow for analysis. The insight in this technical note is that the block creator can be restricted to the task of ensuring no double spends. The task of actually verifying transaction balances really belongs to the receiver. The receiver is the one motivated to verify that she is receiving a valid transaction output, since she has to convince the next receiver that the balances are valid; otherwise no one will accept her spending transaction. The bulk of the transaction can thus be encrypted in such a manner that only the receiver can decrypt and examine it. Opening this transaction also allows the receiver to open previous transactions, letting her work her way backward along the chain until she arrives at the coin-generation blocks and completely verifies the validity of the transaction. Since transactions are encrypted on the blockchain, a blockchain analyzer cannot create a transaction graph until he is the receiver of a transaction that allows backward tracing through to some target transaction.

  • Improved attacks against key reuse in learning with errors key exchange
    by Nina Bindel on October 16, 2020 at 12:49 am

    Basic key exchange protocols built from the learning with errors (LWE) assumption are insecure if secret keys are reused in the face of active attackers. One example of this is Fluhrer's attack on the Ding, Xie, and Lin (DXL) LWE key exchange protocol, which exploits leakage from the signal function for error correction. Protocols aiming to achieve security against active attackers generally use one of two techniques: demonstrating well-formed keyshares using re-encryption like in the Fujisaki--Okamoto transform; or directly combining multiple LWE values, similar to MQV-style Diffie--Hellman-based protocols. In this work, we demonstrate improved and new attacks exploiting key reuse in several LWE-based key exchange protocols. First, we show how to greatly reduce the number of samples required to carry out Fluhrer's attack and reconstruct the secret period of a noisy square waveform, speeding up the attack on DXL key exchange by a factor of over 200. We show how to adapt this to attack a protocol of Ding, Branco, and Schmitt (DBS) designed to be secure with key reuse, breaking the claimed 128-bit security level in under a minute. We also apply our technique to a second authenticated key exchange protocol of DBS that uses an additive MQV design, although in this case our attack makes use of ephemeral key compromise powers of the eCK security model, which was not in scope of the claimed BR-model security proof. Our results show that building secure authenticated key exchange protocols directly from LWE remains a challenging and mostly open problem.

  • Multivariate Cryptographic Primitive based on the product of the roots of a polynomial over a field
    by Borja Gómez on October 16, 2020 at 12:48 am

    Cryptographic primitives in Multivariate Public Key Cryptography are of considerable interest, especially in the quadratic case. These primitives classify the families of schemes that we encounter in this field. In this paper, the reader can find a new primitive based on the product of the roots of a polynomial over a field, where the coefficients of this polynomial are the elementary symmetric polynomials in $n$ variables, which guarantees a solution when inverting the scheme. Moreover, a cryptosystem and a digital signature scheme are built on top of this primitive, where the distinct parametrizations and criteria that define the schemes are discussed, along with applications of attacks available in the literature.
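
    The abstract does not spell out the trapdoor, but the algebraic fact it builds on is Vieta's relation between a polynomial's coefficients and the elementary symmetric polynomials of its roots, illustrated below with sympy for $n = 3$ (our sketch, not the paper's construction):

        import sympy as sp

        x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
        roots = (x1, x2, x3)

        # Expanding the monic polynomial with these roots exposes Vieta's
        # relations: the coefficient of t^(n-k) is (-1)^k * e_k(roots).
        p = sp.expand(sp.prod([t - r for r in roots]))
        print(sp.Poly(p, t).all_coeffs())
        # [1, -x1 - x2 - x3, x1*x2 + x1*x3 + x2*x3, -x1*x2*x3]
        # In particular, the product of the roots is, up to sign, the
        # constant term e_n(x1, ..., xn).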

  • Secure Quantum Two-Party Computation: Impossibility and Constructions
    by Michele Ciampi on October 16, 2020 at 12:48 am

    Secure two-party computation considers the problem of two parties computing a joint function of their private inputs without revealing anything beyond the output of the computation. In this work, we take the first steps towards understanding the setting in which the two parties want to evaluate a joint quantum functionality while using only a classical communication channel between them. Our first result indicates that it is in general impossible to realize a two-party quantum functionality against malicious quantum adversaries with black-box simulation, relying only on classical channels. The negative result stems from reducing the existence of a black-box simulator to the existence of an extractor for a classical proof of quantum knowledge, which in turn leads to a violation of the quantum no-cloning theorem. Towards the positive results, we first introduce the notion of Oblivious Quantum Function Evaluation (OQFE). An OQFE is a two-party quantum cryptographic primitive with one fully classical party (Alice) whose input is (a classical description of a) quantum unitary, $U$, and a quantum party (Bob) whose input is a quantum state, $\psi$. In particular, Alice receives the classical output corresponding to the measurement of $U(\psi)$ while Bob receives no output. At the same time, the functionality guarantees that Bob remains oblivious to Alice's input $U$, while Alice learns nothing about $\psi$ beyond what can be learned from the output of the computation. We present two concrete constructions, one secure against semi-honest parties and the other secure against malicious parties. Due to the no-go result mentioned above, we consider what is arguably the best possible notion obtainable in our model with respect to malicious adversaries: one-sided simulation security. This notion protects the input of one party (the quantum Bob) in the standard simulation-based sense, and protects the privacy of the other party's input (the classical Alice). We realize our protocol relying on the assumption of quantum-secure injective homomorphic trapdoor one-way functions, which in turn rely on the learning with errors problem. As a result, we put forward a first, simple and modular construction of secure one-sided quantum two-party computation and quantum oblivious transfer over classical networks.

  • Multi-Input Quadratic Functional Encryption from Pairings
    by Junichi Tomida on October 16, 2020 at 12:48 am

    Multi-input functional encryption (MIFE) is a generalization of functional encryption that allows a decryptor to learn only function values $f(x_{1},\ldots,x_{n})$ from ciphertexts of $x_{1},\ldots,x_{n}$. We present the first MIFE schemes for quadratic functions (MQFE) from pairings. We first observe that public-key MQFE can be obtained from inner product functional encryption in a relatively simple manner, whereas obtaining secret-key MQFE from standard assumptions is completely nontrivial. The main contribution of this paper is to construct the first secret-key MQFE scheme that achieves indistinguishability-based selective security against unbounded collusion under the standard bilateral matrix Diffie-Hellman assumption. All previous MIFE schemes either support only inner products (linear functions) or rely on non-standard cryptographic assumptions such as indistinguishability obfuscation or multi-linear maps. Thus, our schemes are the first MIFE for functionality beyond linear functions from polynomial hardness of standard assumptions.

  • Entropy Estimation of Physically Unclonable Functions
    by Mitsuru Shiozaki on October 16, 2020 at 12:47 am

    Physically unclonable functions (PUFs) are gaining attention as a promising cryptographic technique; the main applications using PUFs include challenge-response authentication and key generation (key storage). When a PUF is applied to these applications, min-entropy estimation is essential. Min-entropy is a measure of the lower bound of the unpredictability of PUF responses. A prominent scheme for estimating min-entropy is the National Institute of Standards and Technology (NIST) specification (SP) 800-90B. It includes several statistical tests and ten kinds of estimators aimed at estimating the min-entropy of random number generators (RNGs). Several studies have estimated the min-entropy of PUFs as well as that of RNGs by using SP 800-90B. In this paper, we point out two problems with using this scheme to estimate the min-entropy of PUFs. One is that the estimation results vary widely with the ordering of the PUF responses. The other is that the entropy estimation suite of SP 800-90B can overestimate PUF min-entropy. Both problems are related to entropy loss caused by manufacturing variations in circuits and transistors other than the PUF sources (the circuits and transistors used to extract intrinsic physical properties and to generate device-unique responses), a situation referred to as ``multiple sources.'' We call these additional circuits and transistors ``entropy-loss sources,'' in contrast to the PUF sources. We applied three orderings to the PUF responses of our static random-access memory (SRAM) PUF and our complementary metal-oxide-semiconductor (CMOS) image sensor with a PUF (CIS PUF): row-direction ordering, column-direction ordering, and random-shuffle ordering. We demonstrated that the estimated min-entropy varies with the ordering. In particular, we found that arranging the PUF responses in readout order results in the overestimation of the min-entropy. We used numerical simulation to create numerical PUFs with the entropy-loss source. We demonstrated that the entropy estimation suite overestimates their entropy.
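
    For reference, the simplest of the SP 800-90B estimators, the most-common-value (MCV) estimate of Section 6.3.1, looks as follows (our sketch; note that this particular estimator uses only symbol counts, while several other estimators in the suite are sequence-dependent, which is why the ordering of PUF responses matters):

        import math
        from collections import Counter

        def mcv_min_entropy(samples):
            # 99% upper confidence bound on the most common value's
            # probability, then min-entropy per sample in bits.
            n = len(samples)
            p_hat = max(Counter(samples).values()) / n
            p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
            return -math.log2(p_u)

        bits = [0, 1, 1, 0, 1, 0, 0, 1] * 125  # hypothetical 1000 response bits
        print(mcv_min_entropy(bits))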

  • Anonymous, Attribute Based, Decentralized, Secure, and Fair e-Donation
    by Osman Biçer on October 15, 2020 at 6:57 pm

    E-cash and cryptocurrency schemes have been a focus of applied cryptography for a long time. However, we acknowledge the continuing need for a cryptographic protocol that provides global scale, decentralized, secure, and fair delivery of donations. Such a protocol would replace central trusted entities (e.g., charity organizations) and guarantee the privacy of the involved parties (i.e., donors and recipients of the donations). In this work, we target this online donation problem and propose a practical solution for it. First, we propose a novel decentralized e-donation framework, along with its operational components and security definitions. Our framework relies on a public ledger that can be realized via a distributed blockchain. Second, we instantiate our e-donation framework with a practical scheme employing privacy-preserving cryptocurrencies and attribute-based signatures. Third, we provide implementation results showing that our operations have feasible computation and communication costs. Finally, we prove the security of our e-donation scheme via formal reductions to the security of the underlying primitives.

  • Generic Constructions of Secure-Channel Free Searchable Encryption with Adaptive Security
    by Keita Emura on October 15, 2020 at 6:21 pm

    For searching keywords against encrypted data, the public key encryption scheme with keyword search (PEKS) and its extension called secure-channel free PEKS (SCF-PEKS) have been proposed. In SCF-PEKS, a receiver makes a trapdoor for a keyword, and uploads it to a server. A sender computes an encrypted keyword, and sends it to the server. The server executes the searching procedure (called the test algorithm, which takes as inputs an encrypted keyword, a trapdoor, and the secret key of the server). In this paper, we extend the security of SCF-PEKS, calling it adaptive SCF-PEKS, wherein an adversary (modeled as a ``malicious-but-legitimate" receiver) is allowed to issue test queries \emph{adaptively}, and show that adaptive SCF-PEKS can be generically constructed from anonymous identity-based encryption (anonymous IBE) only. That is, for constructing adaptive SCF-PEKS we do not require any additional cryptographic primitive compared to the Abdalla et al. PEKS construction (J. Cryptology 2008), even though adaptive SCF-PEKS requires additional functionalities. Note that our generic construction needs to apply the KEM/DEM framework (a.k.a. hybrid encryption), where KEM stands for key encapsulation mechanism, and DEM stands for data encapsulation mechanism. We also show that there is a class of anonymous IBE that can be applied for constructing adaptive SCF-PEKS without using hybrid encryption, and propose an adaptive SCF-PEKS construction based on this IBE. Although our second construction is not fully generic, it is more efficient than the first, since we can exclude the DEM part. Finally, we instantiate an adaptive SCF-PEKS scheme (via our second construction) that achieves a similar level of efficiency for the costs of the test procedure and encryption, compared to the (non-adaptive secure) SCF-PEKS scheme by Fang et al. (CANS 2009).

  • Lattice-Based Blind Signatures, Revisited
    by Eduard Hauck on October 15, 2020 at 1:48 pm

    We observe that all previously known lattice-based blind signature schemes contain subtle flaws in their security proofs (e.g., R\"uckert, ASIACRYPT '08) or can be attacked (e.g., BLAZE by Alkadri et al., FC '20). Motivated by this, we revisit the problem of constructing blind signatures from standard lattice assumptions. We propose a new three-round lattice-based blind signature scheme whose security can be proved, in the random oracle model, from the standard SIS assumption. Our starting point is a modified version of the (insecure) BLAZE scheme, which is itself based on Lyubashevsky's three-round identification scheme combined with a new aborting technique to reduce the correctness error. Our proof builds upon and extends the recent modular framework for blind signatures of Hauck, Kiltz, and Loss (EUROCRYPT '19). It also introduces several new techniques to overcome the additional challenges posed by the correctness error that is inherent in all lattice-based constructions. While our construction is mostly of theoretical interest, we believe it to be an important stepping stone for future works in this area.

  • An Algorithmic Reduction Theory for Binary Codes: LLL and more
    by Thomas Debris-Alazard on October 15, 2020 at 1:01 pm

    In this article, we propose an adaptation of the algorithmic reduction theory of lattices to binary codes. This includes the celebrated LLL algorithm (Lenstra, Lenstra, Lovasz, 1982), as well as adaptations of associated algorithms such as the Nearest Plane Algorithm of Babai (1986). Interestingly, the adaptation of LLL to binary codes can be interpreted as an algorithmic version of the bound of Griesmer (1960) on the minimal distance of a code. Using these algorithms, we demonstrate ---both with a heuristic analysis and in practice--- a small polynomial speed-up over the Information-Set Decoding algorithm of Lee and Brickell (1988) for random binary codes. This appears to be the first such speed-up that is not based on a time-memory trade-off. The above speed-up should be read as a very preliminary example of the potential of a reduction theory for codes, for example in cryptanalysis. In constructive cryptography, this algorithmic reduction theory could for example also be helpful for designing trapdoor functions from codes.
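
    The Griesmer bound mentioned above is easy to state and check: any binary $[n,k,d]$ linear code must satisfy $n \geq \sum_{i=0}^{k-1} \lceil d/2^i \rceil$. A quick sketch:

        import math

        def griesmer_lower_bound(k: int, d: int) -> int:
            # Griesmer (1960): a binary [n, k, d] linear code needs
            # n >= sum_{i=0}^{k-1} ceil(d / 2^i).
            return sum(math.ceil(d / 2 ** i) for i in range(k))

        # RM(1,4) is a [16, 5, 8] code and meets the bound with equality.
        print(griesmer_lower_bound(5, 8))  # 16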

  • Alibi: A Flaw in Cuckoo-Hashing based Hierarchical ORAM Schemes and a Solution
    by Brett Hemenway Falk on October 15, 2020 at 10:53 am

    There once was a table of hashes
    That held extra items in stashes
    It all seemed like bliss
    But things went amiss
    When the stashes were stored in the caches

    The first Oblivious RAM protocols introduced the ``hierarchical solution'' (STOC '90), where the servers store a series of hash tables of geometrically increasing capacities. Each ORAM query would read a small number of locations from each level of the hierarchy, and each level of the hierarchy would be reshuffled and rebuilt at geometrically increasing intervals to ensure that no single query was ever repeated twice at the same level. This yielded an ORAM protocol with polylogarithmic (amortized) overhead. Subsequent works extended and improved the hierarchical solution, replacing traditional hashing with cuckoo hashing (ICALP '11) and then with cuckoo hashing using a combined stash (Goodrich et al., SODA '12). In this work, we identify a subtle flaw in the protocol of Goodrich et al. (SODA '12) that uses cuckoo hashing with a stash in the hierarchical ORAM solution. We give a concrete distinguishing attack against this type of hierarchical ORAM that uses cuckoo hashing with a \emph{combined} stash. This security flaw has propagated to at least 5 subsequent hierarchical ORAM protocols, including the recent optimal ORAM scheme, OptORAMa (Eurocrypt '20). In addition to our attack, we identify a simple fix that does not increase the asymptotic complexity. We note, however, that our attack only affects more recent \emph{hierarchical ORAMs}, but does not affect the early protocols that predate the use of cuckoo hashing, or other types of ORAM solutions (e.g. Path ORAM or Circuit ORAM).
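
    To fix ideas, here is a toy two-table cuckoo hash with a stash (our illustration of the building block only; it implements neither the ORAM protocol nor the attack):

        import random

        def cuckoo_insert(tables, hashes, stash, key, max_kicks=32):
            # Try to place the key in either table; on conflict, evict a
            # random occupant and retry; after max_kicks evictions, the
            # item overflows into the (combined) stash.
            for _ in range(max_kicks):
                for t, h in zip(tables, hashes):
                    slot = h(key)
                    if t[slot] is None:
                        t[slot] = key
                        return
                i = random.choice((0, 1))
                slot = hashes[i](key)
                key, tables[i][slot] = tables[i][slot], key  # evict, retry
            stash.append(key)

        size = 8
        tables = [[None] * size, [None] * size]
        hashes = (lambda k: hash((0, k)) % size, lambda k: hash((1, k)) % size)
        stash = []
        for k in range(20):  # more items than the 16 slots: some must spill
            cuckoo_insert(tables, hashes, stash, k)
        print(stash)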

  • CHIP and CRISP: Compromise Resilient Identity-based Symmetric PAKEs
    by Moni Naor on October 15, 2020 at 10:32 am

    Password Authenticated Key Exchange (PAKE) protocols allow parties to establish a shared key based only on the knowledge of a low entropy password. In this work, we propose a novel notion called ``Identity-based PAKE'' (iPAKE) -- providing resilience against compromise of one or more parties. iPAKE protocols protect all parties in the symmetric setting, whereas in Asymmetric PAKE (aPAKE) only one party (a server) is protected w.r.t. compromise. Binding each party to its identity prevents impersonation between devices with different roles and allows the revocation of compromised parties. We achieve this by using ideas from Identity-Based Key-Agreement (IB-KA) while using only a low entropy password, without requiring a trusted center. We further strengthen the notion by introducing ``Strong iPAKE'' (siPAKE) that is additionally immune to pre-computation (analogous to ``Strong aPAKE'' (saPAKE) strengthening of aPAKE). To mount an (inevitable) offline dictionary attack, an adversary must first compromise a device and only then start an exhaustive search over the entire password dictionary. Rather than storing its password in the clear, each party derives a password file using its identity and a secret random salt (``salted hash''). The challenge is that although the random salts are independently selected, any pair of parties should be able to establish a cryptographically secure shared key from these files. We formalize the iPAKE and siPAKE notions in the Universally Composable (UC) framework. We propose CHIP: a compiler from PAKE to iPAKE using IB-KA and prove its UC-security in the Random Oracle Model (ROM). We then present CRISP: a construction of siPAKE from any PAKE using bilinear groups with ``Hash2Curve''. We prove CRISP's UC-security in the Generic Group Model (GGM) and show that each offline password guess requires at least one pairing operation.

  • Combining Optimization Objectives: New Machine-Learning Attacks on Strong PUFs
    by Johannes Tobisch on October 15, 2020 at 8:22 am

    Strong Physical Unclonable Functions (PUFs), as a promising security primitive, are supposed to be a lightweight alternative to classical cryptography for purposes such as device authentication. Most of the proposed candidates, however, have been plagued by machine-learning attacks breaking their security claims. The Interpose PUF (iPUF), which was introduced at CHES 2019, was explicitly designed with state-of-the-art machine-learning attacks in mind and is supposed to be impossible to break by classical and reliability attacks. In this paper, we analyze its vulnerability to reliability attacks. Despite the increased difficulty, these attacks are still feasible, contrary to the original authors’ claim. We explain how adding constraints to the machine-learning objective streamlines reliability attacks and allows us to model all individual components of an iPUF successfully. In order to build a practical attack, we make several novel contributions. First, we demonstrate that reliability attacks can be performed not only with CMA-ES but also with gradient-based optimization. Second, we show that the switch to gradient-based reliability attacks makes it possible to combine reliability attacks, weight constraints, and Logistic Regression (LR) into a single optimization objective. This framework makes machine-learning attacks more efficient, as it exploits knowledge of responses and reliability information at the same time. Third, we show that a differentiable model of the iPUF exists and how it can be utilized in a combined reliability attack. We confirm that iPUFs are harder to break than regular XOR Arbiter PUFs. However, we are still able to break (1,10)-iPUF instances, which were originally assumed to be secure, with less than 10^7 PUF response queries.
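
    The models targeted by such attacks are usually variants of the standard additive delay model of an arbiter PUF, in which the response is the sign of a linear function of a challenge feature vector. A small sketch of that model, and of why unreliability leaks information (our illustration, not the paper's iPUF model):

        import numpy as np

        rng = np.random.default_rng(1)

        def phi(C):
            # Standard parity feature map for an n-stage arbiter PUF:
            # Phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant feature.
            prods = np.cumprod((1 - 2 * C)[:, ::-1], axis=1)[:, ::-1]
            return np.hstack([prods, np.ones((C.shape[0], 1))])

        n, m = 64, 5000
        w = rng.normal(size=n + 1)           # delay-difference weights
        C = rng.integers(0, 2, size=(m, n))  # random challenges
        delay = phi(C) @ w
        responses = (delay > 0).astype(int)

        # Challenges with small |delay| flip under noise; observing which
        # responses flip across repeated measurements is the reliability
        # side information such attacks exploit.
        noisy = (delay + rng.normal(scale=0.5, size=m) > 0).astype(int)
        print("bit error rate under noise:", np.mean(noisy != responses))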

  • Towards Defeating Backdoored Random Oracles: Indifferentiability with Bounded Adaptivity
    by Yevgeniy Dodis on October 15, 2020 at 7:02 am

    In the backdoored random-oracle (BRO) model, besides access to a random function $H$, adversaries are provided with a backdoor oracle that can compute arbitrary leakage functions $f$ of the function table of $H$. Thus, an adversary would be able to invert points, find collisions, test for membership in certain sets, and more. This model was introduced in the work of Bauer, Farshim, and Mazaheri (Crypto 2018) and extends the auxiliary-input idealized models of Unruh (Crypto 2007), Dodis, Guo, and Katz (Eurocrypt 2017), Coretti et al. (Eurocrypt 2018), and Coretti, Dodis, and Guo (Crypto 2018). It was shown that certain security properties, such as one-wayness, pseudorandomness, and collision resistance can be re-established by combining two independent BROs, even if the adversary has access to both backdoor oracles. In this work we further develop the technique of combining two or more independent BROs to render their backdoors useless in a more general sense. More precisely, we study the question of building an indifferentiable and backdoor-free random function by combining multiple BROs. Achieving full indifferentiability in this model seems very challenging at the moment. We however make progress by showing that the xor combiner goes well beyond security against preprocessing attacks and offers indifferentiability as long as the adaptivity of queries to different backdoor oracles remains logarithmic in the input size of the BROs. We even show that an extractor-based combiner of three BROs can achieve indifferentiability with respect to a linear adaptivity of backdoor queries. Furthermore, a natural restriction of our definition gives rise to a notion of indifferentiability with auxiliary input, for which we give two positive feasibility results. To prove these results we build on and refine techniques by Göös et al. (STOC 2015) and Kothari et al. (STOC 2017) for decomposing distributions with high entropy into distributions with more structure and show how they can be applied in the more involved adaptive settings.
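
    The xor combiner analyzed in this work has a very simple shape; the sketch below only illustrates that shape, with two standard hash functions standing in for the two independent BROs:

        import hashlib

        def xor_combiner(x: bytes) -> bytes:
            # The combiner studied here: H(x) = H1(x) xor H2(x), built from
            # two (ideally independent) random oracles.
            h1 = hashlib.sha256(x).digest()
            h2 = hashlib.sha3_256(x).digest()
            return bytes(a ^ b for a, b in zip(h1, h2))

        print(xor_combiner(b"example").hex())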

  • A Secret-Sharing Based MPC Protocol for Boolean Circuits with Good Amortized Complexity
    by Ignacio Cascudo on October 15, 2020 at 6:08 am

    We present a new secure multiparty computation protocol in the preprocessing model that allows for the evaluation of a number of instances of a boolean circuit in parallel, with a small online communication complexity per instance of 10 bits per party and multiplication gate. Our protocol is secure against an active dishonest majority, and can also be transformed, via known techniques, into a protocol for the evaluation of a single “well-formed” boolean circuit with the same complexity per multiplication gate, at the cost of some overhead that depends on the topology of the circuit. Our protocol uses an approach introduced recently in the setting of honest majority and information-theoretical security which, using an algebraic notion called reverse multiplication friendly embeddings, essentially transforms a batch of evaluations of an arithmetic circuit over a small field into one evaluation of another arithmetic circuit over a larger field. To obtain security against a dishonest majority we combine this approach with the well-known SPDZ protocol that operates over a large field. Structurally our protocol is most similar to MiniMAC, a protocol which bases its security on the use of error-correcting codes, but our protocol has a communication complexity which is half of that of MiniMAC when the best available binary codes are used. This makes it fully compatible with the technique from MiniMAC that allows adapting the protocol to the computation of a well-formed boolean circuit. Compared with a certain variant of MiniMAC that utilizes codes over large fields, our communication complexity is slightly worse; however, that variant of MiniMAC needs a much larger preprocessing than ours. We also show that our protocol has smaller amortized communication complexity than Committed MPC, a protocol for general fields based on homomorphic commitments, if we use the best available constructions for those commitments. Finally, we construct a preprocessing phase from oblivious transfer based on ideas from MASCOT and Committed MPC.

  • MuSig-DN: Schnorr Multi-Signatures with Verifiably Deterministic Nonces
    by Jonas Nick on October 15, 2020 at 4:33 am

    MuSig is a multi-signature scheme for Schnorr signatures, which supports key aggregation and is secure in the plain public key model. Standard derandomization techniques for discrete logarithm-based signatures such as RFC 6979, which make the signing procedure immune to catastrophic failures in the randomness generation, are not applicable to multi-signatures as an attacker could trick an honest user into producing two different partial signatures with the same randomness, which would reveal the user's secret key. In this paper, we propose a variant of MuSig in which signers generate their nonce deterministically as a pseudorandom function of the message and all signers' public keys and prove that they did so by providing a non-interactive zero-knowledge proof to their cosigners. The resulting scheme, which we call MuSig-DN, is the first Schnorr multi-signature scheme with deterministic signing. Therefore its signing protocol is robust against failures in the randomness generation as well as attacks trying to exploit the statefulness of the signing procedure, e.g., virtual machine rewinding attacks. As an additional benefit, a signing session in MuSig-DN requires only two rounds instead of three as required by all previous Schnorr multi-signatures including MuSig. To instantiate our construction, we identify a suitable algebraic pseudorandom function and provide an efficient implementation of this function as an arithmetic circuit. This makes it possible to realize MuSig-DN efficiently using zero-knowledge proof frameworks for arithmetic circuits which support inputs given in Pedersen commitments, e.g., Bulletproofs. We demonstrate the practicality of our technique by implementing it for the secp256k1 elliptic curve used in Bitcoin.
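
    The first step of the scheme, deterministic nonce derivation from the message and all signers' public keys, can be pictured as follows (HMAC-SHA256 is only a placeholder PRF in this sketch; MuSig-DN itself uses an algebraic PRF so that correct derivation can additionally be proven to the cosigners in zero knowledge, which the sketch omits):

        import hashlib
        import hmac

        def derive_nonce(seckey: bytes, all_pubkeys: bytes, message: bytes) -> bytes:
            # Pseudorandom function of the message and all signers' public
            # keys, keyed by the signer's secret. Without the accompanying
            # ZK proof, this alone would not be safe in the multi-signer
            # setting, since cosigners could deviate undetected.
            return hmac.new(seckey, all_pubkeys + message, hashlib.sha256).digest()

        nonce = derive_nonce(b"\x11" * 32, b"pk1|pk2|pk3", b"message")
        print(nonce.hex())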

  • The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free
    by Gianluca Brian on October 15, 2020 at 4:28 am

    We show that noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO'09 and SICOMP'12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. We also provide a complete picture of the relationships between different noisy-leakage models, and prove lower bounds showing that our reductions are nearly optimal. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS'19) for certain functions do not only require long transcripts, but also necessarily need to reveal enough information about the inputs.

  • Compact Dilithium Implementations on Cortex-M3 and Cortex-M4
    by Denisa O. C. Greconici on October 15, 2020 at 3:33 am

    We present implementations of the lattice-based digital signature scheme Dilithium for ARM Cortex-M3 and ARM Cortex-M4. Dilithium is one of the three signature finalists of the NIST post-quantum cryptography competition. As our Cortex-M4 target, we use the popular STM32F407-DISCOVERY development board. Compared to the previous speed records on the Cortex-M4 by Ravi, Gupta, Chattopadhyay, and Bhasin, we speed up the key operations NTT and $\text{NTT}^{-1}$ by 20%, which together with other optimizations results in speedups of 7%, 15%, and 9% for Dilithium3 key generation, signing, and verification, respectively. We also present the first constant-time Dilithium implementation on the Cortex-M3 and use the Arduino Due for benchmarks. For Dilithium3, we achieve on average 2 562 kilocycles for key generation, 10 667 kilocycles for signing, and 2 321 kilocycles for verification. Additionally, we present stack consumption optimizations applying to both our Cortex-M3 and Cortex-M4 implementations. Due to the iterative nature of the Dilithium signing algorithm, there is no optimal way to achieve the best speed and lowest stack consumption at the same time. We present three different strategies for the signing procedure which allow trading more stack and flash memory for faster speed or vice versa. Our implementation of Dilithium3 with the smallest memory footprint uses less than 12kB. As an additional output of this work, we present the first Cortex-M3 implementations of the key-encapsulation schemes NewHope and Kyber.

  • Fault Attacks In Symmetric Key Cryptosystems
    by Anubhab Baksi on October 15, 2020 at 3:29 am

    Fault attacks are among the well-studied topics in the area of cryptography. These attacks constitute a powerful tool for recovering the secret key used in the encryption process. Fault attacks work by forcing a device to operate under non-ideal environmental conditions (such as high temperature) or external disturbances (such as a glitch in the power supply) while performing a cryptographic operation. Recent trends show that the amount of research in this direction, ranging from attacking a particular primitive and proposing fault countermeasures to attacking countermeasures, has grown substantially and will remain an active research interest for the foreseeable future. Hence, a comprehensive yet compact study of the (major) works is called for. This work, which covers a wide spectrum of present-day research on fault attacks within the purview of symmetric key cryptography, aims to fill the absence of an up-to-date survey. We present nearly all aspects of the topic in a way that is not only understandable for a non-expert reader, but also helpful for an expert as a reference.

  • Weak Keys in the Rekeying Paradigm: Application to COMET and mixFeed
    by Mustafa Khairallah on October 15, 2020 at 2:43 am

    In this paper, we study a group of AEAD schemes that use rekeying as a technique to increase efficiency by reducing the state size of the algorithm. We provide a unified model to study the behavior of the keys used in these schemes, called Rekey-and-Chain (RaC). This model helps understand the design of several AEAD schemes. We show generic attacks on these schemes based on the existence of certain types of weak keys. We also show that the borderline between multi-key and single-key analyses of these schemes is not solid, and that the analysis can be performed independently of the master key, leading sometimes to practical attacks in the multi-key setting. More importantly, the multi-key analysis can be applied in the single-key setting, since each message is encrypted with a different key. Consequently, we show gaps in the security analysis of COMET and mixFeed in the single-key setting, which led the designers to provide overly optimistic security claims. In the case of COMET, full key recovery can be performed with 2^64 online queries and 2^64 offline queries in the single-key setting, or 2^40 online queries per user and 2^64 offline queries in the multi-key setting with 2^24 users. In the case of mixFeed, we improve the forgery adversarial advantage in the single-key setting by a factor of 2^67 compared to what the designers claim. More importantly, our result is just a lower bound on this advantage, since we show that the gap in the analysis of mixFeed depends on properties of the AES key schedule that are not well understood and require more cryptanalytic effort to establish a tighter bound. After reporting these findings, the designers updated their security analyses and accommodated the proposed attacks.
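
    Schematically, the key structure the RaC model refers to can be pictured as a chain of per-message keys (a generic sketch with a hash as a placeholder key-update function; the concrete updates in COMET and mixFeed differ):

        import hashlib

        def rekey_and_chain(master_key: bytes, n_messages: int):
            # K_0 = master key; K_{i+1} = F(K_i). Each message is encrypted
            # under its own key, so single-key use of the scheme already
            # resembles a multi-key setting, and weak-key events in any
            # link propagate down the chain.
            k = master_key
            for _ in range(n_messages):
                yield k
                k = hashlib.sha256(k).digest()

        keys = list(rekey_and_chain(b"\x00" * 16, 3))
        print([k.hex()[:16] for k in keys])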

  • Asymptotically Good Multiplicative LSSS over Galois Rings and Applications to MPC over Z/p^k Z
    by Mark Abspoel on October 15, 2020 at 1:28 am

    We study information-theoretic multiparty computation (MPC) protocols over rings $\mathbb{Z}/p^k \mathbb{Z}$ that have good asymptotic communication complexity for a large number of players. An important ingredient for such protocols is arithmetic secret sharing, i.e., linear secret-sharing schemes with multiplicative properties. The standard way to obtain these over fields is with a family of linear codes $C$, such that $C$, $C^\perp$ and $C^2$ are asymptotically good (strongly multiplicative). For our purposes here it suffices if the square code $C^2$ is not the whole space, i.e., has codimension at least 1 (multiplicative). Our approach is to lift such a family of codes defined over a finite field $\mathbb F$ to a Galois ring, which is a local ring that has $\mathbb F$ as its residue field and that contains $\mathbb{Z}/p^k \mathbb{Z}$ as a subring, and thus enables arithmetic that is compatible with both structures. Although arbitrary lifts preserve the distance and dual distance of a code, the multiplicative property is not preserved, as we demonstrate with a counterexample. We work around this issue by exhibiting a dedicated lift that preserves \emph{self-orthogonality} (as well as distance and dual distance), for $p \geq 3$. Self-orthogonal codes are multiplicative, so we can use existing results on asymptotically good self-dual codes over fields to obtain arithmetic secret sharing over Galois rings. For $p = 2$ we obtain multiplicativity by using existing techniques of secret sharing with both $C$ and $C^\perp$, incurring a constant overhead. As a result, we obtain asymptotically good arithmetic secret-sharing schemes over Galois rings. With these schemes in hand, we extend existing field-based MPC protocols to obtain MPC over $\mathbb{Z}/p^k \mathbb{Z}$, in the setting of a submaximal adversary corrupting less than a fraction $1/2 - \varepsilon$ of the players, where $\varepsilon > 0$ is arbitrarily small. We consider three different corruption models. For passive and active security with abort, our protocols communicate $O(n)$ bits per multiplication. For full security with guaranteed output delivery we use a preprocessing model and get $O(n)$ bits per multiplication in the online phase and $O(n \log n)$ bits per multiplication in the offline phase. Thus, we obtain true linear bit complexities, without the common assumption that the ring size depends on the number of players.

  • Compact Authenticated Key Exchange in the Quantum Random Oracle Model
    by Haiyang Xue on October 14, 2020 at 10:47 pm

    We propose a generic construction of two-message authenticated key exchange (AKE) in the quantum random oracle model (QROM). It can be seen as a QROM-secure version of X3LH-AKE [Xue et al., ASIACRYPT 2018], a generic AKE based on double-key PKE. We prove that, with some modification, the security of X3LH-AKE in the QROM can be reduced to the one-way security of double-key PKE. In addition to answering several open problems on the QROM security of prior works, such as SIAKE [Xu et al., ASIACRYPT 2019], FSXY-AKE, and 2Kyber-AKE, we propose a new construction, CSIAKE, based on commutative supersingular isogenies. Our framework enjoys several desirable features. First, it supports PKEs with non-perfect correctness. Second, the security reduction is relatively tight. In addition, the basic building block is weak and compact. Finally, the resulting AKE achieves security in the CK$^+$ model as strong as that of X3LH-AKE, and the transformation overhead is low.

  • Silent Two-party Computation Assisted by Semi-trusted Hardware
    by Yibiao Lu on October 14, 2020 at 10:39 pm

    With the advancement of trusted execution environment (TEE) technologies, hardware-supported secure computing has become increasingly popular due to its efficiency. During protocol execution, the players typically need to contact a third-party server for remote attestation, ensuring the validity of the involved trusted hardware component, such as Intel SGX, as well as the integrity of the computation result. When the hardware manufacturer is not fully trusted, sensitive information may be leaked to the third-party server through backdoors, side channels, steganography, kleptography, and so on. In this work, we introduce a new security notion called the semi-trusted hardware model, where the adversary is allowed to passively and/or maliciously corrupt the hardware component. She can therefore learn the input of the hardware component and might also tamper with its output. We show that two-party computation can still be significantly sped up in this new model. When the semi-trusted hardware is instantiated by Intel SGX, to generate 10k random OTs, our protocol is 50X and 270X faster than EMP-ROT in the LAN and WAN settings, respectively. For AES, SHA-1, and SHA-256 evaluation, our protocol is 4-5X and 40-50X faster than EMP-SH2PC in the LAN and WAN settings, respectively.

  • A Note on Separating Classical and Quantum Random Oracles
    by Takashi Yamakawa on October 14, 2020 at 8:26 pm

    In this note, we observe that a proof of quantumness in the random oracle model recently proposed by Brakerski et al. can be seen as a proof of quantum access to a random oracle. Based on this observation, we give the first examples of natural cryptographic schemes that separate the classical and quantum random oracle models. Specifically, we construct digital signature and public key encryption schemes that are secure in the classical random oracle model but insecure in the quantum random oracle model, assuming the quantum hardness of the learning with errors (LWE) problem.

  • Delay Encryption
    by Jeffrey Burdges on October 14, 2020 at 5:37 pm

    We introduce a new primitive named Delay Encryption, and give an efficient instantiation based on isogenies of supersingular curves and pairings. Delay Encryption is related to Time-lock Puzzles and Verifiable Delay Functions, and can be roughly described as "identity based encryption with slow derived private key issuance". It has several applications in distributed protocols, such as sealed-bid Vickrey auctions and electronic voting. We give an instantiation of Delay Encryption by modifying Boneh and Franklin's IBE scheme, where we replace the master secret key by a long chain of isogenies, as in the isogeny VDF of De Feo, Masson, Petit and Sanso. Similarly to the isogeny-based VDF, our Delay Encryption requires a trusted setup before parameters can be safely used; our trusted setup is identical to that of the VDF, thus the same parameters can be generated once and shared for many executions of both protocols, with possibly different delay parameters. We also discuss several topics around delay protocols based on isogenies that were left untreated by De Feo et al., namely: distributed trusted setup, watermarking, and implementation issues.

  • Tightly-Secure Authenticated Key Exchange, Revisited
    by Tibor Jager on October 14, 2020 at 2:05 pm

    We introduce new tightly-secure authenticated key exchange (AKE) protocols that are extremely efficient, yet have only a constant security loss and can be instantiated in the random oracle model both from the standard DDH assumption and from a subgroup assumption over RSA groups. These protocols can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate for a security loss with increased parameters and thus decreased computational efficiency. We use the standard “Single-Bit-Guess” AKE security (with forward secrecy and state corruption), requiring all challenge keys to be simultaneously pseudo-random. In contrast, most previous papers on tightly secure AKE protocols (Bader et al., TCC 2015; Gjøsteen and Jager, CRYPTO 2018; Liu et al., ASIACRYPT 2020) concentrated on a non-standard “Multi-Bit-Guess” AKE security, which is known not to compose tightly with symmetric primitives to build a secure communication channel. Our key technical contribution is a new generic approach to construct tightly-secure AKE protocols based on non-committing key encapsulation mechanisms. The resulting DDH-based protocols are considerably more efficient than all previous constructions.

  • Offline Witness Encryption with Semi-Adaptive Security
    by Peter Chvojka on October 14, 2020 at 1:24 pm

    The first construction of Witness Encryption (WE) by Garg et al. (STOC 2013) has led to many exciting avenues of research in the past years. A particularly interesting variant is Offline WE (OWE) by Abusalah et al. (ACNS 2016), as the encryption algorithm uses neither obfuscation nor multilinear maps. Current OWE schemes provide only selective security. That is, the adversary must commit to their challenge messages $m_0$ and $m_1$ before seeing the public parameters. We provide a new, generic framework to construct OWE, which achieves adaptive security in the sense that the adversary may choose their challenge messages adaptively. We call this semi-adaptive security, because, as in prior work, the instance of the considered NP language that is used to create the challenge ciphertext must be fixed before the parameters are generated in the security proof. We show that our framework gives the first OWE scheme with constant ciphertext overhead even for messages of polynomially-bounded size. We achieve this by introducing a new variant of the puncturable encryption defined by Green and Miers (S&P 2015) and combining it with the iO-based approach of Abusalah et al. Finally, we show that our framework can be easily extended to construct the first Extractable Offline Witness Encryption (EOWE), by using extractability obfuscation of Boyle et al. (TCC 2014) in place of iO, opening up even more possible applications. The obfuscation is needed only for our public parameters, but its functionality can be realised with a Trusted Execution Environment (TEE), which means we have a very efficient scheme with ciphertexts consisting of only 5 group elements.

  • Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem
    by Craig Costello on October 14, 2020 at 12:27 pm

    We give a sieving algorithm for finding pairs of consecutive smooth numbers that utilizes solutions to the Prouhet-Tarry-Escott (PTE) problem. Any such solution induces two degree-$n$ polynomials, $a(x)$ and $b(x)$, that differ by a constant integer $C$ and completely split into linear factors in $\mathbb{Z}[x]$. It follows that for any $\ell \in \mathbb{Z}$ such that $a(\ell) \equiv b(\ell) \equiv 0 \bmod{C}$, the two integers $a(\ell)/C$ and $b(\ell)/C$ differ by 1 and necessarily contain $n$ factors of roughly the same size. For a fixed smoothness bound $B$, restricting the search to pairs of integers that are parameterized in this way increases the probability that they are $B$-smooth. Our algorithm combines a simple sieve with parametrizations given by a collection of solutions to the PTE problem. The motivation for finding large twin smooth integers lies in their application to compact isogeny-based post-quantum protocols. The recent key exchange scheme B-SIDH and the recent digital signature scheme SQISign both require large primes that lie between two smooth integers; finding such a prime can be seen as a special case of finding twin smooth integers under the additional stipulation that their sum is a prime $p$. When searching for cryptographic parameters with $2^{240} \leq p < 2^{256}$, an implementation of our sieve found primes $p$ where $p+1$ and $p-1$ are $2^{15}$-smooth; the smoothest prior parameters had a similar-sized prime for which $p-1$ and $p+1$ were $2^{19}$-smooth.
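
    The mechanics are easy to see with the smallest ideal PTE solution. A minimal sketch (our own toy, not the paper's search code): the classical solution $\{1,5,6\} \sim \{2,3,7\}$ gives $a(x) = (x-1)(x-5)(x-6)$ and $b(x) = (x-2)(x-3)(x-7)$ with $a(x) - b(x) = 12$, so whenever $12 \mid a(\ell)$ the integers $a(\ell)/12$ and $a(\ell)/12 - 1$ are consecutive and each splits into three factors of roughly equal size.

    ```python
    # Toy version of the sieve, using the degree-3 ideal PTE solution
    # {1,5,6} ~ {2,3,7}. Real parameters (much larger PTE solutions and
    # smoothness bounds, primality of the sum) are beyond this sketch.
    def smooth(n, B):
        # trial-division smoothness test (fine at toy sizes)
        n = abs(n)
        for p in range(2, B + 1):
            while n % p == 0:
                n //= p
        return n == 1

    a = lambda x: (x - 1) * (x - 5) * (x - 6)
    C, B = 12, 50                     # a - b = C; B is the smoothness bound

    for l in range(8, 10_000):
        if a(l) % C:                  # sieve condition: C | a(l) (then C | b(l))
            continue
        m = a(l) // C                 # m and m - 1 are the twin candidates
        if smooth(m, B) and smooth(m - 1, B):
            print(f"l = {l}: twin {B}-smooth pair ({m - 1}, {m})")
    ```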

  • Key Agreement for Decentralized Secure Group Messaging with Strong Security Guarantees
    by Matthew Weidner on October 14, 2020 at 12:25 pm

    Secure group messaging protocols provide end-to-end encryption for group communication. Practical protocols face many challenges, including mobile devices frequently being offline, group members being added or removed, and the possibility of device compromises during long-lived chat sessions. Existing work targets a centralized network model in which all messages are routed through a single server, which is trusted to provide a consistent total order on updates to the group state. In this paper we adapt secure group messaging for decentralized networks that have no central authority. Servers may still optionally be used, but their trust requirements are reduced. We define decentralized continuous group key agreement (DCGKA), a new cryptographic primitive encompassing the core of a decentralized secure group messaging protocol; we give a practical construction of a DCGKA protocol and prove its security; and we describe how to construct a full messaging protocol from DCGKA. In the face of device compromise our protocol achieves forward secrecy and post-compromise security. We evaluate the performance of a prototype implementation, and demonstrate that our protocol has practical efficiency.

  • A Simple Protocol to Compare EMFI Platforms
    by J. Toulemont on October 14, 2020 at 12:22 pm

    Several electromagnetic fault injection (EMFI) platforms have been developed in recent years. They rely on different technical solutions, and the figures of merit used in the related datasheets or publications also differ. This makes it difficult to compare the various EMFI platforms and to choose the one best suited to a given usage. This paper suggests a characterization protocol whose application is fast and requires only equipment usually available in labs involved in security characterization. It also introduces an effective solution to enhance, by a factor of 5, the timing resolution of EMFI platforms built around a commercial voltage pulse generator designed to drive a 50 Ohm termination.

  • Lattice-based Key Sharing Schemes - A Survey
    by Prasanna Ravi on October 14, 2020 at 12:21 pm

    Public-key cryptography is an indispensable component of almost all of our present-day digital infrastructure. However, most if not all of it is built upon hardness guarantees of number-theoretic problems that can be broken by large-scale quantum computers in the future. Sensing the imminent threat from continued advances in quantum computing, NIST has recently initiated a global standardization process for quantum-resistant public-key cryptographic primitives such as public key encryption, digital signatures, and key encapsulation mechanisms. While the process received proposals from various categories of post-quantum cryptography, lattice-based cryptography features most prominently among all the submissions. Lattice-based cryptography offers a very attractive alternative to traditional public-key cryptography, mainly due to the variety of lattice-based schemes offering varying flavors of security and efficiency guarantees. In this paper, we survey the evolution of lattice-based key sharing schemes (public key encryption and key encapsulation schemes) and cover various aspects ranging from theoretical security guarantees and general algorithmic frameworks to practical implementation aspects and physical attack security, with special focus on lattice-based key sharing schemes competing in NIST's standardization process. Note that our work focuses on the results available from the second round of NIST's standardization process; the process has progressed to the third and final round at the time of publishing this document.

  • Quarks: Quadruple-efficient transparent zkSNARKs
    by Srinath Setty on October 14, 2020 at 12:21 pm

    We introduce Xiphos and Kopis, new transparent zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for R1CS. They do not require a trusted setup, and their security relies on the standard SXDH problem. They achieve non-interactivity in the random oracle model using the Fiat-Shamir transform. Unlike prior transparent zkSNARKs, which support either a fast prover, short proofs, or quick verification, our work is the first to simultaneously achieve all three properties (both asymptotically and concretely) and, in addition, an inexpensive setup phase, thereby providing the first quadruple-efficient transparent zkSNARKs (Quarks). Under both schemes, for an R1CS instance of size $n$ and security parameter $\lambda$, the prover incurs $O_{\lambda}(n)$ costs to produce a proof of size $O_{\lambda}(\log{n})$. In Xiphos, verification time is $O_{\lambda}(\log{n})$, and in Kopis it is $O_{\lambda}(\sqrt{n})$. In terms of concrete efficiency, compared to prior state-of-the-art transparent zkSNARKs, Xiphos offers the fastest verification; its proof sizes are competitive with those of SuperSonic [EUROCRYPT 2020], a prior transparent SNARK with the shortest proofs in the literature. Xiphos's prover is fast: its prover time is within ${\approx}5\times$ of that of Spartan [CRYPTO 2020], a prior transparent zkSNARK with the fastest prover in the literature, and is $250\times$ faster than SuperSonic's. Kopis, at the cost of increased verification time (which is still concretely faster than SuperSonic's), shortens Xiphos's proof sizes further, thereby producing proofs shorter than SuperSonic's. Xiphos and Kopis incur $10$--$10,000\times$ lower preprocessing costs for the verifier in the setup phase, depending on the baseline. Finally, a byproduct of Kopis is Lakonia, a NIZK for R1CS with $O_{\lambda}(\log{n})$-sized proofs, which provides an alternative to Bulletproofs [S&P 2018] with over an order of magnitude faster proving and verification times.

  • Dory: Efficient, Transparent arguments for Generalised Inner Products and Polynomial Commitments
    by Jonathan Lee on October 14, 2020 at 12:20 pm

    This paper presents Dory, a transparent setup, public-coin interactive argument for proving correctness of an inner-pairing product between committed vectors of elements of the two source groups. For an inner product of length $n$, proofs are $6 \log n$ target group elements, $1$ element of each source group and $3$ scalars. Verifier work is dominated by an $O(\log n)$ multi-exponentiation in the target group. Security is reduced to the symmetric external Diffie Hellman assumption in the standard model. We also show an argument reducing a batch of two such instances to one, requiring $O(n^{1/2})$ work on the Prover and $O(1)$ communication. We apply Dory to build a multivariate polynomial commitment scheme via the Fiat-Shamir transform. For $n$ the product of one plus the degree in each variable, Prover work to compute a commitment is dominated by a multi-exponentiation in one source group of size $n$. Prover work to show that a commitment to an evaluation is correct is $O(n^{\log 8 / \log 25})$ in general and $O(n^{1/2})$ for univariate or multilinear polynomials, whilst communication complexity and Verifier work are both $O(\log n)$. Using batching, the Verifier can validate $\ell$ polynomial evaluations for polynomials of size at most $n$ with $O(\ell + \log n)$ group operations and $O(\ell \log n)$ field operations.

  • Classical Verification of Quantum Computations with Efficient Verifier
    by Nai-Hui Chia on October 14, 2020 at 12:19 pm

    In this paper, we extend the protocol for classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in three steps. (i) We show that parallel repetition of Mahadev's protocol has negligible soundness error. This gives the first constant-round CVQC protocol with negligible soundness error; in this part, as in Mahadev's work, we only assume the quantum hardness of the learning with errors (LWE) problem. (ii) We construct a two-round CVQC protocol in the quantum random oracle model (QROM), where a cryptographic hash function is idealized to be a random function. This is obtained by applying the Fiat-Shamir transform to the parallel-repetition version of Mahadev's protocol. (iii) We construct a two-round CVQC protocol with an efficient verifier in the CRS+QRO model, where both prover and verifier can access a (classical) common reference string generated by a trusted third party, in addition to quantum access to the QRO. Specifically, the verifier can verify a $\mathsf{QTIME}(T)$ computation in time $\mathsf{poly}(\lambda,\log T)$, where $\lambda$ is the security parameter. For proving soundness, we assume that a standard-model instantiation of our two-round protocol with a concrete hash function (say, SHA-3) is sound, along with the existence of post-quantum indistinguishability obfuscation and post-quantum fully homomorphic encryption, in addition to the quantum hardness of the LWE problem.

  • Bent Functions from Cellular Automata
    by Maximilien Gadouleau on October 14, 2020 at 12:19 pm

    In this work, we present a primary construction of bent functions based on cellular automata (CA). We consider the well-known characterization of bent functions in terms of Hadamard matrices and employ some recent results about mutually orthogonal Latin squares (MOLS) based on linear bipermutive CA (LBCA) to design families of Hadamard matrices of the form required for bent functions. In particular, the main question to address in this construction can be reduced to finding a large enough set of coprime polynomials over $\mathbb{F}_q$, which are used to define a set of MOLS via LBCA. This set of MOLS is, in turn, used to define a Hadamard matrix of the specific structure characterizing a bent function. We settle the existence question of such bent functions by proving that the required coprime sets exist if and only if the degree of the involved polynomials is either $1$ or $2$, and we count the resulting sets. Next, we check if the functions of $8$ variables arising from our construction are EA-equivalent to Maiorana-McFarland functions, observing that most of them are not. Finally, we show how to represent the support of these bent functions as a union of the kernels of the underlying linear CA. This allows us, in turn, to prove that the functions generated by our construction belong to the partial spread class $\mathcal{PS}^-$. In particular, we remark that for degree $1$ our construction is a particular case of the Desarguesian spread construction.

  • (F)unctional Sifting: A Privacy-Preserving Reputation System Through Multi-Input Functional Encryption (extended version)
    by Alexandros Bakas on October 14, 2020 at 12:18 pm

    Functional Encryption (FE) allows users who hold a specific secret key (known as the functional key) to learn a specific function of encrypted data whilst learning nothing about the content of the underlying data. Considering this functionality, and the fact that the field of FE is still in its infancy, we sought a route to apply this potent tool to the existing problem of designing decentralised additive reputation systems. To this end, we first built a symmetric FE scheme for the $\ell_1$ norm of a vector, which allows us to compute the sum of the components of an encrypted vector (i.e., the votes). Then, we utilized our construction, along with functionalities offered by Intel SGX, to design the first FE-based decentralized additive reputation system with Multi-Party Computation. While our reputation system faces certain limitations, this work is amongst the first attempts to utilize FE in the solution of a real-life problem.
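
    The sum functionality itself has a folklore one-time instantiation from additive masking, which conveys the flavor of what the functional key releases. The sketch below is our own minimal illustration under that assumption, not the authors' scheme (which in particular is combined with Intel SGX):

    ```python
    # One-time symmetric FE for the sum of a vector mod p: components are
    # additively masked, and the functional key for "sum" is the sum of the
    # masks, so decryption reveals the tally and nothing about single votes.
    import secrets

    p = 2**61 - 1                     # modulus, larger than any possible tally

    def setup(n):
        return [secrets.randbelow(p) for _ in range(n)]   # msk: one mask/slot

    def encrypt(msk, votes):
        return [(v + r) % p for v, r in zip(votes, msk)]

    def keygen_sum(msk):
        return sum(msk) % p           # functional key for the l1/sum function

    def decrypt_sum(fk, ct):
        return (sum(ct) - fk) % p

    msk = setup(5)
    ct = encrypt(msk, [3, 1, 4, 1, 5])                    # e.g., votes
    assert decrypt_sum(keygen_sum(msk), ct) == 14
    ```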

  • Classical vs Quantum Random Oracles
    by Takashi Yamakawa on October 14, 2020 at 12:17 pm

    In this paper, we study the relationship between security of cryptographic schemes in the random oracle model (ROM) and the quantum random oracle model (QROM). First, we introduce the notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol to prove the capability to quantumly access a random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on these, we construct digital signature and public key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption. On the other hand, we give lifting theorems from security in the ROM to security in the QROM for certain types of cryptographic schemes and security notions. Our lifting theorems are applicable to, for example, Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures. We also discuss applications of our lifting theorems to quantum query complexity.

  • PRINCEv2 - More Security for (Almost) No Overhead
    by Dusan Bozilov on October 14, 2020 at 12:17 pm

    In this work, we propose tweaks to the PRINCE block cipher that help us to increase its security without changing the number of rounds or round operations. We get substantially higher security for the same complexity. From an implementation perspective, PRINCEv2 comes with extremely low overhead compared to PRINCE in all relevant categories, such as area, latency, and energy. We expect, as is already the case for PRINCE, that the new cipher PRINCEv2 will be deployed in various settings.

  • A Novel Duplication Based Countermeasure To Statistical Ineffective Fault Analysis
    by Anubhab Baksi on October 14, 2020 at 12:16 pm

    The Statistical Ineffective Fault Analysis, SIFA, is a recent addition to the family of fault-based cryptanalysis techniques. SIFA-based attacks have been shown to be formidable, bypassing virtually all conventional fault-attack countermeasures. Reported countermeasures to SIFA incur overheads of at least three times the cost of the unprotected cipher. We propose a novel countermeasure that reduces this overhead compared to all existing countermeasures, relying on a simple duplication-based technique. In essence, our countermeasure eliminates the observation that enables the attacker to perform SIFA. The core idea is to choose the encoding for the state bits randomly. In this way, each bit of the state is free from statistical bias, which renders SIFA unusable. Our approach protects against stuck-at faults and does not rely on any side-channel countermeasure. We show the effectiveness of the countermeasure through an open-source gate-level fault-attack simulation tool. Our approach is probably the simplest and most cost-effective.
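
    The statistical heart of the countermeasure can be seen in a few lines. The following is a toy simulation of the core idea as we read it (our own illustration, not the paper's circuit-level construction): store each state bit under a fresh random encoding, and the event "the injected fault was ineffective" becomes independent of the secret bit, so SIFA's filtering sees no bias.

    ```python
    # Stuck-at-0 fault on a stored bit e: the fault is "ineffective" iff
    # e == 0. Without encoding, e equals the secret bit b, so ineffectiveness
    # leaks b. With a random encoding e = b XOR m, it is a coin flip.
    import random

    def p_ineffective(b, protected, trials=100_000):
        hits = 0
        for _ in range(trials):
            m = random.getrandbits(1) if protected else 0
            e = b ^ m                 # stored (encoded) bit
            hits += (e == 0)          # stuck-at-0 changed nothing iff e == 0
        return hits / trials

    for protected in (False, True):
        print(f"protected={protected}: "
              f"P[ineffective | b=0] = {p_ineffective(0, protected):.2f}, "
              f"P[ineffective | b=1] = {p_ineffective(1, protected):.2f}")
    # Unprotected: 1.00 vs 0.00, so an ineffective fault reveals b outright.
    # Protected: ~0.50 vs ~0.50, so the observation carries no information.
    ```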

  • Multi-Party Functional Encryption
    by Shweta Agrawal on October 14, 2020 at 12:14 pm

    We initiate the study of multi-party functional encryption (MPFE), which unifies and abstracts out various notions of functional encryption that support distributed ciphertexts or secret keys, such as multi-input FE, multi-client FE, decentralized multi-client FE, multi-authority ABE, multi-authority FE, and others. We provide a unifying framework to capture existing multi-party FE primitives, define new, natural primitives that emerge from the framework, provide a feasibility result assuming general multi-input functional encryption (MIFE), and also provide constructions for restricted function families from standard assumptions. In more detail, we provide the following new constructions: 1. Two-Input Quadratic Functional Encryption: We provide the first two-input functional encryption scheme for quadratic functions from the SXDH assumption on bilinear groups. To the best of our knowledge, this is the first construction of MIFE from standard assumptions that goes beyond the inner product functionality. 2. Decentralized Inner Product Functional Encryption: We provide the first decentralized version of an inner product functional encryption scheme, generalizing the recent work of Michalevsky and Joye (ESORICS'18). Our construction supports access policies $C$ that are representable as inner product predicates, and is secure based on the k-linear assumption, in the random oracle model. 3. Distributed Ciphertext-Policy Attribute Based Encryption: We provide a decentralized variant of the recent ciphertext-policy attribute based encryption scheme constructed by Agrawal and Yamada (Eurocrypt'20). Our construction supports NC1 access policies, and is secure based on Learning With Errors, relying on the generic bilinear group model as well as the random oracle model. Our new abstraction predicts meaningful new primitives for multi-party functional encryption, which we describe but do not instantiate; these may be constructed in future work.

  • Revisiting ECM on GPUs
    by Jonas Wloka on October 14, 2020 at 12:13 pm

    Modern public-key cryptography is a crucial part of contemporary life wherever a secure communication channel with another party is needed. With the advance of more powerful computing architectures, especially Graphics Processing Units (GPUs), traditional approaches like RSA and Diffie-Hellman schemes are increasingly in danger of being broken. We present a highly optimized implementation of Lenstra's ECM algorithm customized for GPUs. Our implementation uses state-of-the-art elliptic-curve arithmetic and optimized integer arithmetic while allowing ECM's parameters to be scaled arbitrarily, enabling application even to larger discrete logarithm problems. Furthermore, the proposed software is not limited to any specific GPU generation and is, to the best of our knowledge, the first implementation supporting multi-device computation. For a bound of B1=8,192 and a modulus size of 192 bits, we achieve a throughput of 214 thousand ECM trials per second on a modern RTX 2080 Ti GPU, considering only the first stage of ECM. To solve discrete logarithm problems of larger bit sizes, our software easily supports larger parameter sets, achieving a throughput of 2,781 ECM trials per second with B1=50,000, B2=5,000,000, and a modulus size of 448 bits.
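
    To make the unit of measurement concrete, here is a hedged sketch (our own toy, using Python integers and affine arithmetic, nothing like the optimized GPU code) of what a single stage-1 "ECM trial" does: pick a random curve and point modulo $n$, multiply the point by every prime power up to $B_1$, and hope a modular inversion fails so that its gcd exposes a factor.

    ```python
    # Toy ECM stage 1. A trial succeeds when the order of the random curve
    # modulo some prime factor p of n is B1-powersmooth, making the scalar
    # multiplication degenerate mod p and a gcd reveal p.
    import math, random

    class FactorFound(Exception):
        pass

    def inv_mod(x, n):
        g = math.gcd(x % n, n)
        if g != 1:
            raise FactorFound(g)      # the "failure" ECM is hoping for
        return pow(x, -1, n)

    def ec_add(P, Q, a, n):
        if P is None: return Q
        if Q is None: return P
        if P[0] == Q[0] and (P[1] + Q[1]) % n == 0:
            return None               # point at infinity
        if P == Q:
            lam = (3 * P[0] ** 2 + a) * inv_mod(2 * P[1], n) % n
        else:
            lam = (Q[1] - P[1]) * inv_mod(Q[0] - P[0], n) % n
        x = (lam * lam - P[0] - Q[0]) % n
        return (x, (lam * (P[0] - x) - P[1]) % n)

    def ec_mul(k, P, a, n):
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P, a, n)
            P, k = ec_add(P, P, a, n), k >> 1
        return R

    def ecm_stage1(n, B1):
        while True:                   # one iteration = one ECM trial
            x, y, a = (random.randrange(n) for _ in range(3))
            P = (x, y)                # curve coefficient b is implicit
            try:
                for q in range(2, B1 + 1):
                    if all(q % d for d in range(2, int(q**0.5) + 1)):  # prime
                        P = ec_mul(q ** int(math.log(B1, q)), P, a, n)
            except FactorFound as f:
                if 1 < f.args[0] < n:
                    return f.args[0]  # proper factor; otherwise retry

    print(ecm_stage1(1009 * 10007, B1=100))
    ```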

  • The i-Chip as One-Time Password (OTP) & digital signature generator
    by Slawomir Matelski on October 14, 2020 at 12:12 pm

    For safe resource management, an effective mechanism is necessary that identifies a person and their rights to these resources using an appropriate key; the key's degree of security determines not only the property but sometimes even the life of its owner. For several decades such mechanisms have been based on the security of (bio)material keys, which only guarantee their own authenticity, but not that of their owner, due to the weakness of static password protection. The development of an effective SecHCI (Secure Human-Computer Identification) system is becoming an increasingly urgent problem in modern civilization. This article presents an innovative challenge-response protocol that requires the user to remember only a single image, showing the structure of a virtual microchip, and to mentally simulate the operation of its components. Reverse engineering the virtual structure is thereby as difficult as the same operation on a physical microchip, and the diversity of such structures is limited only by the fantasy of the user, inspired by the analogous features of physical components. We show how the i-Chip generates a one-time password (OTP) or a whole digital signature, also offline on paper documents, and how it acts as a virtual key for an electronic lock opened with an OTP.

  • Improved Fault Analysis on SIMECK Ciphers
    by Duc-Phong Le on October 14, 2020 at 12:05 pm

    The advances of the Internet of Things (IoT) have had a fundamental impact on and influence in shaping our living experiences. However, since IoT devices are usually resource-constrained, lightweight block ciphers have played a major role as building blocks for secure IoT protocols. At CHES 2015, SIMECK, a family of lightweight block ciphers, was designed for resource-constrained IoT devices. Since its publication, there have been many analyses of its security. In this paper, under the one-bit-flip model, we propose a new efficient fault-analysis attack on SIMECK ciphers. Compared to previously reported attacks, our attack can recover the full master key by injecting faults into only a single round of every SIMECK family member. This property is crucial, as it is infeasible for an attacker to inject faults into different rounds of a SIMECK implementation on real-world IoT devices. Specifically, our attack rests on a deep analysis of the differential trail between the correct and faulty intermediate ciphertexts. Extensive simulation evaluations are conducted, and the results demonstrate the effectiveness and correctness of our proposed attack.
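
    For reference, SIMECK's round is $(\ell, r) \mapsto (r \oplus f(\ell) \oplus k, \ell)$ with $f(x) = (x \wedge (x \lll 5)) \oplus (x \lll 1)$. The sketch below (our own toy, with the Simeck32/64 word size) injects a one-bit flip into a single round input and prints the output differential, the raw signal that the paper's deeper trail analysis turns into full key recovery:

    ```python
    # One-round Simeck toy with a one-bit fault. The left-word differential
    # f(l) ^ f(l ^ e) for a flip at bit i is confined to at most three
    # positions: i+1 (the rotate-by-1 term) plus i and i+5 (data-dependent
    # AND term), which is what makes correct/faulty pairs so informative.
    import random

    W = 16                            # word size of Simeck32/64
    MASK = (1 << W) - 1

    def rotl(x, r):
        return ((x << r) | (x >> (W - r))) & MASK

    def f(x):
        return (x & rotl(x, 5)) ^ rotl(x, 1)

    def simeck_round(l, r, k):
        return (r ^ f(l) ^ k) & MASK, l

    k = random.getrandbits(W)
    l, r = random.getrandbits(W), random.getrandbits(W)
    cl, cr = simeck_round(l, r, k)                    # correct output
    for bit in range(4):
        fl, fr = simeck_round(l ^ (1 << bit), r, k)   # flip one input bit
        print(f"flip bit {bit}: diff = {cl ^ fl:0{W}b} | {cr ^ fr:0{W}b}")
    ```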

  • On (multi-stage) Proof-of-Work blockchain protocols
    by Paolo D'Arco on October 14, 2020 at 12:03 pm

    In this paper, we analyze permissionless blockchain protocols whose distributed consensus algorithm relies on a Proof-of-Work composed of $k \geq 1$ sequential hash-puzzles. We focus on a restricted scenario, widely used in the blockchain literature, in which the number of miners, their hash rates, and the difficulty values of the hash-puzzles are constant over time. Our main contribution is a closed-form expression for the mining probability of a miner, that is, the probability that the miner completes the Proof-of-Work of the next block to be added to the blockchain before every other miner does. Our theoretical results can be applied to existing Proof-of-Work based blockchain protocols, such as Bitcoin or Ethereum. We also point out some security issues implied by our findings, which make the design of multi-stage (i.e., $k \geq 2$) Proof-of-Work blockchain protocols far from trivial.
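
    Under the standard memoryless hashing model, the quantity in question is easy to approximate by simulation. The sketch below (our own illustration with hypothetical hash rates; the paper's contribution is precisely that this probability admits a closed form) models each puzzle solution time as an exponential random variable and an entire Proof-of-Work as a sum of $k$ of them:

    ```python
    # Monte Carlo estimate of the mining probability with k sequential
    # hash-puzzles: miner i solves one puzzle in Exp(h_i / D) time, so the
    # next block goes to the miner with the smallest sum of k such times.
    import random

    def mining_probabilities(hashrates, k, difficulty, trials=100_000):
        wins = [0] * len(hashrates)
        for _ in range(trials):
            totals = [sum(random.expovariate(h / difficulty) for _ in range(k))
                      for h in hashrates]
            wins[totals.index(min(totals))] += 1
        return [w / trials for w in wins]

    # Three miners holding 50% / 30% / 20% of the total hash power.
    for k in (1, 2, 4):
        print(k, [round(p, 3) for p in mining_probabilities([5, 3, 2], k, 100)])
    # k = 1 recovers the hash-power shares (~0.5 / 0.3 / 0.2); for k >= 2 the
    # probabilities skew further toward the fastest miner, one flavor of the
    # security issues multi-stage Proof-of-Work designs must contend with.
    ```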

  • MuSig2: Simple Two-Round Schnorr Multi-Signatures
    by Jonas Nick on October 14, 2020 at 12:02 pm

    Multi-signatures enable a group of signers to produce a single signature on a given message. Recently, Drijvers et al. (S&P'19) showed that all thus far proposed two-round multi-signature schemes in the DL setting (without pairings) are insecure under concurrent sessions, i.e., if a single signer participates in multiple signing sessions concurrently. While Drijvers et al. improve the situation by constructing a secure two-round scheme, saving a round comes at the price of less compact signatures. In particular, the signatures produced by their scheme are more than twice as large as Schnorr signatures, which arguably are the most natural and compact among all practical DL signatures and are therefore becoming popular in cryptographic applications (e.g., support for Schnorr signature verification has been proposed for inclusion in Bitcoin). If one needs a multi-signature scheme that can be used as a drop-in replacement for Schnorr signatures, then one is either forced to resort to a three-round scheme such as MuSig (Maxwell et al., DCC 2019) or MSDL-pop (Boneh, Drijvers, and Neven, ASIACRYPT 2018), or to accept that signing sessions are only secure when run sequentially, which may be hard to enforce in practice, e.g., when the same signing key is used by multiple devices. In this work, we propose MuSig2, a novel and simple two-round multi-signature variant of the MuSig scheme. Our scheme is the first multi-signature scheme that simultaneously i) is secure under concurrent signing sessions, ii) supports key aggregation, iii) outputs ordinary Schnorr signatures, iv) needs only two communication rounds, and v) has similar signer complexity as regular Schnorr signatures. Furthermore, our scheme is the first multi-signature scheme in the DL setting that supports preprocessing of all but one round, effectively enabling a non-interactive signing process, without forgoing security under concurrent sessions. The combination of all these features makes MuSig2 highly practical. We prove the security of MuSig2 under the one-more discrete logarithm (OMDL) assumption in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and algebraic group models.
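
    To convey how little machinery the scheme needs, here is a hedged end-to-end sketch of the MuSig2 signing flow with $\nu = 2$ nonces (our own toy over a small Schnorr group; no constant-time arithmetic, no wire format, and the hash-to-$\mathbb{Z}_q$ is purely illustrative; consult the paper for the real construction):

    ```python
    # Toy MuSig2 (nu = 2): two rounds, key aggregation, and ordinary Schnorr
    # verification against the aggregate key.
    import hashlib, secrets

    def is_prime(n):
        if n < 2: return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0: return n == p
        d, s = n - 1, 0
        while d % 2 == 0: d, s = d // 2, s + 1
        for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            x = pow(a, d, n)
            if x in (1, n - 1): continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1: break
            else: return False
        return True

    while True:                       # tiny Schnorr group: p = 2q + 1
        q = secrets.randbits(64) | 1
        if is_prime(q) and is_prime(2 * q + 1): break
    p, g = 2 * q + 1, 4               # 4 generates the order-q subgroup

    def H(tag, *parts):               # illustrative hash into Z_q
        data = tag.encode() + b"".join(str(x).encode() + b"|" for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    signers = 3
    x = [secrets.randbelow(q) for _ in range(signers)]        # secret keys
    X = [pow(g, xi, p) for xi in x]                           # public keys
    a = [H("agg", X, Xi) for Xi in X]                         # key-agg coeffs
    X_agg = 1
    for Xi, ai in zip(X, a): X_agg = X_agg * pow(Xi, ai, p) % p

    # Round 1: each signer outputs TWO nonce commitments (the 2-round trick).
    r = [(secrets.randbelow(q), secrets.randbelow(q)) for _ in range(signers)]
    R1 = R2 = 1
    for r1, r2 in r: R1, R2 = R1 * pow(g, r1, p) % p, R2 * pow(g, r2, p) % p

    # Round 2: effective nonce, challenge, partial signatures, aggregation.
    m = "send 1 coin to Bob"
    b = H("non", X_agg, R1, R2, m)
    R = R1 * pow(R2, b, p) % p
    c = H("sig", X_agg, R, m)
    s = sum(r1 + b * r2 + c * ai * xi for (r1, r2), ai, xi in zip(r, a, x)) % q

    assert pow(g, s, p) == R * pow(X_agg, c, p) % p   # plain Schnorr check
    ```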

  • Lattice Reduction with Approximate Enumeration Oracles: Practical Algorithms and Concrete Performance
    by Martin R. Albrecht on October 14, 2020 at 12:01 pm

    This work provides a systematic investigation of the use of approximate enumeration oracles in BKZ, building on recent technical progress on speeding up lattice enumeration: relaxing (the search radius of) enumeration, and extended preprocessing, which preprocesses in a larger rank than the enumeration rank. First, we heuristically justify that relaxing enumeration with certain extreme pruning asymptotically achieves an exponential speed-up for reaching the same root Hermite factor (RHF). Second, we perform simulations/experiments to validate this and the performance of relaxed enumeration with numerically optimised pruning for both regular and extended preprocessing. Upgrading BKZ with such approximate enumeration oracles gives rise to our main result, namely a practical and faster (compared to previous work) polynomial-space lattice reduction algorithm for reaching the same RHF in practical and cryptographic parameter ranges. We assess its concrete time/quality performance with extensive simulations and experiments. As a consequence, we update the extrapolation of the crossover rank, between a square-root cost estimate for quantum enumeration using our algorithm and the Core-SVP cost estimate for quantum sieving, to 547.

  • TranSCA: Cross-Family Profiled Side-Channel Attacks using Transfer Learning on Deep Neural Networks
    by Dhruv Thapar on October 14, 2020 at 12:00 pm

    Side-channel analysis (SCA) utilizing the power consumption of a device has proved to be an efficient technique for recovering secret keys by exploiting the implementation vulnerabilities of mathematically secure cryptographic algorithms. Recently, Deep Learning-based profiled SCA (DL-SCA) has gained popularity, where an adversary trains a deep learning model using profiled traces obtained from a dummy device (a device that is similar to the target device) and uses the trained model to retrieve the secret key from the target device. \emph{However, for efficient key recovery from the target device, training such a model requires a large number of profiled traces from the dummy device and extensive training time}. In this paper, we propose \emph{TranSCA}, a new DL-SCA strategy that addresses this issue. \emph{TranSCA} works in three steps -- an adversary (1) performs a one-time training of a \emph{base model} using profiled traces from \emph{any} device, (2) fine-tunes the parameters of the \emph{base model} using significantly fewer profiled traces from a dummy device with the aid of a \emph{transfer learning} strategy, in less time than training from scratch, and (3) uses the fine-tuned model to attack the target device. We validate \emph{TranSCA} on simulated power traces created to represent different FPGA families. Experimental results show that the transfer learning strategy makes it possible to attack a new device using knowledge of another device, even if the new device belongs to a different family. Also, \emph{TranSCA} requires very few power traces from the dummy device compared to applying DL-SCA without any previous knowledge.
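
    As a concrete picture of the three steps, here is a hedged PyTorch sketch (the architecture, shapes, and training data are illustrative assumptions on our part, not the paper's models or traces):

    ```python
    # TranSCA-style workflow: (1) train a base model once on traces from any
    # device; (2) copy it, freeze the convolutional feature extractor, and
    # fine-tune only the head on few dummy-device traces; (3) attack with it.
    import torch
    import torch.nn as nn

    def make_model(trace_len=700, n_classes=256):
        return nn.Sequential(
            nn.Conv1d(1, 8, 11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
            nn.Conv1d(8, 16, 11, padding=5), nn.ReLU(), nn.AvgPool1d(2),
            nn.Flatten(),
            nn.Linear(16 * (trace_len // 4), n_classes),   # classifier head
        )

    base = make_model()
    # ... step (1): train `base` on many profiled traces from any device ...

    model = make_model()                       # step (2): transfer
    model.load_state_dict(base.state_dict())
    for layer in list(model)[:-1]:             # freeze the feature extractor
        for prm in layer.parameters():
            prm.requires_grad = False
    opt = torch.optim.Adam(
        [prm for prm in model.parameters() if prm.requires_grad], lr=1e-3)

    few_traces = torch.randn(64, 1, 700)       # stand-in dummy-device traces
    few_labels = torch.randint(0, 256, (64,))  # e.g., S-box output labels
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(10):                        # fine-tuning: short and cheap
        opt.zero_grad()
        loss_fn(model(few_traces), few_labels).backward()
        opt.step()
    # step (3): rank key hypotheses on the target device with `model`.
    ```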

  • Improved Reduction Between SIS Problems over Structured Lattices
    by ZaHyun Koo on October 14, 2020 at 12:00 pm

    Lattice-based cryptographic schemes are constructed from hard problems on structured lattices, such as the short integer solution (SIS) problem and the learning with errors (LWE) problem, in their ring and module variants: ring-SIS (R-SIS), ring-LWE (R-LWE), module-SIS (M-SIS), and module-LWE (M-LWE). It has generally been considered that problems defined on module lattices are more difficult than problems defined on ideal lattices. However, Albrecht and Deo showed that there is a reduction from M-LWE to R-LWE in the polynomial ring by handling the error rate and modulus. Also, Koo, No, and Kim showed that there is a reduction from M-SIS$_{q^{k},m^{k},\beta'}$ to R-SIS$_{q,m,\beta}$ under some norm constraint of R-SIS, where $k > 1$. In this paper, we propose improved reductions between M-SIS and R-SIS compared to the previous work. To this end, we give three novel reductions relating M-SIS and R-SIS over the polynomial ring. First, we propose a reduction from R-SIS$_{q^{k},m,\beta'}$ to R-SIS$_{q^{k},m^{k},\beta^{k}}$. Combined with one of the previous results, this yields a reduction between R-SIS problems with distinct parameters that preserves the number of samples of R-SIS. Second, we propose an improved reduction from M-SIS$_{q^{k},m,\beta'}$ to R-SIS$_{q^{k},m,\beta}$ with $k \ge 1$ under some norm constraint of R-SIS; compared to the previous work, the upper bound on the norm of the solution of M-SIS is decreased. Finally, we propose a reduction between M-SIS instances with different moduli. Combining these three results implies that R-SIS$_{q,m,\beta}$ is more difficult than M-SIS$_{C,m,\beta'}$, where $C$ is a multiple of $q^{k}$ for some $k \ge 1$ under some norm constraint of R-SIS, which extends the possible range of module ranks for M-SIS twofold compared to the previous work.

  • Boolean Ring Cryptographic Equation Solving
    by Sean Murphy on October 14, 2020 at 11:58 am

    This paper considers multivariate polynomial equation systems over GF(2) that have a small number of solutions. We give a new method, EGHAM2, for solving such systems of equations that uses the properties of the Boolean quotient ring to potentially reduce memory and time complexity relative to existing XL-type or Groebner-basis algorithms applied in this setting. We also establish a direct connection between solving such a multivariate polynomial equation system over GF(2), an MQ problem, and an instance of the LPN problem.
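
    The objects being manipulated are easy to exhibit. A minimal sketch of Boolean quotient ring arithmetic (our own illustration of the setting, not of the EGHAM2 algorithm itself): in $\mathrm{GF}(2)[x_1,\dots,x_n]/(x_i^2 + x_i)$ a polynomial is a set of monomials and a monomial a set of variable indices, since exponents collapse.

    ```python
    # Arithmetic in the Boolean quotient ring: adding polynomials is symmetric
    # difference of monomial sets; multiplying monomials is set union (x^2 = x).
    def mul(f, g):
        h = set()
        for mf in f:
            for mg in g:
                h ^= {mf | mg}        # over GF(2), equal monomials cancel
        return frozenset(h)

    def add(f, g):
        return frozenset(set(f) ^ set(g))

    X = lambda *i: frozenset({frozenset(i)})   # monomial builder
    ONE = X()                                  # empty monomial = constant 1

    f = add(X(1), X(2))                        # f = x1 + x2
    assert mul(f, f) == f                      # squaring is the identity map

    # A tiny system with one solution: x1*x2 + x1 = 0 and x1 + x2 + 1 = 0.
    eq1 = add(mul(X(1), X(2)), X(1))
    eq2 = add(add(X(1), X(2)), ONE)

    def ev(poly, assign):                      # evaluate over GF(2)
        return sum(all(assign[v] for v in mono) for mono in poly) % 2

    print([(b1, b2) for b1 in (0, 1) for b2 in (0, 1)
           if ev(eq1, {1: b1, 2: b2}) == 0 == ev(eq2, {1: b1, 2: b2})])
    # -> [(0, 1)]
    ```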

  • Broadcast-Optimal Two Round MPC with an Honest Majority
    by Ivan Damgård on October 14, 2020 at 11:58 am

    This paper settles the question of which security guarantees two-round MPC protocols can achieve with and without the availability of broadcast in any given round. Cohen et al. [CGZ20] study this question in the dishonest-majority setting; we complete the picture by studying the honest-majority setting. In the honest-majority setting, given broadcast in both rounds, it is known that the strongest guarantee — guaranteed output delivery — is achievable [GLS15]. We show that in this setting, given broadcast in the first round only, guaranteed output delivery is still achievable. Given broadcast in the second round only, identifiable abort and all weaker guarantees are achievable, but fairness — and thus guaranteed output delivery — is not. Finally, using only peer-to-peer channels, for corruption thresholds $t > 1$ we show that the weakest guarantee — selective abort — is the only one achievable. For $t = 1$ and $n \geq 4$, it is known [IKP10,IKKP15] that guaranteed output delivery (and thus all weaker guarantees) is possible. We show that for $t = 1$ and $n = 3$ the strongest achievable guarantee is selective abort, resolving the question of the best achievable guarantees in two-round secure computation protocols.

  • New Representations of the AES Key Schedule
    by Gaëtan Leurent on October 14, 2020 at 11:57 am

    In this paper we present a new representation of the AES key schedule, with some implications for the security of AES-based schemes. In particular, we show that the AES-128 key schedule can be split into four independent parallel computations operating on 32-bit chunks, up to linear transformation. Surprisingly, this property has not been described in the literature in more than 20 years of analysis of AES. We show two consequences of our new representation, improving previous cryptanalysis results of AES-based schemes. First, we observe that iterating an odd number of key schedule rounds results in a function with short cycles. This explains an observation of Khairallah on mixFeed, a second-round candidate in the NIST lightweight competition. Our analysis actually shows that his forgery attack on mixFeed succeeds with probability 0.44 (with data complexity 220 GB), breaking the scheme in practice. The same observation also leads to a novel attack on ALE, another AES-based AEAD scheme. Our new representation also gives efficient ways to combine information from the first sub-keys with information from the last sub-keys, in order to reconstruct the corresponding master keys. In particular, we improve previous impossible-differential attacks against AES-128.

  • The Provable Security of Ed25519: Theory and Practice
    by Jacqueline Brendel on October 14, 2020 at 9:27 am

    A standard requirement for a signature scheme is that it is existentially unforgeable under chosen message attacks (EUF-CMA), alongside other properties of interest such as strong unforgeability (SUF-CMA) and resilience against key substitution attacks. Remarkably, no detailed proofs have ever been given for these security properties for EdDSA, and in particular its Ed25519 instantiations. Ed25519 is one of the most efficient and widely used signature schemes, and different instantiations of Ed25519 are used in protocols such as TLS 1.3, SSH, Tor, Zcash, and WhatsApp/Signal. The differences between these instantiations are subtle, and only supported by informal arguments, with many works assuming results can be directly transferred from Schnorr signatures. Similarly, several proofs of protocol security simply assume that Ed25519 satisfies properties such as EUF-CMA or SUF-CMA. In this work we provide the first detailed analysis and security proofs of Ed25519 signature schemes. While the design of the schemes follows the well-established Fiat-Shamir paradigm, which should guarantee existential unforgeability, there are many side cases and encoding details that complicate the proofs, and all other security properties needed to be proven independently. Our work provides scientific rationale for choosing among several Ed25519 variants and understanding their properties, fills a much-needed proof gap in modern protocol proofs that use these signatures, and supports further standardisation efforts.

  • Re-Consolidating First-Order Masking Schemes - Nullifying Fresh Randomness
    by Aein Rezaei Shahmirzadi on October 14, 2020 at 4:50 am

    The application of masking, known as the most robust and reliable countermeasure against side-channel analysis attacks, to various cryptographic algorithms has attracted a lion's share of research. The difficulty originates from the fact that the overhead of applying such an algorithmic-level countermeasure might not be affordable: this includes the area and latency overheads as well as the amount of fresh randomness required to fulfill the security properties of the resulting design. There are already techniques applicable on hardware platforms that take glitches into account. Among them, classical threshold implementations force the designers to use at least three shares in the underlying masking. The other schemes, which can deal with two shares, often necessitate the use of fresh randomness. In this work, we present a technique that allows us to use two shares to realize first-order glitch-extended probing-secure masked realizations of several functions, including the Sboxes of the Midori, PRESENT, PRINCE, and AES ciphers, without any fresh randomness.

  • Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma
    by Stefan Dziembowski on October 14, 2020 at 3:52 am

    Blockchain is a disruptive technology introduced around a decade ago. It can be viewed as a method for recording timestamped transactions in a public database. Most blockchain protocols do not scale well, i.e., they cannot quickly process large amounts of transactions. A natural idea to deal with this problem is to use the blockchain only as a timestamping service, i.e., to hash several transactions $\mathit{tx}_1,\ldots,\mathit{tx}_m$ into one short string, and to put only this string on the blockchain, while at the same time posting the hashed transactions $\mathit{tx}_1,\ldots,\mathit{tx}_m$ to some public place on the Internet (``off-chain''). In this way the transactions $\mathit{tx}_i$ remain timestamped, but the amount of data put on the blockchain is greatly reduced. This idea was introduced in 2017 under the name \emph{Plasma} by Poon and Buterin. Shortly after this proposal, several variants of Plasma were proposed. They are typically built on top of the Ethereum blockchain, as they strongly rely on so-called \emph{smart contracts} (in order to resolve disputes between the users if some of them start cheating). Plasmas are an example of so-called \emph{off-chain protocols}. In this work we initiate the study of the inherent limitations of Plasma protocols. More concretely, we show that in every Plasma system the adversary can either (a) force the honest parties to communicate a lot with the blockchain, even though they did not intend to (this is traditionally called \emph{mass exit}), or (b) force an honest party that wants to leave the system to quickly communicate large amounts of data to the blockchain. What makes these attacks particularly hard to handle in real life is that they do not have so-called \emph{uniquely attributable faults}, i.e., the smart contract cannot determine which party is malicious, and hence cannot force it to pay the fees for the blockchain interaction. An important implication of our result is that the benefits of two of the most prominent Plasma types, called \emph{Plasma Cash} and \emph{Fungible Plasma}, cannot be achieved simultaneously. Besides the direct implications for real-life cryptocurrency research, we believe that this work may open up a new line of theoretical research, as, to the best of our knowledge, this is the first work that provides an impossibility result in the area of off-chain protocols.
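
    The timestamping step itself is standard: in practice the $m$ transactions are hashed into one short string with a Merkle tree, so that any single $\mathit{tx}_i$ can later be proven included against the on-chain root. A minimal sketch of just this step (our own illustration; Plasma's dispute-resolution machinery, which the paper's lower bounds concern, is not shown):

    ```python
    # Hash tx_1..tx_m into one 32-byte string for on-chain timestamping;
    # the transactions themselves live off-chain. Standard Merkle inclusion
    # paths (not shown) let anyone prove a single tx against the root.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(tx) for tx in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])      # pad odd levels
            level = [h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    txs = [f"tx {i}: alice -> bob, {i} wei".encode() for i in range(7)]
    print(merkle_root(txs).hex())            # the only data put on-chain
    ```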

  • CRYSTALS -- Kyber: a CCA-secure module-lattice-based KEM
    by Joppe Bos on October 14, 2020 at 3:51 am

    Rapid advances in quantum computing, together with the announcement by the National Institute of Standards and Technology (NIST) to define new standards for digital-signature, encryption, and key-establishment protocols, have created significant interest in post-quantum cryptographic schemes. This paper introduces Kyber (part of CRYSTALS -- Cryptographic Suite for Algebraic Lattices -- a package submitted to the NIST post-quantum standardization effort in November 2017), a portfolio of post-quantum cryptographic primitives built around a key-encapsulation mechanism (KEM), based on hardness assumptions over module lattices. Our KEM is most naturally seen as a successor to the NewHope KEM (Usenix 2016). In particular, the key and ciphertext sizes of our new construction are about half the size, the KEM offers CCA instead of only passive security, the security is based on a more general (and flexible) lattice problem, and our optimized implementation results in essentially the same running time as the aforementioned scheme. We first introduce a CPA-secure public-key encryption scheme, apply a variant of the Fujisaki--Okamoto transform to create a CCA-secure KEM, and eventually construct, in a black-box manner, CCA-secure encryption, key exchange, and authenticated-key-exchange schemes. The security of our primitives is based on the hardness of Module-LWE in the classical and quantum random oracle models, and our concrete parameters conservatively target more than $128$ bits of post-quantum security.
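
    The CPA-to-CCA step follows a familiar shape: derive the encryption coins and the shared key from the message through a hash, and re-encrypt during decapsulation to catch malformed ciphertexts. Below is a schematic of that control flow (our own sketch; the stand-in "PKE" has no security whatsoever and exists only to make the flow runnable, and Kyber's actual FO variant additionally hashes the public key and keys its implicit rejection on a secret value):

    ```python
    # FO-style transform: CPA PKE -> CCA KEM via derandomization plus a
    # re-encryption check, with implicit rejection on failure.
    import hashlib, secrets

    H = lambda *p: hashlib.sha256(b"|".join(p)).digest()

    def pke_enc(pk, m, coins):        # deterministic given coins (essential)
        return coins + bytes(a ^ b for a, b in zip(m, H(pk, coins)))

    def pke_dec(pk, ct):              # dummy: a real PKE needs sk here
        coins, body = ct[:32], ct[32:]
        return bytes(a ^ b for a, b in zip(body, H(pk, coins)))

    def encaps(pk):
        m = secrets.token_bytes(32)
        ct = pke_enc(pk, m, H(b"coins", pk, m))
        return ct, H(b"kdf", H(b"key", pk, m), ct)

    def decaps(pk, ct):
        m = pke_dec(pk, ct)
        if pke_enc(pk, m, H(b"coins", pk, m)) != ct:   # re-encryption check
            return H(b"reject", pk, ct)   # implicit rejection (real schemes
                                          # key this on a secret value z)
        return H(b"kdf", H(b"key", pk, m), ct)

    pk = secrets.token_bytes(32)
    ct, k1 = encaps(pk)
    assert decaps(pk, ct) == k1
    assert decaps(pk, ct[:-1] + bytes([ct[-1] ^ 1])) != k1   # tamper caught
    ```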

  • Analysing and Improving Shard Allocation Protocols for Sharded Blockchains
    by Runchao Han on October 13, 2020 at 9:00 pm

    Sharding is a promising approach to scale permissionless blockchains. In a sharded blockchain, participants are split into groups, called shards, and each shard only executes part of the workloads. Despite its wide adoption in permissioned systems, transferring such success to permissionless blockchains is still an open problem. In permissionless networks, participants may join and leave the system at any time, making load balancing challenging. In addition, there exist Byzantine participants, who may launch various attacks on the blockchain. To address these issues, participants should be securely and dynamically allocated into different shards uniformly. However, the protocol capturing such functionality – which we call shard allocation – is overlooked. In this paper, we study shard allocation protocols for permissionless blockchains. We formally define the shard allocation protocol and propose an evaluation framework. We then apply the framework to evaluate the shard allocation subprotocols of seven state-of-the-art sharded blockchains, and show that none of them is fully correct or achieves satisfactory performance. We attribute these deficiencies to their redundant security assumptions and their extreme choices between two performance metrics: self-balance and operability. We further prove a fundamental trade-off between these two metrics, and identify a fundamental property, non-memorylessness, that enables parametrisation of this trade-off. Based on these insights, we propose WORMHOLE, a correct and efficient shard allocation protocol with minimal security assumptions and parameterisable self-balance and operability.

  • Fixslicing AES-like Ciphers: New bitsliced AES speed records on ARM-Cortex M and RISC-V
    by Alexandre Adomnicai on October 13, 2020 at 7:43 pm

    The fixslicing implementation strategy was originally introduced as a new representation for the hardware-oriented GIFT block cipher to achieve very efficient software constant-time implementations. In this article, we show that the fundamental idea underlying the fixslicing technique is of interest not only for GIFT but can be applied to other ciphers as well. In particular, we study the benefits of fixslicing in the case of AES and show that it reduces the number of operations required by the linear layer by 41% compared to the current fastest bitsliced implementation on 32-bit platforms. Overall, we report that fixsliced AES-128 reaches 83 and 98 cycles per byte on ARM Cortex-M and E31 RISC-V processors respectively (assuming pre-computed round keys), improving the previous records on those platforms by 17% and 20%. To highlight that our work also directly improves masked implementations that rely on bitslicing, we report implementation results integrating first-order masking that outperform the fastest results reported in the literature on the ARM Cortex-M4 by 12%. Finally, we demonstrate the genericity of the fixslicing technique for AES-like designs by applying it to the Skinny-128 tweakable block ciphers.

  • Privacy-preserving auditable token payments in a permissioned blockchain system
    by Elli Androulaki on October 13, 2020 at 2:02 pm

    Token management systems were the first application of blockchain technology and are still the most widely used one. Early implementations such as Bitcoin or Ethereum provide virtually no privacy beyond basic pseudonymity: all transactions are written in plain to the blockchain, which makes them perfectly linkable and traceable. Several more recent blockchain systems, such as Monero or Zerocash, implement improved levels of privacy. Most of these systems target the permissionless setting, just like Bitcoin. Many practical scenarios, in contrast, require token systems to be permissioned, binding the tokens to user identities instead of pseudonymous addresses, and also requiring auditing functionality in order to satisfy regulation such as AML/KYC. We present a privacy-preserving token management system that is designed for permissioned blockchain systems and supports fine-grained auditing. The scheme is secure under computational assumptions in bilinear groups, in the random-oracle model.

  • Implementation and Benchmarking of Round 2 Candidates in the NIST Post-Quantum Cryptography Standardization Process Using Hardware and Software/Hardware Co-design Approaches
    by Viet Ba Dang on October 13, 2020 at 7:19 am

    Performance in hardware has typically played a major role in differentiating among leading candidates in cryptographic standardization efforts. Winners of two past NIST cryptographic contests (Rijndael in the case of AES and Keccak in the case of SHA-3) were ranked consistently among the two fastest candidates when implemented using FPGAs and ASICs. Hardware implementations of cryptographic operations may quite easily outperform software implementations for at least a subset of major performance metrics, such as speed, power consumption, and energy usage, as well as in terms of security against physical attacks, including side-channel analysis. Using hardware also permits much higher flexibility in trading one subset of these properties for another. A large number of candidates at the early stages of the standardization process makes accurate and fair comparison very challenging. Nevertheless, in all major past cryptographic standardization efforts, future winners were identified quite early in the evaluation process and held their lead until the standard was selected. Additionally, identifying some candidates as either inherently slow or costly in hardware helped to eliminate a subset of candidates, saving countless hours of cryptanalysis. Finally, early implementations provided a baseline for future design space explorations, paving the way to more comprehensive and fairer benchmarking at the later stages of a given cryptographic competition. In this paper, we first summarize, compare, and analyze results reported by other groups until mid-May 2020, i.e., until the end of Round 2 of the NIST PQC process. We then outline our own methodology for implementing and benchmarking PQC candidates using both hardware and software/hardware co-design approaches. We apply our hardware approach to 6 lattice-based CCA-secure Key Encapsulation Mechanisms (KEMs), representing 4 NIST PQC submissions. We then apply a software/hardware co-design approach to 12 lattice-based CCA-secure KEMs, representing 8 Round 2 submissions. We hope that, combined with results reported by other groups, our study will provide NIST with helpful information regarding the relative performance of a significant subset of Round 2 PQC candidates, assuming that at least their major operations, and possibly the entire algorithms, are off-loaded to hardware.

  • Improving Non-Profiled Side-Channel Attacks using Autoencoder based Preprocessing
    by Donggeun Kwon on October 12, 2020 at 4:41 pm

    In recent years, deep learning-based side-channel attacks have established themselves as mainstream. However, most deep learning techniques for cryptanalysis have focused on classifying side-channel information in a profiled scenario, where attackers can obtain labels for the training data. In this paper, we introduce a novel deep learning approach for improving side-channel attacks, especially in the non-profiled scenario. We also propose a new training principle that trains an autoencoder on the noise of real data using noise-reduced labels, notably diminishing the noise in measurements by adapting the autoencoder framework to signal preprocessing. We present convincing comparisons on our custom dataset, captured from a ChipWhisperer-Lite board, demonstrating that our approach outperforms conventional preprocessing methods such as principal component analysis and linear discriminant analysis. Furthermore, we apply the proposed methodology to realign de-synchronized traces protected by hiding countermeasures, and we experimentally validate its performance. Finally, we show experimentally that the proposed technique, combined with domain knowledge about masking countermeasures, improves the performance of higher-order side-channel attacks.
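
    One plausible way to realize the training principle above (purely an illustrative sketch, not the paper's exact setup): pair each noisy trace with an averaged, noise-reduced version of the traces that share the same input, and train an autoencoder to map the former to the latter. The trace length, network shape, and grouping below are assumptions.

      import torch
      import torch.nn as nn

      TRACE_LEN = 500  # hypothetical number of samples per power trace

      # Encoder-decoder MLP; the bottleneck forces the network to keep
      # signal and discard noise.
      model = nn.Sequential(
          nn.Linear(TRACE_LEN, 128), nn.ReLU(),
          nn.Linear(128, 32), nn.ReLU(),
          nn.Linear(32, 128), nn.ReLU(),
          nn.Linear(128, TRACE_LEN),
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      def noise_reduced_labels(traces, groups):
          """Average traces sharing the same (known) input so that each
          noisy trace gets a lower-noise target."""
          labels = torch.empty_like(traces)
          for g in set(groups):
              idx = [i for i, gg in enumerate(groups) if gg == g]
              labels[idx] = traces[idx].mean(dim=0)
          return labels

      # Toy data: 64 traces in 8 groups of 8 repeated measurements.
      traces = torch.randn(64, TRACE_LEN)
      groups = [i // 8 for i in range(64)]
      labels = noise_reduced_labels(traces, groups)

      for _ in range(100):  # train: noisy trace in, averaged trace out
          opt.zero_grad()
          loss = loss_fn(model(traces), labels)
          loss.backward()
          opt.step()

      denoised = model(traces).detach()  # preprocessed traces for the attack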

  • Lower Bounds for Encrypted Multi-Maps and Searchable Encryption in the Leakage Cell Probe Model
    by Sarvar Patel on October 12, 2020 at 11:26 am

    Encrypted multi-maps (EMMs) enable clients to outsource the storage of a multi-map to a potentially untrusted server while maintaining the ability to perform operations in a privacy-preserving manner. EMMs are an important primitive as they are an integral building block for many practical applications such as searchable encryption and encrypted databases. In this work, we formally examine the tradeoffs between privacy and efficiency for EMMs. Currently, all known dynamic EMMs with constant overhead reveal whether two operations are performed on the same key or not, which we denote as the $\mathit{global\ key\text{-}equality\ pattern}$. In our main result, we present strong evidence that the leakage of the global key-equality pattern is inherent for any dynamic EMM construction with $O(1)$ efficiency. In particular, we consider the slightly smaller leakage of the $\mathit{decoupled\ key\text{-}equality\ pattern}$, where leakage of key-equality between update and query operations is decoupled and the adversary only learns whether two operations of the $\mathit{same\ type}$ are performed on the same key or not. We show that any EMM with at most decoupled key-equality pattern leakage incurs $\Omega(\log n)$ overhead in the $\mathit{leakage\ cell\ probe\ model}$. This is tight, as there exist ORAM-based constructions of EMMs with logarithmic slowdown that leak no more than the decoupled key-equality pattern (and actually, much less). Furthermore, we present stronger lower bounds: encrypted multi-maps that leak at most the decoupled key-equality pattern but are able to perform one of either the update or query operations in plaintext still require $\Omega(\log n)$ overhead. Finally, we extend our lower bounds to show that dynamic, $\mathit{response\text{-}hiding}$ searchable encryption schemes must also incur $\Omega(\log n)$ overhead, even when one of either the document updates or searches may be performed in plaintext.

  • Practical and Secure Circular Range Search on Private Spatial Data
    by Zhihao Zheng on October 12, 2020 at 12:54 am

    With location-based services booming, the volume of spatial data inevitably explodes. To reduce local storage and computational overhead, users tend to outsource data and initiate queries to the cloud. However, sensitive data or queries may be compromised if the cloud server has access to raw data and plaintext tokens. To cope with this problem, searchable encryption for geometric ranges is applied. Geometric range search has wide applications in many scenarios, especially circular range search. In this paper, we propose a practical and secure circular range search scheme (PSCS) to support searching for spatial data within a circular range. With our scheme, a semi-honest cloud server will return the data for a given circular range correctly, without revealing index privacy or query privacy. We propose a polynomial split algorithm which can decompose the inner-product calculation neatly. Then, we formally define the security of our scheme and theoretically prove that it is secure under same-closeness-pattern chosen-plaintext attacks (CLS-CPA). In addition, we demonstrate its efficiency and accuracy through analysis and experiments, compared with existing schemes.

  • Elliptic Curves in Generalized Huff's Model
    by Ronal Pranil Chand on October 11, 2020 at 8:45 pm

    This paper introduces a new form of elliptic curves in the generalized Huff's model. These curves, endowed with the addition law, are shown to form a group over a finite field. We present formulae for point addition and point doubling on the curves, and evaluate their computational cost using projective, Jacobian, and Lopez-Dahab coordinate systems, as well as an embedding of the curves into $\mathbb{P}^{1}\times\mathbb{P}^{1}$. We also prove that the curves are birationally equivalent to the Weierstrass form. We observe that the computational cost of point addition and point doubling is lowest when the curves are embedded into $\mathbb{P}^{1}\times\mathbb{P}^{1}$, rather than the other mentioned coordinate systems, and is nearly optimal compared with other known Huff models.

  • Two-Round Oblivious Linear Evaluation from Learning with Errors
    by Pedro Branco on October 11, 2020 at 1:45 pm

    Oblivious Linear Evaluation (OLE) is a simple yet powerful cryptographic primitive which allows a sender, holding an affine function $f(x)=a+bx$ over a finite field, to let a receiver learn $f(w)$ for a $w$ of the receiver's choice. In terms of security, the sender remains oblivious of the receiver's input $w$, whereas the receiver learns nothing beyond $f(w)$ about $f$. In recent years, OLE has emerged as an essential building block to construct efficient, reusable and maliciously-secure two-party computation. In this work, we present efficient two-round protocols for OLE based on the Learning with Errors (LWE) assumption. Our first protocol for OLE is secure against malicious unbounded receivers and semi-honest senders. The receiver's first message may carry information about a batch of inputs, and not just a single input. We then show how we can extend the above protocol to provide malicious security for both parties.
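
    To make the functionality concrete, the following is a minimal sketch of the ideal OLE behavior over a prime field; the two-round LWE-based protocols in the paper realize this without the trusted party, and the field choice here is purely illustrative.

      import secrets

      P = 2**61 - 1  # a Mersenne prime standing in for the protocol's field

      def ideal_ole(a, b, w):
          """Trusted-party OLE: the receiver obtains f(w) = a + b*w mod P.
          The sender learns nothing about w; the receiver learns only one
          linear relation on (a, b), not the pair itself."""
          return (a + b * w) % P

      a, b = secrets.randbelow(P), secrets.randbelow(P)  # sender's function
      w = secrets.randbelow(P)                           # receiver's input
      assert ideal_ole(a, b, w) == (a + b * w) % P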

  • On the Attack Evaluation and the Generalization Ability in Profiling Side-channel Analysis
    by Lichao Wu on October 11, 2020 at 10:17 am

    Guessing entropy is a common metric in side-channel analysis, representing the average rank of the correct key among all possible key guesses. By evaluating it, we estimate the effort needed to break the implementation. As such, the guessing entropy behavior should be stable to avoid misleading conclusions about the attack performance. In this work, we investigate this problem of misleading conclusions from the guessing entropy behavior, and we define two new notions: simple and generalized guessing entropy. We demonstrate that the first one needs only a limited number of attack traces but can lead to wrong interpretations about the attack performance. The second notion requires a large (sometimes unavailable) number of attack traces, but it represents the optimal way of calculating guessing entropy. We propose a new metric (denoted the profiling model fitting metric) to estimate how reliable the guessing entropy estimation is. With it, we also obtain additional information about the generalization ability of the profiling model. We confirm our observations with extensive experimental analysis.
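
    As a reference point, guessing entropy can be estimated from repeated attack experiments as the average rank of the correct key. A minimal sketch (the array shapes and the higher-score-is-better convention are assumptions):

      import numpy as np

      def key_rank(scores, correct_key):
          """Rank of the correct key (0 = best) in one experiment, given
          per-candidate scores, e.g. log-likelihoods of 256 key bytes."""
          order = np.argsort(scores)[::-1]  # best candidate first
          return int(np.where(order == correct_key)[0][0])

      def guessing_entropy(all_scores, correct_key):
          """Average the correct key's rank over experiments (rows)."""
          return float(np.mean([key_rank(s, correct_key) for s in all_scores]))

      # Toy demo: 100 experiments, 256 candidates, key 0x3C slightly favored.
      rng = np.random.default_rng(0)
      scores = rng.normal(size=(100, 256))
      scores[:, 0x3C] += 2.0
      print(guessing_entropy(scores, 0x3C))  # small value => easy recovery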

  • Hardness vs. (Very Little) Structure in Cryptography: A Multi-Prover Interactive Proofs Perspective
    by Gil Segev on October 11, 2020 at 8:17 am

    The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17). Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has $\mathsf{NP}$-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes $\mathsf{NP} \cap \mathsf{coNP}$ or $\mathsf{SZK}$, respectively, from a black-box perspective. In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class $\mathsf{MIP} \cap \mathsf{coMIP}$ from a black-box perspective. Conceptually, given that $\mathsf{MIP} = \mathsf{NEXP}$, our result rules out languages with very little structure. Additionally, we prove a similar result for collision-resistant hash functions, and more generally for any cryptographic primitive that exists relative to a random oracle. Already the cases of languages that have $\mathsf{IP}$ or $\mathsf{AM}$ proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of $\mathsf{IP}$, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of $\mathsf{AM}$, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have $\mathsf{NP}$-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.

  • SAVER: SNARK-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization
    by Jiwon Lee on October 10, 2020 at 9:43 am

    In pairing-based zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), there is often a requirement for the proof system to be combined with encryption. As a typical example, a blockchain-based voting system requires the vote to be confidential (using encryption), while verifying voting validity (using zk-SNARKs). In these combined applications, a typical solution is to extend the zk-SNARK circuit to include the encryption code. However, the complex cryptographic operations in the encryption algorithm increase the circuit size, which leads to impractically large proving times and CRS sizes. In this paper, we propose SNARK-friendly, Additively-homomorphic, and Verifiable Encryption and decryption with Rerandomization, or SAVER, a novel approach that detaches the encryption from the SNARK circuit. The encryption in SAVER has many useful properties. It is SNARK-friendly: the encryption is conjoined with an existing pairing-based SNARK, in a way that the encryptor can prove pre-defined properties while encrypting the message apart from the SNARK. It is additively-homomorphic: the ciphertext inherits a homomorphic property from the ElGamal-based encryption. It is a verifiable encryption: one can verify arbitrary properties of encrypted messages by connecting with the SNARK system. It provides verifiable decryption: anyone without the secret key can still verify that the decrypted message is indeed from the given ciphertext. It provides rerandomization: the proof and the ciphertext can be rerandomized as independent objects, so that even the encryptor (or prover) herself cannot identify the origin. As a representative application, we also propose Vote-SAVER, a novel voting system based on SAVER in which the voter's secret key lies only with the voter. Vote-SAVER satisfies receipt-freeness (which implies ballot privacy), individual verifiability (which implies non-repudiation), vote verifiability, tally uniqueness, and voter anonymity. The experimental results show that SAVER, with respect to the Vote-SAVER relation, yields 0.7s for zk-SNARK proving time and 10ms for encryption, with a CRS size of 16MB.

  • A Complete Analysis of the BKZ Lattice Reduction Algorithm
    by Jianwei Li on October 10, 2020 at 4:28 am

    We present the first rigorous dynamic analysis of BKZ, the most widely used lattice reduction algorithm besides LLL: previous analyses were either heuristic or only applied to variants of BKZ. Namely, we provide guarantees on the quality of the current lattice basis during execution. Our analysis extends to a generic BKZ algorithm where the SVP-oracle is replaced by an approximate oracle and/or the basis update is not necessarily performed by LLL. Interestingly, it also provides quantitative improvements, such as better and simpler bounds for both the output quality and the running time. As an application, we observe that in certain approximation regimes, it is more efficient to use BKZ with an approximate rather than exact SVP-oracle.

  • Factoring and Pairings are not Necessary for iO: Circular-Secure LWE Suffices
    by Zvika Brakerski on October 10, 2020 at 12:25 am

    We construct indistinguishability obfuscation (iO) solely under circular-security properties of encryption schemes based on the Learning with Errors (LWE) problem. Circular-security assumptions were used before to construct (non-leveled) fully-homomorphic encryption (FHE), but our assumption is stronger and requires circular randomness-leakage-resilience. In contrast with prior works, this assumption can be conjectured to be post-quantum secure, yielding the first provably secure iO construction that is (plausibly) post-quantum secure. Our work follows the high-level outline of the recent work of Gay and Pass [ePrint 2020], who showed a way to remove the heuristic step from the homomorphic-encryption based iO approach of Brakerski, D\"ottling, Garg, and Malavolta [EUROCRYPT 2020]. They thus obtain a construction proved secure under circular security assumptions of natural homomorphic encryption schemes --- specifically, they use homomorphic encryption schemes based on LWE and DCR, respectively. In this work we show how to remove the DCR assumption and remain with a scheme based on the circular security of LWE alone. Along the way we relax some of the requirements in the Gay-Pass blueprint and thus obtain a scheme that is secure under a relaxed assumption. Specifically, we do not require security in the presence of a key-cycle, but rather only in the presence of a key-randomness cycle.

  • Mimblewimble Non-Interactive Transaction Scheme
    by Gary Yu on October 9, 2020 at 10:14 pm

    I describe a non-interactive transaction scheme for the Mimblewimble protocol that overcomes the usability issue of the Mimblewimble wallet. Using Diffie–Hellman, the sender and the receiver can share an ephemeral key; a public nonce R is added to the output for this purpose, removing the interactive cooperation procedure. An additional one-time public key P' is used to lock the output so that it is spendable only by the receiver, i.e., the owner of P'. The new data R and P' can be committed into the bulletproof to prevent modification by miners. Further, to preserve Mimblewimble's privacy characteristics, a Stealth Address is used in this new transaction scheme. The total cost of these new features is 66 bytes of additional data (the public nonce R and the one-time public key P') in each output, and 64 bytes of additional signature data in each input. That is about a 12% increase in payload size for a typical one-input, two-output Mimblewimble transaction.
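
    A toy sketch of the Diffie–Hellman ephemeral-key idea described above, using a multiplicative group in place of the elliptic-curve group an actual implementation would use; the group, hash, and names are illustrative assumptions, and the derivation of the one-time key P' is omitted.

      import hashlib
      import secrets

      P = 2**127 - 1  # a Mersenne prime; a toy stand-in for the EC group
      g = 3

      def kdf(x: int) -> bytes:
          return hashlib.sha256(x.to_bytes(16, "big")).digest()

      sk = secrets.randbelow(P - 2) + 1  # receiver's long-term secret
      PK = pow(g, sk, P)                 # published via the stealth address

      r = secrets.randbelow(P - 2) + 1   # sender's ephemeral secret
      R = pow(g, r, P)                   # public nonce placed in the output

      k_sender = kdf(pow(PK, r, P))      # sender derives (g^sk)^r
      k_receiver = kdf(pow(R, sk, P))    # receiver derives (g^r)^sk
      assert k_sender == k_receiver      # shared key with no interaction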

  • Assessing Block Cipher Security using Linear and Nonlinear Machine Learning Models
    by Ting Rong Lee on October 9, 2020 at 7:54 pm

    In this paper, we investigate the use of machine learning classifiers to assess block cipher security from the perspective of differential cryptanalysis. The models are trained using general block cipher features, making them generalizable to an entire class of ciphers. The features include the number of rounds, permutation pattern, and truncated differences, whereas security labels are based on the number of differentially active substitution boxes. Prediction accuracy is further optimized by investigating different ways of representing the cipher features in the dataset. Machine learning experiments involving six classifiers (linear and nonlinear) were performed on a simplified generalized Feistel cipher as a proof of concept, achieving a prediction accuracy of up to 95%. When predicting the security of unseen cipher variants, a prediction accuracy of up to 77% was obtained. Our findings show that nonlinear classifiers outperform linear classifiers for this prediction task due to the nonlinear nature of block ciphers. In addition, the results indicate the feasibility of using the proposed approach to assess block cipher security, or as a machine learning distinguisher.

  • On the Family of Elliptic Curves $y^2=x^3+b/\mathbb{F}_p$
    by Han Wu on October 9, 2020 at 5:30 pm

    This paper is devoted to a more precise classification of the family of curves $E_b:y^2=x^3+b/\mathbb{F}_p$. For primes $p\equiv 1 \pmod 3$, an explicit formula for the number of $\mathbb{F}_p$-rational points on $E_b$ is given, based on the coefficients of a (primary) decomposition $p=(c+d\omega)\overline{(c+d\omega)}$ in the ring $\mathbb{Z}[\omega]$ of Eisenstein integers. More specifically, \[ \#E_b(\mathbb{F}_p)\in p+1-\big\{\pm(d-2c),\pm(c+d), \pm(c-2d)\big\}. \] The correspondence between these $6$ point counts and the $6$ isomorphism classes of the groups $E_b(\mathbb{F}_p)$ can be determined efficiently. Each of these numbers can also be interpreted as a norm of an Eisenstein integer. For primes $p\equiv 2 \pmod 3$, it is known that $E_b(\mathbb{F}_p)\cong \mathbb{Z}_{p+1}$. Two efficiently computable isomorphisms are described within the single isomorphism class of groups for the representatives $E_1(\mathbb{F}_p)$ and $E_{-3}(\mathbb{F}_p)$. The explicit formulas for $\#E_b(\mathbb{F}_p)$ with $p\equiv 1 \pmod 3$ are used to search for prime (or almost prime) order Koblitz curves over prime fields. An efficient procedure is described and analyzed; it is proved to run in deterministic polynomial time, assuming the Generalized Riemann Hypothesis. These formulas are also useful when using the GLV method to perform fast scalar multiplications on the curve $E_b$. Several tools useful for computing cubic residues are also developed in this paper.
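
    The displayed formula is easy to check by brute force for small primes; the following illustrative sketch counts points naively and verifies that every trace lands in the predicted set (the exact normalization of the decomposition does not matter here, since the set of six values is invariant under the choice of representative).

      def count_points(b, p):
          """Naive count of points on y^2 = x^3 + b over F_p, plus infinity."""
          sq = {}
          for y in range(p):
              sq[y * y % p] = sq.get(y * y % p, 0) + 1
          return 1 + sum(sq.get((x**3 + b) % p, 0) for x in range(p))

      def eisenstein_rep(p):
          """Some (c, d) with c^2 - c*d + d^2 = p, found by brute force."""
          for c in range(p):
              for d in range(p):
                  if c * c - c * d + d * d == p:
                      return c, d

      p = 31  # any small prime with p = 1 mod 3 works here
      c, d = eisenstein_rep(p)
      allowed = {s * t for t in (d - 2 * c, c + d, c - 2 * d) for s in (1, -1)}
      for b in range(1, p):
          trace = p + 1 - count_points(b, p)
          assert trace in allowed
      print(f"p={p}: all traces lie in {sorted(allowed)}")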

  • On new V\'elu's formulae and their applications to CSIDH and B-SIDH constant-time implementations
    by Gora Adj on October 9, 2020 at 12:28 pm

    At a combined computational expense of about $6{\ell}$ field operations, V\'elu's formulae are used to construct and evaluate degree-$\ell$ isogenies in the vast majority of isogeny-based primitive implementations. Recently, Bernstein, de Feo, Leroux and Smith introduced a new approach for solving this same problem at a reduced cost of just $\tilde{O}(\sqrt{\ell})$ field operations. In this work, we present a concrete computational analysis of these novel formulae, along with several algorithmic tricks that helped us to significantly reduce their practical cost. Furthermore, we report a Python-3 implementation of several instantiations of CSIDH and B-SIDH using a combination of the novel formulae and an adaptation of the optimal strategies commonly used in the SIDH/SIKE protocols. Compared to a traditional V\'elu constant-time implementation of CSIDH, our experiments report savings of 5.357\%, 13.68\% and 25.938\% in base field operations for CSIDH-512, CSIDH-1024, and CSIDH-1792, respectively. Additionally, we report the first implementation of the B-SIDH scheme in the open literature.

  • On the Power of an Honest Majority in Three-Party Computation Without Broadcast
    by Bar Alon on October 9, 2020 at 10:04 am

    Fully secure multiparty computation (MPC) allows a set of parties to compute some function of their inputs, while guaranteeing correctness, privacy, fairness, and output delivery. Understanding the necessary and sufficient assumptions that allow for fully secure MPC is an important goal. Cleve (STOC'86) showed that full security cannot be obtained in general without an honest majority. Conversely, by Rabin and Ben-Or (STOC'89), assuming a broadcast channel and an honest majority enables a fully secure computation of any function. Our goal is to characterize the set of functionalities that can be computed with full security, assuming an honest majority, but no broadcast. This question was fully answered by Cohen et al. (TCC'16) -- for the restricted class of symmetric functionalities (where all parties receive the same output). Instructively, their results crucially rely on agreement and do not carry over to general asymmetric functionalities. In this work, we focus on the case of three-party asymmetric functionalities, providing a variety of necessary and sufficient conditions to enable fully secure computation. An interesting use-case of our results is server-aided computation, where an untrusted server helps two parties to carry out their computation. We show that without a broadcast assumption, the resource of an external non-colluding server provides no additional power. Namely, a functionality can be computed with the help of the server if and only if it can be computed without it. For fair coin tossing, we further show that the optimal bias for three-party (server-aided) $r$-round protocol remains $\Theta(1/r)$ (as in the two-party setting).

  • Optimized and secure pairing-friendly elliptic curves suitable for one layer proof composition
    by Youssef El Housni on October 9, 2020 at 9:35 am

    A zero-knowledge proof is a method by which one can prove knowledge of general non-deterministic polynomial (NP) statements. SNARKs are in addition non-interactive, short and cheap to verify. This property makes them suitable for recursive proof composition, that is, proofs attesting to the validity of other proofs. To achieve this, one moves the arithmetic operations to the exponents. Recursive proof composition has been empirically demonstrated for pairing-based SNARKs via tailored constructions of expensive pairing-friendly elliptic curves, namely a pair of 753-bit MNT curves, such that one curve’s order is the other curve’s base field order and vice-versa. The ZEXE construction restricts itself to one layer of proof composition and uses a pair of curves, BLS12-377 and CP6-782, which significantly improves the arithmetic on the first curve. In this work we construct a new pairing-friendly elliptic curve to be used with BLS12-377, which is STNFS-secure and fully optimized for one-layer composition. We propose to name the new curve BW6-761. This work shows that it is at least five times faster to verify a composed SNARK proof on this curve compared to the previous state of the art, and proposes an optimized Rust implementation that is almost thirty times faster than the one available in the ZEXE library.

  • Random-index PIR with Applications to Large-Scale Secure MPC
    by Craig Gentry on October 9, 2020 at 8:43 am

    Private information retrieval (PIR) lets a client retrieve an entry from a database held by a server, without the server learning which entry was retrieved. Here we study a weaker variant that we call *random-index PIR* (RPIR). It differs from standard PIR in that the retrieved index is an output rather than an input of the protocol, and it is chosen at random. Our motivation for studying RPIR comes from a recent work of Benhamouda et al. (TCC'20) about maintaining secret values on public blockchains. Their solution involves choosing a small anonymous committee from among a large universe, and here we show that RPIR can be used for that purpose. The RPIR client must be implemented via secure MPC for this use case, stressing the need to make it as efficient as can be. Combined with recent techniques for secure-MPC with stateless parties, our results yield a new secrets-on-blockchain construction (and more generally large-scale MPC). Our solution tolerates any fraction $f \lneq 1/2$ of corrupted parties, solving an open problem left from the work of Benhamouda et al. Considering RPIR as a primitive, we show that it is in fact equivalent to PIR when there are no restrictions on the number of communication rounds. On the other hand, RPIR can be implemented in a ``noninteractive'' setting, which is clearly impossible for PIR. We also study batch RPIR, where multiple indexes are retrieved at once. Specifically we consider a weaker security guarantee than full RPIR, which is still good enough for our motivating application. We show that this weaker variant can be realized more efficiently than standard PIR or RPIR, and we discuss one protocol in particular that may be attractive for practical implementations.
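
    The interface difference between PIR and RPIR can be pinned down by the ideal functionalities each realizes; a minimal sketch (trusted-party form only, with illustrative names):

      import secrets

      def ideal_pir(db, i):
          """PIR: the client chooses i and learns db[i]; the server learns
          nothing about i."""
          return db[i]

      def ideal_rpir(db):
          """RPIR: nobody chooses i; it is sampled at random and the client
          learns (i, db[i]), e.g. a random committee member's key."""
          i = secrets.randbelow(len(db))
          return i, db[i]

      db = ["key_0", "key_1", "key_2", "key_3"]
      print(ideal_pir(db, 2))   # client-chosen retrieval
      print(ideal_rpir(db))     # randomly sampled retrieval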

  • SoK: Communication Across Distributed Ledgers
    by Alexei Zamyatin on October 9, 2020 at 8:08 am

    Since the inception of Bitcoin, a plethora of distributed ledgers differing in design and purpose has been created. While blockchains by design provide no means to securely communicate with external systems, numerous attempts at trustless cross-chain communication have been proposed over the years. Today, cross-chain communication (CCC) plays a fundamental role in cryptocurrency exchanges, scalability efforts via sharding, extension of existing systems through sidechains, and bootstrapping of new blockchains. Unfortunately, existing proposals are designed ad hoc for specific use cases, making it hard to gain confidence in their correctness and composability. We provide the first systematic exposition of cross-chain communication protocols. We formalize the underlying research problem and show that CCC is impossible without a trusted third party, contrary to common beliefs in the blockchain community. With this result in mind, we develop a framework to design new and evaluate existing CCC protocols, focusing on their inherent trust assumptions, and derive a classification covering the field of cross-chain communication to date. We conclude by discussing open challenges for CCC research and the implications of interoperability for the security and privacy of blockchains.

  • The Area-Latency Symbiosis: Towards Improved Serial Encryption Circuits
    by Fatih Balli on October 9, 2020 at 7:03 am

    The bit-sliding paper of Jean et al. (CHES 2017) showed that the smallest-size circuit for SPN based block ciphers such as AES, SKINNY and PRESENT can be achieved via bit-serial implementations. Their technique decreases the bit size of the datapath and naturally leads to a significant loss in latency (as well as the maximum throughput). Their designs complete a single round of the encryption in 168 (resp. 68) clock cycles for 128 (resp. 64) bit blocks. A follow-up work by Banik et al. (FSE 2020) introduced the swap-and-rotate technique that both eliminates this loss in latency and achieves even smaller footprints. In this paper, we extend these results on bit-serial implementations all the way to four authenticated encryption schemes from NIST LWC. Our first focus is to decrease latency and improve throughput with the use of the swap-and-rotate technique. Our block cipher implementations have the most efficient round operations in the sense that a round function of an $n$-bit block cipher is computed in exactly $n$ clock cycles. This leads to implementations that are similar in size to the state of the art, but have much lower latency (savings up to 20 percent). We then extend our technique to 4- and 8-bit implementations. Although these results are promising, block ciphers themselves are not end-user primitives, as they need to be used in conjunction with a mode of operation. Hence, in the second part of the paper, we use our serial block ciphers to bootstrap four active NIST authenticated encryption candidates: SUNDAE-GIFT, Romulus, SAEAES and SKINNY-AEAD. In the wake of this effort, we provide the smallest block-cipher-based authenticated encryption circuits known in the literature so far.

  • Energy Analysis of Lightweight AEAD Circuits
    by Andrea Caforio on October 9, 2020 at 6:57 am

    The selection criteria for NIST's Lightweight Crypto Standardization (LWC) process have been slowly shifting towards the lightweight efficiency of designs, given that a large number of candidates already establish their security claims on conservative, well-studied paradigms. The research community has accumulated a decent level of experience with authenticated encryption primitives, thanks mostly to the recently completed CAESAR competition. With the advent of the NIST LWC, the de facto focus is now on evaluating the efficiency of the designs with respect to hardware metrics like area, throughput, power and energy. In this paper, we focus on a less investigated metric under the umbrella term lightweight, i.e., energy consumption. Quantitatively speaking, energy is the total electrical work done by a voltage source and is thus a critical metric of lightweight efficiency. Among the thirty-two second-round candidates, we give a detailed evaluation of the ten that make use of only a lightweight or semi-lightweight block cipher at their core. We use this pool of candidates to investigate a list of generic implementation choices that have a considerable effect on both the size and the energy consumption of the mode-of-operation circuit that functions as an authenticated encryption primitive. Besides providing energy and circuit-size metrics for these candidates, our results provide useful insights for designers who wish to understand which particular choices incur significant energy consumption in AEAD schemes. In the second part of the paper we shift our focus to threshold implementations that offer protection against first-order power analysis attacks. There has been no study focusing on the energy efficiency of such protected implementations, and as such the optimizations involved in such circuits are not well established. We explore the simplest possible protected circuit, one in which only the state path of the underlying block cipher is shared, and examine how design choices like the number of shares, the implementation of the masked S-box, and the circuit structure of the AEAD scheme affect the energy consumption.

  • Constant Rate (Non-malleable) Secret Sharing Schemes Tolerating Joint Adaptive Leakage
    by Nishanth Chandran on October 9, 2020 at 5:39 am

    A Leakage Resilient Secret Sharing (LRSS) is a secure secret sharing scheme, even when the adversary obtains some (bounded) leakage on honest shares. Ideally, such schemes must be secure against adaptive and joint leakage queries - i.e., the adversary can make a sequence of adaptive leakage queries where each query can be a joint function of many of the shares. The most important parameters of interest are the rate (= $\frac{|\text{secret}|}{|\text{longest share}|}$) and the leakage rate (ratio of the total allowable leakage from a single leakage query to the size of a share). None of the prior works tolerating such adaptive and joint leakage could attain a constant rate and constant leakage rate, even for the threshold access structure. An LRSS is non-malleable (LRNMSS) when an adversary cannot tamper with shares in a way that the reconstructed secret is related to the original secret. Similar to LRSSs, none of the prior LRNMSS schemes in the information theoretic setting could attain a constant rate, even for the threshold access structure. In this work, we provide the first constant rate LRSS (for the general access structure) and LRNMSS (for the threshold access structure) schemes that tolerate such joint and adaptive leakage in the information-theoretic setting. We show how to make use of our constructions to also provide constant rate constructions of leakage-resilient (and non-malleable) secure message transmission. We obtain our results by introducing a novel object called Adaptive Extractors. Adaptive extractors can be seen as a generalization of the notion of exposure-resilient extractors (Zimand, CCC 2006). Such extractors provide security guarantees even when an adversary obtains leakage on the source of the extractor after observing the extractor output. We make a compelling case for the study of such extractors by demonstrating their critical use for obtaining adaptive leakage and believe that such an object will be of independent interest.

  • Bit Security Estimation Using Various Information-Theoretic Measures
    by Dong-Hoon Lee on October 9, 2020 at 5:39 am

    In this paper, we propose various quantitative information-theoretic security reductions that correlate the statistical difference between two probability distributions with the gap in security level between two cryptographic schemes. Security is the most important prerequisite for cryptographic primitives. In general, there are two kinds of security: computational security and information-theoretic security. We focus on the latter in this paper, especially from the viewpoint of bit security, which is a convenient notion for indicating the quantitative security level. We propose tighter and more generalized versions of the information-theoretic security reductions of the previous works [1,2]. More specifically, we obtain a security reduction that is about 2.5 bits tighter than that of the previous work [2], and we devise a further generalized version of the security reduction in the previous work [1] by relaxing the constraint on the upper bound of the information-theoretic measure, that is, λ-efficiency. Through this work, we propose a methodology to estimate the effect on the security level when a κ-bit secure original scheme is implemented on a p-bit precision system. (Here, p can be set to any value as long as a certain condition is satisfied.) In the previous work [1], p was fixed at κ/2, but our result is generalized so that the security level κ and the precision p can vary independently. Moreover, we provide diverse types of security reduction formulas for five kinds of information-theoretic measures. We expect that our results could provide an information-theoretic guideline for how much two otherwise identical cryptographic schemes (i.e., differing only in probability distribution) may differ in security level when extracting their randomness from two different probability distributions. In particular, our results can be used to obtain a quantitative estimate of how much the statistical difference between the ideal distribution and the real distribution affects the security level.

  • A New Code Based Signature Scheme without Trapdoors
    by Zhe Li on October 9, 2020 at 5:38 am

    We present a signature scheme for Hamming-metric random linear codes via the Schnorr-Lyubashevsky framework, employing rejection sampling on appropriate probability distributions instead of using trapdoors. Such an approach has been widely believed to be more challenging for linear codes as compared to lattices with Gaussian distributions. We prove that our signature scheme achieves EUF-CMA security under the assumption of the decoding-one-out-of-many problem, or strong EUF-CMA security under the assumption of the codeword finding problem under relaxed parameters. We provide an instantiation of the signature scheme based on Ring-LPN instances as well as quasi-cyclic codes and present some concrete parameters. In addition, a proof-of-concept implementation of the scheme is provided. We compare our scheme with previous unsuccessful similar attempts and provide a rigorous security analysis of our scheme. Our construction primarily relies on an efficient rejection sampling lemma for binary linear codes with respect to suitably defined variants of the binomial distribution. Essentially, the rejection sampling lemma indicates that adding a small-weight vector to a large-weight vector has no significant effect on the distribution of the large-weight vector. Concretely, we prove that if the large weight is at least the square of the small weight and the large-weight vector follows a binomial distribution, the distribution of the sum of the two vectors can be efficiently adjusted, via the rejection step, to a binomial distribution independent of the small-weight vector. As a result, our scheme outputs a signature distribution that is independent of the secret key. Compared to two existing code-based signature schemes, namely Durandal and Wave, the security of our scheme is reduced to full-fledged hard coding problems, i.e., the codeword finding problem and the syndrome decoding problem for random linear codes. By contrast, the security of the Durandal and Wave schemes is reduced to the newly introduced product spaces subspaces indistinguishability problem and the indistinguishability of generalized $(U,U+V)$ codes problem, respectively. We believe that building our scheme upon the more mature hard coding problems provides stronger confidence in the security of our signature scheme.

  • Adversarial Level Agreements for Two-Party Protocols
    by Marilyn George on October 9, 2020 at 5:38 am

    Adversaries in cryptography have traditionally been modeled as either semi-honest or malicious. Over the years, however, several lines of work have investigated the design of cryptographic protocols against rational adversaries. The most well-known example are covert adversaries in secure computation (Aumann & Lindell, TCC '07) which are adversaries that wish to deviate from the protocol but without being detected. To protect against such adversaries, protocols secure in the covert model guarantee that deviations are detected with probability at least $\varepsilon$ which is known as the deterrence factor. In this work, we initiate the study of contracts in cryptographic protocol design. We show how to design, use and analyze contracts between parties for the purpose of incentivizing honest behavior from rational adversaries. We refer to such contracts as adversarial level agreements (ALA). The framework we propose can result in more efficient protocols and can enforce deterrence in covert protocols; meaning that one can guarantee that a given deterrence factor will deter the adversary instead of assuming it. We show how to apply our framework to two-party protocols, including secure two-party computation (2PC) and proofs of storage (PoS). In the 2PC case, we integrate ALAs to publicly-verifiable covert protocols and show, through a game-theoretic analysis, how to set the parameters of the ALA to guarantee honest behavior. We do the same in the setting of PoS which are two-party protocols that allow a client to efficiently verify the integrity of a file stored in the cloud.

  • Doubly Efficient Interactive Proofs for General Arithmetic Circuits with Linear Prover Time
    by Jiaheng Zhang on October 9, 2020 at 5:37 am

    We propose a new doubly efficient interactive proof protocol for general arithmetic circuits. The protocol generalizes the doubly efficient interactive proof for layered circuits proposed by Goldwasser, Kalai and Rothblum to arbitrary circuits, while preserving optimal prover complexity that is strictly linear in the size of the circuit. The proof size remains succinct for low-depth circuits and the verifier time is sublinear for structured circuits. We then construct a new zero-knowledge argument scheme for general arithmetic circuits using our new interactive proof protocol together with polynomial commitments. Not only does our new protocol achieve optimal prover complexity asymptotically, but it is also efficient in practice. Our experiments show that it takes only 1 second to generate the proof for a circuit with 600,000 gates, which is 7 times faster than the original interactive proof protocol on the corresponding layered circuit. The proof size is 229 kilobytes and the verifier time is 0.56 seconds. Our implementation can directly take general arithmetic circuits generated by existing tools, without transforming them into layered circuits with high overhead on circuit size.
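
    At the heart of GKR-style doubly efficient interactive proofs is the sumcheck protocol. The following toy sketch (not the paper's generalized protocol) shows the mechanics for a small multilinear polynomial over a toy prime field; the polynomial, field, and variable count are illustrative assumptions.

      import random

      P = 2**31 - 1  # toy prime field
      N = 3          # number of variables

      def f(x):
          """Example multilinear polynomial in 3 variables over F_P."""
          a, b, c = x
          return (3 * a * b + 5 * b * c + 7 * a + 11) % P

      def suffixes(k):
          """All {0,1} assignments for the last k variables."""
          if k == 0:
              return [[]]
          return [[b] + rest for b in (0, 1) for rest in suffixes(k - 1)]

      def round_poly(prefix, i):
          """Prover: g_i(t) = sum over Boolean suffixes of f(prefix, t, .),
          returned as its evaluations at t = 0, 1 (degree <= 1 per variable)."""
          return [sum(f(prefix + [t] + s) for s in suffixes(N - i - 1)) % P
                  for t in (0, 1)]

      claim = sum(f(x) for x in suffixes(N)) % P  # honest total sum
      prefix, running = [], claim
      for i in range(N):
          g0, g1 = round_poly(prefix, i)
          assert (g0 + g1) % P == running     # verifier's consistency check
          r = random.randrange(P)             # verifier's challenge
          running = (g0 + r * (g1 - g0)) % P  # g_i(r) by linear interpolation
          prefix.append(r)
      assert f(prefix) % P == running         # single oracle query to f
      print("sumcheck accepted; total =", claim)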

  • Two-round trip Schnorr multi-signatures via delinearized witnesses
    by Handan Kilinc Alper on October 9, 2020 at 5:36 am

    We introduce a new m-entwined ROS problem that tweaks the random inhomogeneities in an overdetermined solvable system of linear equations (ROS) problem in a scalar field using an associated group. We prove hardness of the 2-entwined ROS-like problem in the AGM plus ROM, assuming DLOG hardness in the associated group. Assuming the AGM plus ROM plus KOSK and OMDL, we then prove security of DWMS, a two-round-trip Schnorr multi-signature protocol that creates its witness, aka nonce, by delinearizing two pre-witnesses supplied by each signer. At present, DWMS and MuSig-DN are the only known provably secure two-round Schnorr multi-signatures, or equivalently threshold Schnorr signatures.

  • A New Variant of Unbalanced Oil and Vinegar Using Quotient Ring: QR-UOV
    by Hiroki Furue on October 9, 2020 at 5:35 am

    The unbalanced oil and vinegar signature scheme (UOV) is a multivariate signature scheme that has remained essentially unbroken for over 20 years. However, it requires a large public key, so various methods have been proposed to reduce its size. In this paper, we propose a new variant of UOV whose public key is represented by block matrices whose components correspond to elements of a quotient ring. We discuss how the irreducibility of the polynomial used to generate the quotient ring affects the security of our proposed scheme. Furthermore, we propose parameters for our scheme and discuss their security against currently known and possible attacks. We show that our proposed scheme can reduce the public key size without significantly increasing the signature size compared with other UOV variants. For example, the public key size of our proposed scheme is 66.7 KB at security level 3 of NIST's Post-Quantum Cryptography Project, while that of cyclic Rainbow is 252.3 KB, where Rainbow is a variant of UOV and one of the third-round finalists of the NIST PQC project.
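
    The key-size saving comes from the fact that multiplication by a fixed element of a quotient ring $R = \mathbb{F}_q[x]/(f)$ is a linear map, so an $l \times l$ block of the public matrix is determined by just $l$ field elements. A small illustrative sketch (the field, modulus, and element are arbitrary choices, not proposed parameters):

      Q = 7                # toy field F_7
      F = [1, 1, 0, 1]     # f(x) = 1 + x + x^3, monic of degree l = 3

      def polymulmod(a, b, f, q):
          """Multiply a*b in F_q[x] and reduce modulo the monic f."""
          l = len(f) - 1
          prod = [0] * (len(a) + len(b) - 1)
          for i, ai in enumerate(a):
              for j, bj in enumerate(b):
                  prod[i + j] = (prod[i + j] + ai * bj) % q
          for k in range(len(prod) - 1, l - 1, -1):  # eliminate x^k, k >= l
              c = prod[k]
              if c:
                  for j in range(l + 1):
                      prod[k - l + j] = (prod[k - l + j] - c * f[j]) % q
          return prod[:l]

      def mult_matrix(a, f, q):
          """The l x l matrix of 'multiply by a(x)' in F_q[x]/(f); its
          j-th column holds the coefficients of a * x^j mod f."""
          l = len(f) - 1
          xj = [1] + [0] * (l - 1)  # x^0
          cols = []
          for _ in range(l):
              cols.append(polymulmod(a, xj, f, q))
              xj = polymulmod(xj, [0, 1], f, q)  # multiply by x
          return [[cols[j][i] for j in range(l)] for i in range(l)]

      a = [2, 0, 5]  # a(x) = 2 + 5x^2: three stored coefficients ...
      M = mult_matrix(a, F, Q)
      print(M)       # ... determine all nine entries of this block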

  • Improved (Related-key) Differential Cryptanalysis on GIFT
    by Fulei Ji on October 9, 2020 at 5:35 am

    In this paper, we reevaluate the security of GIFT against differential cryptanalysis in both the single-key and the related-key scenarios. Firstly, we apply Matsui's algorithm to search for related-key differential trails of GIFT. We add three constraints to limit the search space and search for the optimal related-key differential trails within the limited space. We obtain related-key differential trails of GIFT-64/128 for up to 15/14 rounds, which are the best results on related-key differential trails of GIFT so far. Secondly, we propose an automatic algorithm to increase the probability of the related-key boomerang distinguisher of GIFT by searching for clusters of the related-key differential trails utilized in the boomerang distinguisher. We find a 20-round related-key boomerang distinguisher of GIFT-64 with probability 2^-58.557, and construct a 25-round related-key rectangle attack on GIFT-64 based on it. This is the longest attack on GIFT-64. We also find a 19-round related-key boomerang distinguisher of GIFT-128 with probability 2^-109.626, and propose a 23-round related-key rectangle attack on GIFT-128 utilizing the 19-round distinguisher, which is the longest related-key attack on GIFT-128. A 24-round related-key rectangle attack on GIFT-64 and a 22-round related-key boomerang attack on GIFT-128 are also presented. Thirdly, we search for clusters of the single-key differential trails. We increase the probability of a 20-round single-key differential distinguisher of GIFT-128 from 2^-121.415 to 2^-120.245. The time complexity of the 26-round differential attack on GIFT-128 is accordingly improved from 2^124.415 to 2^123.245.

  • DAPA: Differential Analysis aided Power Attack on (Non-)Linear Feedback Shift Registers (Extended version)
    by Siang Meng Sim on October 9, 2020 at 5:34 am

    Differential power analysis (DPA) is a form of side-channel analysis (SCA) that performs statistical analysis on the power traces of cryptographic computations. DPA is applicable to many cryptographic primitives, including block ciphers, stream ciphers and even hash-based message authentication codes (HMAC). At COSADE 2017, Dobraunig et al. presented a DPA on the fresh re-keying scheme Keymill that extracts the bit relations of neighbouring bits in its shift registers, reducing the internal state guessing space from 128 to 4 bits. In this work, we generalise their methodology and combine it with differential analysis; we call the result differential analysis aided power attack (DAPA). DAPA uncovers more bit relations and takes into account the linear or non-linear functions that feed back into the shift registers (i.e., LFSRs or NLFSRs). Next, we apply our DAPA to LR-Keymill, the improved version of Keymill designed to resist the aforementioned DPA, and break its 67.9-bit security claim with a 4-bit internal state guess. We experimentally verified our analysis. In addition, we improve the previous DPA on Keymill by halving the amount of data resources needed for the attack. We also apply our DAPA to Trivium, a hardware-oriented stream cipher from the eSTREAM portfolio, and reduce the key guessing space from 80 to 14 bits.

  • SQISign: compact post-quantum signatures from quaternions and isogenies
    by Luca De Feo on October 9, 2020 at 5:34 am

    We introduce a new signature scheme, SQISign (for Short Quaternion and Isogeny Signature), from isogeny graphs of supersingular elliptic curves. The signature scheme is derived from a new one-round, high-soundness, interactive identification protocol. Targeting the post-quantum NIST-1 level of security, our implementation results in signatures of $204$ bytes, secret keys of $16$ bytes and public keys of $64$ bytes. In particular, the signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. On a modern workstation, our implementation in C takes 0.6s for key generation, 2.5s for signing, and 50ms for verification. While the soundness of the identification protocol follows from classical assumptions, the zero-knowledge property relies on the second main contribution of this paper. We introduce a new algorithm to find an isogeny path connecting two given supersingular elliptic curves of known endomorphism rings. A previous algorithm to solve this problem, due to Kohel, Lauter, Petit and Tignol, systematically reveals paths from the input curves to a `special' curve. This leakage would break the zero-knowledge property of the protocol. Our algorithm does not directly reveal such a path, and subject to a new computational assumption, we prove that the resulting identification protocol is zero-knowledge.

  • Authenticated Dictionaries with Cross-Incremental Proof (Dis)aggregation
    by Alin Tomescu on October 9, 2020 at 5:33 am

    Authenticated dictionaries (ADs) are a key building block of many cryptographic systems, such as transparency logs, distributed file systems and cryptocurrencies. In this paper, we propose a new notion of cross-incremental proof (dis)aggregation for authenticated dictionaries, which enables aggregating multiple proofs with respect to different dictionaries into a single, succinct proof. Importantly, this aggregation can be done incrementally and can be later reversed via disaggregation. We give an efficient authenticated dictionary construction from hidden-order groups that achieves cross-incremental (dis)aggregation. Our construction also supports updating digests, updating (cross-)aggregated proofs and precomputing all proofs efficiently. This makes it ideal for stateless validation in cryptocurrencies with smart contracts. As an additional contribution, we give a second authenticated dictionary construction, which can be used in more malicious settings where dictionary digests are adversarially-generated, but features only “one-hop” proof aggregation (with respect to the same digest). We add support for append-only proofs to this construction, which gives us an append-only authenticated dictionary (AAD) that can be used for transparency logs and, unlike previous AAD constructions, supports updating and aggregating proofs.

  • Hardness of Module-LWE and Ring-LWE on General Entropic Distributions
    by Hao Lin on October 9, 2020 at 5:33 am

    The hardness of Entropic LWE has been studied in a number of works. However, no prior work studies the hardness of algebraically structured LWE with entropic secrets. In this work, we conduct a comprehensive study of hardness reductions for Entropic Module-LWE and Entropic Ring-LWE. We show an entropy bound that guarantees the security of arbitrary Entropic Module-LWE and Entropic Ring-LWE instances; these are the first results on the hardness of algebraically structured LWE with entropic secrets. One of our central techniques is a new generalized leftover hash lemma over rings and a new decomposition theorem for the continuous Gaussian distribution on $K_{\mathbb{R}}$, which might be of independent interest.

  • Round-Efficient Byzantine Broadcast under Strongly Adaptive and Majority Corruptions
    by Jun Wan on October 9, 2020 at 5:32 am

    The round complexity of Byzantine Broadcast (BB) has been a central question in distributed systems and cryptography. In the honest majority setting, expected constant-round protocols have been known for decades, even in the presence of a strongly adaptive adversary. In the corrupt majority setting, however, no protocol with sublinear round complexity is known, even when the adversary is allowed to {\it strongly adaptively} corrupt only 51\% of the players, and even under reasonable setup or cryptographic assumptions. Recall that a strongly adaptive adversary can examine what original message an honest player would have wanted to send in some round, adaptively corrupt the player in the same round and make it send a completely different message instead. In this paper, we are the first to construct a BB protocol with sublinear round complexity in the corrupt majority setting. Specifically, assuming the existence of time-lock puzzles with suitable hardness parameters and that the decisional linear assumption holds in suitable bilinear groups, we show how to achieve BB in $(\frac{n}{n-f})^2 \cdot \mathrm{poly}\log \lambda$ rounds with probability $1-\mathrm{negl}(\lambda)$, where $n$ denotes the total number of players, $f$ denotes the maximum number of corrupt players, and $\lambda$ is the security parameter. Our protocol completes in polylogarithmically many rounds even when 99\% of the players can be corrupt.

  • Impossibility on the Schnorr Signature from the One-more DL Assumption in the Non-programmable Random Oracle Model
    by Masayuki Fukumitsu on October 9, 2020 at 5:31 am

    In the random oracle model (ROM), the security of the Schnorr signature is provable from the DL assumption, whereas there is negative circumstantial evidence in the standard model. Fleischhacker, Jager, and Schr\"{o}der showed that the tight security of the Schnorr signature is unprovable from a strong cryptographic assumption, such as the One-More DL (OM-DL) assumption or the computational and decisional Diffie-Hellman assumptions, in the ROM via a generic reduction, as long as the underlying cryptographic assumption holds. However, it remained open whether the provable security of the Schnorr signature from a strong assumption can also be ruled out for non-tight and reasonable reductions. In this paper, we show that the security of the Schnorr signature is unprovable from the OM-DL assumption in the non-programmable ROM as long as the OM-DL assumption holds. Our impossibility result is proven via a non-tight Turing reduction.

  • BVOT: Self-Tallying Boardroom Voting with Oblivious Transfer
    by Farid Javani on October 9, 2020 at 5:27 am

    A boardroom election is an election with a small number of voters carried out with public communications. We present BVOT, a self-tallying boardroom voting protocol with ballot secrecy, fairness (no tally information is available before the polls close), and dispute-freeness (voters can observe that all voters correctly followed the protocol). BVOT works by using a multiparty threshold homomorphic encryption system in which each candidate is associated with a masked unique prime. Each voter engages in an oblivious transfer with an untrusted distributor: the voter selects the index of a prime associated with a candidate and receives the selected prime in masked form. The voter then casts their vote by encrypting their masked prime and broadcasting it to everyone. The distributor does not learn the voter's choice, and no one learns the mapping between primes and candidates until the audit phase. By hiding the mapping between primes and candidates, BVOT provides voters with insufficient information to carry out effective cheating. The threshold feature prevents anyone from computing any partial tally---until everyone has voted. Multiplying all votes, their decryption shares, and the unmasking factor yields a product of the primes each raised to the number of votes received. In contrast to some existing boardroom voting protocols, BVOT does not rely on any zero-knowledge proof; instead, it uses oblivious transfer to assure ballot secrecy and correct vote casting. Also, BVOT can handle multiple candidates in one election. BVOT prevents cheating by hiding crucial information: an attempt to increase the tally of one candidate might increase the tally of another candidate. After all votes are cast, any party can tally the votes.
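
    The arithmetic behind the tally is worth seeing in isolation: each candidate is a distinct prime, each vote contributes one prime factor, and the final product factors uniquely into per-candidate counts. The sketch below strips away the homomorphic encryption, masking, and threshold decryption layers and keeps only that arithmetic; the candidate names and primes are illustrative.

      from collections import Counter

      primes = {"alice": 2, "bob": 3, "carol": 5}  # hypothetical candidates

      votes = ["alice", "bob", "alice", "carol", "alice"]
      product = 1
      for v in votes:              # in BVOT this multiplication happens
          product *= primes[v]     # under threshold homomorphic encryption

      def tally(product, primes):
          """Recover per-candidate counts by dividing out each prime."""
          counts = Counter()
          for name, p in primes.items():
              while product % p == 0:
                  product //= p
                  counts[name] += 1
          assert product == 1      # every factor accounted for
          return counts

      print(tally(product, primes))  # alice: 3, bob: 1, carol: 1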

  • On the Existence of Weak Keys for QC-MDPC Decoding
    by Nicolas Sendrier on October 9, 2020 at 5:27 am

    We study in this work a particular class of QC-MDPC codes for which the decoding failure rate is significantly larger than for typical QC-MDPC codes of the same parameters. Our purpose is to figure out whether the existence of such weak codes impacts the security of cryptographic schemes using QC-MDPC codes as secret keys. A class of weak keys was exhibited in [DGK19]. We generalize it and show that, though their Decoding Failure Rate (DFR) is higher than normal, the set is not large enough to contribute significantly to the average DFR. It follows that, with the proper semantically secure transform [HHK17], those weak keys do not affect the IND-CCA status of key encapsulation mechanisms, like BIKE, that use QC-MDPC codes.

  • vault1317/signal-dakez: An authenticated key exchange protocol with a public key concealing and a participation deniability designed for secure messaging
    by Richard B. Riddick on October 9, 2020 at 5:26 am

    A deniable authenticated key exchange can establish a secure communication channel while leaving no cryptographic evidence of communication. Some well-designed protocols today cannot leave any cryptographic evidence even in the case of betrayal by some participants and disclosure of long-term key materials. However, this is no longer enough: if “big data” technology is used to analyse data fetched from pivotal nodes, it is not difficult to register your identity through your long-term public keys (although this cannot constitute solid evidence, due to deniability). In this article, we analyse the advantages and disadvantages of existing solutions that are claimed to be deniable to some degree, and propose an authenticated key exchange protocol that is able to conceal the public keys from anyone outside the secure channel and is deniable to some degree; a reference implementation is provided.

  • Improved Meet-in-the-Middle Preimage Attacks against AES Hashing Modes
    by Zhenzhen Bao on October 9, 2020 at 2:53 am

    Hashing modes are ways to convert a block cipher into a hash function, and those with AES as the underlying block cipher are referred to as AES hashing modes. In 2011, Sasaki introduced the first preimage attack against AES hashing modes, with the AES block cipher reduced to 7 rounds, using the meet-in-the-middle method. His attack does not take the key schedules into account; hence, the same attack applies to all three versions of AES. In this paper, by introducing neutral bits from the key, an extra degree of freedom is gained, which is utilized in two ways: to reduce the time complexity and to extend the attack to more rounds. As an immediate result, the complexities of 7-round pseudo-preimage attacks are reduced from $2^{120}$ to $2^{104}$, $2^{96}$, and $2^{96}$ for AES-128, AES-192, and AES-256, respectively. By carefully choosing the neutral bits from the key to cancel those from the state, the attack is extended to 8 rounds for AES-192 and AES-256 with complexities $2^{112}$ and $2^{96}$. Similar results are obtained for Kiasu-BC, a tweakable block cipher based on AES-128; interestingly, the additional tweak input helps reduce the complexity and extend the attack by one more round. To the best of our knowledge, these are the first preimage attacks against 8-round AES hashing modes.
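
    For readers unfamiliar with the technique, the following self-contained Python sketch shows the classical meet-in-the-middle time/memory trade-off on a toy 16-bit double encryption; the toy round function and key sizes are invented for illustration, and the paper's splice-and-cut MITM on AES hashing modes is considerably more involved.

      # Classical MITM on E(p, k1, k2) = f(f(p ^ k1) ^ k2): instead of
      # brute-forcing 2^32 key pairs, match forward and backward values
      # in about 2 * 2^16 steps plus a table. Toy parameters only.
      import random

      MASK = 0xFFFF
      A, B = 0x9E37, 0x1234
      A_INV = pow(A, -1, 1 << 16)   # A is odd, hence invertible mod 2^16

      def f(x):
          x = (x * A + B) & MASK
          return x ^ (x >> 7)       # invertible 16-bit mixing

      def f_inv(y):
          x = y ^ (y >> 7) ^ (y >> 14)
          return ((x - B) * A_INV) & MASK

      k1, k2 = random.randrange(1 << 16), random.randrange(1 << 16)
      p = 0xABCD
      c = f(f(p ^ k1) ^ k2)         # one known plaintext/ciphertext pair

      # Forward table: middle value -> candidate k1 values.
      table = {}
      for g1 in range(1 << 16):
          table.setdefault(f(p ^ g1), []).append(g1)

      # Backward pass: meet in the middle.
      candidates = [(g1, g2)
                    for g2 in range(1 << 16)
                    for g1 in table.get(f_inv(c) ^ g2, [])]
      assert (k1, k2) in candidates  # a second pair would filter the rest
      print(len(candidates), "surviving key pairs out of 2^32")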

  • Automatic Search of Meet-in-the-Middle Preimage Attacks on AES-like Hashing
    by Zhenzhen Bao on October 9, 2020 at 2:21 am

    The Meet-in-the-Middle (MITM) preimage attack is highly effective in breaking the preimage resistance of many hash functions, including but not limited to the full MD5, HAVAL, and Tiger, and reduced SHA-0/1/2. It was also shown, by Sasaki in 2011, to be a threat to hash functions built on block ciphers like AES. Recently, such attacks on AES hashing modes evolved from merely using the freedom of choosing the internal state to also exploiting the freedom of choosing the message state. However, detecting such attacks, especially the evolved variants, is difficult. In previous works, the search space of attack configurations was limited so that manual analysis remained practical, which results in sub-optimal solutions. In this paper, we remove the artificial limitations of previous works, formulate the essential ideas of the construction of the attack in well-defined ways, and translate the problem of searching for the best attacks into optimization problems under constraints in Mixed-Integer Linear Programming (MILP) models. The MILP models capture a large solution space of valid attacks, with objective functions that single out the optimal ones. With such MILP models and off-the-shelf solvers, the best attacks can be searched for exhaustively and efficiently. As a result, we obtain the first attacks against the full (5-round) and an extended (5.5-round) version of Haraka-512 v2, and 8-round AES-128 hashing modes, as well as improved attacks covering more rounds of Haraka-256 v2 and other members of the AES and Rijndael hashing modes.
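
    To give a feel for this modelling style, here is a deliberately simplified MILP in Python with the PuLP package: binary variables select "neutral cells", linear constraints bound the degrees of freedom consumed, and the objective maximizes the attack's reach. All weights, budgets, and the model itself are hypothetical placeholders, not the paper's actual formulation.

      # Flavor-only MILP sketch (PuLP): choose neutral cells to maximize a
      # hypothetical "reach" subject to linear degree-of-freedom budgets.
      import pulp

      CELLS = range(8)
      fwd_dof = [3, 1, 2, 2, 1, 3, 2, 1]   # made-up per-cell costs
      bwd_dof = [1, 2, 1, 3, 2, 1, 2, 3]
      gain    = [2, 1, 2, 3, 1, 2, 1, 3]   # made-up benefit per cell

      prob = pulp.LpProblem("toy_attack_search", pulp.LpMaximize)
      b = [pulp.LpVariable(f"b{i}", cat="Binary") for i in CELLS]

      prob += pulp.lpSum(gain[i] * b[i] for i in CELLS)          # objective
      prob += pulp.lpSum(fwd_dof[i] * b[i] for i in CELLS) <= 6  # fwd budget
      prob += pulp.lpSum(bwd_dof[i] * b[i] for i in CELLS) <= 6  # bwd budget

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print("chosen cells:", [i for i in CELLS if b[i].value() == 1],
            "objective:", pulp.value(prob.objective))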

  • Provable Security Analysis of FIDO2
    by Manuel Barbosa on October 8, 2020 at 8:02 pm

    We carry out the first provable security analysis of the new FIDO2 protocols, the FIDO Alliance's promising proposal for a passwordless user-authentication standard. Our analysis covers the core components of FIDO2: the W3C’s Web Authentication (WebAuthn) specification and the new Client-to-Authenticator Protocol (CTAP2). Our analysis is modular. For WebAuthn and CTAP2, in turn, we propose appropriate security models that aim to capture their intended security goals and use the models to analyze their security. First, our proof confirms the authentication security of WebAuthn. Then, we show that CTAP2 can only be proved secure in a weak sense; along the way, we identify a series of its design flaws and provide suggestions for improvement. To withstand stronger yet realistic adversaries, we propose a generic protocol called sPACA and prove its strong security; with proper instantiations, sPACA is also more efficient than CTAP2. Finally, we analyze the overall security guarantees provided by FIDO2 and WebAuthn+sPACA based on the security of their components. We expect that our models and provable security results will help clarify the security guarantees of the FIDO2 protocols. In addition, we advocate the adoption of our sPACA protocol as a substitute for CTAP2, for both stronger security and better performance.

  • Low-Cost Body Biasing Injection (BBI) Attacks on WLCSP Devices
    by Colin O'Flynn on October 8, 2020 at 7:58 pm

    Body Biasing Injection (BBI) applies a voltage with a physical probe onto the backside of the integrated circuit die. Compared to other techniques such as electromagnetic fault injection (EMFI) or Laser Fault Injection (LFI), this technique appears less popular in the academic literature, judging by published results. We hypothesize this is due to (1) the moderate cost of the equipment and (2) the effort required for device preparation. This work demonstrates that BBI (and indeed many other backside attacks) can be trivially performed on Wafer-Level Chip-Scale Packaging (WLCSP), which inherently exposes the die backside. A low-cost ($15) design for the BBI tool is introduced and validated with faults introduced on an STM32F415OG against code flow and RSA; some initial results on attacks against various hardware blocks are also discussed.

  • High-Precision Bootstrapping of RNS-CKKS Homomorphic Encryption Using Optimal Minimax Polynomial Approximation and Inverse Sine Function
    by Joon-Woo Lee on October 8, 2020 at 5:54 pm

    Approximate homomorphic encryption with the residue number system (RNS), called the RNS-variant Cheon-Kim-Kim-Song (RNS-CKKS) scheme, is a practical fully homomorphic encryption scheme that supports arithmetic operations on encrypted real or complex data. Although the RNS-CKKS scheme is a fully homomorphic encryption scheme, most applications use it only as a leveled homomorphic encryption scheme because its bootstrapping operation lacks practicality. One of the crucial problems of the bootstrapping operation is its poor precision. While the other basic homomorphic operations ensure sufficiently high precision for practical use, the bootstrapping operation supports only about 20-bit fixed-point precision at best, which is lower than the standard single precision of the floating-point number system. Due to this limitation of current bootstrapping technology, the RNS-CKKS scheme has so far been difficult to use for reliable deep-depth homomorphic computations. In this paper, we improve the message precision of the bootstrapping operation of the RNS-CKKS scheme. Since the homomorphic modular reduction process is one of the most important steps in determining the precision of the bootstrapping, we focus on the homomorphic modular reduction process. First, we propose a fast algorithm for obtaining the optimal minimax approximate polynomial of the modular reduction function and the scaled sine/cosine function over a union of approximation regions, called the modified Remez algorithm. In fact, this algorithm derives the optimal minimax approximate polynomial of any continuous function over any union of finitely many intervals. Next, we propose a composite-function method using the inverse sine function to reduce the difference between the scaling factor used in the bootstrapping and the default scaling factor. With these methods, we reduce the approximation error in the bootstrapping of the RNS-CKKS scheme by a factor of 1/200 to 1/53 (a 5.8- to 7.6-bit precision improvement) for each parameter setting. While the bootstrapping without the composite-function method has 16.5- to 20.1-bit precision at most, the bootstrapping with the composite-function method has 22.4- to 27.8-bit precision, in most cases better than that of the single-precision number system.
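
    The effect of the composite-function idea can be checked numerically. In the sketch below (NumPy), the scaled sine only approximates the centered modular-reduction function near the integers, while composing it with the scaled inverse sine recovers it exactly on the approximation region; homomorphically, both functions are then replaced by the minimax polynomials produced by the modified Remez algorithm. The sampling parameters are illustrative only.

      # Why composing with arcsin helps: for |x - round(x)| <= 1/4,
      # arcsin(sin(2*pi*x)) / (2*pi) equals x - round(x) exactly, while
      # sin(2*pi*x) / (2*pi) only approximates it.
      import numpy as np

      rng = np.random.default_rng(0)
      m = rng.integers(-10, 10, size=10000)        # multiples of the modulus
      eps = rng.uniform(-0.25, 0.25, size=10000)   # small message part
      x = m + eps

      modred = x - np.round(x)                     # target function
      sine_only = np.sin(2 * np.pi * x) / (2 * np.pi)
      composite = np.arcsin(np.sin(2 * np.pi * x)) / (2 * np.pi)

      print("max error, sine only :", np.abs(sine_only - modred).max())
      print("max error, composite :", np.abs(composite - modred).max())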

  • Efficient Bootstrapping for Approximate Homomorphic Encryption with Non-Sparse Keys
    by Jean-Philippe Bossuat on October 8, 2020 at 2:44 pm

    We present a bootstrapping procedure for the full-RNS variant of the approximate homomorphic-encryption scheme of Cheon et al., CKKS (Asiacrypt 17, SAC 18). Compared to the previously proposed procedures (Eurocrypt 18 \& 19, CT-RSA 20), our bootstrapping procedure is both more precise and more efficient in terms of CPU cost and number of consumed levels. Unlike the previous approaches, it does not require the use of sparse secret keys. Therefore, to the best of our knowledge, this is the first procedure that enables a highly efficient and precise bootstrapping for parameters that are 128-bit secure under more recent attacks on sparse R-LWE secrets. We achieve this by introducing two novel contributions applicable to the CKKS scheme: (i) we propose a generic algorithm for homomorphic polynomial evaluation that is scale-invariant and optimal in level consumption; (ii) we optimize the key-switch procedure and propose a new technique for performing rotations (\textit{double hoisting}) that significantly reduces the complexity of homomorphic matrix-vector products. Our scheme improvements and bootstrapping procedure are implemented in the open-source Lattigo library. For example, bootstrapping a plaintext in $\mathbb{C}^{32768}$ takes 17 seconds, with an output coefficient modulus of 505 bits and a mean precision of 19.2 bits. Thus, we achieve an order-of-magnitude improvement in bootstrapped throughput (plaintext bits per second) with respect to the previous best results, while ensuring 128 bits of security.
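
    To illustrate why level consumption matters for polynomial evaluation, the plain-Python sketch below evaluates a degree-31 polynomial with a baby-step giant-step schedule, the standard way to keep the count of expensive ciphertext-by-ciphertext multiplications near $2\sqrt{d}$ instead of $d$. This is only the algorithmic idea on plaintext floats, not Lattigo code (Lattigo is written in Go), and it does not reproduce the paper's scale-invariance machinery.

      # Baby-step giant-step polynomial evaluation: O(sqrt(d)) non-scalar
      # multiplications, the costly operation in homomorphic evaluation.
      import math

      def bsgs_eval(coeffs, x):
          d = len(coeffs) - 1
          k = math.isqrt(d) + 1                 # baby-step size ~ sqrt(d)
          mults = 0

          baby = [1.0, x]                       # x^0 .. x^(k-1)
          while len(baby) < k:
              baby.append(baby[-1] * x); mults += 1
          xk = baby[-1] * x; mults += 1         # giant stride x^k

          result, giant = 0.0, 1.0
          for j in range(0, d + 1, k):
              chunk = sum(c * b for c, b in zip(coeffs[j:j + k], baby))
              if j == 0:
                  result += chunk               # multiplying by giant=1 is free
              else:
                  result += chunk * giant; mults += 1
              if j + k <= d:
                  giant *= xk; mults += 1
          return result, mults

      coeffs = [1 / math.factorial(i) for i in range(32)]  # truncated exp
      val, mults = bsgs_eval(coeffs, 0.5)
      print(val, "vs", math.exp(0.5), "| non-scalar mults:", mults)  # 15 vs 31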

  • Generalized Matsui Algorithm 1 with application for the full DES
    by Tomer Ashur on October 8, 2020 at 9:23 am

    In this paper we introduce the strictly zero-correlation attack. We extend the work of Ashur and Posteuca from BalkanCryptSec 2018 and build zero-correlation key-dependent linear trails covering the full DES. We show how these approximations can be used for a key-recovery attack and empirically verify our claims through a series of experiments. To the best of our knowledge, this paper is the first to use this kind of property to leverage a meaningful attack against a symmetric-key algorithm.

  • MPC with Synchronous Security and Asynchronous Responsiveness
    by Chen-Da Liu-Zhang on October 8, 2020 at 3:33 am

    Two paradigms for secure MPC are synchronous and asynchronous protocols. While synchronous protocols tolerate more corruptions and allow every party to give its input, they are very slow because their speed depends on the conservatively assumed worst-case delay $\Delta$ of the network. In contrast, asynchronous protocols allow parties to obtain output as fast as the actual network allows, a property called responsiveness, but unavoidably have lower resilience, and parties with slow network connections cannot give input. It is natural to wonder whether it is possible to leverage synchronous MPC protocols to achieve responsiveness, hence obtaining the advantages of both paradigms: full security with responsiveness up to $t$ corruptions, and extended security (full security or security with unanimous abort) with no responsiveness up to $T \ge t$ corruptions. We settle the question by providing matching feasibility and impossibility results: - For the case of unanimous abort as extended security, there is an MPC protocol if and only if $T + 2t < n$. - For the case of full security as extended security, there is an MPC protocol if and only if $T < n/2$ and $T + 2t < n$. In particular, setting $t = n/4$ allows one to achieve fully secure MPC for honest majority, which in addition benefits from substantial responsiveness.

  • Fixslicing: A New GIFT Representation
    by Alexandre Adomnicai on October 8, 2020 at 3:15 am

    The GIFT family of lightweight block ciphers, published at CHES 2017, offers excellent hardware performance figures and has been used, in full or in part, in several candidates of the ongoing NIST lightweight cryptography competition. However, implementing GIFT efficiently in software seems complex, due to the bit permutation composing its linear layer (a feature shared with the PRESENT cipher). In this article, we exhibit a new non-trivial representation of the GIFT family of block ciphers over several rounds. This new representation, which we call fixslicing, allows extremely efficient software bitsliced implementations of GIFT using only a few rotations, surprisingly placing GIFT as a very efficient candidate on microcontrollers. Our constant-time implementations show that, on ARM Cortex-M3, 128 bits of data can be ciphered with only about 800 cycles for GIFT-64 and about 1300 cycles for GIFT-128 (assuming pre-computed round keys). In particular, this is much faster than the impressive PRESENT implementation published at CHES 2017, which requires 2116 cycles in the same setting, and than the current best AES constant-time implementation reported, which requires 1617 cycles. This work impacts GIFT itself, but also improves software implementations of all other cryptographic primitives directly based on it or strongly related to it.

  • Two-Pass Authenticated Key Exchange with Explicit Authentication and Tight Security
    by Xiangyu Liu; Shengli Liu; Dawu Gu; Jian Weng on October 8, 2020 at 1:35 am

    We propose a generic construction of a 2-pass authenticated key exchange (AKE) scheme with explicit authentication from key encapsulation mechanism (KEM) and signature (SIG) schemes. We strengthen the security model of Gjosteen and Jager [Crypto2018]: in the strong model, if a replayed message is accepted by some user, the authentication of the AKE is considered broken. We define a new security notion for KEMs named ''IND-mCPA with adaptive reveals''. When the underlying KEM has this security and the SIG is unforgeable under adaptive corruptions, our AKE construction equipped with counters as state is secure in the strong model, and the stateless AKE without counters is secure in the traditional model. We also present a KEM possessing tight ''IND-mCPA security with adaptive reveals'' from the Computational Diffie-Hellman assumption in the random oracle model. When the generic construction of the AKE is instantiated with this KEM and the available SIG by Gjosteen and Jager [Crypto2018], we obtain the first practical 2-pass AKE with tight security and explicit authentication. In addition, the integration of the tightly IND-mCCA-secure KEM (derived from the PKE of Han et al. [Crypto2019]) and the tightly secure SIG of Bader et al. [TCC2015] results in the first tightly secure 2-pass AKE with explicit authentication in the standard model.
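
    The generic pattern is easy to express in code. The Python sketch below instantiates it with a toy discrete-log KEM and Ed25519 signatures from the cryptography package: Alice signs a fresh KEM public key, Bob signs the KEM ciphertext, and both derive the session key from the encapsulated secret and the transcript. It is a minimal sketch of the KEM+SIG pattern only; the toy KEM parameters are placeholders, and the paper's counter state and tightness considerations are omitted.

      # Minimal 2-pass "SIG + ephemeral KEM" AKE sketch (not the paper's
      # exact scheme). The modular-DH KEM below is a toy, NOT secure.
      import hashlib, secrets
      from cryptography.hazmat.primitives.asymmetric.ed25519 import (
          Ed25519PrivateKey)

      P, G = 2**127 - 1, 3        # toy group parameters (illustrative only)

      def kem_keygen():
          sk = secrets.randbelow(P - 2) + 1
          return sk, pow(G, sk, P)

      def kem_encap(pk):
          r = secrets.randbelow(P - 2) + 1
          shared = pow(pk, r, P)
          return pow(G, r, P), hashlib.sha256(str(shared).encode()).digest()

      def kem_decap(sk, ct):
          shared = pow(ct, sk, P)
          return hashlib.sha256(str(shared).encode()).digest()

      sig_a, sig_b = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

      # Pass 1 (Alice -> Bob): fresh KEM public key, signed.
      eph_sk, eph_pk = kem_keygen()
      msg1 = str(eph_pk).encode()
      sig1 = sig_a.sign(msg1)

      # Pass 2 (Bob -> Alice): verify, encapsulate, sign the transcript.
      sig_a.public_key().verify(sig1, msg1)       # raises on failure
      ct, k_bob = kem_encap(eph_pk)
      msg2 = msg1 + str(ct).encode()
      sig2 = sig_b.sign(msg2)

      # Alice verifies and decapsulates; both hash in the transcript.
      sig_b.public_key().verify(sig2, msg2)
      k_alice = kem_decap(eph_sk, ct)
      assert (hashlib.sha256(k_alice + msg2).digest()
              == hashlib.sha256(k_bob + msg2).digest())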

  • Impossible Differential-Linear Cryptanalysis of Reduced-Round CLEFIA-128
    by Zheng Yuan on October 7, 2020 at 10:56 pm

    CLEFIA is a 128-bit block cipher proposed by Sony Corporation in 2007. Our paper introduces a new chosen-text attack, the impossible differential-linear attack, on iterated cryptosystems. The attack is efficient against $16$-round CLEFIA with whitening keys. In the paper, we construct a $13$-round impossible differential-linear distinguisher. Based on the distinguisher, we present an effective attack on 16-round CLEFIA-$128$ with data complexity of $2^{122.73}$, recovering $96$-bit subkeys in total. Our attack also applies to CLEFIA-192 and CLEFIA-$256$.

  • More Efficient MPC from Improved Triple Generation and Authenticated Garbling
    by Kang Yang on October 7, 2020 at 10:32 pm

    Recent works on distributed garbling have provided highly efficient solutions for constant-round MPC tolerating an arbitrary number of corruptions. In this work, we improve upon state-of-the-art protocols in this paradigm for further performance gains. First, we propose a new protocol for generating authenticated AND triples, a key building block in many recent works. -- We propose a new authenticated bit protocol in the two-party and multi-party settings from bare IKNP OT extension, allowing us to reduce the communication by about 24% and eliminate many computation bottlenecks. We further improve the computational efficiency of multi-party authenticated AND triples with cheaper and fewer consistency checks and fewer hash function calls. -- We implemented our triple generation protocol and observe around a 4x to 5x improvement compared to the best prior protocol in most settings. For example, in the two-party setting with a 10 Gbps network and 8 threads, our protocol can generate more than 4 million authenticated triples per second, while the best prior implementation can only generate 0.8 million triples per second. In the multi-party setting, our protocol can generate more than 37000 triples per second over 80 parties, while the best prior protocol can only generate the same number of triples per second over 16 parties. We also improve the state-of-the-art multi-party authenticated garbling protocol. -- We take the first step towards applying half-gates in the multi-party setting, which enables us to reduce the size of garbled tables by $2\kappa$ bits per gate per garbler, where $\kappa$ is the computational security parameter. This optimization is also applicable in the semi-honest multi-party setting. -- We further reduce the communication of circuit authentication from $4\rho$ bits to 1 bit per gate, using a new multi-party batched circuit authentication, where $\rho$ is the statistical security parameter. The prior solution with similar efficiency is applicable only in the two-party setting. For example, in the three-party setting, our techniques lead to roughly a 35% reduction in the size of a distributed garbled circuit.

  • Provably Secure PKI Schemes
    by Hemi Leibowitz on October 7, 2020 at 3:49 pm

    PKI schemes have significantly evolved since X.509, with more complex goals, e.g., transparency, to ensure security against corrupt issuers. However, due to the significant challenges involved and the lack of a suitable framework, the security properties of PKI schemes have not been rigorously defined or established. This is concerning, as PKIs are the basis for the security of many critical systems, and security concerns exist even for well-known and deployed PKI schemes, e.g., Certificate Transparency (CT). We present precise definitions allowing provably secure PKI schemes, with properties such as accountability, transparency and non-equivocation. We demonstrate usage of the PKI framework against X.509 version 2.

  • Synchronizable Exchange
    by Ranjit Kumaresan on October 7, 2020 at 2:32 pm

    Fitzi, Garay, Maurer, and Ostrovsky (Journal of Cryptology 2005) showed that in the presence of a dishonest majority, no primitive of cardinality $n - 1$ is complete for realizing an arbitrary $n$-party functionality with guaranteed output delivery. In this work, we introduce a new $2$-party primitive $\mathcal{F}_{\mathsf{SyX}}$ (``synchronizable fair exchange'') and show that it is complete for realizing any $n$-party functionality with fairness in a setting where all $n$ parties are pairwise connected by independent instances of $\mathcal{F}_{\mathsf{SyX}}$. In the $\mathcal{F}_{\mathsf{SyX}}$-hybrid model, the two parties load $\mathcal{F}_{\mathsf{SyX}}$ with some input, and following this, either party can trigger $\mathcal{F}_{\mathsf{SyX}}$ with a suitable ``witness'' at a later time to receive the output from $\mathcal{F}_{\mathsf{SyX}}$. Crucially, the other party also receives output from $\mathcal{F}_{\mathsf{SyX}}$ when $\mathcal{F}_{\mathsf{SyX}}$ is triggered. The trigger witnesses allow us to synchronize the trigger phases of multiple instances of $\mathcal{F}_{\mathsf{SyX}}$, thereby aiding in the design of fair multiparty protocols. Additionally, a pair of parties may reuse a single a priori loaded instance of $\mathcal{F}_{\mathsf{SyX}}$ in any number of multiparty protocols (possibly involving different sets of parties).
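
    The load/trigger interface is easy to picture as an ideal functionality. The Python sketch below is an idealized model of that interface only: both parties load inputs, either party may later trigger with a witness, and both then receive output. The witness check and output function are illustrative placeholders.

      # Idealized model of the F_SyX interface: load, then trigger.
      class SyX:
          def __init__(self, check_witness, compute_outputs):
              self.inputs = {}
              self.check_witness = check_witness
              self.compute_outputs = compute_outputs
              self.triggered = False

          def load(self, party, value):
              assert party in ("P1", "P2") and not self.triggered
              self.inputs[party] = value

          def trigger(self, party, witness):
              """Either party may trigger; both then receive output."""
              assert len(self.inputs) == 2, "both parties must load first"
              if self.triggered or not self.check_witness(party, witness):
                  return None
              self.triggered = True
              out1, out2 = self.compute_outputs(self.inputs["P1"],
                                                self.inputs["P2"])
              return {"P1": out1, "P2": out2}   # delivered to BOTH parties

      # Toy instantiation: swap the loaded values once a (placeholder)
      # witness is presented.
      f = SyX(lambda party, w: w == "release", lambda x, y: (y, x))
      f.load("P1", "coin-of-P1"); f.load("P2", "coin-of-P2")
      print(f.trigger("P2", "release"))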

  • Approximate Homomorphic Encryption with Reduced Approximation Error
    by Andrey Kim on October 7, 2020 at 10:49 am

    The Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme is currently the most efficient method to perform approximate homomorphic computations over real and complex numbers. Although the CKKS scheme can already be used to achieve practical performance for many advanced applications, e.g., in machine learning, its broader use in practice is hindered by several major usability issues, most of which are brought about by relatively high approximation errors and the complexity of dealing with them. We present a reduced-error CKKS variant that removes the approximation errors due to the Learning With Errors (LWE) noise in the encryption and key-switching operations. We also propose and implement its RNS instantiation, which has a lower error than the original CKKS scheme implementation based on multiprecision integer arithmetic. While formulating the RNS instantiation, we develop an intermediate RNS variant that has a smaller approximation error than the prior RNS variant of CKKS. The high-level idea of our main RNS-specific improvements is to remove the approximate scaling error using an automated procedure that computes different scaling factors for each level and performs all necessary adjustments. The rescaling procedure and scaling factor adjustments in our implementation are done automatically and are not exposed to the application developer. We implement both RNS variants in PALISADE and compare their approximation error and efficiency to the prior RNS variant. Our results for the uniform ternary secret-key distribution, which is the most efficient setting included in the community homomorphic encryption security standard, show that the reduced-error CKKS RNS implementation typically has an approximation error that is 6 to 9 bits smaller for computations with multiplications than the prior RNS variant. For computations without a multiplication, the approximation error can be up to 20 bits lower than in the prior RNS variant. Compared to the original CKKS scheme using multiprecision integer arithmetic, our reduced-error CKKS RNS implementation has an error that is smaller by 4 bits for computations with multiplications and by up to 20 bits for computations without multiplications. For the sparse ternary secret-key setting, which was used in the original CKKS paper, the approximation error reduction of reduced-error CKKS with respect to the original CKKS typically ranges from 6 to 8 bits for computations with multiplications.
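
    The main RNS-specific idea can be seen in a few lines of plain arithmetic. In the sketch below (Python, illustrative numbers), rescaling by a prime $q_\ell$ that is merely close to the scaling factor leaves a relative error when decoding with a fixed scale, whereas tracking the per-level scale $\Delta' = \Delta^2/q_\ell$, which the reduced-error implementation automates, removes this particular error.

      # Why per-level scaling factors help: after one multiplication and
      # one rescaling, the true scale is delta**2 / q_l, not delta.
      delta = 2**40
      q_l = 2**40 + 2**21 + 1        # rescaling prime, close to delta

      m1, m2 = 1.25, -0.75
      ct1, ct2 = m1 * delta, m2 * delta     # encodings at scale delta

      rescaled = (ct1 * ct2) / q_l          # scale is now delta**2 / q_l

      fixed = rescaled / delta              # pretend the scale is still delta
      tracked = rescaled / (delta**2 / q_l) # per-level scale tracking

      true = m1 * m2
      print("fixed-scale relative error  :", abs(fixed / true - 1))
      print("tracked-scale relative error:", abs(tracked / true - 1))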

  • Decentralized Custody Scheme with Game-Theoretic Security
    by Zhaohua Chen on October 7, 2020 at 8:58 am

    Custody is a core financial service in which the custodian holds assets in safekeeping on behalf of the client. Although traditional custody services are typically endorsed by centralized authorities, decentralized custody schemes have become technically feasible since the emergence of digital assets, and they are badly needed by new applications such as blockchains and DeFi (Decentralized Finance). In this work, we propose a framework for a decentralized asset custody scheme that is able to support a large number of custodians and to safely hold customer assets worth multiple times the total security deposit. The proposed custody scheme distributes custodians and assets into many custodian groups via combinatorial designs and random sampling, where each group fully controls the assigned assets. Since every custodian group is small, the overhead cost is significantly reduced. Liveness is also improved, because even a single live group is able to process transactions. The security of this custody scheme is guaranteed in the game-theoretic sense: any adversary corrupting a bounded fraction of custodians cannot move assets worth more than its own security deposit. We further analyze the security and performance of our constructions, and give explicit examples with concrete numbers and figures for a better understanding of our results.
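
    A quick Monte Carlo conveys the sampling intuition: an adversary only endangers a group's assets if it corrupts the entire group, and for a corrupted fraction $f$ and group size $m$ that probability decays roughly like $f^m$. The Python sketch below uses hypothetical parameters, not the paper's concrete designs.

      # Hypothetical parameters: 1000 custodians, 20% corrupted, groups
      # of 3 drawn at random; a group is "broken" only if fully corrupted.
      import random

      def broken_group_rate(n=1000, frac=0.2, m=3, trials=200_000):
          corrupted = set(random.sample(range(n), int(frac * n)))
          bad = sum(
              all(c in corrupted for c in random.sample(range(n), m))
              for _ in range(trials))
          return bad / trials

      print("empirical  :", broken_group_rate())
      print("rough f**m :", 0.2 ** 3)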

  • WBCD: White-box Block Cipher Scheme Based on Dynamic Library
    by Yatao Yang on October 7, 2020 at 8:57 am

    The aim of white-box cryptography is to protect a secret key in a white-box environment, in which an adversary has full control over the computer’s execution process and the running environment. To address the low security of static white-box algorithms and the inconvenient deployment of traditional dynamic white-box algorithms, we propose WBCD, a white-box block cipher scheme based on a dynamic library. In this scheme, look-up tables and affine transformations are used to construct the dynamic white-box library, which ensures that different look-up tables can be used in each round of encryption. To illustrate the effectiveness of WBCD, we design a novel white-box mechanism (WBDL) based on a dynamic library, which adopts an MDS matrix. In this mechanism, different round keys are employed by randomly selecting look-up tables in each round of operations. According to our analysis, the WBDL mechanism can resist differential attacks, linear attacks, the BGE attack, and side-channel power attacks against SM4. After calculation and testing, the WBDL mechanism requires 466.914 KB of memory to store the look-up tables, the maximum differential probability (MDP) of each round is 2^−26, the maximum linear probability (MLP) of each round is 2^−25.61, the encryption speed reaches 0.273×10^−3 Gbps, and the decryption speed reaches 0.234×10^−3 Gbps. Our mechanism offers better security and working efficiency and can be used in mobile communication security and digital payment security.
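
    The table-with-encodings idea at the heart of such designs fits in a few lines. The 4-bit Python demo below is a generic illustration, not WBCD/WBDL itself: each table absorbs a round key and a random output encoding, the next table begins by undoing that encoding, and the composition computes the plain keyed rounds while neither table alone exposes its key.

      # Encoded look-up tables: t2(t1(x)) equals two keyed S-box rounds,
      # yet each table is masked by a random nibble encoding g.
      import random

      SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
              0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box

      def make_tables(k1, k2):
          g = list(range(16)); random.shuffle(g)        # output encoding
          g_inv = [g.index(i) for i in range(16)]
          t1 = [g[SBOX[x ^ k1]] for x in range(16)]     # round 1, encoded out
          t2 = [SBOX[g_inv[x] ^ k2] for x in range(16)] # round 2, decodes in
          return t1, t2

      k1, k2 = 0x7, 0xA
      t1, t2 = make_tables(k1, k2)
      assert all(t2[t1[x]] == SBOX[SBOX[x ^ k1] ^ k2] for x in range(16))
      print("encoded tables compute the keyed rounds without exposing keys")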

  • On The Distinguishability of Ideal Ciphers
    by Roberto Avanzi on October 7, 2020 at 3:40 am

    We present distinguishing attacks (based on the Birthday Paradox) which show that using $2^{\ell}$ permutations for a block cipher is insufficient to obtain a security of $\ell$ bits in the Ideal Cipher Model. The context is that of an Oracle that can provide an Adversary with the ciphertexts of a very small number of known plaintexts under a large number of (session) keys and IVs/nonces. Our attacks distinguish an ideal cipher from a ``perfectly ideal'' block cipher, realised as an Oracle that can always produce new permutations up to the cardinality of the symmetric group on the block space. The result is that in order to guarantee that an Adversary which is time-limited to $O(2^{\ell})$ encryption requests has only a negligible advantage, the cipher needs to express $2^{3\ell}$ distinct permutations. This seems to contradict a folklore belief about the security of using a block cipher in the multi-key setting, i.e.\ that to obtain $\ell$-bit security it is sufficient to use $\ell$- or $2\,\ell$-bit keys depending on the mode of operation and the use case.
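
    The attack's mechanics can be simulated directly. In the toy Python run below, an ideal cipher with a key space of size $K$ repeats a permutation across sessions after roughly $\sqrt{K}$ session keys, which an adversary observes as two sessions returning identical ciphertexts for the same known plaintexts; a "perfectly ideal" oracle drawing a fresh permutation per session would show no such collision at this scale. Sizes are toy values.

      # Birthday-style distinguisher: a repeated session key means a
      # repeated permutation, visible on a single known plaintext.
      import random

      K = 2**16            # number of available permutations (2^ell keys)
      SESSIONS = 300       # a bit above sqrt(K) = 256

      def permutation_reused(n):
          keys = [random.randrange(K) for _ in range(n)]
          return len(set(keys)) < n   # collision => identical ciphertexts

      trials = 1000
      hits = sum(permutation_reused(SESSIONS) for _ in range(trials))
      print(f"distinguisher succeeds in {hits}/{trials} trials")
      # Expected ~ 1 - exp(-SESSIONS^2 / (2K)) ~ 0.5 at these sizes.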

  • Triply Adaptive UC NIZK
    by Ran Canetti on October 7, 2020 at 1:02 am

    The only known non-interactive zero-knowledge (NIZK) protocol that is secure against adaptive corruption of the prover is based on that of Groth-Ostrovsky-Sahai (JACM'11) (GOS). However, that protocol does not guarantee full adaptive soundness. Abe and Fehr (TCC'07) construct an adaptively sound variant of the GOS protocol under a knowledge-of-exponent assumption, but knowledge assumptions of this type are inherently incompatible with universally composable (UC) security. We show the first NIZK which is triply adaptive: it is a UC NIZK protocol in a multi-party, multi-instance setting, with adaptive corruptions and no data erasures. Furthermore, the protocol provides full adaptive soundness. Our construction is very different from that of GOS: it is based on the recent NIZK of Canetti et al. (STOC'19), and can be based on a variety of assumptions (e.g., LWE, or LPN and DDH). We also show how to get a succinct reference string assuming LWE or DDH using GOS-like techniques.

  • Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN
    by Yu Yu on October 6, 2020 at 7:06 pm

    Learning parity with noise (LPN) is a notoriously hard (average-case) problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives such as fully homomorphic encryption [Gentry, STOC 2009]. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/poly(n)$. It remained open whether a worst-case to average-case reduction could be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. In this paper, we carry the worst-case to average-case reduction for LPN [BLVW19] further. We first expand the underlying binary linear codes (of the worst-case NCP) beyond the balanced code considered in [BLVW19] to another code (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is ($T=2^{\Omega(n^{1-c})},\epsilon=2^{-\Omega(n^{min(c,1-c)})},q=2^{\Omega(n^{min(c,1-c)})}$)-hard assuming that the NCP (on either code) at the low noise rate $\tau=n^{-c}$ is ($T'={2^{\Omega(\tau n)}}$, $\epsilon'={2^{-\Omega(\tau n)}}$, $m={2^{\Omega(\tau n)}}$)-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions would need constant-noise LPN with ($T={2^{\omega(\sqrt{n})}}$, $\epsilon'={2^{-\omega(\sqrt{n})}}$, $q={2^{\sqrt{n}}}$)-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c= 0.5$. We leave it as an open problem whether the gap can be closed or whether a separation is in place.

  • Wolverine: Fast, Scalable, and Communication-Efficient Zero-Knowledge Proofs for Boolean and Arithmetic Circuits
    by Chenkai Weng on October 6, 2020 at 3:09 pm

    Efficient zero-knowledge (ZK) proofs for arbitrary boolean or arithmetic circuits have recently attracted much attention. Existing solutions suffer from either significant prover overhead (superlinear running time and/or high memory usage) or relatively high communication complexity (at least $\kappa$ bits per gate, for computational security parameter $\kappa$). In this paper, we propose a new protocol for constant-round interactive ZK proofs that simultaneously allows for a highly efficient prover and low communication. Specifically: - The prover in our ZK protocol has linear running time and, perhaps more importantly, memory usage linear in the memory needed to evaluate the circuit non-cryptographically. This allows our proof system to scale easily to very large circuits. - For statistical security parameter $\rho= 40$, our ZK protocol communicates roughly 9 bits/gate for boolean circuits and 2–4 field elements/gate for arithmetic circuits over large fields. Using 5 threads, 400 MB of memory, and a 200 Mbps network to evaluate a circuit with billions of gates, our implementation ($\rho = 40, \kappa = 128$) runs at a rate of 0.45 $\mu$s/gate in the boolean case, and 1.6 $\mu$s/gate for an arithmetic circuit over a 61-bit field. We also present an improved subfield Vector Oblivious Linear Evaluation (sVOLE) protocol with malicious security that is of independent interest.

  • Another Look at Extraction and Randomization of Groth's zk-SNARK
    by Karim Baghery on October 6, 2020 at 11:19 am

    Due to their simplicity and performance, zk-SNARKs are widely used in real-world cryptographic protocols, including blockchain and smart contract systems. Simulation Extractability (SE) is a necessary security property for a NIZK argument to achieve Universal Composability (UC), a common requirement for such protocols. Most of the works that investigated SE focus on its strong variant, which implies proof non-malleability. In this work we investigate a relaxed, weaker notion that allows proof randomization while guaranteeing statement non-malleability, which we argue to be a more natural security property. First, we show that it is already achieved by Groth16, arguably the most efficient and widely deployed SNARK nowadays. Second, we show that, because of this, Groth16 can be efficiently transformed into a black-box weakly SE NIZK, which is sufficient for UC protocols. To support the second claim, we present and compare two practical constructions that strike different performance trade-offs: * Int-Groth16 makes use of a known transformation that encrypts the witness inside the SNARK circuit. We instantiate this transformation with an efficient SNARK-friendly encryption scheme. * Ext-Groth16 is based on the SAVER encryption scheme (Lee et al.) that plugs the encrypted witness directly into the verification equation, externally to the circuit. We prove that Ext-Groth16 is black-box weakly SE and, contrary to Int-Groth16, that its proofs are fully randomizable.

  • An Attack on Some Signature Schemes Constructed From Five-Pass Identification Schemes
    by Daniel Kales on October 6, 2020 at 10:12 am

    We present a generic forgery attack on signature schemes constructed from 5-round identification schemes made non-interactive with the Fiat-Shamir transform. The attack applies to ID schemes that use parallel repetition to decrease the soundness error. The attack can be mitigated by increasing the number of parallel repetitions, and our analysis of the attack facilitates parameter selection. We apply the attack to MQDSS, a post-quantum signature scheme relying on the hardness of the MQ-problem. Concretely, forging a signature for the L1 instance of MQDSS, which should provide 128 bits of security, can be done in $\approx 2^{95}$ operations. We verify the validity of the attack by implementing it for round-reduced versions of MQDSS, and the designers have revised their parameter choices accordingly. We also survey other post-quantum signature algorithms and find the attack succeeds against PKP-DSS (a signature scheme based on the hardness of the permuted kernel problem) and list other schemes that may be affected. Finally, we use our analysis to choose parameters and investigate the performance of a 5-round variant of the Picnic scheme.

  • Secure merge with $O(n \log \log n)$ secure operation
    by Brett Hemenway Falk on October 6, 2020 at 8:12 am

    Data-oblivious algorithms are a key component of many secure computation protocols. In this work, we show that advances in secure multiparty shuffling algorithms can be used to increase the efficiency of several key cryptographic tools. The key observation is that many secure computation protocols rely heavily on secure shuffles. The best data-oblivious shuffling algorithms require $O(n \log n)$ operations, but in the two-party or multiparty setting, secure shuffling can be achieved with only $O(n)$ communication. Leveraging the efficiency of secure multiparty shuffling, we give novel algorithms that improve the efficiency of securely sorting sparse lists, secure stable compaction, and securely merging two sorted lists. Securely sorting private lists is a key component of many larger secure computation protocols. The best data-oblivious sorting algorithms for sorting a list of $n$ elements require $O(n \log n)$ comparisons. Using black-box access to a linear-communication secure shuffle, we give a secure algorithm for sorting a list of length $n$ with $t \ll n$ nonzero elements with communication $O(t \log^2 n + n)$, which beats the best oblivious algorithms when the number of nonzero elements, $t$, satisfies $t < n/\log^2 n$. Secure compaction is the problem of removing dummy elements from a list, and is essentially equivalent to sorting on 1-bit keys. The best oblivious compaction algorithms run in $O(n)$-time, but they are unstable, i.e., the order of the remaining elements is not preserved. Using black-box access to a linear-communication secure shuffle, we give a stable compaction algorithm with only $O(n)$ communication. Our main result is a novel secure merge protocol. The best previous algorithms for securely merging two sorted lists into a sorted whole required $O(n \log n)$ secure operations. Using black-box access to an $O(n)$-communication secure shuffle, we give the first secure merge algorithm that requires only $O(n \log \log n)$ communication. Our algorithm takes as input $n$ secret-shared values, and outputs a secret-sharing of the sorted list. All our algorithms are generic, i.e., they can be implemented using generic secure computation techniques and make black-box access to a secure shuffle. Our techniques extend naturally to the multiparty situation (with a constant number of parties) as well as to handle malicious adversaries without changing the asymptotic efficiency. These algorithms have applications to securely computing database joins and order statistics on private data as well as multiparty Oblivious RAM protocols.
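
    The shuffle-then-reveal paradigm that drives these results is simple to sketch. The Python toy below works in the clear: random.shuffle stands in for the linear-communication secure shuffle, after which the dummy flags can be revealed safely (a random permutation leaks only the count), and hidden index tags restore the stable order. In the actual protocols all values remain secret-shared and the cost accounting differs; this conveys the idea only.

      # Shuffle-then-reveal compaction sketch (in the clear).
      import random

      def stable_compact(pairs):
          """pairs: list of (payload, is_dummy)."""
          tagged = [(p, d, i) for i, (p, d) in enumerate(pairs)]
          random.shuffle(tagged)               # models the secure shuffle
          survivors = [t for t in tagged if not t[1]]  # flags now revealed
          survivors.sort(key=lambda t: t[2])   # hidden tags restore order
          return [t[0] for t in survivors]

      data = [("a", False), ("-", True), ("b", False), ("-", True),
              ("c", False)]
      print(stable_compact(data))   # ['a', 'b', 'c'] in original order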

  • Forward-Secure 0-RTT Goes Live: Implementation and Performance Analysis in QUIC
    by Fynn Dallmeier on October 6, 2020 at 8:11 am

    Modern cryptographic protocols, such as TLS 1.3 and QUIC, can send cryptographically protected data in "zero round-trip times (0-RTT)", that is, without the need for a prior interactive handshake. Such protocols meet the demand for communication with minimal latency, but those currently deployed in practice achieve only rather weak security properties, as they may not achieve forward security for the first transmitted payload message and require additional countermeasures against replay attacks. Recently, 0-RTT protocols with full forward security and replay resilience have been proposed in the academic literature. These are based on puncturable encryption, which uses rather heavy building blocks, such as cryptographic pairings. Some constructions were claimed to have practical efficiency, but it is unclear how they compare concretely to protocols deployed in practice, and we currently do not have any benchmark results that new protocols can be compared with. We provide the first concrete performance analysis of a modern 0-RTT protocol with full forward security, by integrating the Bloom Filter Encryption scheme of Derler et al. (EUROCRYPT 2018) into the Chromium QUIC implementation and comparing it to Google's original QUIC protocol. We find that for reasonable deployment parameters, the server CPU load increases approximately by a factor of eight and the memory consumption on the server increases significantly, but stays below 400 MB even for medium-scale deployments that handle up to 50K connections per day. The difference in the size of handshake messages is small enough that the transmission time on the network is practically identical and therefore not significant. We conclude that while current 0-RTT protocols with full forward security come with significant computational overhead, their use in practice is not infeasible, and they may be used in applications where the increased CPU and memory load can be tolerated in exchange for full forward security and replay resilience at the cryptographic protocol level. Our results also serve as a first benchmark against which the efficiency of 0-RTT protocols developed in the future can be assessed.

  • Asynchronous Byzantine Agreement with Subquadratic Communication
    by Erica Blum on October 6, 2020 at 5:36 am

    Understanding the communication complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing. In particular, as protocols are run with a large number of parties (as, e.g., in the context of blockchain protocols), it is important to understand the dependence of the communication on the number of parties $n$. Although adaptively secure BA protocols with $o(n^2)$ communication are known in the synchronous and partially synchronous settings, no such protocols are known in the fully asynchronous case. We show here an asynchronous BA protocol with subquadratic communication tolerating an adaptive adversary who can corrupt $f<(1-\epsilon)n/3$ of the parties (for any $\epsilon>0$). One variant of our protocol assumes initial setup done by a trusted dealer, after which an unbounded number of BA executions can be run; alternately, we can achieve subquadratic amortized communication with no prior setup. We also show that some form of setup is needed for (non-amortized) subquadratic BA tolerating $\Theta(n)$ corrupted parties. As a contribution of independent interest, we show a secure-computation protocol in the same threat model that has $o(n^2)$ communication when computing no-input functionalities with short output (e.g., coin tossing).

  • FPGA Benchmarking of Round 2 Candidates in the NIST Lightweight Cryptography Standardization Process: Methodology, Metrics, Tools, and Results
    by Kamyar Mohajerani on October 6, 2020 at 5:25 am

    Over 20 Round 2 candidates in the NIST Lightweight Cryptography (LWC) process have been implemented in hardware by groups from all over the world. In August and September 2020, all implementations compliant with the LWC Hardware API, proposed in 2019, were submitted for FPGA benchmarking to George Mason University’s LWC benchmarking team, who co-authored this report. The received submissions were first verified for correct functionality and compliance with the hardware API's specification. Then, formulas for the execution times in clock cycles, as a function of input sizes, were confirmed using behavioral simulation. Where needed, appropriate corrections were introduced in collaboration with the submission teams. The compatibility of all implementations with FPGA toolsets from three major vendors, Xilinx, Intel, and Lattice Semiconductor, was verified. Optimized values of the maximum clock frequency and resource utilization metrics, such as the number of look-up tables (LUTs) and flip-flops (FFs), were obtained by running optimization tools, such as Minerva, ATHENa, and Xeda. The raw post-place-and-route results were then converted into values of the corresponding throughputs for long, medium-size, and short inputs. The overhead of modifying vs. reusing a key between two consecutive inputs was quantified. The results are presented in the form of easy-to-interpret graphs and tables, demonstrating the relative performance of all investigated algorithms. For a few submissions, the results of the initial design-space exploration are illustrated as well. An effort was made to make the entire process as transparent as possible and the results easily reproducible by other groups.

  • ABY2.0: Improved Mixed-Protocol Secure Two-Party Computation
    by Arpita Patra on October 6, 2020 at 4:00 am

    Secure Multi-party Computation (MPC) allows a set of mutually distrusting parties to jointly evaluate a function on their private inputs while maintaining input privacy. In this work, we improve semi-honest secure two-party computation (2PC) over rings, with a focus on the efficiency of the online phase. We propose an efficient mixed-protocol framework, outperforming the state-of-the-art 2PC framework of ABY. Moreover, we extend our techniques to multi-input multiplication gates without inflating the online communication, i.e., it remains independent of the fan-in. Along the way, we construct efficient protocols for several primitives such as scalar product, matrix multiplication, comparison, maxpool, and equality testing. The online communication of our scalar product is two ring elements irrespective of the vector dimension, which is a feature achieved for the first time in the 2PC literature. The practicality of our new set of protocols is showcased with four applications: i) AES S-box, ii) Circuit-based Private Set Intersection, iii) Biometric Matching, and iv) Privacy-preserving Machine Learning (PPML). Most notably, for PPML, we implement and benchmark training and inference of Logistic Regression and Neural Networks over LAN and WAN networks. For training, we improve online runtime (both for LAN and WAN) over SecureML (Mohassel et al., IEEE S&P’17) in the range 1.5x-6.1x, while for inference, the improvements are in the range of 2.5x-754.3x.

  • Certificateless Public-key Authenticate Encryption with Keyword Search Revised: MCI and MTP
    by Xiao Chen on October 6, 2020 at 3:54 am

    Boneh et al. proposed the cryptographic primitive public-key encryption with keyword search (PEKS) to allow searching on encrypted data without exposing the privacy of the keyword. Most standard PEKS schemes are vulnerable to inside keyword guessing attacks (KGA): a malicious server may generate a ciphertext on its own and then guess the keyword of a trapdoor by testing. Huang et al. solved this problem by proposing public-key authenticated encryption with keyword search (PAEKS), achieving single-trapdoor privacy (TP). Qin et al. defined the notions of multi-ciphertext indistinguishability (MCI) security and multi-trapdoor privacy (MTP) security, and proposed the first PAEKS scheme with MCI and TP. Certificateless public-key authenticated encryption with keyword search (CLPAEKS) was first formally proposed by He et al. as a combination of PAEKS and certificateless public-key cryptography (CLPKC). Lin et al. revised He et al.'s work and re-formalized the security requirements for CLPAEKS in terms of trapdoor privacy and ciphertext indistinguishability. However, how to achieve both MCI and MTP security in a CLPAEKS scheme remained unknown. In this paper, we propose the first CLPAEKS scheme with both MCI security and MTP security simultaneously. We provide a formal proof of our scheme in the random oracle model.

  • Integral Cryptanalysis of Reduced-Round Tweakable TWINE
    by Muhammad ElSheikh on October 6, 2020 at 3:53 am

    Tweakable TWINE (T-TWINE) is the first lightweight dedicated tweakable block cipher family built on a Generalized Feistel Structure (GFS). The T-TWINE family is an extension of the conventional block cipher TWINE with minimal modification, adding a simple tweak based on SKINNY's tweakey schedule. Similar to TWINE, T-TWINE has two variants, namely T-TWINE-80 and T-TWINE-128. The two variants have the same block size of 64 bits and a variable key length of 80 or 128 bits. In this paper, we study the implications of adding the tweak for the security of T-TWINE against integral cryptanalysis. In particular, we first utilize the bit-based division property to search for the longest integral distinguisher. As a result, we are able to perform a distinguishing attack against 19 rounds using $2^{6} \times 2^{63} = 2^{69}$ chosen tweak-plaintext combinations. We then convert this attack into key-recovery attacks against 26 and 27 rounds (out of 36) of T-TWINE-80 and T-TWINE-128, respectively. By prepending one round before the distinguisher and using dynamically chosen plaintexts, we manage to extend the attack by one more round without using the full codebook of the plaintext. Therefore, we are able to attack 27 and 28 rounds of T-TWINE-80 and T-TWINE-128, respectively.

  • Synchronous Constructive Cryptography
    by Chen-Da Liu-Zhang on October 6, 2020 at 3:52 am

    This paper proposes a simple synchronous composable security framework as an instantiation of the Constructive Cryptography framework, aiming to capture minimally, without unnecessary artefacts, exactly what is needed to state synchronous security guarantees. The objects of study are specifications (i.e., sets) of systems, and traditional security properties like consistency and validity can naturally be understood as specifications, thus unifying composable and property-based definitions. The framework's simplicity is in contrast to current composable frameworks for synchronous computation which are built on top of an asynchronous framework (e.g. the UC framework), thus not only inheriting artefacts and complex features used to handle asynchronous communication, but adding additional overhead to capture synchronous communication. As a second, independent contribution we demonstrate how secure (synchronous) multi-party computation protocols can be understood as constructing a computer that allows a set of parties to perform an arbitrary, on-going computation. An interesting aspect is that the instructions of the computation need not be fixed before the protocol starts but can also be determined during an on-going computation, possibly depending on previous outputs.

  • Multi-Input Functional Encryption: Efficient Applications From Symmetric Primitives (extended version)
    by Alexandros Bakas on October 6, 2020 at 3:52 am

    Functional Encryption (FE) allows users who hold a specific secret key (known as the functional key) to learn a specific function of encrypted data whilst learning nothing about the content of the underlying data. Considering this functionality, and the fact that the field of FE is still in its infancy, we sought a route to apply this potent tool to the design of efficient applications. To this end, we first built a symmetric FE scheme for the $\ell_1$ norm of a vector, which allows us to compute the sum of the components of an encrypted vector. Then, we utilized our construction to design an Order-Revealing Encryption (ORE) scheme and a privately encrypted database. While there is room for improvement in our schemes, this work is among the first attempts to utilize FE for the solution of practical problems that can have a tangible effect on people's daily lives.
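
    To make the targeted functionality concrete, here is a toy one-time symmetric FE for the sum of a vector's components in Python: each entry is masked additively and the functional key for "sum" is the sum of the masks, so decryption reveals the total and nothing else about individual entries. This illustrates the functionality only and is not the paper's construction.

      # Toy one-time additive FE for the sum (illustrative, not the paper's
      # scheme): ct_i = x_i + r_i mod P, functional key = sum(r_i) mod P.
      import secrets

      P = 2**61 - 1

      def keygen(n):
          return [secrets.randbelow(P) for _ in range(n)]   # master key

      def encrypt(msk, x):
          return [(xi + ri) % P for xi, ri in zip(x, msk)]

      def sk_sum(msk):
          return sum(msk) % P       # reveals only what the sum needs

      def decrypt_sum(ct, sk):
          return (sum(ct) - sk) % P

      msk = keygen(4)
      x = [3, 1, 4, 1]
      ct = encrypt(msk, x)
      print(decrypt_sum(ct, sk_sum(msk)))   # 9, with entries still hidden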

  • Algorithmic Acceleration of B/FV-like Somewhat Homomorphic Encryption for Compute-Enabled RAM
    by Jonathan Takeshita on October 6, 2020 at 3:51 am

    Somewhat Homomorphic Encryption (SHE) allows arbitrary computation with finite multiplicative depth to be performed on encrypted data, but its overhead is high due to the memory transfer incurred by large ciphertexts. Recent research has recognized the shortcomings of general-purpose computing for high-performance SHE, and has begun to pioneer the use of hardware-based SHE acceleration with hardware including FPGAs, GPUs, and Compute-Enabled RAM (CE-RAM). CE-RAM is well-suited for SHE, as it is not limited by the separation between memory and processing that bottlenecks other hardware. Further, CE-RAM does not move data between different processing elements. Recent research has shown the high effectiveness of CE-RAM for SHE as compared to highly-optimized CPU and FPGA implementations. However, algorithmic optimization for the implementation on CE-RAM is underexplored. In this work, we examine the effect of existing algorithmic optimizations upon a CE-RAM implementation of the B/FV scheme, and further introduce novel optimization techniques for the Full RNS Variant of B/FV. Our experiments show speedups of up to 784x for homomorphic multiplication, 143x for decryption, and 330x for encryption against a CPU implementation. We also compare our approach to similar work in CE-RAM, FPGA, and GPU acceleration, and note general improvement over existing work. In particular, for homomorphic multiplication we see speedups of 506.5x against CE-RAM, 66.85x against FPGA, and 30.8x against GPU as compared to existing work in hardware acceleration of B/FV.

  • Practical Post-Quantum Few-Time Verifiable Random Function with Applications to Algorand
    by Muhammed F. Esgin on October 6, 2020 at 3:51 am

    In this work, we introduce the first practical post-quantum verifiable random function (VRF) that relies on well-known (module) lattice problems, namely Module-SIS and Module-LWE. Our construction, named LB-VRF, results in a VRF value of only 84 bytes and a proof of around only 5 KB (in comparison to several MBs in earlier works), and runs in about 3 ms for evaluation and about 1 ms for verification. In order to design a practical scheme, we need to restrict the number of VRF outputs per key pair, which makes our construction few-time. Despite this restriction, we show how our few-time LB-VRF can be used in practice and, in particular, we estimate the performance of Algorand using LB-VRF. We find that, due to the significant increase in the communication size in comparison to classical constructions, which is inherent in all existing lattice-based schemes, the throughput in LB-VRF-based consensus protocol is reduced, but remains practical. In particular, in a medium-sized network with 100 nodes, our platform records a 1.16x to 4x reduction in throughput, depending on the accompanying signature used. In the case of a large network with 500 nodes, we can still maintain at least 66 transactions per second. This is still much better than Bitcoin, which processes only about 5 transactions per second.

  • Verifiable Functional Encryption using Intel SGX
    by Tatsuya Suzuki on October 6, 2020 at 3:51 am

    Most functional encryption schemes implicitly assume that inputs to the decryption algorithm, i.e., secret keys and ciphertexts, are generated honestly. However, they may be tampered with by malicious adversaries. Thus, verifiable functional encryption (VFE) was proposed by Badrinarayanan et al. at ASIACRYPT 2016, where anyone can publicly check the validity of secret keys and ciphertexts. They employed indistinguishability-based (IND-based) security due to an impossibility result for simulation-based (SIM-based) VFE, even though SIM-based security is more desirable. In this paper, we propose a SIM-based VFE scheme. To bypass the impossibility result, we introduce a trusted setup assumption. Although it appears to be a strong assumption, we demonstrate that it is reasonable in a hardware-based construction, e.g., that of Fisch et al. at ACM CCS 2017. Our construction is based on a verifiable public-key encryption scheme (Nieto et al., SCN 2012), a signature scheme, and a secure hardware scheme, which we refer to as VFE-HW. Finally, we discuss our implementation of VFE-HW using Intel Software Guard Extensions (Intel SGX).

  • The Topographic Signature (TopoSign) Protocol
    by Hassan Jameel Asghar on October 6, 2020 at 3:44 am

    This report contains an analysis and discussion of different versions of the TopoSign (Topographic Signature) protocol proposed by Matelski.

  • Aggregate Signature with Detecting Functionality from Group Testing
    by Shingo Sato on October 6, 2020 at 3:44 am

    In this paper, we comprehensively study aggregate signatures with detecting functionality, which combine the keyless aggregation of multiple signatures with the ability to identify invalid messages from the aggregate signature, in order to reduce the total signature size for large numbers of messages. Our contribution is (i) to formalize strong security notions for both non-interactive and interactive protocols, taking into account related work such as fault-tolerant aggregate signatures and (non-)interactive aggregate MACs with detecting functionality (i.e., the symmetric case); and (ii) to construct aggregate signatures with this functionality from group-testing protocols in a generic and comprehensive way. As instantiations, pairing-based constructions are provided.

  • Interactive Aggregate Message Authentication Equipped with Detecting Functionality from Adaptive Group Testing
    by Shingo Sato on October 6, 2020 at 3:43 am

    In this paper, we propose a formal security model and a construction methodology for interactive aggregate message authentication with detecting functionality (IAMD). IAMD is an interactive aggregate MAC protocol which can identify invalid messages with a small tag size. Several aggregate MAC schemes that can specify invalid messages have been proposed so far, using non-adaptive group testing in the prior work. In this paper, we utilize adaptive group testing to construct an IAMD scheme, and we show that the resulting IAMD scheme can identify invalid messages with a smaller tag size than the previous schemes.
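
    The adaptive idea is classical binary splitting: verify an aggregate tag over a set, and recurse only into the halves that fail, so a single invalid message among $n$ is pinpointed with about $2\log_2 n$ aggregate verifications. The Python sketch below uses a placeholder batch_valid oracle standing in for the verification of one aggregate MAC over a subset.

      # Adaptive group testing (binary splitting) over a batch of messages.
      def find_invalid(indices, batch_valid):
          if batch_valid(indices):
              return []                    # whole subset verifies
          if len(indices) == 1:
              return indices               # isolated an invalid message
          mid = len(indices) // 2
          return (find_invalid(indices[:mid], batch_valid) +
                  find_invalid(indices[mid:], batch_valid))

      # Toy oracle: message 13 carries a forged tag.
      BAD = {13}
      calls = [0]
      def batch_valid(subset):
          calls[0] += 1
          return not (BAD & set(subset))

      print(find_invalid(list(range(64)), batch_valid))  # [13]
      print("aggregate verifications:", calls[0])        # 13, not 64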

  • R-Propping of HK17: Upgrade for a Detached Proposal of NIST PQC First Round Survey
    by Pedro Hecht on October 6, 2020 at 3:43 am

    NIST is currently conducting the 3rd round of its survey to find post-quantum asymmetric protocols (PQC) [1]. We participated, in a joint team with a fellow researcher of the Interamerican Open University (UAI), with a Key-Exchange Protocol (KEP) called HK17 [2]. The proposal was flawed: Bernstein [3] found a weakness, which Li [4] later refined using a quadratic reduction of octonions and quaternions, although no objection was raised against the published non-commutative protocol and its one-way trapdoor function (OWTF). This fact prompted the search for a more suitable algebraic platform. HK17 was of interest because it was the only first-round offer strictly based on canonical group theory [5]. Finally, we adapted the original protocol with the R-propping solution of 3-dimensional tensors [6], rendering Bernstein's attack fruitless. Therefore, ElGamal IND-CCA2 cipher security, following Cao's arguments [7], is at hand.

  • Polynomial Multiplication in NTRU Prime: Comparison of Optimization Strategies on Cortex-M4
    by Erdem Alkim on October 6, 2020 at 3:42 am

    This paper proposes two different methods to perform NTT-based polynomial multiplication in polynomial rings that do not naturally support such a multiplication. We demonstrate these methods on the NTRU Prime key-encapsulation mechanism (KEM) proposed by Bernstein, Chuengsatiansup, Lange, and van Vredendaal, which uses a polynomial ring that is, by design, not amenable to use with NTT. One of our approaches uses Good's trick and focuses on speed and on supporting more than one parameter set with a single implementation. The other uses a mixed-radix NTT and focuses on smaller multipliers and less memory. On an ARM Cortex-M4 microcontroller, we show that our three NTT-based implementations, one based on Good's trick and two on mixed-radix NTTs, provide between 17% and 32% faster polynomial multiplication. For the parameter set ntrulpr761, this results in between 9% and 16% faster total operations (sum of key generation, encapsulation, and decapsulation) and requires between 15% and 39% less memory than the current state-of-the-art NTRU Prime implementation on this platform, which uses Toom-Cook-based polynomial multiplication.
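
    The toy sketch below (Python rather than Cortex-M4 assembly) shows the general shape both methods share: compute the plain product with a power-of-two NTT over a larger NTT-friendly modulus, then reduce modulo the NTRU Prime polynomial x^p - x - 1 and the real coefficient modulus q. The working modulus 998244353 with primitive root 3 is a standard NTT choice, not the paper's; real implementations use Good's trick or a mixed-radix NTT, and the working modulus must stay above the product's true coefficients for the result to be exact.

      Q, G = 998244353, 3        # NTT-friendly prime (119 * 2^23 + 1), primitive root

      def ntt(a, root):          # recursive radix-2 Cooley-Tukey transform over Z_Q
          n = len(a)
          if n == 1:
              return a
          ev = ntt(a[0::2], root * root % Q)
          od = ntt(a[1::2], root * root % Q)
          out, w = [0] * n, 1
          for i in range(n // 2):
              t = w * od[i] % Q
              out[i], out[i + n // 2] = (ev[i] + t) % Q, (ev[i] - t) % Q
              w = w * root % Q
          return out

      def mul_ntruprime(f, g, p, q):       # f, g: length-p coefficient lists
          n = 1
          while n < 2 * p:                 # transform length >= deg(f*g) + 1
              n *= 2
          root = pow(G, (Q - 1) // n, Q)   # primitive n-th root of unity mod Q
          fa = ntt(f + [0] * (n - p), root)
          ga = ntt(g + [0] * (n - p), root)
          h = ntt([x * y % Q for x, y in zip(fa, ga)], pow(root, -1, Q))
          inv_n = pow(n, -1, Q)
          h = [c * inv_n % Q for c in h]
          for i in range(n - 1, p - 1, -1):    # fold back using x^p = x + 1
              h[i - p + 1] = (h[i - p + 1] + h[i]) % Q
              h[i - p] = (h[i - p] + h[i]) % Q
              h[i] = 0
          return [c % q for c in h[:p]]

      # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2 in the ntrulpr761 ring
      print(mul_ntruprime([1, 2] + [0] * 759, [3, 4] + [0] * 759, 761, 4591)[:3])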

  • Differential analysis of the ZUC-256 initialisation
    by Steve Babbage on October 6, 2020 at 3:42 am

    This short report contains the results of a brief cryptanalysis of the initialisation phase of ZUC-256. We find IV differentials that persist for 26 of the 33 initialisation rounds, and key differentials that persist for 28 of the 33 rounds.

  • Cryptanalysis of RSA: A Special Case of Boneh-Durfee's Attack
    by Majid Mumtaz on October 6, 2020 at 3:41 am

    Boneh and Durfee proposed (at Eurocrypt 1999) polynomial-time attacks on RSA with small decryption exponent, exploiting lattice and sub-lattice structure to obtain the optimized bounds d < N^0.284 and d < N^0.292, respectively, using the lattice-based Coppersmith method. In this paper we propose a special case of the Boneh-Durfee attack with respect to large private exponents (i.e., d = N^ε > e = N^α, where ε and α are the private and public key exponents respectively) for some α ≤ ε satisfying the condition d > φ(N) − N^ε. We analyze lattices whose basis matrices are triangular and non-triangular, using large-decryption-exponent and focus group attacks respectively. The core objective is to explore the algebraic structure underlying RSA polynomials so that we can improve the performance of weak-key attacks. We implemented the attack and performed several experiments showing that an RSA cryptosystem can be successfully attacked, revealing possible weak keys that ultimately enable an adversary to factorize the RSA modulus.
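
    As a quick numeric illustration of the stated weak-key condition (only the condition itself; the attack proper runs Coppersmith's lattice method, which is beyond a few lines), with toy parameters chosen purely for readability:

      import math

      p, q = 1009, 1013                 # toy primes, far too small for real RSA
      N, phi = p * q, (p - 1) * (q - 1)
      d = phi - 2**10 + 3               # large private exponent close to phi(N)
      assert math.gcd(d, phi) == 1
      e = pow(d, -1, phi)               # matching public exponent
      eps = math.log(d) / math.log(N)   # so that d = N^eps
      print(d > phi - N**eps)           # True: the weak-key condition holds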

  • Expected-Time Cryptography: Generic Techniques and Applications to Concrete Soundness
    by Joseph Jaeger on October 6, 2020 at 3:41 am

    This paper studies concrete security with respect to expected-time adversaries. Our first contribution is a set of generic tools to obtain tight bounds on the advantage of an adversary with expected-time guarantees. We apply these tools to derive bounds in the random-oracle and generic-group models, which we show to be tight. As our second contribution, we use these results to derive concrete bounds on the soundness of public-coin proofs and arguments of knowledge. Under the lens of concrete security, we revisit a paradigm by Bootle et al. (EUROCRYPT '16) that proposes a general Forking Lemma for multi-round protocols implementing a rewinding strategy with expected-time guarantees. We give a tighter analysis, as well as a modular statement. We apply this to obtain the first quantitative bounds on the soundness of Bulletproofs (Bünz et al., S&P 2018), which we instantiate with our expected-time generic-group analysis to surface an inherent dependence between concrete security and the statement being proved.

  • Public-key Authenticated Encryption with Keyword Search Revised: Probabilistic TrapGen algorithm
    by Xiao Chen on October 6, 2020 at 3:40 am

    Public key encryption with keyword search (PEKS), first introduced by Boneh et al., enables a cloud server to search over encrypted data without learning any information about the keyword. In almost all PEKS schemes, the privacy of the trapdoor is vulnerable to inside keyword-guessing attacks (KGA): the server can generate a ciphertext on its own and then run the test algorithm to guess the keyword contained in the trapdoor. To solve this problem, Huang et al. proposed public-key authenticated encryption with keyword search (PAEKS), achieving trapdoor-privacy (TP) security, in which the data sender not only encrypts the keyword but also authenticates it using his/her secret key. Qin et al. introduced the notion of multi-ciphertext indistinguishability (MCI) security to capture outside chosen multi-ciphertext attacks, in which the adversary must distinguish two tuples of ciphertexts corresponding to two sets of keywords. They showed that Huang et al.'s work does not achieve MCI security, and proposed an improved scheme that matches both MCI security and TP security. In addition, they defined the notion of multi-trapdoor privacy (MTP) security, which requires distinguishing two tuples of trapdoors corresponding to two sets of keywords. Unfortunately, the trapdoor generation algorithms of all the above works are deterministic, which means they are unable to meet the security requirement of MTP. How to achieve MTP security against inside multi-keyword guessing attacks, i.e., how to design a probabilistic trapdoor generation algorithm, remained an open problem. In this paper, we solve this problem: we propose the first two public-key authenticated encryption with keyword search schemes achieving both MCI security and MTP security simultaneously. We provide formal proofs of our schemes in the random oracle model.
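
    To see why a deterministic trapdoor is fatal against an inside adversary, consider this deliberately naive toy (real PEKS trapdoors involve the receiver's secret key, but determinism alone already enables the dictionary attack that the paper's probabilistic TrapGen algorithm rules out):

      import hashlib

      def trapdoor(keyword: str) -> bytes:       # deterministic TrapGen (flawed)
          return hashlib.sha256(keyword.encode()).digest()

      def inside_kga(captured: bytes, dictionary):
          for guess in dictionary:                # server tests self-made trapdoors
              if trapdoor(guess) == captured:
                  return guess                    # keyword recovered
          return None

      print(inside_kga(trapdoor("invoice"), ["salary", "invoice", "contract"]))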

  • Non-Committing Encryption with Constant Ciphertext Expansion from Standard Assumptions
    by Yusuke Yoshida on October 6, 2020 at 3:40 am

    Non-committing encryption (NCE) introduced by Canetti et al. (STOC '96) is a central tool to achieve multi-party computation protocols secure in the adaptive setting. Recently, Yoshida et al. (ASIACRYPT '19) proposed an NCE scheme based on the hardness of the DDH problem, which has ciphertext expansion $\mathcal{O}(\log\lambda)$ and public-key expansion $\mathcal{O}(\lambda^2)$. In this work, we improve their result and propose a methodology to construct an NCE scheme that achieves constant ciphertext expansion. Our methodology can be instantiated from the DDH assumption and the LWE assumption. When instantiated from the LWE assumption, the public-key expansion is $\lambda\cdot\mathsf{poly}(\log\lambda)$. They are the first NCE schemes satisfying constant ciphertext expansion without using iO or common reference strings. Along the way, we define a weak notion of NCE, which satisfies only weak forms of correctness and security. We show how to amplify such a weak NCE scheme into a full-fledged one using wiretap codes with a new security property.

  • Universal Composition with Global Subroutines: Capturing Global Setup within plain UC
    by Christian Badertscher on October 6, 2020 at 3:39 am

    The Global and Externalized UC frameworks [Canetti-Dodis-Pass-Walfish, TCC 07] extend the plain UC framework to additionally handle protocols that use a ``global setup'', namely a mechanism that is also used by entities outside the protocol. These frameworks have broad applicability: Examples include public-key infrastructures, common reference strings, shared synchronization mechanisms, global blockchains, or even abstractions such as the random oracle. However, the need to work in a specialized framework has been a source of confusion, incompatibility, and an impediment to broader use. We show how security in the presence of a global setup can be captured within the plain UC framework, thus significantly simplifying the treatment. This is done as follows: - We extend UC-emulation to the case where both the emulating protocol $\pi$ and the emulated protocol $\phi$ make subroutine calls to protocol $\gamma$ that is accessible also outside $\pi$ and $\phi$. As usual, this notion considers only a single instance of $\phi$ or $\pi$ (alongside $\gamma$). - We extend the UC theorem to hold even with respect to the new notion of UC emulation. That is, we show that if $\pi$ UC-emulates $\phi$ in the presence of $\gamma$, then $\rho^{\phi\rightarrow\pi}$ UC-emulates $\rho$ for any protocol $\rho$, even when $\rho$ uses $\gamma$ directly, and in addition calls many instances of $\phi$, all of which use the same instance of $\gamma$. We prove this extension using the existing UC theorem as a black box, thus further simplifying the treatment. We also exemplify how our treatment can be used to streamline, within the plain UC model, proofs of security of systems that involve global set-up, thus providing greater simplicity and flexibility.

  • An algorithm for bounding non-minimum weight differentials in 2-round LSX-ciphers
    by Vitaly Kiryukhin on October 6, 2020 at 3:39 am

    This article describes some approaches to bounding non-minimum-weight differentials (EDP) and linear hulls (ELP) in 2-round LSX-ciphers. We propose a dynamic programming algorithm to solve this problem. For 2-round Kuznyechik, nontrivial upper bounds on all differentials (linear hulls) with $18$ and $19$ active Sboxes were obtained. These estimates also hold for other differentials (linear hulls) with a larger number of active Sboxes. We obtain a similar result for 2-round Khazad. As a consequence, the exact value of the maximum expected differential (linear) probability (MEDP/MELP) was computed for this cipher.

  • Frontrunning on Automated Decentralized Exchange in Proof Of Stake Environment
    by Andrey Sobol on October 6, 2020 at 3:38 am

    This paper analyses the frontrunning potential on Quipuswap, a decentralized exchange with automated market-making, in the context of the Proof-of-Stake family of consensus algorithms over the Tezos protocol, and proposes boosting the frontrunning resistance of the protocol via a commit-reveal scheme.
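
    A commit-reveal scheme of the kind proposed can be as small as the following sketch: traders first publish a hash commitment to their order, and only reveal the order (and nonce) once the commitment has been sequenced, so pending orders cannot be inspected and front-run. (Illustrative only; the order encoding is hypothetical.)

      import hashlib, os

      def commit(order: bytes):
          nonce = os.urandom(32)
          return hashlib.sha256(order + nonce).hexdigest(), nonce

      def reveal_ok(commitment: str, order: bytes, nonce: bytes) -> bool:
          return hashlib.sha256(order + nonce).hexdigest() == commitment

      c, nonce = commit(b"swap 100 XTZ -> kUSD")   # hypothetical order payload
      assert reveal_ok(c, b"swap 100 XTZ -> kUSD", nonce)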

  • Towards Non-Interactive Witness Hiding
    by Benjamin Kuykendall on October 6, 2020 at 3:37 am

    Witness hiding proofs require that the verifier cannot find a witness after seeing a proof. The exact round complexity needed for witness hiding proofs has so far remained an open question. In this work, we provide compelling evidence that witness hiding proofs are achievable non-interactively for wide classes of languages. We use non-interactive witness indistinguishable proofs as the basis for all of our protocols. We give four schemes in different settings under different assumptions: – A universal non-interactive proof that is witness hiding as long as any proof system, possibly an inefficient and/or non-uniform scheme, is witness hiding, has a known bound on verifier runtime, and has short proofs of soundness. – A non-uniform non-interactive protocol justified under a worst-case complexity assumption that is witness hiding and efficient, but may not have short proofs of soundness. – A new security analysis of the two-message argument of Pass [Crypto 2003], showing witness hiding for any non-uniformly hard distribution. We propose a heuristic approach to removing the first message, yielding a non-interactive argument. – A witness hiding non-interactive proof system for languages with unique witnesses, assuming the non-existence of a weak form of witness encryption for any language in NP ∩ coNP.

  • Single-to-Multi-Theorem Transformations for Non-Interactive Statistical Zero-Knowledge
    by Marc Fischlin on October 6, 2020 at 3:37 am

    Non-interactive zero-knowledge proofs or arguments allow a prover to show validity of a statement without further interaction. For non-trivial statements such protocols require a setup assumption in the form of a common random or reference string (CRS). Generally, the CRS can only be used for one statement (single-theorem zero-knowledge), such that a fresh CRS would need to be generated for each proof. Fortunately, Feige, Lapidot and Shamir (FOCS 1990) presented a transformation for any non-interactive zero-knowledge proof system that allows the CRS to be reused any polynomial number of times (multi-theorem zero-knowledge). This FLS transformation, however, is only known to work for computational zero-knowledge, or else requires a structured, non-uniform common reference string. In this paper we present FLS-like transformations that work for non-interactive statistical zero-knowledge arguments in the common random string model. They allow one to go from single-theorem to multi-theorem zero-knowledge and also preserve soundness, for both properties in the adaptive and non-adaptive case. Our first transformation is based on the general assumption that one-way permutations exist, while our second transformation uses lattice-based assumptions. Additionally, we define different possible soundness notions for non-interactive arguments and discuss their relationships.

  • Correlation Power Analysis and Higher-order Masking Implementation of WAGE
    by Yunsi Fei on October 6, 2020 at 3:36 am

    WAGE is a hardware-oriented authenticated cipher, which has the smallest (unprotected) hardware cost (at the 128-bit security level) among the round 2 candidates of the NIST lightweight cryptography (LWC) competition. In this work, we analyze the security of WAGE against correlation power analysis (CPA) on the ARM Cortex-M4F microcontroller. Our attack detects secret-key leakage from power consumption for up to 12 (out of 111) rounds of the WAGE permutation and requires 10,000 power traces to recover the 128-bit secret key. Motivated by the CPA attack and the low hardware cost of WAGE, we propose the first optimized masking scheme for WAGE in the t-strong non-interference (SNI) security model. We investigate different masking schemes for the S-boxes by exploiting their internal structures and leveraging state-of-the-art masking techniques. To practically demonstrate the effectiveness of masking, we perform the test vector leakage assessment on first-order masked WAGE. We evaluate the hardware performance of WAGE for first-, second-, and third-order security and provide a comparison with other NIST LWC round 2 candidates.

  • Algebraic Key-Recovery Attacks on Reduced-Round Xoofff
    by Tingting Cui on October 6, 2020 at 3:35 am

    Farfalle, a permutation-based construction for building a pseudorandom function (PRF), is highly versatile. It can be used for message authentication codes, stream ciphers, key derivation functions, authenticated encryption, and so on. The Farfalle construction relies on a set of permutations and on so-called rolling functions: it can be split into a compression layer followed by a two-step expansion layer. As one instance of Farfalle, Xoofff is very efficient on a wide range of platforms from low-end devices to high-end processors, combining the narrow permutation Xoodoo with the inherent parallelism of Farfalle. In this paper, we present key-recovery attacks on reduced-round Xoofff. After identifying a weakness in the expanding rolling function, we first propose practical attacks on Xoofff instantiated with 1-/2-round Xoodoo in the expansion layer. We then extend these attacks to Xoofff instantiated with 3-/4-round Xoodoo in the expansion layer by making use of meet-in-the-middle algebraic attacks and the linearization technique. All attacks proposed here -- which are independent of the details of the compression and/or middle layer -- have been practically verified (either on the "real" Xoofff or on a toy version of Xoofff with a block size of 96 bits). As a countermeasure, we discuss how the rolling function can be slightly modified, at no cost, to reduce the number of attackable rounds.

  • Finding EM leakages at design stage: a simulation methodology
    by Davide Poggi on October 6, 2020 at 3:33 am

    For many years, EM side-channel attacks, which exploit the statistical link between the magnetic field radiated by secure ICs and the data they process, have been a critical threat. Indeed, attackers need to find only one hotspot (a position of the EM probe over the IC surface) with an exploitable leakage to compromise the security. As a result, designing secure ICs robust against these attacks is incredibly difficult, because designers must warrant that there is no hotspot over the whole IC surface. This task is all the more difficult as there is no CAD tool to compute the magnetic field radiated by ICs, and hence no methodology to detect hotspots at the design stage. Within this context, this paper introduces a flow for predicting the EM radiations of ICs, together with two related methodologies. The first aims at identifying EM hotspots at the surface of ICs and quantifying their dangerousness, i.e., positions where an EM probe can be placed to capture a leakage. The second aims at locating leakage hotspots within ICs, i.e., areas in circuits from which these leakages originate.

  • Black-Box Non-Interactive Non-Malleable Commitments
    by Rachit Garg on October 6, 2020 at 3:32 am

    There has been recent exciting progress on building non-interactive non-malleable commitments from judicious assumptions. All proposed approaches proceed in two steps. First, obtain simple ``base'' commitment schemes for very small tag/identity spaces based on various sub-exponential hardness assumptions. Next, assuming sub-exponential non-interactive witness indistinguishable proofs (NIWIs) and variants of keyless collision-resistant hash functions, construct non-interactive compilers that convert tag-based non-malleable commitments for a small tag space into tag-based non-malleable commitments for a larger tag space. We propose the first black-box construction of non-interactive non-malleable commitments. Our key technical contribution is a novel way of implementing the non-interactive proof of consistency required by the tag amplification process. Prior to our work, the only known approach to tag amplification without setup and with black-box use of the base scheme (Goyal, Lee, Ostrovsky and Visconti, FOCS 2012) added multiple rounds of interaction. Our construction satisfies the strongest known definition of non-malleability, i.e., CCA (chosen commitment attack) security. In addition to being black-box, our approach dispenses with the need for sub-exponential NIWIs, which was common to all prior work. Instead of NIWIs, we rely on sub-exponential hinting PRGs, which can be obtained based on a broad set of assumptions such as sub-exponential CDH or LWE.

  • TR-31 and AS 2805 (Non)equivalence report
    by Arthur Van Der Merwe on October 6, 2020 at 3:32 am

    We examine the security of the Australian card payment system by analysing its existing cryptographic protocols. We compare current Australian cryptographic methods with their international counterparts, such as the ANSI TR-31 methods. Finally, we formulate a formal difference between the two schemes using security proofs.

  • A Lower Bound for One-Round Oblivious RAM
    by David Cash on October 6, 2020 at 3:31 am

    We initiate a fine-grained study of the round complexity of Oblivious RAM (ORAM). We prove that any one-round balls-in-bins ORAM that does not duplicate balls must have either \Omega(\sqrt{N}) bandwidth or \Omega(\sqrt{N}) client memory, where N is the number of memory slots being simulated. This shows that such schemes are strictly weaker than general (multi-round) ORAMs or those with server computation, and in particular implies that a one-round version of the original square-root ORAM of Goldreich and Ostrovsky (J. ACM 1996) is optimal. We prove this bound via new techniques that differ from those of Goldreich and Ostrovsky, and of Larsen and Nielsen (CRYPTO 2018), which achieved an \Omega(\log N) bound for balls-in-bins and general multi-round ORAMs respectively. Finally we give a weaker extension of our bound that allows for limited duplication of balls, and also show that our bound extends to multi-round ORAMs of a restricted form that include the best known constructions.

  • Quantum copy-protection of compute-and-compare programs in the quantum random oracle model
    by Andrea Coladangelo on October 6, 2020 at 3:31 am

    Copy-protection allows a software distributor to encode a program in such a way that it can be evaluated on any input, yet it cannot be "pirated" - a notion that is impossible to achieve in a classical setting. Aaronson (CCC 2009) initiated the formal study of quantum copy-protection schemes, and speculated that quantum cryptography could offer a solution to the problem thanks to the quantum no-cloning theorem. In this work, we introduce a quantum copy-protection scheme for a large class of evasive functions known as "compute-and-compare programs" - a more expressive generalization of point functions. A compute-and-compare program $\mathsf{CC}[f,y]$ is specified by a function $f$ and a string $y$ within its range: on input $x$, $\mathsf{CC}[f,y]$ outputs $1$, if $f(x) = y$, and $0$ otherwise. We prove that our scheme achieves non-trivial security against fully malicious adversaries in the quantum random oracle model (QROM), which makes it the first copy-protection scheme to enjoy any level of provable security in a standard cryptographic model. As a complementary result, we show that the same scheme fulfils a weaker notion of software protection, called "secure software leasing", introduced very recently by Ananth and La Placa (eprint 2020), with a standard security bound in the QROM, i.e. guaranteeing negligible adversarial advantage.
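
    The compute-and-compare functionality itself is tiny when written out, which makes clear how it generalizes point functions (take f to be the identity):

      # CC[f, y](x) = 1 iff f(x) = y, exactly as defined in the abstract.
      def make_cc(f, y):
          return lambda x: 1 if f(x) == y else 0

      point = make_cc(lambda x: x, 42)       # point function: 1 only on input 42
      cc = make_cc(lambda x: x % 1000, 7)    # 1 on every x with x mod 1000 == 7
      assert point(42) == 1 and point(41) == 0 and cc(2007) == 1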

  • On Pseudorandom Encodings
    by Thomas Agrikola on October 6, 2020 at 3:23 am

    We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, ``honey encryption'' and steganography. The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography.

  • KRNC: New Foundations for Permissionless Byzantine Consensus and Global Monetary Stability
    by Clinton Ehrlich on October 5, 2020 at 10:01 pm

    This paper applies biomimetic engineering to the problem of permissionless Byzantine consensus and achieves results that surpass the prior state of the art by four orders of magnitude. It introduces a biologically inspired asymmetric Sybil-resistance mechanism, Proof-of-Balance, which can replace symmetric Proof-of-Work and Proof-of-Stake weighting schemes. The biomimetic mechanism is incorporated into a permissionless blockchain protocol, Key Retroactivity Network Consensus (“KRNC”), which delivers ~40,000 times the security and speed of today’s decentralized ledgers. KRNC allows the fiat money that the public already owns to be upgraded with cryptographic inflation protection, eliminating the problems inherent in bootstrapping new currencies like Bitcoin and Ethereum. The paper includes two independently significant contributions to the literature. First, it replaces the non-structural axioms invoked in prior work with a new formal method for reasoning about trust, liveness, and safety from first principles. Second, it demonstrates how two previously overlooked exploits — book-prize attacks and pseudo-transfer attacks — collectively undermine the security guarantees of all prior permissionless ledgers.

  • SiGamal: A supersingular isogeny-based PKE and its application to a PRF
    by Tomoki Moriya on October 5, 2020 at 8:56 pm

    We propose two new supersingular isogeny-based public key encryptions: SiGamal and C-SiGamal. They are developed by giving an additional point of order $2^r$ to CSIDH. SiGamal is similar to ElGamal encryption, while C-SiGamal is a compressed version of SiGamal. We prove that SiGamal and C-SiGamal are IND-CPA secure without using hash functions, under a new assumption: the P-CSSDDH assumption. This assumption comes from the expectation that no efficient algorithm can distinguish between a random point and a point that is the image of a public point under a hidden isogeny. Next, we propose a Naor-Reingold type pseudorandom function (PRF) based on SiGamal. If the P-CSSDDH assumption and the CSSDDH$^*$ assumption (which guarantees the security of CSIDH using a prime $p$ in the setting of SiGamal) hold, then our proposed function is a pseudorandom function. Moreover, we estimate that the computational costs of the group actions needed to compute our proposed PRF are about $\sqrt{\frac{8T}{3\pi}}$ times that of the group actions in CSIDH, where $T$ is the Hamming weight of the input of the PRF. Finally, we experimented with group actions in SiGamal and C-SiGamal. The computational costs of group actions in SiGamal-512 with a $256$-bit plaintext message space were about $2.62$ times that of a group action in CSIDH-512.

  • KVaC: Key-Value Commitments for Blockchains and Beyond
    by Shashank Agrawal on October 5, 2020 at 4:03 pm

    As blockchains grow in size, validating new transactions becomes more and more resource intensive. To deal with this, there is a need to discover compact encodings of the (effective) state of a blockchain -- an encoding that allows for efficient proofs of membership and updates. In the case of account-based cryptocurrencies, the state can be represented by a key-value map, where keys are the account addresses and values consist of account balance, nonce, etc. We propose a new commitment scheme for key-value maps whose size does not grow with the number of keys, yet whose proofs of membership are of constant size. In fact, the encoding and the proofs consist of just two and three group elements, respectively (in groups of unknown order, like class groups). Verifying and updating proofs involves just a few group exponentiations. Additive updates to key values enjoy the same level of efficiency too. Key-value commitments can be used to build dynamic accumulators and vector commitments, which find applications in group signatures, anonymous credentials, verifiable databases, interactive oracle proofs, etc. Using our new key-value commitment, we provide the most efficient constructions of (sub)vector commitments to date.

  • DLSAG: Non-Interactive Refund Transactions For Interoperable Payment Channels in Monero
    by Pedro Moreno-Sanchez on October 5, 2020 at 11:22 am

    Monero has emerged as one of the leading cryptocurrencies with privacy by design. However, this comes at the price of reduced expressiveness and interoperability, as well as severe scalability issues. First, Monero is restricted to coin exchanges among individual addresses, and no further functionality is supported. Second, transactions are authorized by linkable ring signatures, a digital signature scheme only available in Monero, thereby hindering interoperability with the rest of the cryptocurrencies. Third, Monero transactions require a high on-chain footprint, which leads to rapid ledger growth and thus scalability issues. In this work, we extend Monero's expressiveness and interoperability while mitigating its scalability issues. We present \emph{Dual Linkable Spontaneous Anonymous Group Signature for Ad Hoc Groups (DLSAG)}, a novel linkable ring signature scheme that enables, for the first time, \emph{refund transactions} natively in Monero: DLSAG can seamlessly be implemented along with other cryptographic tools already available in Monero, such as commitments and range proofs. We formally prove that DLSAG achieves the same security and privacy notions introduced for the original linkable ring signature (Liu et al., 2004), namely unforgeability, signer ambiguity, and linkability. We have evaluated DLSAG and shown that it imposes slightly lower computation overhead and similar communication overhead compared to the current digital signature scheme in Monero, demonstrating its practicality. We further show how to leverage DLSAG to enable off-chain scalability solutions in Monero, such as payment channels and payment-channel networks, as well as atomic swaps and interoperable payments with virtually all cryptocurrencies available today. DLSAG is currently being discussed within the Monero community as an option for possible adoption as a key building block for expressiveness, interoperability, and scalability.

  • Towards Post-Quantum Security for Signal's X3DH Handshake
    by Jacqueline Brendel on October 5, 2020 at 9:17 am

    Modern key exchange protocols are usually based on the Diffie-Hellman (DH) primitive. The beauty of this primitive, among other things, is the potential reuse of key shares: DH shares can be used either a single time or in multiple runs. Since DH-based protocols are insecure against quantum adversaries, alternative solutions have to be found when moving to the post-quantum setting. However, most post-quantum candidates, including schemes based on lattices and even supersingular isogeny DH, are not known to be secure under key reuse. In particular, this means that they cannot necessarily be deployed as an immediate DH substitute in protocols. In this paper, we introduce the notion of a split key encapsulation mechanism (split KEM) to translate the desired key reusability of a DH-based protocol to a KEM-based flow. We provide the relevant security notions for split KEMs and show how the formalism lends itself to lifting Signal's X3DH handshake to the post-quantum KEM setting without additional message flows. Although the proposed framework conceptually solves the raised issues, instantiating it securely from post-quantum assumptions proved to be non-trivial. We give passively secure instantiations from (R)LWE, yet overcoming the above-mentioned insecurities under key reuse in the presence of active adversaries remains an open problem. Approaching one-sided key reuse, we provide a split KEM instantiation that allows such reuse based on the KEM introduced by Kiltz (PKC 2007), which may serve as a post-quantum blueprint if the underlying hardness assumption (gap hashed Diffie-Hellman) holds for the commutative group action of CSIDH (Asiacrypt 2018). The intention of this paper is hence to raise awareness of the challenges arising when moving to KEM-based key exchange protocols with key reusability, and to propose split KEMs as a specific target for instantiation in future research.
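
    The split KEM syntax can be summarized as: encapsulation additionally takes the sender's secret key, and decapsulation the sender's public key, mirroring the two-sided key use of DH. The sketch below instantiates the interface with static DH over a toy group (hypothetical parameters; the ciphertext is empty here only because both shares are long-term, and the paper's post-quantum candidates use (R)LWE or CSIDH instead):

      import hashlib, secrets

      P, G = 2**127 - 1, 3                    # toy group: Z_P^* with base point G

      def keygen():
          sk = secrets.randbelow(P - 2) + 1
          return sk, pow(G, sk, P)

      def encaps(sk_sender, pk_receiver):     # split KEM: needs the sender's sk
          ss = pow(pk_receiver, sk_sender, P)
          return b"", hashlib.sha256(str(ss).encode()).digest()

      def decaps(sk_receiver, pk_sender, ct): # ...and the sender's pk to decap
          ss = pow(pk_sender, sk_receiver, P)
          return hashlib.sha256(str(ss).encode()).digest()

      a_sk, a_pk = keygen(); b_sk, b_pk = keygen()
      ct, k = encaps(a_sk, b_pk)
      assert k == decaps(b_sk, a_pk, ct)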

  • CHVote Protocol Specification
    by Rolf Haenni on October 5, 2020 at 8:12 am

    This document provides a self-contained, comprehensive, and fully-detailed specification of a new cryptographic voting protocol designed for political elections in Switzerland. The document describes every relevant aspect and every necessary technical detail of the computations and communications performed by the participants during the protocol execution. To support the general understanding of the cryptographic protocol, the document includes the necessary mathematical and cryptographic background information. By providing this information to the maximal possible extent, it serves as an ultimate companion document for the developers in charge of implementing this system. It may also serve as a manual for developers trying to implement independent election verification software. The decision to make this document public even enables implementations by third parties, for example by students trying to develop a clone of the system for scientific evaluations or to implement protocol extensions to achieve additional security properties. In any case, the target audience of this document consists of system designers, software developers, and cryptographic experts.

  • Obfuscating Finite Automata
    by Steven D. Galbraith on October 4, 2020 at 3:12 pm

    We construct a VBB and perfect circuit-hiding obfuscator for evasive deterministic finite automata using a matrix encoding scheme with a limited zero-testing algorithm. We construct the matrix encoding scheme by extending an existing matrix FHE scheme. Using obfuscated DFAs we can for example evaluate secret regular expressions or disjunctive normal forms on public inputs. In particular, the possibility of evaluating regular expressions solves the open problem of obfuscated substring matching.

  • Extended Truncated-differential Distinguishers on Round-reduced AES
    by Zhenzhen Bao on October 4, 2020 at 3:17 am

    Distinguishers on round-reduced AES have attracted considerable attention in recent years. While the number of rounds covered in key-recovery attacks has not increased, subspace, yoyo, mixture-differential, and multiple-of-n cryptanalysis have advanced the understanding of the properties of the cipher. For substitution-permutation networks, integral attacks are a suitable target for extension, since they usually end after a linear layer sums several subcomponents. Based on results by Patarin, Chen et al. already observed that the expected number of collisions for a sum of permutations differs slightly from that for a random primitive, though their targets remained lightweight primitives. The present work illustrates how the well-known integral distinguisher on three-round AES resembles a sum of PRPs and can be extended to truncated-differential distinguishers over 4 and 5 rounds. In contrast to previous distinguishers by Grassi et al., our approach allows us to prepend a round that starts from a diagonal subspace. We demonstrate how the prepended round can be used for key recovery with a new differential key-recovery attack on six-round AES. Moreover, we show how the prepended round can also be integrated to form a six-round distinguisher. For all distinguishers and the key-recovery attack, our results are supported by implementations with Cid et al.'s established Small-AES version. While the distinguishers do not threaten the security of the AES, they shed more light on its properties.

  • ABDKS: Attribute-Based Encryption with Dynamic Keyword Search in Fog Computing
    by Fei Meng on October 3, 2020 at 2:29 am

    Attribute-based encryption with keyword search (ABKS) achieves both fine-grained access control and keyword search. However, in previous ABKS schemes, the search algorithm requires every keyword in the target keyword set to match the ciphertext keyword set exactly; otherwise the algorithm outputs no search result, which hinders usability. Moreover, previous ABKS schemes are vulnerable to what we call a \emph{peer-decryption attack}: the ciphertext may be eavesdropped and decrypted by an adversary who has sufficient authority but no information about the ciphertext keywords. In this paper, we provide a new system in fog computing: ciphertext-policy attribute-based encryption with dynamic keyword search (ABDKS). In ABDKS, the search algorithm requires only \emph{one} keyword to be identical between the two keyword sets, and outputs a correlation that reflects the number of common keywords in those two sets. In addition, our ABDKS is resistant to the peer-decryption attack, since decryption requires not only sufficient authority but also at least one keyword of the ciphertext. Beyond that, ABDKS shifts most computational overhead from resource-constrained users to fog nodes. The security analysis shows that ABDKS can resist chosen-plaintext attacks (CPA) and chosen-keyword attacks (CKA).
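
    Stripped of the encryption, the relaxed matching rule is simply set intersection with the overlap size as a relevance score; the sketch below states it over plaintext keyword sets purely for illustration (the actual scheme evaluates this relation over encrypted data):

      def search(ciphertext_keywords: set, target_keywords: set):
          overlap = ciphertext_keywords & target_keywords
          return len(overlap) if overlap else None   # None: no keyword in common

      print(search({"cloud", "fog", "acl"}, {"fog", "acl", "iot"}))  # 2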

  • A Modular Treatment of Blind Signatures from Identification Schemes
    by Eduard Hauck on October 2, 2020 at 7:33 pm

    We propose a modular security treatment of blind signatures derived from linear identification schemes in the random oracle model. To this end, we present a general framework that captures several well-known schemes from the literature and allows us to prove their security. Our modular security reduction introduces a new security notion for identification schemes called One-More Man-in-the-Middle security, which we show equivalent to the classical One-More-Unforgeability notion for blind signatures. We also propose a generalized version of the Forking Lemma due to Bellare and Neven (CCS 2006) and show how it can be used to greatly improve the understandability of the classical security proofs for blind signature schemes by Pointcheval and Stern (Journal of Cryptology 2000).

  • Lin2-Xor Lemma and Log-size Linkable Ring Signature
    by Anton A. Sokolov on October 2, 2020 at 5:40 pm

    In this paper we introduce a novel method for constructing an efficient linkable ring signature without a trusted setup in a group where the decisional Diffie-Hellman problem is hard and no bilinear pairings exist. Our linkable ring signature is logarithmic in the size of the signer anonymity set; its verification complexity is linear in the anonymity set size and logarithmic in the signer threshold. A range of recently proposed setup-free logarithmic-size signatures is based on the commitment-to-zero proving system by Groth and Kohlweiss or on the Bulletproofs inner-product compression method by Bünz et al. In contrast, we construct our signature from scratch using the Lin2-Xor and Lin2-Selector lemmas that we formulate and prove here. With these lemmas we construct an n-move public-coin special honest-verifier zero-knowledge membership proof protocol and instantiate it in the form of a general-purpose setup-free signer-ambiguous linkable ring signature in the random oracle model.

  • On the Optimality of Optimistic Responsiveness
    by Ittai Abraham on October 2, 2020 at 10:17 am

    Synchronous consensus protocols, by definition, have a worst-case commit latency that depends on the bounded network delay. The notion of optimistic responsiveness was recently introduced to allow synchronous protocols to commit instantaneously when some optimistic conditions are met. In this work, we revisit this notion of optimistic responsiveness and present optimal latency results. We present a lower bound for Byzantine Broadcast that relates the latencies of optimistic and synchronous commits when the designated sender is honest and the optimistic commit can tolerate some faults. We then present two matching upper bounds for tolerating f faults out of n = 2f + 1 parties. Our first upper bound achieves optimal optimistic and synchronous commit latencies when the designated sender is honest and the optimistic commit can tolerate some faults. Our second upper bound achieves optimal optimistic and synchronous commit latencies when the designated sender is honest but the optimistic commit does not tolerate any faults. The matching lower and upper bounds make both results tight for n = 2f + 1. Our upper bound results are presented in a state machine replication setting with a steady-state leader who is replaced by a view-change protocol when they do not make progress. For this setting, we also present an optimistically responsive protocol whose view-change protocol is optimistically responsive too.

  • Probabilistic analysis on Macaulay matrices over finite fields and complexity of constructing Gröbner bases
    by Igor Semaev on October 2, 2020 at 6:44 am

    Gröbner basis methods are used to solve systems of polynomial equations over finite fields, but their complexity is poorly understood. In this work, an upper bound on the time complexity of constructing a Gröbner basis and finding the solutions of a system is proved. A key parameter in this estimate is the degree of regularity of the leading forms of the polynomials. Therefore, we provide an upper bound on the degree of regularity for a sufficiently overdetermined system of forms over any finite field. The bound holds with probability tending to 1 and depends only on the number of variables, the number of polynomials, and their degrees. Our results imply that sufficiently overdetermined systems of polynomial equations are solvable in polynomial time with high probability.

  • The design of scalar AES Instruction Set Extensions for RISC-V
    by Ben Marshall on October 2, 2020 at 5:17 am

    Secure, efficient execution of AES is an essential requirement on most computing platforms. Dedicated Instruction Set Extensions (ISEs) are often included for this purpose. RISC-V is a (relatively) new ISA that lacks such a standardised ISE. We survey the state-of-the-art industrial and academic ISEs for AES, and implement and evaluate five different ISEs, one of which is novel. We recommend separate ISEs for the 32- and 64-bit base architectures, with measured performance improvements for an AES-128 block encryption of 4× and 10×, at a hardware cost of 1.1K and 8.2K gates respectively, when compared to a software-only implementation based on T-tables. We also explore how the proposed standard bit-manipulation extension to RISC-V can be harnessed for efficient implementation of AES-GCM. Our work supports the ongoing RISC-V cryptography extension standardisation process.

  • A Fast and Compact RISC-V Accelerator for Ascon and Friends
    by Stefan Steinegger on October 2, 2020 at 3:07 am

    Ascon-p is the core building block of Ascon, the winner in the lightweight category of the CAESAR competition. With ISAP, another Ascon-p-based AEAD scheme is currently competing in the 2nd round of the NIST lightweight cryptography standardization project. In contrast to Ascon, ISAP focuses on providing hardening/protection against a large class of implementation attacks, such as DPA, DFA, SFA, and SIFA, entirely at mode level. Consequently, Ascon-p can be used to realize a wide range of cryptographic computations, such as authenticated encryption, hashing, and pseudorandom number generation, with or without the need for implementation security, which makes it a perfect choice for lightweight cryptography on embedded devices. In this paper, we implement Ascon-p as an instruction extension for RISC-V that is tightly coupled to the processor's register file and thus does not require any dedicated registers. This single instruction allows us to realize all cryptographic computations that typically occur on embedded devices with high performance. More concretely, with ISAP and Ascon's family of modes for AEAD and hashing, we can perform cryptographic computations with a performance of about 2 cycles/byte, or about 4 cycles/byte if protection against fault attacks and power analysis is desired. As we show, our instruction extension requires only 4.7 kGE, about half the area of dedicated Ascon co-processor designs, and is easy to integrate into low-end embedded devices like 32-bit ARM Cortex-M or RISC-V microprocessors. Finally, we analyze the implementation security provided by ISAP when implemented using our instruction extension.

  • A Secure User Authentication and Key Agreement Scheme for HWSN Tailored for the Internet of Things Environment
    by Hamidreza Yazdanpanah on October 2, 2020 at 12:08 am

    The Internet of Things (IoT) describes a world in which things interact with each other and with people through internet connections or other communication means, share information, and deliver a new class of capabilities, applications, and services; a world in which all things and heterogeneous devices are addressable and controllable. Wireless sensor networks (WSNs) play an important role in such an environment, since they cover a wide application field. Researchers are already working on how to better integrate WSNs into the IoT environment; one aspect of this is the security of the integration. In 2014, Turkanović et al. proposed a lightweight user authentication and key agreement protocol for heterogeneous WSNs (HWSNs) based on the Internet of Things concept. In this scheme, a remote user can access a single desired sensor node from the WSN without first connecting to a gateway node (GWN). Moreover, the scheme is lightweight, as it is based on simple symmetric cryptography and uses only simple hash and XOR computations. Turkanović et al.'s scheme had some security shortcomings and was susceptible to several security attacks. Recently, Farash et al. proposed an efficient user authentication and key agreement scheme for HWSNs tailored for the Internet of Things environment, based on Turkanović et al.'s scheme. Although their scheme is efficient, we found that it is vulnerable to some cryptographic attacks. In this paper, we demonstrate some security weaknesses of Farash et al.'s scheme and then propose an improved and secure mutual authentication and key agreement scheme.

  • Secret Shared Shuffle
    by Melissa Chase on October 1, 2020 at 11:23 am

    Generating secret shares of a shuffled dataset - such that neither party knows the order in which it is permuted - is a fundamental building block in many protocols, such as secure collaborative filtering, oblivious sorting, and secure function evaluation on set intersection. Traditional approaches to this problem involve either expensive public-key-based cryptography or symmetric cryptography on permutation networks. While public-key-based solutions are bandwidth efficient, they are computation-bound. On the other hand, permutation-network-based constructions are communication-bound, especially when the elements are long, for example feature vectors in an ML context. We design a new 2-party protocol for this task of computing secret shares of shuffled data, which we refer to as secret-shared shuffle. Our protocol is secure against a static semi-honest adversary. At the heart of our approach is a new method of obtaining two sets of pseudorandom shares which are ``correlated via the permutation'', which can be implemented with low communication using GGM puncturable PRFs. This gives a new protocol for secure shuffle which is concretely more efficient than existing techniques in the literature. In particular, we are three orders of magnitude faster than the public-key based approach and one order of magnitude faster than the best known symmetric-key approach based on permutation networks when the elements are moderately large.
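
    A GGM puncturable PRF, the primitive behind the correlated shares, fits in a few lines: a binary tree of PRG evaluations where the root evaluates everything, and the siblings along one path let another party evaluate everywhere except at that one input. (Toy sketch, not the paper's optimized construction.)

      import hashlib

      def prg(seed: bytes):                       # length-doubling PRG from SHA-256
          return (hashlib.sha256(seed + b"0").digest(),
                  hashlib.sha256(seed + b"1").digest())

      def ggm_eval(root: bytes, x: int, depth: int) -> bytes:
          node = root
          for i in reversed(range(depth)):
              node = prg(node)[(x >> i) & 1]      # walk bit by bit down to leaf x
          return node

      def puncture(root: bytes, x: int, depth: int):
          keys, node = [], root
          for i in reversed(range(depth)):
              left, right = prg(node)
              bit = (x >> i) & 1
              keys.append(right if bit == 0 else left)   # off-path sibling key
              node = right if bit else left
          return keys      # enough to evaluate the PRF everywhere except at x

      root = bytes(32)
      assert ggm_eval(root, 5, 8) != ggm_eval(root, 6, 8)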

  • Weak-Key Distinguishers for AES
    by Lorenzo Grassi on October 1, 2020 at 11:18 am

    In this paper, we analyze the security of AES in the case in which the whitening key is a weak key. After a systematization of the classes of weak keys of AES, we perform an extensive analysis of weak-key distinguishers (in the single-key setting) for AES instantiated with the original key schedule and with the new key schedule proposed at ToSC/FSE'18 (which is faster than the standard key schedule and ensures a higher number of active S-Boxes). As one of the main results, we show that (almost) all the secret-key distinguishers for round-reduced AES currently present in the literature can be set up for a higher number of rounds of AES if the whitening key is a weak key. Using these results as a starting point, we describe a property for 9-round AES-128 and 12-round AES-256 in the chosen-key setting with complexity $2^{64}$, without requiring related keys. These new chosen-key distinguishers -- set up by exploiting a variant of the multiple-of-8 property introduced at Eurocrypt'17 -- improve all the AES chosen-key distinguishers in the single-key setting. The entire analysis has been performed using a new framework that we introduce here, called "weak-key subspace trails", which is obtained by combining invariant subspaces (Crypto'11) and subspace trails (FSE'17) into a new, more powerful, attack. Weak-key subspace trails are defined by extending the invariant subspace approach to allow for different subspaces in every round, something that so far only the subspace trail approach and a generalization for invariant subspace and invariant set attacks (Asiacrypt'18) were able to do. For easier detection, we also provide an algorithm which finds these weak-key subspace trails.

  • A Security Model and Fully Verified Implementation for the IETF QUIC Record Layer
    by Antoine Delignat-Lavaud on October 1, 2020 at 9:29 am

    We investigate the security of the QUIC record layer, as standardized by the IETF in draft version 30. This version features major differences compared to Google's original protocol and prior IETF drafts. We model packet and header encryption, which uses a custom construction for privacy. To capture its goals, we propose a security definition for authenticated encryption with semi-implicit nonces. We show that QUIC uses an instance of a generic construction parameterized by a standard AEAD-secure scheme and a PRF-secure cipher. We formalize and verify the security of this construction in F*. The proof uncovers interesting limitations of nonce confidentiality, due to the malleability of short headers and the ability to choose the number of least significant bits included in the packet counter. We propose improvements that simplify the proof and increase robustness against strong attacker models. In addition to the verified security model, we also give a concrete functional specification for the record layer, and prove that it satisfies important functionality properties (such as successful decryption of encrypted packets) after fixing more errors in the draft. We then provide a high-performance implementation of the record layer that we prove to be memory-safe, correct with respect to our concrete specification (inheriting its functional correctness properties), and secure with respect to our verified model. To evaluate this component, we develop a provably-safe implementation of the rest of the QUIC protocol. Our record layer achieves nearly 2 GB/s throughput, and our QUIC implementation's performance is within 21% of an unverified baseline.

  • A Constant Time Full Hardware Implementation of Streamlined NTRU Prime
    by Adrian Marotzke on October 1, 2020 at 1:09 am

    This paper presents a constant-time hardware implementation of the NIST round 2 post-quantum cryptographic algorithm Streamlined NTRU Prime. We implement the entire KEM algorithm, including all steps for key generation, encapsulation and decapsulation, and all encoding and decoding. We focus on optimizing the resources used, as well as applying the optimization and parallelism available in a hardware design. We show that the core encapsulation and decapsulation require only a fraction of the total FPGA fabric resource cost, which is dominated by that of the hash function and the encoding and decoding algorithms. For NIST Security Level 3, our implementation uses a total of 1841 slices on a Xilinx Zynq Ultrascale+ FPGA, together with 14 BRAMs and 19 DSPs. The maximum achieved frequency is 271 MHz, at which key generation, encapsulation and decapsulation take 4808 μs, 524 μs and 958 μs respectively. To our knowledge, this is the first hardware implementation of the entire algorithm.

  • Solving multivariate polynomial systems and an invariant from commutative algebra
    by Alessio Caminata on September 30, 2020 at 7:55 am

    The complexity of computing the solutions of a system of multivariate polynomial equations by means of Gröbner basis computations is upper bounded by a function of the solving degree. In this paper, we discuss how to rigorously estimate the solving degree of a system, focusing on systems arising within public-key cryptography. In particular, we show that it is upper bounded by, and often equal to, the Castelnuovo-Mumford regularity of the ideal generated by the homogenization of the equations of the system, or by the equations themselves in case they are homogeneous. We discuss the underlying commutative algebra and clarify under which assumptions the commonly used results hold. In particular, we discuss the assumption of being in generic coordinates (often required for bounds obtained following this type of approach) and prove that systems that contain the field equations or their fake Weil descent are in generic coordinates. We also compare the notion of solving degree with that of degree of regularity, which is commonly used in the literature. We complement the paper with some examples of bounds obtained following the strategy that we describe.

  • A Composable Security Treatment of the Lightning Network
    by Aggelos Kiayias on September 30, 2020 at 3:43 am

    The high latency and low throughput of blockchain protocols constitute one of the fundamental barriers for their wider adoption. Overlay protocols, notably the lightning network, have been touted as the most viable direction for rectifying this in practice. In this work we present for the first time a full formalisation and security analysis of the lightning network in the (global) universal composition setting that takes into account a global ledger functionality, for which previous work [Badertscher et al., Crypto'17] has demonstrated its realisability by the Bitcoin blockchain protocol. As a result, our treatment delineates exactly how the security guarantees of the protocol depend on the properties of the underlying ledger. Moreover, we provide a complete and modular description of the core of the lightning protocol that highlights precisely its dependency on underlying basic cryptographic primitives such as digital signatures, pseudorandom functions, identity-based signatures and a less common two-party primitive, which we term a combined digital signature, that were originally hidden within the lightning protocol's implementation.

  • An Efficient Authenticated Key Exchange from Random Self-Reducibility on CSIDH
    by Tomoki Kawashima on September 30, 2020 at 2:09 am

    SIDH and CSIDH are key exchange protocols based on isogenies and conjectured to be quantum-resistant. Since these protocols are similar to classical Diffie–Hellman, they are vulnerable to man-in-the-middle attacks. A key exchange that is resistant to such attacks is called an authenticated key exchange (AKE), and many isogeny-based AKEs have been proposed. However, none of them is efficient, in that they all have relatively large security losses. This is partially because the random self-reducibility of isogeny-based decisional problems has not yet been proved. In this paper, we show that the computational problem and the gap problem of CSIDH are random self-reducible (a gap problem is a computational problem given access to the corresponding decision oracle). Moreover, we propose a CSIDH-based AKE with small security loss, following the construction of Cohn-Gordon et al. at CRYPTO 2019, as an application of the random self-reducibility of the gap problem of CSIDH. Our AKE is proved to be the fastest CSIDH-based AKE when aiming at the 110-bit security level.

  • Bypassing Isolated Execution on RISC-V with Fault Injection
    by Shoei Nashimoto on September 30, 2020 at 1:51 am

    RISC-V is equipped with physical memory protection (PMP) to prevent malicious software from accessing protected memory regions. One of the main objectives of PMP is to provide a trusted execution environment (TEE) that isolates secure and insecure applications. In this study, we propose a fault injection attack that bypasses PMP-based isolation. The proposed attack scheme involves extracting successful glitch parameters for fault injection under a black-box assumption. We implement a proof-of-concept TEE compatible with PMP in RISC-V, and we verify the feasibility and effectiveness of the proposed attack through experiments conducted in the TEE. The results show that an attacker can bypass the isolation of the TEE and read data from the protected memory region.
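
    The black-box parameter extraction the authors describe amounts to a sweep over glitch timing and width. The sketch below is our illustration only; inject and read_target are hypothetical stand-ins for the actual fault-injection hardware interface, and the parameter ranges are arbitrary.

        # Hedged sketch of a glitch-parameter search campaign.
        import itertools

        def search_glitch_params(inject, read_target, tries_per_point=10):
            successes = []
            for delay_ns, width_ns in itertools.product(range(0, 500, 10),
                                                        range(5, 50, 5)):
                for _ in range(tries_per_point):
                    inject(delay_ns, width_ns)     # arm glitch, run target
                    result = read_target()         # 'normal'|'crash'|'bypass'
                    if result == 'bypass':
                        successes.append((delay_ns, width_ns))
                        break
            return successes                       # candidate glitch points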

  • Architecture Correlation Analysis (ACA): Identifying the Source of Side-channel Leakage at Gate-level
    by Yuan Yao on September 30, 2020 at 1:51 am

    Power-based side-channel leakage is a known problem in the design of security-centric electronic systems. As the complexity of modern systems rapidly increases through System-on-Chip (SoC) integration, it becomes difficult to determine the precise source of side-channel leakage. Designers of secure SoCs must therefore proactively apply expensive countermeasures to entire subsystems such as encryption modules, which increases the design cost of the chip. We propose a methodology to determine, at design time, the source of side-channel leakage with much greater accuracy, at the granularity of a single cell. Our methodology, Architecture Correlation Analysis (ACA), uses a leakage model, well known from differential side-channel analysis techniques, to rank the cells within a netlist according to their contribution to the side-channel leakage. With this analysis result, the designer can selectively apply countermeasures where they are most effective. We demonstrate ACA on an AES coprocessor in an SoC design, determining the sources of side-channel leakage at the gate level both within the AES module and within the overall SoC. We validate ACA by demonstrating its use in an optimized hiding countermeasure.
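
    The ranking step can be pictured as a per-cell correlation test in the spirit of CPA. The following sketch is our illustration, not the authors' tool: it correlates simulated per-cell toggle activity against a Hamming-weight leakage model and ranks the cells, with synthetic data standing in for a real netlist simulation.

        # Rank netlist cells by correlation with a leakage model.
        import numpy as np

        def rank_cells(toggles, model):
            # toggles: (n_cells, n_traces) per-cell activity per trace
            # model:   (n_traces,) leakage prediction, e.g. Hamming
            #          weight of an intermediate value for each input
            t = toggles - toggles.mean(axis=1, keepdims=True)
            m = model - model.mean()
            corr = (t @ m) / (np.linalg.norm(t, axis=1)
                              * np.linalg.norm(m) + 1e-12)
            return np.argsort(-np.abs(corr)), corr   # most leaky first

        rng = np.random.default_rng(0)
        model = rng.integers(0, 9, 1000).astype(float)  # HW of 8-bit value
        toggles = rng.normal(size=(50, 1000))           # 50 synthetic cells
        toggles[7] += 0.5 * model                       # cell 7 leaks
        order, corr = rank_cells(toggles, model)
        print(order[:3])                                # cell 7 ranks first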

  • On the Round Complexity of the Shuffle Model
    by Amos Beimel on September 30, 2020 at 1:46 am

    The shuffle model of differential privacy [Bittau et al. SOSP 2017; Erlingsson et al. SODA 2019; Cheu et al. EUROCRYPT 2019] was proposed as a viable model for performing distributed differentially private computations. Informally, the model consists of an untrusted analyzer that receives messages sent by participating parties via a shuffle functionality, which potentially disassociates messages from their senders. Prior work focused on one-round differentially private shuffle model protocols, demonstrating that functionalities such as addition and histograms can be performed in this model with accuracy levels similar to those of the curator model of differential privacy, where the computation is performed by a fully trusted party. A model closely related to the shuffle model was presented in the seminal work of Ishai et al. on establishing cryptography from anonymous communication [FOCS 2006]. Focusing on the round complexity of the shuffle model, we ask in this work what can be computed in the shuffle model of differential privacy with two rounds. Ishai et al. showed how to use one round of the shuffle to establish secret keys between every two parties; using this primitive to simulate a general secure multi-party protocol increases its round complexity by one. We show how two parties can use one round of the shuffle to send secret messages without having to first establish a secret key, thus preserving the round complexity. Combining this primitive with the two-round semi-honest protocol of Applebaum, Brakerski, and Tsabary [TCC 2018], we obtain that every randomized functionality can be computed in the shuffle model with an honest majority in merely two rounds. This includes any differentially private computation. We then examine differentially private computations in the shuffle model that (i) do not require the assumption of an honest majority, or (ii) do not admit one-round protocols, even with an honest majority. For that, we introduce two computational tasks: common element, and nested common element with parameter $\alpha$. For the common element problem we show that for large enough input domains, no one-round differentially private shuffle protocol exists with constant message complexity and negligible $\delta$, whereas a two-round protocol exists in which every party sends a single message per round. For the nested common element problem we show that no one-round differentially private protocol exists with adversarial coalition size $\alpha n$; however, it can be privately computed in two rounds against coalitions of size $cn$ for every $c < 1$. This yields a separation between one-round and two-round protocols. We further show a one-round protocol for the nested common element problem that is differentially private against coalitions of size smaller than $cn$, for all $0 < c < \alpha < 1/2$.
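
    For intuition on why a shuffle round is useful at all, here is a toy version (our sketch, with arbitrary toy parameters, and a local random permutation standing in for the trusted shuffle functionality) of the classic split-and-shuffle addition protocol the abstract mentions: each party submits additive shares of its input, the shuffle strips sender identities, and the analyzer learns only the sum.

        # Toy one-round shuffle-model addition via additive shares mod q.
        import random

        q = 2**16

        def party_messages(x, n_shares=3):
            # split x into n_shares uniformly random shares summing to x
            shares = [random.randrange(q) for _ in range(n_shares - 1)]
            shares.append((x - sum(shares)) % q)
            return shares

        def shuffle_functionality(all_messages):
            random.shuffle(all_messages)   # disassociates messages
            return all_messages            # from their senders

        def analyzer(messages):
            return sum(messages) % q       # only the sum is learned

        inputs = [5, 12, 7]
        msgs = [m for x in inputs for m in party_messages(x)]
        assert analyzer(shuffle_functionality(msgs)) == sum(inputs) % q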