
Key Encapsulation from Noisy Key Agreement in the Quantum Random Oracle Model

Thu, 09/27/2018 - 09:08
A multitude of post-quantum key encapsulation mechanisms (KEMs) and public key encryption (PKE) schemes implicitly rely on a protocol by which Alice and Bob exchange public messages and converge on secret values that are identical up to some small noise. By our count, 24 out of 49 KEM or PKE submissions to the NIST Post-Quantum Cryptography Standardization project follow this strategy. Yet the notion of a noisy key agreement (NKA) protocol lacks a formal definition as a primitive in its own right. We provide such a formalization by defining the syntax and security for an NKA protocol. This formalization brings out four generic problems, called A and B State Recovery, Noisy Key Search and Noisy Key Distinguishing, whose solutions must be hard in the quantum computing model. Informally speaking, these can be viewed as noisy, quantum-resistant counterparts of the problems arising from the classical Diffie-Hellman type protocols. We show that many existing proposals contain an NKA component that fits our formalization and we reveal the induced concrete hardness assumptions. The question arises whether considering NKA as an independent primitive can help provide modular designs with improved efficiency and/or proofs. As the second contribution of this paper, we answer this question positively by presenting a generic transform from a secure NKA protocol to an IND-CCA secure KEM in the quantum random oracle model, with a security bound tightly related to the NKD problem. This transformation is essentially the same as that of the NIST candidate Ramstake. While establishing the security of Ramstake was our initial objective, the collection of tools that came about as a result of this journey is of independent interest.
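
As a toy illustration of what a noisy key agreement protocol looks like (an editorial sketch, not the paper's formalization), the following LWE/Frodo-style exchange lets Alice and Bob derive values that agree up to small noise; the modulus, dimension, and noise bound are arbitrary toy choices.

```python
# Toy sketch of a noisy key agreement (NKA) protocol, LWE/Frodo-style.
# NOT the paper's formalization; parameters are illustrative only.
import random

q, n = 12289, 8                      # toy modulus and dimension
random.seed(1)

def small():                         # small secret/noise coefficient
    return random.randint(-2, 2)

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]  # public matrix

# Alice: secret s_a, error e_a, public message b_a = A*s_a + e_a (mod q)
s_a = [small() for _ in range(n)]
e_a = [small() for _ in range(n)]
b_a = [(sum(A[i][j] * s_a[j] for j in range(n)) + e_a[i]) % q for i in range(n)]

# Bob: secret s_b, error e_b, public message b_b = A^T*s_b + e_b (mod q)
s_b = [small() for _ in range(n)]
e_b = [small() for _ in range(n)]
b_b = [(sum(A[j][i] * s_b[j] for j in range(n)) + e_b[i]) % q for i in range(n)]

# Each party combines its own secret with the other's public message.
k_alice = sum(b_b[i] * s_a[i] for i in range(n)) % q  # ~ s_b^T A s_a + e_b.s_a
k_bob   = sum(b_a[i] * s_b[i] for i in range(n)) % q  # ~ s_b^T A s_a + s_b.e_a

diff = min((k_alice - k_bob) % q, (k_bob - k_alice) % q)
print(k_alice, k_bob, "noise:", diff)  # keys agree up to small noise
```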

Slothful reduction

Thu, 09/27/2018 - 06:57
In the implementation of many public key schemes there is a need for modular arithmetic. Typically this consists of addition, subtraction, multiplication and (occasionally) division with respect to a prime modulus. To resist certain side-channel attacks it helps if implementations are ``constant time''. As the calculations proceed there is potentially a need to reduce the result of an operation to its remainder modulo the prime modulus. However, this reduction can often be delayed, a process known as ``lazy reduction''. The idea is that results do not have to be fully reduced at each step; full reduction takes place only occasionally, hence providing a performance benefit. Here we extend the idea to determine the circumstances under which reduction can be delayed to the very end of a particular public key operation.
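
A minimal sketch of the lazy-reduction idea (illustrative only; real implementations work on fixed-size limbs in constant time): intermediate results are allowed to grow past the modulus, and a single reduction at the end gives the same answer as reducing at every step.

```python
# Illustrative sketch of lazy ("slothful") reduction for a dot product mod p.
# Python's big integers only serve to show that delaying reduction is sound;
# real code operates on fixed-size limbs in constant time.
import random

p = 2**255 - 19          # example prime modulus
random.seed(0)
a = [random.randrange(p) for _ in range(8)]
b = [random.randrange(p) for _ in range(8)]

# Eager: reduce after every operation.
eager = 0
for x, y in zip(a, b):
    eager = (eager + (x * y) % p) % p

# Lazy: let the accumulator grow, reduce once at the very end.
lazy = 0
for x, y in zip(a, b):
    lazy += x * y        # no reduction here
lazy %= p                # single final reduction

assert eager == lazy
print(hex(lazy))
```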

Hardware-Supported ORAM in Effect: Practical Oblivious Search and Update on Very Large Dataset

Thu, 09/27/2018 - 04:02
The ability to query and update over encrypted data is an essential feature to enable breach-resilient cyber-infrastructures. Statistical attacks on searchable encryption (SE) have demonstrated the importance of sealing information leaks in access patterns. In response to such attacks, the community has proposed the Oblivious Random Access Machine (ORAM). However, due to the logarithmic communication overhead of ORAM, the composition of ORAM and SE is known to be costly in the conventional client-server model, which poses a critical barrier toward its practical adaptations. In this paper, we propose a novel hardware-supported privacy-enhancing platform called Practical Oblivious Search and Update Platform (POSUP), which enables oblivious keyword search and update operations on large datasets with high efficiency. We harness Intel SGX to realize efficient oblivious data structures for oblivious search/update purposes. We implemented POSUP and evaluated its performance on a Wikipedia dataset containing ≥ 2^29 keyword-file pairs. Our implementation is highly efficient, taking only 1 ms to access a 3 KB block with Circuit-ORAM. Our experiments have shown that POSUP offers up to 70× less end-to-end delay with 100× reduced network bandwidth consumption compared with the traditional ORAM-SE composition without secure hardware. POSUP is also at least 4.5× faster for up to 99.5% of keywords that can be searched compared with state-of-the-art Intel SGX-assisted search platforms.
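
To illustrate the access-pattern leakage that motivates this line of work (an editorial toy example, not the POSUP design): a direct lookup touches exactly the matching entry and so reveals the query, while an oblivious linear scan touches every entry in the same order regardless of the query, at a linear overhead that ORAM reduces to logarithmic.

```python
# Toy illustration of access-pattern leakage (not the POSUP design itself).
index = {"alpha": [1, 4], "beta": [2], "gamma": [3, 5]}
entries = list(index.items())

def leaky_lookup(keyword):
    return index[keyword]            # access pattern reveals the keyword

def oblivious_scan(keyword):
    result = []
    for kw, postings in entries:     # same accesses for every query
        match = (kw == keyword)      # real code would use a constant-time select
        result = postings if match else result
    return result

print(leaky_lookup("beta"), oblivious_scan("beta"))
```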

A Bit-fixing PRF with O(1) Collusion-Resistance from LWE

Thu, 09/27/2018 - 01:05
Constrained pseudorandom functions (CPRFs) allow learning modified PRF keys that can evaluate the PRF on a subset of the input space, or based on some sort of predicate. First introduced by Boneh and Waters [Asiacrypt 2013], they have been shown to be a useful cryptographic primitive with many applications. The full security definition of CPRFs allows the adversary to learn multiple constrained keys, a requirement for all of these applications. Unfortunately, existing constructions of CPRFs satisfying this security notion are only known from exceptionally strong cryptographic assumptions, such as indistinguishability obfuscation and the existence of multilinear maps, even for very weak predicates. CPRFs from more standard assumptions only satisfy security when one key is learnt. In this work, we give the first construction of a CPRF that can issue a constant number of constrained keys for bit-fixing predicates, from learning with errors (LWE). It also satisfies \(1\)-key privacy (otherwise known as constraint-hiding). Finally, our construction achieves fully adaptive security with polynomial security loss; it is the only construction to achieve such security under a standard assumption. Our technique represents a notable departure from existing CPRF constructions. We hope that it may lead to future constructions that can expose a greater number of keys, or consider more expressive predicates (such as circuit-based constraints).
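
For concreteness, a bit-fixing predicate is specified by a pattern over {0, 1, *} and accepts exactly the inputs that agree with every fixed position; a constrained key for such a predicate can only evaluate the PRF on accepted inputs. A small sketch of the predicate itself (not part of the paper's construction):

```python
# A bit-fixing predicate: pattern over {'0','1','*'}; '*' positions are free.
def bit_fixing(pattern):
    def C(x):                      # x is a bit string of the same length
        return all(p == '*' or p == b for p, b in zip(pattern, x))
    return C

C = bit_fixing("1*0*")
print(C("1000"), C("1100"), C("0000"))   # True True False
```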

ARM2GC: High-Level and Reliable Secure Computation Framework

Wed, 09/26/2018 - 17:20
We present ARM2GC, a novel secure computation framework based on Yao’s Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop privacy-preserving applications using standard high-level programming languages (e.g., C) and compile them using off-the-shelf ARM compilers (e.g., gcc-arm). The main enabler of this framework is the introduction of SkipGate, an algorithm that dynamically omits the communication and encryption cost of a gate when its output is independent of the private data. SkipGate greatly enhances the performance of ARM2GC by omitting the costs of gates associated with the instructions of the compiled binary, which is known to both parties involved in the computation. Our evaluation on benchmark functions demonstrates that ARM2GC not only outperforms the current GC frameworks that support high-level languages, but also achieves efficiency comparable to the best prior solutions based on hardware description languages. Moreover, in contrast to previous high-level frameworks with domain-specific languages and customized compilers, ARM2GC relies on the standard ARM compiler, which is rigorously verified and supports programs written in standard syntax.
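
A rough sketch of the idea behind SkipGate (an approximation for intuition, not the actual ARM2GC algorithm): a gate whose inputs are publicly known bits can be evaluated locally for free, and a gate with one public input often collapses to a constant or a pass-through wire, so only the remaining gates need garbling and communication.

```python
# Rough sketch of SkipGate-style gate simplification (not the ARM2GC code).
PUBLIC, SECRET = "public", "secret"

def simplify(gate, a, b):
    """a, b are (kind, value): a bit for PUBLIC wires, a wire label otherwise.
    Returns the output wire and whether the gate must actually be garbled."""
    (ka, va), (kb, vb) = a, b
    if ka == PUBLIC and kb == PUBLIC:                  # evaluate locally
        out = (va & vb) if gate == "AND" else (va ^ vb)
        return (PUBLIC, out), False
    if gate == "AND" and PUBLIC in (ka, kb):
        pub, other = (va, b) if ka == PUBLIC else (vb, a)
        if pub == 0:
            return (PUBLIC, 0), False                  # AND with 0 is 0
        return other, False                            # AND with 1 passes through
    if gate == "XOR" and PUBLIC in (ka, kb):
        other = b if ka == PUBLIC else a
        return other, False      # XOR with a public bit just (possibly) flips labels
    return (SECRET, "fresh_label"), True               # both inputs private: garble

print(simplify("AND", (PUBLIC, 1), (PUBLIC, 0)))       # evaluated locally
print(simplify("AND", (PUBLIC, 0), (SECRET, "w3")))    # collapses to constant 0
print(simplify("XOR", (SECRET, "w1"), (SECRET, "w2"))) # garbling required
```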

Round-Optimal Fully Black-Box Zero-Knowledge Arguments from One-Way Permutations

Wed, 09/26/2018 - 10:12
In this paper, we revisit the round complexity of designing zero-knowledge (ZK) arguments via a black-box construction from minimal assumptions. Our main result implements a 4-round ZK argument for any language in NP, based on injective one-way functions, that makes black-box use of the underlying function. As a corollary, we also obtain the first 4-round perfect zero-knowledge argument for NP based on claw-free permutations via a black-box construction and 4-round input-delayed commit-and-prove zero-knowledge argument based on injective one-way functions.

Forking a Blockcipher for Authenticated Encryption of Very Short Messages

Wed, 09/26/2018 - 09:35
Highly efficient encryption and authentication of short messages has been identified as an essential requirement for enabling security in constrained computation and communication scenarios such as the CAN FD in automotive systems (with maximum message length of 64 bytes), massive IoT and critical communication domains of 5G, and Narrowband IoT (NB-IoT), to mention some. Accordingly, NIST has specified, as a design requirement in the lightweight cryptography project, that AEAD submissions shall be “optimized to be efficient for short messages (e.g., as short as 8 bytes)”. We propose AEAD schemes that outperform all previous general-purpose modular AEAD designs at processing (very) short inputs. The main ingredient in our solution is a new low-level primitive, called a tweakable forkcipher, which we introduce and formalize in this paper. We give an instance of the tweakable forkcipher and dub it ForkAES. It is based on the tweakable blockcipher KIASU, which relies on the round function of AES and uses the TWEAKEY framework to derive round keys from a 128-bit secret key and a 64-bit tweak. Finally, we demonstrate the applicability of a tweakable forkcipher by designing several provably secure nonce-based AEAD modes of operation, optimized to be efficient for short messages. Considering the AES block size (16 bytes) as a reference, our new AE schemes can beat all known schemes for single-block messages while still performing better than the majority of existing schemes for combined message and associated data lengths up to 4 blocks. While ForkAES as a concrete instantiation for a forkcipher is based on KIASU, we note that our solution provides a general recipe for lightweight AEAD for short messages, even for very resource-constrained scenarios in which AES may not be considered a lightweight option. In those environments, our schemes can be instantiated using a forkcipher that is realized based on the best off-the-shelf lightweight blockcipher, following the TWEAKEY framework.
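
As a sketch of the tweakable-forkcipher interface only: F(K, T, M) produces two ciphertext blocks from one input block. The toy below builds the two branches from two independent AES calls under tweak-derived subkeys; this is not ForkAES (which shares its initial AES rounds between the two branches), and it assumes the pycryptodome package is available.

```python
# Interface sketch of a tweakable forkcipher F(K, T, M) -> (C0, C1).
# NOT ForkAES; the two branches here are simply independent AES-ECB calls
# under tweak-derived subkeys. Assumes pycryptodome (pip install pycryptodome).
import hashlib
from Crypto.Cipher import AES

def fork_encrypt(key: bytes, tweak: bytes, block: bytes):
    assert len(key) == 16 and len(tweak) == 8 and len(block) == 16
    # Derive one subkey per branch from (key, tweak, branch index).
    k0 = hashlib.sha256(key + tweak + b"\x00").digest()[:16]
    k1 = hashlib.sha256(key + tweak + b"\x01").digest()[:16]
    c0 = AES.new(k0, AES.MODE_ECB).encrypt(block)
    c1 = AES.new(k1, AES.MODE_ECB).encrypt(block)
    return c0, c1   # an AEAD mode can use one branch as ciphertext, one as a tag

c0, c1 = fork_encrypt(b"\x00" * 16, b"tweak--1", b"single block msg")
print(c0.hex(), c1.hex())
```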

On the Security of a Certificateless Strong Designated Verifier Signature Scheme

Wed, 09/26/2018 - 09:35
Recently, Chen et al. proposed the first non-delegatable certificateless strong designated verifier signature scheme and claimed that their scheme achieves all security requirements. However, in this paper, we disprove their claim and present a concrete attack which shows that their proposed scheme is forgeable. More precisely, we show that there exist adversaries who are able to forge any signer's signature for any designated verifier on any message of his choice.

Note on Constructing Constrained PRFs from OWFs with Constant Collusion Resistance

Wed, 09/26/2018 - 09:34
Constrained pseudorandom functions (CPRFs) are a type of PRF that allows one to derive a constrained key $\mathsf{K}_C$ from the master key $\mathsf{K}$. While the master key $\mathsf{K}$ allows one to evaluate on any input as a standard PRF, the constrained key $\mathsf{K}_C$ only allows one to evaluate on inputs $x$ such that $C(x) = 1$. Since the introduction of CPRFs by Boneh and Waters (ASIACRYPT'13), Kiayias et al. (CCS'13), and Boyle et al. (PKC'14), there have been various constructions of CPRFs. However, thus far, almost all constructions (from standard assumptions and non-trivial constraints) are only proven to be secure if at most one constrained key $\mathsf{K}_C$ is known to the adversary, excluding the very recent work of Davidson and Nishimaki (EPRINT'18). Considering the interesting applications of CPRFs such as ID-based non-interactive key exchange, we desire CPRFs that are collusion resistant with respect to the constrained keys. In this work, we make progress in this direction and construct a CPRF for bit-fixing predicates that is collusion resistant for a constant number of constrained keys. Surprisingly, compared to the heavy machinery that was used by previous CPRF constructions, our construction only relies on the existence of one-way functions.
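
To make the syntax concrete, the following sketch (an editorial illustration, not the paper's construction) shows a GGM-style PRF built from a hash-based length-doubling PRG, together with a constrained key for a prefix-fixing predicate, a special case of bit-fixing where the fixed bits form a prefix. This folklore construction is only 1-key secure and offers no collusion resistance.

```python
# GGM-style PRF with a prefix-constrained key (prefix-fixing is a special
# case of bit-fixing). Folklore, 1-key-secure illustration only; NOT the
# collusion-resistant construction of the paper.
import hashlib, os

def G(seed: bytes, bit: str) -> bytes:      # length-doubling PRG, one half per bit
    return hashlib.sha256(seed + bit.encode()).digest()

def eval_from(node: bytes, suffix: str) -> bytes:
    for b in suffix:
        node = G(node, b)
    return node

K = os.urandom(32)                          # master key

def prf(K: bytes, x: str) -> bytes:         # evaluation on an n-bit input x
    return eval_from(K, x)

def constrain(K: bytes, prefix: str) -> bytes:
    return eval_from(K, prefix)             # K_C = the GGM node at the prefix

def constrained_eval(K_C: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix)             # C(x) = 1 iff x matches the prefix
    return eval_from(K_C, x[len(prefix):])

K_C = constrain(K, "01")
x = "0110"
assert prf(K, x) == constrained_eval(K_C, "01", x)
print(prf(K, x).hex())
```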

Best Possible Information-Theoretic MPC

Wed, 09/26/2018 - 09:33
We reconsider the security guarantee that can be achieved by general protocols for secure multiparty computation in the most basic of settings: information-theoretic security against a semi-honest adversary. Since the 1980s, we have elegant solutions to this problem that offer full security, as long as the adversary controls a minority of the parties, but fail completely when that threshold is crossed. In this work, we revisit this problem, questioning the optimality of the standard notion of security. We put forward a new notion of information-theoretic security which is strictly stronger than the standard one, and which we argue to be ``best possible.'' Our new notion still requires full security against dishonest minority in the usual sense, but also requires a meaningful notion of information-theoretic security against dishonest majority. We present protocols for useful classes of functions that satisfy this new notion of security. Our protocols have the unique feature of combining the efficiency benefits of protocols for an honest majority and (most of) the security benefits of protocols for dishonest majority. We further extend some of the solutions to the malicious setting.

Watermarking PRFs under Standard Assumptions: Public Marking and Security with Extraction Queries

Tue, 09/25/2018 - 22:23
A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC '16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO '17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application. In this work, we construct a new PRF watermarking scheme with the following properties.

* The marking procedure is public and therefore anyone can embed marks in PRFs from the family. Previously we had no such construction even using obfuscation.
* The extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle. Previously we had no such construction under standard assumptions.
* Our scheme is simple, uses generic components and can be instantiated under many different assumptions such as DDH, Factoring or LWE.

The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.

Achieving Fair Treatment in Algorithmic Classification

Tue, 09/25/2018 - 16:49
Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today’s classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following the work of Hardt et al. (NIPS’16), we consider and formalize the task of sanitizing an unfair classifier C into a classifier C' satisfying an approximate notion of "equalized odds", or fair treatment. Our main result shows how to take any (possibly unfair) classifier C over a finite outcome space and transform it, by just perturbing the output of C according to some distribution learned by having black-box access to samples of labeled, previously classified data, to produce a classifier C' that satisfies fair treatment; we additionally show that our derived classifier is near-optimal in terms of accuracy. We also experimentally evaluate the performance of our method.
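
A tiny sketch of the kind of output perturbation involved (the flip probabilities below are placeholders; the paper learns a suitable distribution from black-box samples of labeled, previously classified data): the derived classifier C' randomizes C's output with group-dependent probabilities.

```python
# Sketch of a derived classifier C' that perturbs the output of C with
# group-dependent probabilities (placeholder values; in practice the
# distribution would be learned from previously classified samples).
import random

def C(features):                       # some given, possibly unfair, classifier
    return int(features["score"] > 0.5)

# flip_prob[group][predicted_label] = probability of flipping C's output
flip_prob = {"A": {0: 0.05, 1: 0.00},  # hypothetical values
             "B": {0: 0.00, 1: 0.20}}

def C_prime(features, group, rng=random):
    y = C(features)
    return 1 - y if rng.random() < flip_prob[group][y] else y

print(C_prime({"score": 0.7}, "B"))
```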

Secure Certification of Mixed Quantum States with Application to Two-Party Randomness Generation

Tue, 09/25/2018 - 16:25
We investigate sampling procedures that certify that an arbitrary quantum state on $n$ subsystems is close to an ideal mixed state $\varphi^{\otimes n}$ for a given reference state $\varphi$, up to errors on a few positions. This task makes no sense classically: it would correspond to certifying that a given bitstring was generated according to some desired probability distribution. However, in the quantum case, this is possible if one has access to a prover who can supply a purification of the mixed state. In this work, we introduce the concept of mixed-state certification, and we show that a natural sampling protocol offers secure certification in the presence of a possibly dishonest prover: if the verifier accepts then he can be almost certain that the state in question has been correctly prepared, up to a small number of errors. We then apply this result to two-party quantum coin-tossing. Given that strong coin tossing is impossible, it is natural to ask ``how close can we get''. This question has been well studied and is nowadays well understood from the perspective of the bias of individual coin tosses. We approach and answer this question from a different---and somewhat orthogonal---perspective, where we do not look at individual coin tosses but at the global entropy instead. We show how two distrusting parties can produce a common high-entropy source, where the entropy is an arbitrarily small fraction below the maximum.

Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time

Tue, 09/25/2018 - 16:21
A series of recent works starting with (Alwen and Serbinenko, STOC 2015) has deepened our understanding of the notion of memory-hardness in cryptography — a useful property of hash functions for deterring large-scale password-cracking attacks — and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions of memory-hardness are not yet unified in this somewhat nascent field, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows.

First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read/written at runtime), and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function’s runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game (“black-magic pebble game”), and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple “cylinder” graph), providing an initial demonstration of practical feasibility for a limited range of parameter settings.

Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear tradeoffs can in fact cause adversaries to employ different strategies from linear tradeoffs.

Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to log log n factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
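
A rough sketch of the static-memory model only (an editorial illustration, not the paper's pebbling-based constructions): the hash function has oracle access to a large preprocessed string R, and each evaluation reads a few input-dependent positions of R, so an evaluator who discards R cannot answer queries without recomputing it.

```python
# Rough sketch of a static-memory-hard function in the model described above
# (not the paper's pebbling-based constructions): evaluation queries a large
# preprocessed string R at input-dependent positions.
import hashlib

BLOCK, N_BLOCKS, PROBES = 64, 1 << 12, 8       # toy sizes; a real R would be huge

def preprocess(seed: bytes) -> bytes:
    """Build the static string R once; it is part of the function description."""
    out = []
    for i in range(N_BLOCKS):
        out.append(hashlib.sha512(seed + i.to_bytes(8, "big")).digest())
    return b"".join(out)

def shf(R: bytes, x: bytes) -> bytes:
    """Hash x together with a few pseudorandomly chosen blocks of R."""
    h = hashlib.sha256(x)
    for j in range(PROBES):
        idx = int.from_bytes(hashlib.sha256(x + bytes([j])).digest(), "big") % N_BLOCKS
        h.update(R[idx * BLOCK:(idx + 1) * BLOCK])   # oracle access to R
    return h.digest()

R = preprocess(b"public seed")
print(shf(R, b"password123").hex())
```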
