This paper initiates a study of Fine-Grained Secure Computation, i.e., the construction of secure computation primitives against ``moderately complex'' adversaries. We present definitions and constructions for Fully Homomorphic Encryption and Verifiable Computation secure against (non-uniform) $\mathsf{NC}^1$ adversaries. We also present two application scenarios for our model: (i) hardware chips that prove their own correctness, and (ii) protocols against rational adversaries, potentially relevant to the Verifier's Dilemma in smart-contract transactions on platforms such as Ethereum.

Ratcheted key exchange (RKE) is a cryptographic technique used in instant messaging software like Signal and the WhatsApp messenger for attaining strong security in the face of state exposure attacks (including automatic healing from the latter). RKE first received academic attention in the recent works of Cohn-Gordon et al. (EuroS&P 2017) and Bellare et al. (CRYPTO 2017). The former is analytical in the sense that it aims primarily at assessing the security that one particular protocol does achieve, rather than looking for a strong notion of security that it could achieve; the authors of the latter follow a different approach, first developing a notion of security they want to achieve and then securely instantiating it. Unfortunately, however, their model is too restricted to serve for the analysis of real-world messenger apps: it considers exclusively unidirectional communication, and it considers the exposure of the state of only one party.
In this article we resolve the limitations of prior work by developing alternative security definitions, for unidirectional RKE as well as for RKE where both parties can contribute. We follow a purist approach, aiming at finding strong yet convincing notions that cover a realistic communication model; in particular, and in contrast to prior work, our models support fully concurrent operation of both participants.
We further propose secure instantiations (as the protocols analyzed or proposed by Cohn-Gordon et al. and Bellare et al. turn out to be weak in our models). While our scheme for the unidirectional case builds on a generic KEM as the main building block (in contrast to prior work, which explicitly requires Diffie-Hellman), our schemes for bidirectional communication require, perhaps surprisingly, considerably stronger tools.

Malicious exploitation of faults for extracting secrets is one of the most practical and potent threats to modern cryptographic primitives. Interestingly, not every possible fault in a cryptosystem is maliciously exploitable, and evaluating the exploitability of a fault is nontrivial. In order to devise precise defense mechanisms against such rogue faults, comprehensive knowledge of the exploitable part of the fault space of a cryptosystem is required.
Unfortunately, the fault space is diversified and of formidable size even when a single crypto-primitive is considered, and traditional manual fault analysis techniques often fall short of covering such a fault space within a reasonable time. Automation for analyzing individual fault instances for their exploitability is thus inevitable. Such an automation is supposed to work as the core engine for analyzing the fault spaces of cryptographic primitives. In this paper, we propose an automation for evaluating the exploitability status of fault instances
from block ciphers, mainly in the context of Differential Fault Analysis (DFA) attacks. The proposed framework is generic and scalable, which are perhaps the two most important features
for covering diversified fault spaces of formidable size originating from different ciphers. As a proof-of-concept, we reconstruct some known attack examples on AES and PRESENT using
the framework and finally analyze the recently proposed cipher GIFT [21] for the first time. We find that the secret key of GIFT can be uniquely determined with a single nibble fault instance
injected at the beginning of the 25th round, with a reasonable computational complexity of 2^14.

We propose a framework for constructing efficient designated-verifier non-interactive zero-knowledge proofs (DVNIZK) for a wide class of algebraic languages over abelian groups, under standard assumptions. The proofs obtained via our framework are proofs of knowledge and enjoy statistical and unbounded soundness (the soundness holds even when the prover receives arbitrary feedback on previous proofs). Previously, no efficient DVNIZK system satisfying any of those three properties was known. Our framework allows proving arbitrary relations between cryptographic primitives such as Pedersen commitments, ElGamal encryptions, or Paillier encryptions in an efficient way. For the latter, we further exhibit the first non-interactive zero-knowledge proof system in the standard model that is more efficient than proofs obtained via the Fiat-Shamir transform, with still-meaningful security guarantees and under standard assumptions. Our framework has numerous applications, in particular for the design of efficient privacy-preserving non-interactive authentication.

Lattice signature schemes generally require particular care when it comes to preventing secret information from leaking through signature transcripts. For example, the Goldreich-Goldwasser-Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev. Several heuristic countermeasures were also shown vulnerable to similar statistical attacks.
At PKC~2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that has been accepted into round 1 of the NIST post-quantum cryptography project.
In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover partial information on the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret seems more intricate. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis.
While we only recover partial information on the secret key, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided that $100\,000$ signatures are available, the secret key may be recovered using BKZ-$138$ for the first set of DRS parameters submitted to NIST. This puts the security level of this parameter set below $80$ bits (maybe even $70$ bits), against an original claim of $128$ bits.

In this paper, we connect two interesting problems in the domain of Information-Theoretic Cryptography: "Non-malleable Codes" and "Privacy Amplification". Non-malleable codes allow for encoding a message in such a manner that any "legal" tampering will either leave the message in the underlying tampered codeword unchanged or make it unrelated to the original message. In the setting of Privacy Amplification, we have two users that share a weak secret $w$ guaranteed to have some entropy. The goal is to use this secret to agree on a fully hidden, uniformly distributed key $K$, while communicating on a public channel fully controlled by an adversary.
While many connections are known from other gadgets to NMCs, this is the first result showing an application of NMCs to an information-theoretic primitive (other than tamper-resilient circuits). Specifically, we give a general transformation that takes any augmented non-malleable code and builds a privacy amplification protocol. This leads to the following results:
(a) Assuming the existence of constant-rate, optimal-error (we say an $\epsilon$-(augmented) NMC has optimal error if $\epsilon = 2^{-O(message\ length)}$), two-state augmented non-malleable codes, there exists an $8$-round privacy amplification protocol with optimal entropy loss and min-entropy requirement $\Omega(\log(n)+ \kappa)$ (where $\kappa$ is the security parameter). In fact, "non-malleable randomness encoders" suffice.
(b) Instantiating our construction with the current best known augmented non-malleable code for the $2$-split-state family [Li17], we get an $8$-round privacy amplification protocol with entropy loss $O(\log(n)+ \kappa \log (\kappa))$ and min-entropy requirement $\Omega(\log(n) +\kappa\log (\kappa))$.

AEGIS is an authenticated cipher introduced at SAC 2013, which takes advantage of AES-NI instructions to reach outstanding speed in software. Like LEX and Fides, as well as many sponge-based designs, AEGIS leaks part of its inner state each round to form a keystream. In this paper, we investigate the existence of linear biases in this keystream. Our main result is a linear mask with bias $2^{-89}$ on the AEGIS-256 keystream. The resulting distinguisher can be exploited to recover bits of a partially known message encrypted $2^{188}$ times, regardless of the keys used. We also consider AEGIS-128, and find a surprising correlation between ciphertexts at rounds $i$ and $i+2$, although the biases would require $2^{140}$ data to be detected. Due to their data requirements, neither attack threatens the practical security of the cipher.

In this paper we present a novel attack based on photonic emission analysis targeting software implementations of AES. We focus on the particular case in which the attacker can collect the photonic emission of a limited number of sense amplifiers (e.g., only one) of the SRAM storing the S-Box. The attack consists in making hypotheses on the secret key based on knowledge of the partial output of the SubBytes operation. We also consider the possibility of attacking a masked implementation of AES using photonic emission analysis. In the case of masking, the attacker needs two leakages of the same encryption to overcome the randomization of the masks. For our analysis, we assume the same physical setup described in previous works. Reported results are based on simulations with some hypotheses on the probability of photonic emission of a single transistor.

In an anonymous subscription system (ASS), a subscribed user (SU) is able to access the services of a service provider without having to reveal its true identity. For an SU computing platform that is compliant with the Trusted Platform Module (TPM) standard, direct anonymous attestation (DAA) is an appropriate cryptographic protocol for realizing ASS, since DAA enables privacy-preserving authentication of the SU platform. This approach takes advantage of a cryptographic key that is securely embedded in the platform's hardware. Although the computing industry and academia have made significant strides in developing secure and sound DAA schemes, these schemes share a common drawback that may act as a major obstacle to their widespread deployment. In all of the existing schemes, the SU suffers from significant computational and communication costs that increase proportionally to the size of the revocation list. This drawback renders the existing schemes impractical when the size of the revocation list grows beyond a relatively modest size. In this paper, we propose a novel scheme called Lightweight Anonymous Subscription with Efficient Revocation (LASER) that addresses this very problem. In LASER, the computational and communication costs of the SU's signature are multiple orders of magnitude lower than the prior art. LASER achieves this significant performance improvement by shifting most of the computational and communication costs from the DAA's online procedure (i.e., signature generation) to its offline procedure (i.e., acquisition of keys/credentials). We have conducted a thorough analysis of LASER's performance-related features and compared the findings to the prior art. We have also conducted a comprehensive evaluation of LASER by implementing it on a laptop platform with an on-board TPM.
To the best of our knowledge, the results presented in this paper represent the first implementation and analysis of a scheme using an actual TPM cryptoprocessor that is compliant with the most recent TPM specification version 2.0. We have thoroughly analyzed the security of LASER in the random oracle model.

Privacy-preserving data analysis in the context of federated databases distributed across multiple parties has the potential to produce richer and more accurate models than what each party can learn with their own data. Secure Multi-Party Computation (MPC) offers a robust cryptographic approach to this problem, and in fact several protocols have been proposed for various learning tasks on parametric models. In this paper we focus on $k$-NN, shifting the attention towards non-parametric models. We tackle several challenges arising in privacy-preserving $k$-NN classification on federated databases, and implement a concrete protocol for document classification. Our solution is faster than the state-of-the-art custom MPC protocol by at least an order of magnitude.
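
As a plaintext point of reference for the functionality being protected, the following is a minimal in-the-clear sketch of $k$-NN document classification over bag-of-words vectors. The cosine-similarity metric and all names here are illustrative assumptions, not the paper's protocol; under MPC, the similarities and the top-$k$ selection would be computed on private, distributed data.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(query: Counter, corpus, k: int = 3) -> str:
    """Label the query by majority vote among its k most similar documents."""
    scored = sorted(corpus, key=lambda dl: cosine(query, dl[0]), reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

# Toy corpus: (bag-of-words, label) pairs.
corpus = [
    (Counter("the match ended in a draw".split()), "sports"),
    (Counter("the team won the final match".split()), "sports"),
    (Counter("parliament passed the new budget".split()), "politics"),
    (Counter("the minister addressed parliament".split()), "politics"),
]
print(knn_classify(Counter("who won the match".split()), corpus, k=3))  # sports
```

The costly step for an MPC realization is exactly the part that is cheap here: every query-document similarity plus the selection of the $k$ largest must be evaluated obliviously.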

Several traceable (or linkable) identity-based ring signature schemes have been proposed to date. However, most of them are constructed in the random oracle model. In this paper, we present a fully traceable ring signature (TRS) scheme without random oracles, which has constant-size signatures and a security reduction to the computational Diffie-Hellman (CDH) assumption. We also give a formal security model for traceable ring signatures and prove that the proposed scheme satisfies traceability and anonymity.

We initiate a study of garbled circuits that contain both Boolean and arithmetic gates in secure multiparty computation. In particular, we incorporate the garbling gadgets for arithmetic circuits recently presented by Ball, Malkin, and Rosulek (ACM CCS 2016) into the multiparty garbling paradigm initially introduced by Beaver, Micali, and Rogaway (STOC '90). This is the first work that studies arithmetic garbled circuits in the multiparty setting. Using mixed Boolean-arithmetic circuits allows more efficient secure computation of functions that naturally combine Boolean and arithmetic computations. Our garbled circuits are secure in the semi-honest model, under the same hardness assumptions as Ball et al., and can be efficiently and securely computed in constant rounds assuming an honest majority.
We first extend free addition and multiplication by a constant to the multiparty setting.
We then extend efficient garbled multiplication gates to the multiparty setting. The garbled multiplication gate construction we show was previously achieved only in the two-party setting and assuming a random oracle.
We further present a new garbling technique, and show how this technique can improve efficiency in garbling selector gates. Selector gates compute a simple ``if statement'' in the arithmetic setting: the gate selects the output value from two input integer values, according to a Boolean selector bit; if the bit is $0$ the output equals the first value, and if the bit is $1$ the output equals the second value. Using our new technique, we show a new, dedicated garbled selector gate that reduces evaluation time by approximately $33\%$, for any number of parties, compared to the best previously known constructions that use existing techniques and are secure under the same hardness assumptions.
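
For intuition, the plaintext functionality of a selector gate reduces to a single-multiplication arithmetic identity. The sketch below is illustrative only (the modulus and names are assumptions); the garbling of this sub-circuit, which is the contribution above, is not shown.

```python
def select(b: int, x: int, y: int, p: int = 2**31 - 1) -> int:
    """Arithmetic selector: returns x if b == 0, and y if b == 1.

    Implements the identity out = x + b * (y - x) mod p, which costs
    one multiplication and two additions -- the shape of the arithmetic
    sub-circuit a garbled selector gate must evaluate.
    """
    assert b in (0, 1)
    return (x + b * (y - x)) % p

print(select(0, 5, 9))  # 5
print(select(1, 5, 9))  # 9
```

Because addition and constant multiplication are free in the garbling scheme, the single product $b \cdot (y - x)$ is the only part that needs a dedicated garbled gate.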
On the downside, we find that testing equality and computing exponentiation by a constant are significantly more complex to garble in the multiparty setting than in the two-party setting.

This paper presents a secure cloud storage scheme based on a hybrid cryptosystem, which consists of Elliptic Curve Cryptography (ECC), the Advanced Encryption Standard (AES), and a one-way hash function. Here, the data owner exports a large volume of encrypted data to a cloud storage provider. The exported encrypted data is over-encrypted by the cloud storage provider before being sent to the requesting user. An existing hybrid-cryptosystem-based dynamic key management scheme with hierarchical access control has been incorporated in our scheme. The key management scheme groups users into various security classes, and helps to derive, efficiently as well as directly, the secret keys of the lower-order security classes. The incorporated key management scheme incurs low computational, communication, and storage overheads for key generation and derivation. The security analysis and the simulation results obtained with the AVISPA tool (a formal security verification tool) show that the proposed scheme is protected against adversaries. The scheme is useful in `owner-write-users-read' application areas, and end users may securely use resource-constrained wireless mobile devices with it.

Increasingly, connectivity is becoming integrated into products and devices that previously operated in a stand-alone setting. This observation holds for many consumer applications in the so-called "Internet of Things" (IoT) as well as for corresponding industry applications (IIoT), such as industrial process sensors. Often the only practicable means for authentication of human users is a weak password. The security of password-based authentication schemes frequently forms the weakest point of the security infrastructure.
In this paper we first explain why a tailored protocol designed for the IIoT use case is considered necessary. The differences between IIoT and conventional Internet use cases result in largely modified threats and require special procedures to allow both convenient and secure use in the highly constrained industrial setting.
Specifically, the use of a verifier-based password-authenticated key exchange (V-PAKE) protocol as a hedge against public-key-infrastructure (PKI) failures is considered important. Availability concerns in the case of failures of (part of) the communication infrastructure make local storage of access credentials mandatory. The larger threat of physical attacks makes it important to use memory-hard password hashing.
This paper presents a corresponding tailored protocol, AuCPace, together with a security proof within the Universal Composability (UC) framework considering fully adaptive adversaries. We also introduce a new security notion of partially augmented PAKE that provides specific performance advantages and thus makes the protocol suitable for a larger set of IIoT applications.
We also present an actual instantiation of our protocol, AuCPace25519, and report performance results on ARM Cortex-M0 and Cortex-M4 microcontrollers. Our implementation sets new speed records for PAKE and X25519 Diffie-Hellman on the ARM Cortex-M4 architecture.

We introduce a linear code based on resilient maps on vector spaces over finite fields, give a basis of this code, and derive upper and lower bounds for its minimum distance. We then explain how the introduced code can be used to build vector space secret sharing schemes, and provide an estimate of the robustness of these schemes against cheaters.

The Bitcoin backbone protocol [Eurocrypt 2015] extracts
basic properties of Bitcoin's underlying {\em blockchain} data structure, such as ``common prefix'' and ``chain quality,'' and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are ``proofs of work'' (POWs), adversarial hashing power strictly less than $1/2$ {\em and} no adversarial pre-computation---or, alternatively, the existence of an unpredictable ``genesis'' block.
In this paper we first show how to remove the latter assumption, presenting a ``bootstrapped'' Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks ``from scratch'' in the presence of adversarial pre-computation. Importantly, the round complexity of the genesis block generation process is
\emph{independent} of the number of
participants.
Next, we consider applications of our construction, including a PKI generation protocol and a consensus protocol without trusted setup assuming an honest majority (in terms of computational power).
Previous results in the same setting (unauthenticated parties, no trusted setup, POWs)
required a round complexity linear in the number of participants.

Election verifiability is defined in the computational
model of cryptography. The definition formalizes
notions of voters verifying their own votes, auditors
verifying the tally of votes, and auditors verifying that
only eligible voters vote.
The Helios (Adida et al., 2009), Helios-C (Cortier et al., 2014) and
JCJ (Juels et al., 2010) election schemes are analyzed using the definition.
Neither Helios nor Helios-C satisfies the definition
because they do not ensure that recorded ballots are
tallied in certain cases when the adversary posts malicious material on the bulletin board.
A variant of Helios is proposed and shown to satisfy the definition.
JCJ similarly does not ensure that recorded ballots are tallied in certain cases.
Moreover, JCJ does not ensure that only eligible voters vote, due to a trust assumption it makes.
A variant of JCJ is proposed and shown to satisfy a weakened definition
that incorporates the trust assumption.
Two previous definitions of verifiability (Juels et al., 2010; Cortier et al., 2014)
are shown to permit election schemes vulnerable to attacks, whereas the new definition
prohibits those schemes.
Finally, a relationship between the new definition and global verifiability (Kuesters et al., 2010) is shown.

Third-party applications on Facebook can collect personal data of the users who install them, but also of their friends. This raises serious privacy issues, as these friends are notified neither by the applications nor by Facebook, and have not given consent. This paper presents a detailed multi-faceted study of the collateral information collection of the applications on Facebook. To investigate the views of the users, we designed a questionnaire and collected the responses of 114 participants. The results show that participants are concerned about the collateral information collection and in particular about the lack of notification and of mechanisms to control the data collection. Based on real data, we compute the likelihood of collateral information collection affecting users: we show that the probability is significant and greater than 80% for popular applications such as TripAdvisor. We also demonstrate that a substantial amount of profile data can be collected by applications, which enables application providers to profile users. To investigate whether collateral information collection is an issue for users' privacy, we analysed the legal framework in light of the new General Data Protection Regulation. We provide a detailed analysis of the entities involved and investigate which entity is accountable for the collateral information collection. To provide countermeasures, we propose a privacy dashboard extension that implements privacy scoring computations to enhance transparency towards collateral information collection. Furthermore, we discuss alternative solutions, highlighting other countermeasures such as notification and access control mechanisms, cryptographic solutions and application auditing. To the best of our knowledge, this is the first work that provides a detailed multi-faceted study of this problem and that analyses the threat of user profiling by application providers.

It is known that correlation-immune (CI) Boolean functions used in the framework of side channel attacks need to have low Hamming weights. In 2013, Bhasin et al. studied the minimum Hamming weight of $d$-CI Boolean functions, and presented an open problem: the minimal weight of a $d$-CI function in $n$ variables might not increase with $n$. Very recently, Carlet and Chen proposed some constructions of low-weight CI functions, and gave a conjecture on the minimum Hamming weight of $3$-CI functions in $n$ variables.
In this paper, we determine the values of the minimum Hamming weights of $d$-CI Boolean functions in $n$ variables for infinitely many $n$'s and give a negative answer to the open problem proposed by Bhasin et al. We then present a method to construct minimum-weight 2-CI functions through Hadamard matrices, which can provide all minimum-weight 2-CI functions in $4k-1$ variables. Furthermore, we prove that the Carlet-Chen conjecture is equivalent to the famous Hadamard conjecture. Most notably, we propose an efficient method to construct low-weight $n$-variable CI functions through $d$-linearly independent sets, which can provide numerous minimum-weight $d$-CI functions. In particular, we obtain some new values of the minimum Hamming weights of $d$-CI functions in $n$ variables for $n\leq 13$. We conjecture that the functions constructed by us are of minimum Hamming weight if the sets are absolute maximum $d$-linearly independent sets. If our conjecture holds, then all the values for $n\leq 13$ and most values for general $n$ are determined.
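
For concreteness, a Boolean function is $d$-CI exactly when its Walsh transform vanishes on all points of Hamming weight $1$ through $d$. The brute-force sketch below only illustrates this standard characterization (all names are ours); the constructions above are about finding such functions with minimum Hamming weight.

```python
from itertools import product

def walsh(f, n: int, a: tuple) -> int:
    """Walsh coefficient W_f(a) = sum over x of (-1)^(f(x) XOR <a,x>)."""
    total = 0
    for x in product((0, 1), repeat=n):
        ax = sum(ai * xi for ai, xi in zip(a, x)) % 2
        total += (-1) ** (f(x) ^ ax)
    return total

def ci_order(f, n: int) -> int:
    """Largest d such that f is d-CI (0 if f is not even 1-CI)."""
    d = 0
    for w in range(1, n + 1):
        if all(walsh(f, n, a) == 0
               for a in product((0, 1), repeat=n) if sum(a) == w):
            d = w
        else:
            break
    return d

# The parity function on n variables is (n-1)-CI, but its Hamming
# weight 2^(n-1) is far from minimal -- hence the interest in
# dedicated low-weight constructions.
print(ci_order(lambda x: sum(x) % 2, 4))  # 3
```

A single non-CI example: $f(x) = x_1$ fails already at weight-one points, so `ci_order` returns 0 for it.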

In this paper we propose a rank-based algorithm for sorting encrypted data using monomials. Greedy Sort is a sorting technique that minimizes the depth of the homomorphic evaluations; however, it is costly due to excessive ciphertext multiplications, and its implementation is cumbersome. Another method, Direct Sort, has a slightly deeper circuit than Greedy Sort; nevertheless, it is simpler to implement and scales better with the size of the input array. Our proposed method minimizes both the circuit depth and the number of ciphertext multiplications. In addition to its performance, its simple design makes it more favorable than the alternative methods, which are hard to parallelize, e.g., not suitable for fast GPU implementations. Furthermore, we improve the performance of the homomorphic sorting algorithm by adopting SIMD operations alongside message-slot rotation techniques. This method allows us to pack $N$ integers into a single ciphertext and compute $N$ comparisons at once, thus reducing $\mathcal{O}(N^2)$ comparisons to $\mathcal{O}(N)$.
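
As an in-the-clear sketch of the rank-based pattern underlying such sorters (illustrative only, with names of our choosing): each element's rank is the number of pairwise comparisons it "loses", and the rank directly gives its output position. The homomorphic version replaces each plaintext comparison with an encrypted comparison circuit, and SIMD slot packing with rotations evaluates $N$ of these comparisons per homomorphic operation.

```python
def rank_sort(a: list) -> list:
    """Sort by computing each element's rank from all pairwise comparisons.

    rank[i] counts the elements that must precede a[i]; ties are broken
    by index so that the ranks form a permutation of 0..n-1. The N^2
    comparisons here are the part evaluated homomorphically (and batched
    via SIMD packing) in the encrypted setting.
    """
    n = len(a)
    out = [None] * n
    for i in range(n):
        rank = sum(1 for j in range(n)
                   if a[j] < a[i] or (a[j] == a[i] and j < i))
        out[rank] = a[i]
    return out

print(rank_sort([5, 1, 4, 1, 3]))  # [1, 1, 3, 4, 5]
```

Note that the data-independent comparison pattern (every pair, always) is what makes this style of sorting a natural fit for homomorphic evaluation, unlike branching-based sorts.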