This paper proposes DeepSigns, a novel end-to-end framework for systematic watermarking and Intellectual Property (IP) protection in the context of deep learning. DeepSigns, for the first time, introduces a generic watermarking methodology that is applicable in both white-box and black-box settings, where the adversary may or may not know the internal details of the model. The proposed methodology embeds the signature in the probability density function (pdf) of the data abstractions obtained in different layers of a deep neural network. Our approach is robust to removal and transformation attacks, including model compression, model fine-tuning, and/or watermark overwriting. Extensive proof-of-concept evaluations on the MNIST and CIFAR10 datasets, as well as a wide variety of neural network architectures including Wide Residual Networks (Wide-ResNet), Multi-Layer Perceptrons (MLPs), and Convolutional Neural Networks (CNNs), corroborate DeepSigns' effectiveness and applicability.

One of the most efficient post-quantum signature schemes is Rainbow, whose hardness is based on the multivariate quadratic polynomial (MQ) problem.
ELSA, a new multivariate signature scheme proposed at Asiacrypt 2017, has a similar construction to Rainbow.
Its advantages, compared to Rainbow, are its smaller secret key and faster signature generation.
In addition, its existential unforgeability against an adaptive chosen-message attack has been proven under the hardness of the MQ-problem induced by a public key of ELSA with a specific parameter set in the random oracle model.
The high efficiency of ELSA is derived from a set of hidden quadratic equations used in the process of signature generation.
However, the hidden quadratic equations yield a vulnerability.
In fact, partial information about these equations can be recovered from valid signatures, and an equivalent secret key can then be partially recovered from that information.
In this paper, we describe how to recover an equivalent secret key of ELSA by a chosen message attack.
Our experiments show that we can recover an equivalent secret key for the claimed $128$-bit security parameter of ELSA on a standard PC in $177$ seconds with $1326$ valid signatures.

Discrete Gaussian sampling is a fundamental tool in lattice cryptography that has been used in digital signatures, identity-based encryption, attribute-based encryption, zero-knowledge proofs and fully homomorphic cryptosystems. How to obtain integers under a discrete Gaussian distribution more accurately and more efficiently, with a more easily implementable procedure, is a core problem in discrete Gaussian sampling. In 2010, Peikert first formulated a convolution theorem for sampling discrete Gaussians and demonstrated its theoretical soundness. Several improved and more practical versions of convolution-based sampling have been proposed recently. In this paper, we improve the error estimation of convolution-based discrete Gaussian sampling by considering different types of errors (including some types that are missing from previous work) and expanding the theoretical result into a practical analysis. Our result provides much more accurate error bounds, which are tightly matched by our experiments. Furthermore, we analyze two existing practical convolution sampling schemes under our framework and observe that their parameter sets need to be modified in order to achieve their stated goals. These goals can be met using the parameters suggested by our estimates, and our experiments confirm this as well. We also prove some improved inequalities for the discrete Gaussian measure.
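As a baseline for what convolution-based samplers improve upon, the following is a minimal illustrative sketch (our own, not a construction from the paper) of the simplest way to sample a discrete Gaussian: rejection sampling from a truncated support.

```python
import math
import random

def sample_discrete_gaussian(sigma, tail=12, rng=random):
    """Rejection-sample an integer z with probability proportional to
    exp(-z^2 / (2*sigma^2)); the support is truncated to |z| <= tail*sigma,
    which loses only a negligible statistical distance for tail ~ 12."""
    bound = int(math.ceil(tail * sigma))
    while True:
        z = rng.randint(-bound, bound)  # uniform candidate
        # accept with probability equal to the (unnormalized) target density
        if rng.random() < math.exp(-z * z / (2.0 * sigma * sigma)):
            return z

random.seed(0)
samples = [sample_discrete_gaussian(3.2) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
```

This sampler is correct but wasteful (it discards roughly 90% of candidates at this width); practical schemes, such as the convolution approach analyzed in the paper, instead combine cheap base samplers with small parameters, which is exactly where careful error accounting becomes necessary.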

We initiate the study of perfect (rather than merely statistical) reductions among cryptographic primitives. For simplicity, we focus on finite functionalities.
In addition to the obvious theoretical appeal of the question towards a better understanding of secure computation, perfect, as opposed to statistical, reductions may be useful for designing MPC protocols with high concrete efficiency, achieved by eliminating the dependence on a security parameter.
1-out-of-2 bit-OT (dubbed OT) was shown to be complete for statistically secure 2PC for all functionalities [Kil88, IPS08]. Existing (efficient) protocols in the OT-hybrid model only offer statistical security with abort, requiring no further computational assumptions. In general, fairness cannot be guaranteed, and only security with abort is achievable [Cleve86]. If the protocol is not required to be efficient in the security parameter $k$, then all 2PC functionalities can be securely evaluated [GK10] with statistical security in the OT-hybrid model.
As opposed to the statistical setting, it is not known whether OT is complete for perfectly secure 2PC. Furthermore, only a few examples of functionalities that have such protocols are known: we are only aware of string-OT and TO (OT with reversed roles) as perfectly reducible to OT.
On the negative side, a large class of functionalities with no such protocols is known, as implied by the fairness literature. By definition, functionalities not securely computable with fairness require computational (and round) complexity super-polynomial in $k$ to evaluate with error $neg(k)$ in the OT-hybrid model. This implies that these functionalities not computable with fairness are also not computable with perfect security (in the OT-hybrid model). For symmetric boolean functionalities, this class has been fully characterized [ABMO15].
Back in the statistical world, quite surprisingly, [IKOPS11] demonstrated that all client-server functionalities can be efficiently reduced to OT with statistical full security (no abort) in only one round.
Motivated by this relative ``ease'' of client-server functionalities for statistically secure 2PC in the OT-hybrid model, we study perfect reductions to OT for this class of functions.
We prove that many client-server functions of the form $f: X\times Y\rightarrow \{0,1\}$, where the server domain size $|Y|$ is larger than the client domain size $|X|$, have a perfect reduction to OT.
More precisely, a $g(|X|,|Y|)=\Omega(1)$-fraction of such functions is perfectly reducible to OT. This fraction grows roughly as $1-\exp(|X|-|Y|)$.
Furthermore, our reduction is 1-round using an oracle to secure evaluation of ${\text{OT}}^l$ (as in [IKOPS11]). As an example, this class contains $\text{2-out-of-5-OT}$.
More generally, for $f: X\times Y\rightarrow Z$, $\Omega(1)$ of the functions with $|Y|>|X|(|Z|-1)$ are perfectly reducible to OT in 1 round.
Our work leaves open the question of whether all finite client-server functionalities are perfectly reducible to OT (not necessarily in one round).
Another open question is whether 2PC functionalities that do have perfectly secure protocols in the OT hybrid model differ in round complexity, as is the case for statistical protocols.
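As a sanity check on the claimed example, the general domain-size condition $|Y| > |X|(|Z|-1)$ can be evaluated for 2-out-of-5-OT under one natural modeling (our choice of domains, not spelled out above): the server holds 5 bits, the client chooses 2 of the 5 positions, and the output is the chosen pair of bits.

```python
from math import comb

def in_class(x, y, z):
    """The sufficient condition |Y| > |X|*(|Z|-1) from the abstract,
    with x, y, z the sizes of the client, server and output domains."""
    return y > x * (z - 1)

# 2-out-of-5-OT: server holds 5 bits (|Y| = 2^5 = 32), client chooses
# 2 of the 5 positions (|X| = C(5,2) = 10), output is 2 bits (|Z| = 4).
X, Y, Z = comb(5, 2), 2 ** 5, 2 ** 2
assert in_class(X, Y, Z)  # 32 > 10 * 3 = 30, so the condition holds
```

The margin is narrow (32 versus 30), which illustrates how the condition captures functions whose server domain only slightly dominates the client and output domains.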

We present a variation on the CM method that produces elliptic curves over prime fields with nearly prime order that do not admit many efficiently computable isogenies. Assuming the Bateman-Horn conjecture, we prove that elliptic curves produced this way almost always have a large embedding degree, and thus are resistant to the MOV attack on the ECDLP.
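To make the MOV-resistance criterion concrete: the embedding degree of a curve with a subgroup of prime order r over F_p is the multiplicative order of p modulo r. A tiny illustrative computation (toy numbers of our choosing, not curves produced by the construction):

```python
def embedding_degree(p, r):
    """Smallest k >= 1 with r | p^k - 1 (assumes gcd(p, r) = 1).
    The MOV attack transports the ECDLP on the curve into the field
    F_{p^k}, so it is only feasible when k is small; curves with large
    embedding degree resist it."""
    k, t = 1, p % r
    while t != 1:
        k += 1
        t = (t * p) % r
    return k

assert embedding_degree(7, 5) == 4   # powers of 7 mod 5: 2, 4, 3, 1
assert embedding_degree(11, 3) == 2  # 11^2 = 121 ≡ 1 (mod 3)
```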

The security analysis of real-world protocols involves reduction steps that are conceptually simple but have to handle complicated protocol details. Taking inspiration from Universal Composability, Abstract Cryptography, the Pi-calculus and F*-based analysis, we propose a new technique for writing simple reductions to avoid mistakes, have more self-contained concrete security statements, and allow the writer and reader to focus on the interesting steps of the proof.
Our method replaces a monolithic protocol game with a collection of so-called packages that are composed by calling each other. Every component scheme is replaced by a package parameterized by a bit b. Packages with a bit 0 behave like a concrete scheme, while packages with a bit 1 behave like an ideal version. The indistinguishability of the two packages captures security properties, such as the concrete pseudo-random function being indistinguishable from a random function.
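A minimal illustrative rendering of the bit-b package idea in code (our own sketch, with SHA-256 standing in for an arbitrary keyed PRF; this is not the paper's formalism):

```python
import hashlib
import os

class PRFPackage:
    """State-separated PRF package: bit 0 evaluates a concrete (hash-based)
    PRF under a secret key; bit 1 lazily samples a truly random function.
    A distinguisher only sees the eval oracle, and its advantage in telling
    b=0 from b=1 is exactly the PRF security of the concrete scheme."""

    def __init__(self, b):
        self.b = b
        self.key = os.urandom(16)  # private state of the real package
        self.table = {}            # private state of the ideal package

    def eval(self, x: bytes) -> bytes:
        if self.b == 0:
            # concrete scheme (sketch: a keyed hash playing the PRF role)
            return hashlib.sha256(self.key + x).digest()
        if x not in self.table:
            self.table[x] = os.urandom(32)  # lazily sampled random function
        return self.table[x]

real, ideal = PRFPackage(0), PRFPackage(1)
# Both packages present the same oracle interface and answer consistently.
assert real.eval(b"m") == real.eval(b"m")
assert ideal.eval(b"m") == ideal.eval(b"m")
```

The state separation is visible in the code: `key` is only ever read on the b=0 branch and `table` only on the b=1 branch, which is what justifies treating "all other packages" as the reduction when flipping the bit.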
In a security proof of a real-life protocol, one then needs to flip a package’s bit from 0 to 1. To do so, in our methodology, we consider all other packages as the reduction. We facilitate the concise description of such reductions by a number of Pi-calculus-style algebraic operations on packages that are justified by state separation and interface restrictions. Our proof method handles “simple” steps using algebraic rules and leaves “interesting” steps to existing game-based proof techniques such as the code-based analysis suggested by Bellare and Rogaway.
In addition to making simple reductions more precise, we apply our techniques to two composition proofs: a proof of self-composition using a hybrid argument, and the composition of key generating and key consuming components. Consistent with our algebraic style, the proofs are generic. For concreteness, we apply them to the KEM-DEM proof of hybrid encryption by Cramer and Shoup and the proof of composability of Bellare-Rogaway secure key exchange protocols with symmetric-key protocols.

We apply Smith's construction to generate four-dimensional GLV curves with fast arithmetic in the group law as well as in the base field. As Costello and Longa did in [5] for a 128-bit security level, we obtained an interesting curve for fast GLV scalar multiplication, providing a high level of security (254 bits). Our curve is defined over a well-known finite field: $\mathbb{F}_{p^2}$ where $p = 2^{255} - 19$. Finally, we make explicit the two endomorphisms used during the GLV decomposition.

Geosocial applications collect (and record) users’ precise location data to perform proximity computations, such as notifying a user or triggering a service when a friend is within geographic proximity. With the growing popularity of mobile devices that have sophisticated localization capability, it becomes more convenient and tempting to share location data. But precise location data in plaintext exposes not only a user’s whereabouts but also mobility patterns, which are sensitive and cannot be changed easily. This paper proposes cryptographic protocols on top of spatial cloaking that reduce the resolution of location data and balance between data utility and privacy. Specifically, we are interested in the setting that allows users to send periodic updates of precise coordinates and to define privacy preferences controlling the granularity of the location, both in an encrypted format. Our system supports three kinds of user queries: “Where is this user?”, “Who is nearby?”, and “How close is this user from another user?”. Also, we develop a new algorithm that improves multidimensional data access by significantly reducing the masking error. Our prototype and various performance evaluations on different platforms demonstrate that our system is practical.
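The spatial-cloaking idea of reducing resolution can be sketched as follows. This is a plaintext illustration of the granularity control only; the system described above performs the corresponding computations over encrypted data.

```python
import math

def cloak(lat, lon, cell_deg):
    """Snap a coordinate to the centre of a square grid cell of side
    cell_deg degrees; larger cells mean coarser, more private locations."""
    return ((math.floor(lat / cell_deg) + 0.5) * cell_deg,
            (math.floor(lon / cell_deg) + 0.5) * cell_deg)

def nearby(a, b, cell_deg):
    """'Who is nearby?' at cloaked resolution: same or adjacent grid cell,
    so the answer never reveals more than the chosen granularity."""
    ax, ay = (math.floor(c / cell_deg) for c in a)
    bx, by = (math.floor(c / cell_deg) for c in b)
    return abs(ax - bx) <= 1 and abs(ay - by) <= 1

alice = (53.3498, -6.2603)
bob = (53.3421, -6.2675)
print(cloak(*alice, 0.01))          # Alice's position at ~1 km granularity
print(nearby(alice, bob, 0.01))     # proximity at the same granularity
```

The tension between utility and privacy is visible here: the cell side `cell_deg` is exactly the "privacy preference", and the coarser it is, the larger the masking error that the paper's multidimensional-access algorithm works to reduce.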

While many cryptographic protocols for card games have been proposed, all of them focus on card games where players have some state that must be kept secret from each other, e.g. closed cards and bluffs in Poker. This scenario poses many interesting technical challenges, which are addressed with cryptographic tools that introduce significant computational and communication overheads (e.g. zero-knowledge proofs). In this paper, we consider the case of games that do not require any secret state to be maintained (e.g. Blackjack and Baccarat). Basically, in these games, cards are chosen at random and then publicly advertised, allowing for players to publicly announce their actions (before or after cards are known). We show that protocols for such games can be built from very lightweight primitives such as digital signatures and canonical random oracle commitments, yielding constructions that far outperform all known card game protocols in terms of communication, computational and round complexities. Moreover, in constructing highly efficient protocols, we introduce a new technique based on verifiable random functions for extending coin tossing, which is at the core of our constructions. Besides ensuring that the games are played correctly, our protocols support financial rewards and penalties enforcement, guaranteeing that winners receive their rewards and that cheaters get financially penalized. In order to do so, we build on blockchain-based techniques that leverage the power of stateful smart contracts to ensure fair protocol execution.
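The "canonical random oracle commitment" building block, and how two parties can use it to draw a public card that neither can bias, can be sketched as follows (an illustration of the primitive only, not the paper's full protocol):

```python
import hashlib
import os

def commit(value: bytes):
    """Random-oracle commitment c = H(r || value); r is the opening."""
    r = os.urandom(32)
    return hashlib.sha256(r + value).digest(), r

def open_ok(c: bytes, r: bytes, value: bytes) -> bool:
    """Verify an opening against a commitment."""
    return hashlib.sha256(r + value).digest() == c

# Each player commits to a random share, then both open; the public card
# is the sum of shares mod 52, so neither player alone can bias it.
# (Toy randomness: a single byte per share has a slight modulo bias;
# a real protocol samples uniformly.)
s1, s2 = os.urandom(1)[0] % 52, os.urandom(1)[0] % 52
c1, r1 = commit(bytes([s1]))
c2, r2 = commit(bytes([s2]))          # commitments exchanged first...
assert open_ok(c1, r1, bytes([s1]))   # ...then openings are checked
assert open_ok(c2, r2, bytes([s2]))
card = (s1 + s2) % 52
```

Because cards in these games are public once drawn, this commit-then-open coin toss (extended in the paper via verifiable random functions) suffices, with no zero-knowledge machinery needed.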

In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permissionless setting is fundamentally more difficult than in the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain, which we recommend for pedagogical purposes. Finally, we survey recent results, including how to avoid well-known pain points in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting while achieving new properties that are not attained by classical approaches.

The goal of white-box cryptography is to implement cryptographic algorithms securely in software in the presence of an adversary that has complete access to the software's program code and execution environment.
In particular, white-box cryptography needs to protect the embedded secret key from being extracted. As of today, all publicly available white-box implementations have turned out to be susceptible to key extraction attacks. Meanwhile, white-box cryptography is widely deployed in commercial implementations that claim to be secure. Bos, Hubain, Michiels and Teuwen (CHES 2016) introduced differential computational analysis (DCA), the first automated attack on white-box cryptography.
The DCA attack performs a statistical analysis on execution traces. These traces contain information about the execution, such as memory addresses or register values, that is collected via binary instrumentation tooling during the encryption process. The white-box implementations that were attacked by Bos et al., as well as white-box implementations that have been described in the literature, protect the embedded key using internal encoding techniques introduced by Chow, Eisen, Johnson and van Oorschot (SAC 2002). In this paper, we prove rigorously that such internal encodings are too weak to protect against the DCA attack, thereby explaining the experimental success of the attack of Bos et al.
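The statistical core of such a trace-correlation attack can be illustrated on simulated traces: for every key guess, correlate the predicted S-box output weight against the recorded values. This is a toy simulation of ours using the 4-bit PRESENT S-box and an idealized Hamming-weight leak; the actual DCA attack works on memory and register traces of real white-box binaries.

```python
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT 4-bit S-box
HW = [bin(v).count("1") for v in range(16)]

def leak(pt, key, rng):
    """One simulated trace sample: the Hamming weight of the S-box
    output leaks into the recorded value, plus Gaussian noise."""
    return HW[SBOX[pt ^ key]] + rng.gauss(0, 0.5)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(1)
SECRET = 0xA
pts = [rng.randrange(16) for _ in range(2000)]
traces = [leak(p, SECRET, rng) for p in pts]
# Correlate each key guess's predicted output weight with the traces;
# the correct guess stands out with by far the highest correlation.
scores = {g: corr([HW[SBOX[p ^ g]] for p in pts], traces) for g in range(16)}
best = max(scores, key=scores.get)
```

The paper's result can be read as a statement about this statistic: the internal encodings of Chow et al. do not flatten such correlations enough, so the key guess still stands out.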

In this paper, we introduce a new concept of digital signature that we call a \emph{fuzzy signature}: a signature scheme that uses a noisy string such as biometric data as a private key, but \emph{does not require user-specific auxiliary data} (also called a helper string in the context of fuzzy extractors) for generating a signature.
Our technical contributions are three-fold:
(1) We first give the formal definition of fuzzy signature, together with a formal definition of a \lq\lq setting'' that specifies some necessary information for fuzzy data.
(2) We give a generic construction of a fuzzy signature scheme based on a signature scheme that has certain homomorphic properties regarding keys and satisfies a kind of related key attack security with respect to addition, and a new tool that we call \emph{linear sketch}.
(3) We specify two concrete settings for fuzzy data, and for each of the settings give a concrete instantiation of these building blocks for our generic construction, leading to two concrete fuzzy signature schemes.
We also discuss how fuzzy signature schemes can be used to realize a biometric-based PKI that uses biometric data itself as a cryptographic key, which we call the \emph{public biometric infrastructure (PBI)}.

Modern web applications using advanced cryptographic methods may need to calculate a large number of modular exponentiations. Performing such calculations in the web browser efficiently is a known problem. We propose a solution to this problem based on outsourcing the computational effort to untrusted exponentiation servers. We present several efficient outsourcing protocols for different settings and a practical implementation consisting of a JavaScript client library and a server application. Compared to browser-only computation, our solution improves the overall computation time by an order of magnitude.
This is an extended version of a paper accepted and presented at the Voting’18 workshop of the Financial Cryptography and Data Security 2018 conference. It will be included in the conference’s LNCS proceedings and available on the Springer web site.
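One of the simplest tricks in this space can be sketched as follows, as an illustration only (not necessarily one of the paper's protocols): additively blind the exponent so that no single server query reveals it, at the cost of one cheap client-side multiplication.

```python
import random

def server_pow(g, e, p):
    """Untrusted exponentiation server (simulated locally here)."""
    return pow(g, e, p)

def outsourced_pow(g, x, p, rng=random):
    """Compute g^x mod p without revealing x in any single server query:
    split x additively modulo p-1 (valid by Fermat's little theorem for
    prime p) and recombine with one multiplication. Sketch only: it
    assumes the two queries cannot be linked, and practical protocols
    additionally verify the server's answers, e.g. with test queries
    against precomputed pairs."""
    x1 = rng.randrange(p - 1)
    x2 = (x - x1) % (p - 1)      # x1 + x2 ≡ x (mod p-1), x1 uniform
    return (server_pow(g, x1, p) * server_pow(g, x2, p)) % p

p = 2 ** 127 - 1                 # a Mersenne prime, used as a toy modulus
g, x = 3, 123456789123456789
assert outsourced_pow(g, x, p) == pow(g, x, p)
```

The client's remaining work is one modular subtraction and one modular multiplication, which is why offloading the two `pow` calls to fast native servers can beat browser-side big-integer arithmetic by a wide margin.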

Private information retrieval (PIR) is a key building block in many privacy-preserving systems. Unfortunately, existing constructions remain very expensive. This paper introduces two techniques that make the computational variant of PIR (CPIR) more efficient in practice. The first technique targets a recent class of CPU-efficient CPIR protocols in which the query sent by the client contains a number of ciphertexts proportional to the size of the database. We show how to compress this query, achieving size reductions of up to 274×.
The second technique is a new data encoding called probabilistic batch codes (PBCs). We use PBCs to build a multi-query PIR scheme that allows the server to amortize its computational cost when processing a batch of requests from the same client. This technique achieves up to 40× speedup over processing queries one at a time, and is significantly more efficient than related encodings. We apply our techniques to the Pung private communication system, which relies on a custom multi-query CPIR protocol for its privacy guarantees. By porting our techniques to Pung, we find that we can simultaneously reduce network costs by 36× and increase throughput by 3×.
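The batching idea can be sketched as follows: each database index maps to a few candidate buckets, and a batch of queries is scheduled so that each bucket serves at most one query, letting the server run one cheap PIR instance per bucket. This is an illustrative greedy version of ours; actual PBC constructions use cuckoo-hashing-style allocation and fail only with small probability.

```python
import hashlib

def bucket_choices(idx, num_buckets, w=3):
    """The w candidate buckets for database index idx, derived by hashing."""
    return [int.from_bytes(hashlib.sha256(b"%d|%d" % (j, idx)).digest()[:4],
                           "big") % num_buckets
            for j in range(w)]

def schedule(batch, num_buckets, w=3):
    """Assign each queried index to one of its w buckets, at most one
    query per bucket; returns None on a (rare) scheduling failure."""
    used, plan = set(), {}
    for idx in batch:
        for b in bucket_choices(idx, num_buckets, w):
            if b not in used:
                used.add(b)
                plan[idx] = b
                break
        else:
            return None  # all w candidate buckets already taken
    return plan

plan = schedule([3, 17, 42, 99], num_buckets=64)
```

Replicating each item into w buckets trades a constant-factor storage blowup for the ability to answer a k-query batch with k small PIR executions instead of k full-database scans, which is the source of the amortization.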

While non-interactive zero-knowledge (NIZK) proofs require trusted parameters, Groth, Ostrovsky and Sahai constructed non-interactive witness-indistinguishable (NIWI) proofs without any setup; they called their scheme a non-interactive zap. More recently, Bellare, Fuchsbauer and Scafuro investigated the security of NIZKs in the face of parameter subversion and observed that NI zaps provide subversion-resistant soundness and witness indistinguishability.
Arguments of knowledge prove not only that the statement is true, but also that the prover knows a witness for it, which is essential for anonymous identification. We present the first NIWI argument of knowledge without parameters, i.e., a NI zap of knowledge. Consequently, our scheme is also the first subversion-resistant knowledge-sound proof system, a notion recently proposed by Fuchsbauer.

We present a new improvement in the linear programming technique to derive lower bounds on the information ratio of secret sharing schemes. We obtain non-Shannon-type bounds without using information inequalities explicitly. Our new technique makes it possible to determine the optimal information ratio of linear secret sharing schemes for all access structures on 5 participants and all graph-based access structures on 6 participants. In addition, new lower bounds are also presented for some small matroid ports; in particular, the optimal information ratios of linear secret sharing schemes for the ports of the Vamos matroid are determined.

The security of several post-quantum cryptosystems is based on the assumption that solving a system of multivariate (quadratic) polynomial equations $p_1=\dots=p_r=0$
over a finite field is hard. Such a system can be solved by computing a lexicographic Gr\"obner basis of the ideal $(p_1,\dots,p_r)$.
The most efficient algorithms for computing Gr\"obner bases transform the problem into several instances of Gaussian elimination.
The computational complexity of these algorithms is not completely understood, especially when the polynomials $p_1,\dots,p_r$ are not homogeneous.
In this paper, we prove that this complexity is controlled by the Castelnuovo-Mumford regularity of the ideal $(p_1^h,\dots,p_r^h)$ obtained by homogenizing the input polynomials. This allows us to bound the complexity of solving a system of polynomial equations when the associated ideal is zero-dimensional, a common situation in cryptography. In combination with some theorems in commutative algebra, our results also allow us to bound the complexity of the ABC and cubic simple matrix schemes, as well as some instances of the MinRank Problem.
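The homogenization $p \mapsto p^h$ at the heart of this bound is elementary: multiply each monomial by a power of a fresh variable $h$ so that every monomial reaches the polynomial's total degree. A small sketch of ours, representing polynomials as dictionaries from exponent tuples to coefficients:

```python
def homogenize(poly):
    """Homogenize a polynomial given as {exponent_tuple: coeff}: append
    one extra variable h whose exponent tops each monomial up to the
    total degree d of the polynomial."""
    d = max(sum(e) for e in poly)
    return {e + (d - sum(e),): c for e, c in poly.items()}

# p = x^2 + y + 1, with exponent tuples (e_x, e_y)
p = {(2, 0): 1, (0, 1): 1, (0, 0): 1}
ph = homogenize(p)
# p^h = x^2 + y*h + h^2: every monomial now has total degree 2
assert ph == {(2, 0, 0): 1, (0, 1, 1): 1, (0, 0, 2): 1}
```

Setting $h = 1$ recovers the original polynomial, which is why complexity statements about the homogenized ideal $(p_1^h,\dots,p_r^h)$ transfer back to solving the original affine system.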

This article describes a survey of long-term cryptographic public keys observed in real deployments of secure-shell, e-mail and web protocols in two similarly-sized countries: Ireland and Estonia. We find that keys are very widely re-used across multiple IP addresses, and even autonomous systems. From one run scanning 18,268 hosts in Ireland that run at least one TLS or SSH service, approximately 53% of the hosts involved are using keys that are also seen on some other IP address. For each key, if two IP addresses share that key, then those two IP addresses are considered members of the same cluster. In the same scan we find a maximum cluster size of 1,991 hosts and a total of 1,437 clusters, mostly with relatively few hosts per cluster (the median cluster size was 26.5; the most common cluster size is two). In that scan, of the 54,447 host/port combinations running cryptographic protocols, we only see 20,053 unique keys (36%), indicating significant key re-use across hosts and ports. We describe the methodology followed and the published source code and public data sources that enable researchers to replicate, validate and extend these results. Clearly, such key sharing can create undesirable security and privacy dependencies between cluster members. The author is currently starting the process of contacting local (Irish) asset-owners to try to establish the reasons for this key sharing and possibly to assist with improving network posture.
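The clustering step described above amounts to a union-find over "shares a key" relations between IP addresses; a small self-contained sketch with illustrative data (not the survey's):

```python
from collections import defaultdict

def clusters(observations):
    """Group IP addresses so that any two addresses observed with the
    same key land in one cluster (transitively), via union-find."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    by_key = defaultdict(list)
    for ip, key in observations:        # (ip, key) pairs from a scan
        by_key[key].append(ip)
    for ips in by_key.values():
        for ip in ips[1:]:
            union(ips[0], ip)           # same key => same cluster
    out = defaultdict(set)
    for ip in parent:
        out[find(ip)].add(ip)
    return sorted(out.values(), key=len, reverse=True)

obs = [("1.1.1.1", "kA"), ("2.2.2.2", "kA"), ("2.2.2.2", "kB"),
       ("3.3.3.3", "kB"), ("4.4.4.4", "kC"), ("5.5.5.5", "kC")]
sizes = [len(c) for c in clusters(obs)]  # one cluster of 3, one of 2
```

Note the transitivity: 1.1.1.1 and 3.3.3.3 share no key directly but end up clustered through 2.2.2.2, which is how a few widely re-used keys can produce the large clusters reported above.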

We survey elliptic curve implementations from several vantage points. We perform internet-wide scans for TLS on a large number of ports, as well as for SSH and IPsec, to measure elliptic curve support and implementation behaviors, and we collect passive measurements of client curve support for TLS. We also perform active measurements to estimate server vulnerability to known attacks against elliptic curve implementations, including support for weak curves, invalid curve attacks, and curve twist attacks. We estimate that 0.77% of HTTPS hosts, 0.04% of SSH hosts, and 4.04% of IKEv2 hosts that support elliptic curves do not perform curve validity checks as specified in elliptic curve standards. We describe how such vulnerabilities could be used to construct an elliptic curve parameter downgrade attack for TLS that we call CurveSwap, and observe that the combinations of weak behaviors we examined do not appear to enable a feasible CurveSwap attack in the wild. We also analyze source code for elliptic curve implementations, finding that a number of libraries fail to perform point validation for JSON Web Encryption, and uncover a flaw in the Java and NSS multiplication algorithms.
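The curve validity check whose absence is measured above is elementary; here is an illustrative sketch for short-Weierstrass curves (toy parameters of our choosing, not a standardized curve):

```python
def on_curve(x, y, a, b, p):
    """Check that a claimed point satisfies y^2 = x^3 + a*x + b over F_p.
    Skipping this check is what enables invalid-curve attacks: a peer can
    submit a point lying on a different, weaker curve and learn secret-key
    information from the resulting computations."""
    return (y * y - (x * x * x + a * x + b)) % p == 0

# Toy curve y^2 = x^3 + 2x + 3 over F_97
assert on_curve(3, 6, 2, 3, 97)       # 36 == 27 + 6 + 3
assert not on_curve(3, 7, 2, 3, 97)   # (3, 7) lies on some other curve
```

Full validation per the standards also checks that coordinates are reduced modulo p and, where relevant, that the point has the expected order; twist-security additionally requires rejecting (or safely handling) points on the curve's quadratic twist.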