DEEP NEURAL NETWORK FINGERPRINTING BY CONFERRABLE ADVERSARIAL EXAMPLES

Abstract

In Machine Learning as a Service, a provider trains a deep neural network and gives many users access to it. The hosted (source) model is susceptible to model stealing attacks, in which an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust method to determine whether a suspect model is a surrogate of their model. We propose a fingerprinting method for deep neural network classifiers that extracts a set of inputs from the source model such that only surrogates agree with the source model on the classification of these inputs. These inputs form a subclass of transferable adversarial examples, which we call conferrable adversarial examples, that transfer with a target label exclusively from a source model to its surrogates. We propose a new method to generate these conferrable adversarial examples. We present an extensive study of the irremovability of our fingerprint against fine-tuning, weight pruning, retraining, retraining with different architectures, three model extraction attacks from related work, transfer learning, adversarial training, and two new adaptive attacks. Our fingerprint is robust against distillation, related model extraction attacks, and even transfer learning when the attacker has no access to the model provider's dataset. Our fingerprint is the first method that reaches an ROC AUC of 1.0 in verifying surrogates, compared to an ROC AUC of 0.63 for previous fingerprints.

1. INTRODUCTION

Deep neural network (DNN) classifiers have become indispensable tools for practically relevant problems such as autonomous driving (Tian et al., 2018), natural language processing (Young et al., 2018), and health care predictions (Esteva et al., 2019). While a DNN provides substantial utility, training one is costly because of the data preparation (collection, organization, and cleaning) and the computational resources required to validate a model (Press, 2016). For this reason, DNNs are often provided by a single entity and consumed by many, such as in Machine Learning as a Service (MLaaS). A threat to the provider is model stealing, in which an adversary derives a surrogate model from only API access to a source model. We refer to an independently trained model for the same task as a reference model. Consider an MLaaS provider that wants to protect their service and hence restrict its redistribution, e.g., through a contractual usage agreement, because trained models constitute their intellectual property. The threat is an attacker who derives surrogate models and deploys them publicly. Since access to the source model has to be provided, users cannot be prevented from deriving surrogate models. Krishna et al. (2019) have shown that model stealing is (i) effective, because even high-fidelity surrogates of large models like BERT can be stolen, and (ii) efficient, because surrogate models can be derived at a fraction of the cost with limited access to domain data. This paper proposes a DNN fingerprinting method to predict whether a model is a (stolen) surrogate or a (benign) reference model relative to a source model. DNN fingerprinting is a new area of research that extracts a persistent, identifying code (fingerprint) from an already trained model.
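The verification decision described above only needs API access to the suspect model: present the fingerprint inputs and check how often the suspect reproduces the source model's labels. The following is a minimal sketch under assumptions; the function names, toy predictors, and the decision threshold are illustrative, not taken from the paper.

```python
# Minimal sketch of black-box fingerprint verification. A fingerprint is a
# set of (input, target_label) pairs extracted from the source model; a
# suspect model that agrees on enough of them is flagged as a surrogate.
# The threshold value here is a hypothetical placeholder.

def verify_fingerprint(suspect_predict, fingerprint, threshold=0.75):
    """suspect_predict: maps an input to a class label (API access only).
    Returns (agreement_rate, is_surrogate)."""
    hits = sum(1 for x, y in fingerprint if suspect_predict(x) == y)
    rate = hits / len(fingerprint)
    return rate, rate >= threshold

# Toy usage: a "surrogate" that copies the source labels vs. an
# independently trained "reference" that systematically disagrees.
fp = [((i,), i % 3) for i in range(10)]      # hypothetical inputs/labels
surrogate = lambda x: x[0] % 3               # agrees on every example
reference = lambda x: (x[0] + 1) % 3         # disagrees on every example
print(verify_fingerprint(surrogate, fp))     # (1.0, True)
print(verify_fingerprint(reference, fp))     # (0.0, False)
```

In practice the threshold would be calibrated on held-out surrogate and reference models, which is what the ROC AUC numbers in this paper measure.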
Model stealing can be categorized into model modification, such as weight pruning (Zhu & Gupta, 2017), and model extraction, which uses some form of knowledge distillation (Hinton et al., 2015) to derive a surrogate from scratch. Claimed security properties of existing defenses (Adi et al., 2018; Zhang et al., 2018) have been broken by model extraction attacks (Shafieinejad et al., 2019). Our fingerprinting method is the first passive defense specifically designed to withstand model extraction attacks, which extends to robustness against model modification attacks. Our research provides new insight into the transferability of adversarial examples. In this paper, we hypothesize that there exists a subclass of targeted, transferable adversarial examples that transfer exclusively to surrogate models, but not to reference models. We call this subclass conferrable. Any conferrable example found in the source model should produce the same misclassification in a surrogate model, but a different one in reference models. We propose a metric to measure conferrability and an ensemble adversarial attack that optimizes this new metric. We generate conferrable examples as the source model's fingerprint. Retrained CIFAR-10 surrogate models can be verified with a perfect ROC AUC of 1.0 using our fingerprint, compared to an ROC AUC of 0.63 for related work (Cao et al., 2019). While our fingerprinting scheme is robust to almost all derivation and extraction attacks, we show that some adaptive attacks may remove our fingerprint. Specifically, our fingerprint is not robust to transfer learning when the attacker has access to a model pre-trained on ImageNet32 and to CIFAR-10 domain data. Our fingerprint is also not robust against adversarial training (Madry et al., 2017) from scratch. Adversarial training is an adaptive model extraction attack specifically designed to limit the transferability of adversarial examples.
We hypothesize that incorporating adversarial training into the generation process of conferrable adversarial examples may lead to higher robustness against this attack.
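A conferrability metric of the kind described above can be sketched as follows. The function names and the multiplicative form are illustrative assumptions: the idea is simply that an example scores high when surrogate models agree with the source model's adversarial target label while independently trained reference models do not.

```python
# Sketch of a conferrability score for a candidate adversarial example.
# An example is conferrable when it transfers (with its target label) to
# surrogate models but not to reference models.

def conferrability(x, target, surrogates, references):
    """surrogates/references: lists of black-box predict functions
    (input -> label). Returns a score in [0, 1]; high means the example
    transfers to surrogates but not to reference models."""
    transfer_s = sum(m(x) == target for m in surrogates) / len(surrogates)
    transfer_r = sum(m(x) == target for m in references) / len(references)
    return transfer_s * (1.0 - transfer_r)

# Toy usage: transferring to all surrogates and no references is maximally
# conferrable; transferring everywhere is merely transferable and scores 0.
surr = [lambda x: 7, lambda x: 7, lambda x: 7]
refs_disagree = [lambda x: 2, lambda x: 3]
refs_agree = [lambda x: 7, lambda x: 7]
print(conferrability(None, 7, surr, refs_disagree))  # 1.0
print(conferrability(None, 7, surr, refs_agree))     # 0.0
```

The second case illustrates why ordinary transferability is not enough for fingerprinting: an example that fools every model, stolen or not, cannot distinguish surrogates from reference models.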

2. RELATED WORK

In black-box adversarial attacks (Papernot et al., 2017; Tramèr et al., 2016; Madry et al., 2017), access to the target model is limited: the target architecture is unknown and gradients cannot be computed directly. Transfer-based adversarial attacks (Papernot et al., 2016; 2017) exploit the ability of an adversarial example to transfer across models with similar decision boundaries. Targeted transferability additionally specifies the target class of the adversarial example. Our proposed adversarial attack is a targeted, transfer-based attack with white-box access to a source model (which should be defended), but black-box access to the stolen model derived by the attacker. Liu et al. (2016) and Tramèr et al. (2017a) show that (targeted) transferability can be boosted by optimizing over an ensemble of models. Our attack also optimizes over an ensemble of models to maximize transferability to stolen surrogate models while minimizing transferability to independently trained models, called reference models. We refer to this special subclass of targeted transferability as conferrable. Tramèr et al. (2017a) empirically study transferability and find that transferable adversarial examples are located in the intersection of high-dimensional "adversarial subspaces" across models. We extend their study and show that (i) stolen models inherit adversarial vulnerabilities from the source model and (ii) parts of these subspaces, in which conferrable examples are located, can be used in practice to predict whether a model has been stolen.
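The ensemble objective discussed above can be illustrated with a toy gradient-ascent loop: perturb an input so that a set of surrogate stand-ins assigns the target class while a set of reference stand-ins does not. Linear softmax "models", the step size, and the iteration count are all illustrative assumptions; the actual attack operates on DNNs with a bounded perturbation.

```python
import numpy as np

# Toy sketch of an ensemble objective for conferrable examples: maximize
# the target-class log-probability under surrogate models and minimize it
# under reference models. Random linear softmax models stand in for DNNs.

rng = np.random.default_rng(0)
D, C = 8, 3                                  # input dim, number of classes

def softmax(z):
    z = z - z.max()                          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def prob_target(W, x, t):
    return softmax(W @ x)[t]

def grad_log_pt(W, x, t):
    """Gradient of log softmax(Wx)[t] w.r.t. x: W^T (e_t - p)."""
    p = softmax(W @ x)
    e = np.zeros_like(p)
    e[t] = 1.0
    return W.T @ (e - p)

surrogates = [rng.normal(size=(C, D)) for _ in range(3)]
references = [rng.normal(size=(C, D)) for _ in range(3)]
x, t = rng.normal(size=D), 1                 # candidate input, target class

for _ in range(200):                         # plain gradient ascent
    g = sum(grad_log_pt(W, x, t) for W in surrogates) / len(surrogates)
    g -= sum(grad_log_pt(W, x, t) for W in references) / len(references)
    x += 0.05 * g

ps = np.mean([prob_target(W, x, t) for W in surrogates])
pr = np.mean([prob_target(W, x, t) for W in references])
```

After optimization, the surrogate ensemble assigns the target class a higher average probability than the reference ensemble, which is the property a conferrable example must have.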



Figure 1: A set of conferrable adversarial examples used as a fingerprint to identify surrogate models.

Watermarking of DNNs is related to DNN fingerprinting: an identifying code is embedded into a DNN, thereby potentially impacting the model's utility. Uchida et al. (2017) embed a secret message into the source model's weight parameters, but require white-box access to the model's parameters for the watermark verification. Adi et al. (2018) and Zhang et al. (2018) propose backdooring the source model on a set of unrelated or slightly modified images. Their approaches allow black-box verification that only requires API access to the watermarked model. Frontier-Stitching (Merrer et al., 2017) and BlackMarks (Dong et al., 2018) use (targeted) adversarial examples as watermarks. These watermarks have been evaluated only against model modification

