FAST AND COMPLETE: ENABLING COMPLETE NEURAL NETWORK VERIFICATION WITH RAPID AND MASSIVELY PARALLEL INCOMPLETE VERIFIERS

Abstract

Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically rely on the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to use backward-mode linear relaxation based perturbation analysis (LiRPA) in place of LP during the BaB process, which can be efficiently implemented on typical machine learning accelerators such as GPUs and TPUs. However, unlike LP, LiRPA applied naively can produce much weaker bounds, and it cannot even check certain conflicts between sub-domains during splitting, making the entire procedure incomplete after BaB. To address these challenges, we apply a fast gradient-based bound tightening procedure combined with batch splits and a design that minimizes the use of the LP bound procedure, enabling us to effectively use LiRPA on accelerator hardware for the challenging complete NN verification problem and to significantly outperform LP-based approaches. On a single GPU, we demonstrate an order of magnitude speedup compared to existing LP-based approaches.

1. INTRODUCTION

Although neural networks (NNs) have achieved great success on various complicated tasks, they remain susceptible to adversarial examples (Szegedy et al., 2013): imperceptible perturbations of test samples might unexpectedly change the NN predictions. Therefore, it is crucial to conduct formal verification for NNs so that they can be adopted in safety- or security-critical settings. Formally, the neural network verification problem can be cast as the following decision problem: given a neural network f(·), an input domain C, and a property P, does f(x) satisfy P for all x ∈ C? The property P is typically a set of desirable outputs of the NN conditioned on the inputs. For example, consider a binary classifier f(x) and a positive example x_0 (f(x_0) ≥ 0); we can set P to be the nonnegative numbers R_+ and bound x within an l_∞ norm ball C = {x : ‖x − x_0‖_∞ ≤ ε}. The success of verification guarantees that the label of x_0 cannot flip for any perturbed input within C.

In this paper we study the complete verification setting, where, given sufficient time, the verifier should give a definite "yes/no" answer for a property under verification. In the above setting, it must solve the non-convex optimization problem min_{x∈C} f(x) to a global minimum. Complete NN verification is in general an NP-hard problem (Katz et al., 2017), which usually requires expensive formal verification methods such as SMT (Katz et al., 2017) or MILP solvers (Tjeng et al., 2019b). On the other hand, incomplete solvers such as convex relaxations of NNs (Salman et al., 2019) can only provide a sound analysis: they can only approximate a lower bound f of min_{x∈C} f(x) and verify the property when f ≥ 0; no conclusion can be drawn when f < 0. Recently, a branch-and-bound (BaB) style framework (Bunel et al., 2018; 2020b) has been adopted for efficient complete verification. BaB solves the optimization problem min_{x∈C} f(x) to a global
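To make the incomplete-verification step concrete, the following is a minimal sketch of computing a sound lower bound f of min_{x∈C} f(x) over an l_∞ ball via interval bound propagation, one of the simplest incomplete verifiers (the paper itself uses the tighter LiRPA bounds; the two-layer network and its weights here are purely illustrative assumptions, not from the paper):

```python
import numpy as np

# Hypothetical tiny 2-layer ReLU network f(x) = W2 @ relu(W1 @ x + b1) + b2.
# All weights are illustrative placeholders.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -0.5]])
b2 = np.array([0.3])

def affine_interval(l, u, W, b):
    """Propagate an elementwise interval [l, u] through an affine layer.
    Positive weights map lower bounds to lower bounds; negative weights
    swap the roles of l and u."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def ibp_lower_bound(x0, eps):
    """Sound lower bound on min f(x) over C = {x : ||x - x0||_inf <= eps}."""
    l, u = x0 - eps, x0 + eps
    l, u = affine_interval(l, u, W1, b1)
    l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # ReLU is monotone
    l, u = affine_interval(l, u, W2, b2)
    return l[0]

x0 = np.array([1.0, 0.5])
f_lb = ibp_lower_bound(x0, eps=0.1)
# If f_lb >= 0 the property is verified; if f_lb < 0 the bound is
# inconclusive, which is exactly where BaB splitting becomes necessary.
```

This illustrates why incomplete verifiers alone cannot give a "no" answer: the bound is sound (f ≤ f(x) for all x ∈ C) but loose, so a negative f may be an artifact of relaxation rather than a true violation.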

