LOCALIZED RANDOMIZED SMOOTHING FOR COLLECTIVE ROBUSTNESS CERTIFICATION

Abstract

Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). Collective robustness certification is the task of provably bounding the number of robust predictions under this threat model. The only dedicated method that goes beyond certifying each output independently is limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective robustness certificate for all types of models. We further show that this approach is beneficial for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. Localized smoothing Pareto-dominates existing certificates on both image segmentation and node classification tasks, simultaneously offering higher accuracy and stronger certificates.

1. INTRODUCTION

There is a wide range of tasks that require models that make multiple predictions based on a single input. For example, semantic segmentation requires assigning a label to each pixel in an image. When deploying such multi-output classifiers in practice, their robustness should be a key concern. After all, just like simple classifiers (Szegedy et al., 2014), they can fall victim to adversarial attacks (Xie et al., 2017; Zügner & Günnemann, 2019; Belinkov & Bisk, 2018). Even without an adversary, random noise or measurement errors can cause predictions to unexpectedly change. We propose a novel method providing provable guarantees on how many predictions can be changed by an adversary. As all outputs operate on the same input, they have to be attacked simultaneously by choosing a single perturbed input, which can be more challenging for an adversary than attacking them independently. We must account for this to obtain a proper collective robustness certificate. The only dedicated collective certificate that goes beyond certifying each output independently (Schuchardt et al., 2021) is only beneficial for models we call strictly local, where each output depends on a small, pre-defined subset of the input. Multi-output classifiers, however, are often only softly local. While all their predictions are in principle dependent on the entire input, each output may assign different importance to different subsets. For example, convolutional networks for image segmentation can have small effective receptive fields (Luo et al., 2016; Liu et al., 2018), i.e. they primarily use a small region of the image in labeling each pixel. Many models for node classification are based on the homophily assumption that connected nodes are mostly of the same class. Thus, they primarily use features from neighboring nodes.
Transformers, which can in principle attend to arbitrary parts of the input, may in practice learn "sparse" attention maps, with the prediction for each token being mostly determined by a few (not necessarily nearby) tokens (Shi et al., 2021).

Figure 1: Localized randomized smoothing applied to semantic segmentation. We assume that the most relevant information for labeling a pixel is contained in other nearby pixels. We partition the input image into multiple grid cells. For each grid cell, we sample noisy images from a different anisotropic distribution that applies more noise to far-away, less relevant cells. Segmenting all noisy images, cropping the result and computing the majority vote yields a local segmentation mask. These per-cell segmentation masks can then be combined into a complete segmentation mask.

Softly local models pose a budget allocation problem for an adversary that tries to simultaneously manipulate multiple predictions by crafting a single perturbed input. When each output is primarily focused on a different part of the input, the attacker has to distribute their limited adversarial budget and may be unable to attack all predictions at once. We propose localized randomized smoothing, a novel method for the collective robustness certification of softly local models that exploits this budget allocation problem. It is an extension of randomized smoothing (Lécuyer et al., 2019; Li et al., 2019; Cohen et al., 2019), a versatile black-box certification method which is based on constructing a smoothed classifier that returns the expected prediction of a model under random perturbations of its input (more details in § 2). Randomized smoothing is typically applied to single-output models with isotropic Gaussian noise. In localized smoothing, however, we smooth each output (or set of outputs) of a multi-output classifier using a different distribution that is anisotropic. This is illustrated in Fig. 1, where the predicted segmentation masks for each grid cell are smoothed using a different distribution. For instance, the distribution for segmenting the top-right cell applies less noise to the top-right cell. The smoothing distribution for segmenting the bottom-left cell applies significantly more noise to the top-right cell. Given a specific output of a softly local model, using a low noise level for the most relevant parts of the input lets us preserve a high prediction quality. Less relevant parts can be smoothed with a higher noise level to guarantee more robustness. The resulting certificates (one per output) explicitly quantify how robust each prediction is to perturbations of which part of the input. This information about the smoothed model's locality can then be used to combine the per-prediction certificates into a stronger collective certificate that accounts for the adversary's budget allocation problem.

Our core contributions are:
• Localized randomized smoothing, a novel smoothing scheme for multi-output classifiers.
• An efficient anisotropic randomized smoothing certificate for discrete data.
• A collective certificate based on localized randomized smoothing.

2. BACKGROUND AND RELATED WORK

Randomized smoothing. Randomized smoothing is a certification technique that can be used for various threat models and tasks. For the sake of exposition, let us discuss a certificate for ℓ2 perturbations (Cohen et al., 2019). Assume we have a D-dimensional input space R^D, label set Y and classifier g : R^D → Y. We can use isotropic Gaussian noise to construct the smoothed classifier f(x) = argmax_{y∈Y} Pr_{z∼N(x,σ)}[g(z) = y] that returns the most likely prediction of base classifier g under the input distribution. Given an input x ∈ R^D and smoothed prediction y = f(x), we can then easily determine whether y is robust to all ℓ2 perturbations of magnitude ε, i.e. whether f(x′) = y for all x′ with ||x′ − x||₂ ≤ ε. Let q = Pr_{z∼N(x,σ)}[g(z) = y] be the probability of predicting label y. The prediction is certifiably robust if ε < σ · Φ⁻¹(q) (Cohen et al., 2019), where Φ⁻¹ is the standard-normal quantile function. This result showcases a trade-off inherent to randomized smoothing: Increasing the noise level σ may strengthen the certificate, but could also lower the accuracy of f or reduce q and thus weaken the certificate.

White-box certificates for multi-output classifiers. There are multiple recent methods for certifying the robustness of multi-output models by analyzing their specific architecture and weights (for example, see (Tran et al., 2021; Zügner & Günnemann, 2019; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2020; Ko et al., 2019; Ryou et al., 2021; Shi et al., 2020; Bonaert et al., 2021)). They are however not designed to certify collective robustness, i.e. determine whether multiple outputs can be simultaneously attacked using a single perturbed input. They can only determine independently for each prediction whether or not it can be attacked.

Black-box certificates for multi-output classifiers. Most directly related to our work is the aforementioned certificate of Schuchardt et al. (2021), which is only beneficial for strictly local models (i.e. models where each output has a small receptive field). In § I we show that, for randomly smoothed models, their certificate is a special case of ours. SegCertify (Fischer et al., 2021) is a collective certificate for segmentation. This method certifies each output independently using isotropic smoothing (ignoring the budget allocation problem) and uses Holm correction (Holm, 1979) to obtain tighter Monte Carlo estimates. It then counts the number of certifiably robust predictions and tests whether it equals the total number of predictions. In § H we demonstrate that our method can always provide guarantees that are at least as strong. Another method that can in principle be used to certify collective robustness is center smoothing (Kumar & Goldstein, 2021). It bounds the change of a vector-valued function w.r.t. a distance function. Using the ℓ0 pseudo-norm, it can bound how many predictions can be simultaneously changed. More recently, Chen et al. (2022) proposed a collective certificate for bagging classifiers. Different from our work, they consider poisoning (train-time) instead of evasion (test-time) attacks. Yatsura et al. (2022) prove robustness for segmentation, but consider patch-based instead of ℓp-norm attacks and certify each prediction independently.

Anisotropic randomized smoothing. While only designed for single-output classifiers, two recent certificates for anisotropic Gaussian and uniform smoothing (Fischer et al., 2020; Eiras et al., 2022) can be used as a component of our collective certification approach: They can serve as per-prediction certificates, which we can then combine into our stronger collective certificate (more details in § 3.2).
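The ℓ2 certificate discussed above is simple to compute. The following is a minimal sketch (function name is ours, not from the paper) of the certified radius σ · Φ⁻¹(q) from Cohen et al. (2019), using only the standard library:

```python
from statistics import NormalDist

def certified_radius(q: float, sigma: float) -> float:
    """Largest l2 radius eps with eps < sigma * Phi^{-1}(q) certifiable
    via isotropic Gaussian smoothing (Cohen et al., 2019)."""
    if q <= 0.5:
        return 0.0  # the majority class is not clearly preferred, nothing certifiable
    return sigma * NormalDist().inv_cdf(q)

# Trade-off: a higher noise level sigma scales the radius up, but in
# practice also tends to reduce q (and model accuracy).
r = certified_radius(q=0.9, sigma=0.25)
print(round(r, 4))
```

This directly exhibits the trade-off described in the text: the radius is linear in σ, but q enters through the quantile function and shrinks as noise degrades the base classifier.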

3.1. COLLECTIVE THREAT MODEL

We assume a multi-output classifier f : X^{D_in} → Y^{D_out} that maps D_in-dimensional inputs to D_out labels from label set Y. We further assume that this classifier f is the result of randomly smoothing each output of a base classifier g. Given this multi-output classifier f, an input x ∈ X^{D_in} and the corresponding predictions y = f(x), the objective of the adversary is to cause as many predictions from a set of targeted indices T ⊆ {1, . . . , D_out} as possible to change. That is, their objective is min_{x′∈B_x} Σ_{n∈T} I[f_n(x′) = y_n], where I is the indicator function and B_x ⊆ X^{D_in} is the perturbation model. As is common in robustness certification, we assume an ℓp-norm perturbation model, i.e. B_x = {x′ ∈ X^{D_in} | ||x′ − x||_p ≤ ε} with p, ε ≥ 0. Importantly, note that the minimization operator is outside the sum, meaning the predictions have to be attacked using a single input.
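To make the shared-input constraint concrete, consider a hypothetical two-output toy model f(x) = (sign(x₁), sign(x₂)) at x = (1, 1) with ℓ2 budget ε = 1.2: each prediction can be flipped on its own (distance 1), but flipping both requires distance √2 > 1.2. A small brute-force sketch (all names and numbers are illustrative, not from the paper):

```python
import itertools
import math

x = (1.0, 1.0)
y = (1, 1)      # clean predictions of the toy model
eps = 1.2       # l2 budget

def f(xp):
    # toy multi-output classifier: sign of each coordinate
    return tuple(1 if v > 0 else -1 for v in xp)

def changed(xp):
    # number of targeted predictions the single input xp flips
    return sum(fn != yn for fn, yn in zip(f(xp), y))

# grid search over perturbed inputs inside the l2 ball around x
grid = [i / 50 for i in range(-150, 151)]
best = 0
for x1, x2 in itertools.product(grid, grid):
    if math.dist(x, (x1, x2)) <= eps:
        best = max(best, changed((x1, x2)))

# each output alone is attackable (distance 1.0 <= 1.2), but flipping
# both needs distance sqrt(2) > 1.2, so no single x' attacks both
print(best)
```

Because the min is outside the sum, the adversary's value here is 1, even though a naïve per-output analysis would declare both predictions attackable.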

3.2. A RECIPE FOR COLLECTIVE CERTIFICATES

Before discussing localized randomized smoothing, we show how to combine arbitrary per-prediction certificates into a collective certificate, a procedure that underlies both our method and that of Schuchardt et al. (2021) and Fischer et al. (2021). The first step is to apply an arbitrary certification procedure to each prediction y_1, . . . , y_{D_out} in order to obtain per-prediction base certificates.

Definition 3.1 (Base certificates). A base certificate for a prediction y_n = f_n(x) is a set H^(n) ⊆ X^{D_in} of perturbed inputs s.t. ∀x′ ∈ H^(n) : f_n(x′) = y_n.

Using these base certificates, one can derive two bounds on the adversary's objective:

min_{x′∈B_x} Σ_{n∈T} I[f_n(x′) = y_n] ≥ min_{x′∈B_x} Σ_{n∈T} I[x′ ∈ H^(n)]  (1.1)
≥ Σ_{n∈T} min_{x′∈B_x} I[x′ ∈ H^(n)].  (1.2)

Eq. 1.1 follows from Definition 3.1 (if a prediction is certifiably robust to x′, then f_n(x′) = y_n), while Eq. 1.2 results from moving the min operator inside the summation. Eq. 1.2 is the naïve collective certificate: It iterates over the predictions and counts how many are certifiably robust to perturbation model B_x. Each summand involves a separate minimization problem. Thus, the certificate neglects that the adversary has to choose a single perturbed input to attack all outputs. SegCertify (Fischer et al., 2021) applies this to isotropic Gaussian smoothing. While Eq. 1.1 is seemingly tighter than the naïve collective certificate, it may lead to identical results. For example, let us consider the most common case where the base certificates guarantee robustness within an ℓp ball, i.e. H^(n) = {x″ | ||x″ − x||_p ≤ r^(n)} with certified radii r^(n). Then, the optimal solution to both Eq. 1.1 and Eq. 1.2 is to choose an arbitrary x′ with ||x′ − x||_p = ε:

min_{x′∈B_x} Σ_{n∈T} I[x′ ∈ H^(n)] = Σ_{n∈T} I[ε < r^(n)] = Σ_{n∈T} min_{x′∈B_x} I[x′ ∈ H^(n)].

The main contribution of Schuchardt et al. (2021) is to notice that, by exploiting strict locality (i.e. the outputs having small receptive fields), one can augment certificate Eq. 1.1 to make it tighter than the naïve collective certificate from Eq. 1.2. One must simply mask out all perturbations falling outside a given receptive field when evaluating the corresponding base certificate:

min_{x′∈B_x} Σ_{n∈T} I[ψ^(n) ⊙ x′ + (1 − ψ^(n)) ⊙ x ∈ H^(n)].

Here, ψ^(n) ∈ {0, 1}^{D_in} encodes the receptive field of f_n and ⊙ is the elementwise product. If two outputs f_n and f_m have disjoint receptive fields (i.e. ψ^(n)ᵀψ^(m) = 0), then the adversary has to split up their limited adversarial budget and may be unable to attack both at once.
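In the strictly local special case with pairwise disjoint receptive fields, the masked certificate reduces to a small combinatorial problem: a subset S of predictions can be attacked jointly iff Σ_{n∈S} (r^(n))^p ≤ ε^p. A brute-force sketch with hypothetical radii (our own toy numbers, for illustration only):

```python
from itertools import combinations

p = 2
radii = [0.5, 0.5, 0.7]   # hypothetical certified l2 radii, disjoint receptive fields
eps = 0.8

# naive collective certificate (Eq. 1.2): count predictions with eps < r_n
naive_robust = sum(eps < r for r in radii)

# masked collective certificate: the adversary must split eps^p across
# disjoint receptive fields, so attacking S needs sum of r_n^p within budget
def attackable(subset):
    return sum(radii[n] ** p for n in subset) <= eps ** p

max_attacked = max(
    len(s)
    for k in range(len(radii) + 1)
    for s in combinations(range(len(radii)), k)
    if attackable(s)
)
collective_robust = len(radii) - max_attacked
print(naive_robust, collective_robust)
```

Here the naïve certificate certifies nothing (ε exceeds every individual radius), while the budget-splitting argument still certifies one prediction: no single input can pay for all three receptive fields at once.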

4. LOCALIZED RANDOMIZED SMOOTHING

The core idea behind localized smoothing is that, rather than improving upon the naïve collective certificate by using external knowledge about strict locality, we can use anisotropic randomized smoothing to obtain base certificates that directly encode soft locality. Here, we explain our approach in a domain-independent manner before turning to specific distributions and data types in § 5. In localized randomized smoothing, we associate base classifier outputs g_1, . . . , g_{D_out} with distinct anisotropic smoothing distributions Ψ^(1)_x, . . . , Ψ^(D_out)_x that depend on input x. For example, they could be Gaussian distributions with mean x and distinct covariance matrices, like in Fig. 1, where we use a different distribution for each grid cell. We use these distributions to construct the smoothed classifier f, where each output f_n(x) is the result of randomly smoothing g_n with Ψ^(n)_x. To certify robustness for a vector of predictions y = f(x), we follow the procedure discussed in § 3.2, i.e. compute base certificates H^(1), . . . , H^(D_out) and solve Eq. 1.1. We do not make any assumption about how the base certificates are computed. However, we require that they comply with a common interface, which will later allow us to combine them via linear programming:

Definition 4.1 (Base certificate interface). A base certificate H^(n) ⊆ X^{D_in} is compliant with our base certificate interface for ℓp-norm perturbations if there are w^(n) ∈ R^{D_in}_+ and η^(n) ∈ R_+ such that

H^(n) = {x′ | Σ_{d=1}^{D_in} w^(n)_d · |x′_d − x_d|^p < η^(n)}.  (2)

The weight w^(n)_d quantifies how sensitive y_n is to perturbations of input dimension d. It will be smaller where the anisotropic smoothing distribution applies more noise. The radius η^(n) quantifies the overall level of robustness. In § 5 we present different distributions and corresponding certificates that comply with this interface. Inserting Eq. 2 into Eq. 1.1 results in the collective certificate

min_{x′∈B_x} Σ_{n∈T} I[Σ_{d=1}^{D_in} w^(n)_d · |x′_d − x_d|^p < η^(n)].  (3)

Eq. 3 showcases why locally smoothed models admit a collective certificate that is stronger than naïvely certifying each output independently (i.e. Eq. 1.2). Because we use different distributions for different outputs, any two outputs f_n and f_m will have distinct certificate weights w^(n) and w^(m). If they are sensitive in different parts of the input, i.e. w^(n)ᵀw^(m) is small, then the adversary has to split up their limited adversarial budget and may be unable to attack both at once. One particularly simple example is the case w^(n)ᵀw^(m) = 0, where attacking predictions y_n and y_m requires allocating adversarial budget to two entirely disjoint sets of input dimensions. In § I we show that, with appropriately parameterized smoothing distributions, we can obtain base certificates with w^(n) = c · ψ^(n), with indicator vector ψ^(n) encoding the receptive field of output n. Hence, the collective guarantees from Schuchardt et al. (2021) are a special case of our certificate.
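As a quick numeric illustration of the disjoint-support case w^(n)ᵀw^(m) = 0 (weights and radii below are hypothetical): breaking the base certificate of Eq. 2 for one output alone costs η^(n)/max_d w^(n)_d budget, spent on that output's most sensitive dimension, and with disjoint supports those costs simply add up.

```python
# hypothetical base-certificate parameters complying with the interface (Eq. 2)
w1, eta1 = [2.0, 0.0, 0.0, 0.0], 1.0   # output 1: sensitive to dim 0
w2, eta2 = [0.0, 0.0, 1.0, 1.0], 1.0   # output 2: sensitive to dims 2, 3

def min_budget_single(w, eta):
    """Cheapest budget sum_d b_d violating sum_d w_d * b_d < eta:
    spend everything on the most sensitive dimension."""
    return eta / max(w)

cost1 = min_budget_single(w1, eta1)            # 0.5
cost2 = min_budget_single(w2, eta2)            # 1.0
overlap = sum(a * b for a, b in zip(w1, w2))   # w1^T w2 = 0: disjoint supports

# with disjoint supports, attacking both requires the sum of both budgets
cost_both = cost1 + cost2 if overlap == 0 else None
print(cost1, cost2, cost_both)
```

An adversary with budget 1.2 could attack either prediction alone, but not both, since 0.5 + 1.0 exceeds the budget.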

4.1. COMPUTING THE COLLECTIVE CERTIFICATE

While Eq. 3 constitutes a valid certificate, it is not immediately clear how to evaluate it. However, we notice that the perturbation set B_x imposes linear constraints on the elementwise differences |x′_d − x_d|^p, that the values of the indicator functions are binary variables and that the base certificates inside the indicator functions are characterized by linear inequalities. We can thus reformulate Eq. 3 as a mixed-integer linear program (MILP), which leads us to our main result (proof in § D):

Theorem 4.2. Given locally smoothed model f, input x ∈ X^{D_in}, smoothed prediction y = f(x) and base certificates H^(1), . . . , H^(D_out) complying with interface Eq. 2, the number of simultaneously robust predictions min_{x′∈B_x} Σ_{n∈T} I[f_n(x′) = y_n] is lower-bounded by

min_{b∈R^{D_in}_+, t∈{0,1}^{D_out}} Σ_{n∈T} t_n  (4)
s.t. ∀n : bᵀw^(n) ≥ (1 − t_n) · η^(n),  sum{b} ≤ ε^p.  (5)

The vector b models the allocation of adversarial budget (i.e. the elementwise differences b_d = |x′_d − x_d|^p). The vector t serves the same role as the indicator functions from Eq. 3, i.e. it indicates which predictions are certifiably robust. Eq. 5 ensures that b does not exceed the overall budget ε (i.e. x′ ∈ B_x) and that t_n can only be set to 0 if bᵀw^(n) ≥ η^(n), i.e. only when the base certificate cannot guarantee robustness for prediction y_n. This problem can be solved using any MILP solver. Its optimal value provably bounds the number of simultaneously robust predictions.
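For tiny instances, Eq. 4 can be solved exactly without a MILP solver: enumerate which subset S of predictions the adversary attacks (t_n = 0) and check whether a feasible budget allocation b exists. With two input dimensions, the cheapest feasible b sits at a vertex of the constraint polyhedron, so vertex enumeration suffices. A self-contained sketch with hypothetical certificate parameters (the real method would hand Eq. 4 to an off-the-shelf solver):

```python
from itertools import combinations

def _intersect(l1, l2):
    """Solve a11*b1 + a12*b2 = c1, a21*b1 + a22*b2 = c2."""
    (a11, a12, c1), (a21, a22, c2) = l1, l2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    return ((c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det)

def min_budget(ws, etas):
    """Minimal sum(b) with b >= 0 and ws[i] . b >= etas[i] for all i (D_in = 2).
    The optimum of this covering LP is attained at a vertex, i.e. at the
    intersection of two constraint boundaries (incl. the axes b_d = 0)."""
    lines = [(w[0], w[1], e) for w, e in zip(ws, etas)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]          # axes b_1 = 0, b_2 = 0
    best = float("inf")
    for l1, l2 in combinations(lines, 2):
        pt = _intersect(l1, l2)
        if pt is None or min(pt) < -1e-9:
            continue
        if all(w[0] * pt[0] + w[1] * pt[1] >= e - 1e-9 for w, e in zip(ws, etas)):
            best = min(best, pt[0] + pt[1])
    return best

# hypothetical base certificates: outputs 0/1 focus on different dimensions
W = [[1.0, 0.1], [0.1, 1.0], [0.5, 0.5]]
eta = [1.0, 1.0, 1.0]
budget = 1.2                                             # eps**p

# naive certificate (Eq. 1.2): certify each prediction independently
naive = sum(min_budget([w], [e]) > budget for w, e in zip(W, eta))

# collective certificate (Eq. 4): a single b must break all attacked certificates
max_attacked = max(
    len(S)
    for k in range(len(W) + 1)
    for S in combinations(range(len(W)), k)
    if min_budget([W[n] for n in S], [eta[n] for n in S]) <= budget
)
collective = len(W) - max_attacked
print(naive, collective)
```

Outputs 0 and 1 are individually attackable (cost 1.0 each), but attacking both requires budget 2/1.1 ≈ 1.82 > 1.2, so the collective certificate guarantees two robust predictions where the naïve one guarantees only one.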

4.2. IMPROVING EFFICIENCY

Solving large MILPs is expensive. In § E we show that partitioning the outputs into N_out subsets sharing the same smoothing distribution and the inputs into N_in subsets sharing the same noise level (for example like in Fig. 1, where we partition the image into a 2 × 3 grid), as well as quantizing the base certificate parameters η^(n) into N_bins bins, reduces the number of variables and constraints from D_in + D_out and D_out + 1 to N_in + N_out · N_bins and N_out · N_bins + 1, respectively. We can thus control the problem size independent of the data's dimensionality. We further derive a linear relaxation of the MILP that can be efficiently solved while preserving the soundness of the certificate.
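As a back-of-the-envelope check of this reduction (the sizes below are hypothetical, chosen only to illustrate the scaling):

```python
def milp_size(d_in, d_out, n_in, n_out, n_bins):
    """Variable / constraint counts before and after the reduction of Sec. 4.2."""
    full = (d_in + d_out, d_out + 1)
    reduced = (n_in + n_out * n_bins, n_out * n_bins + 1)
    return full, reduced

# e.g. an image with 50000 pixels, a 2 x 3 grid and 10 quantization bins
full, reduced = milp_size(d_in=50_000, d_out=50_000, n_in=6, n_out=6, n_bins=10)
print(full, reduced)
```

The problem shrinks from 100000 variables and 50001 constraints to 66 variables and 61 constraints, independent of image resolution.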

4.3. ACCURACY-ROBUSTNESS TRADEOFF

When discussing Eq. 3, we only explained why our collective certificate for locally smoothed models is better than a naïve combination of localized smoothing base certificates. However, this does not necessarily mean that our certificate is also stronger than naïvely certifying an isotropically smoothed model. This is why we focus on soft locality. With isotropic smoothing, high certified robustness requires using large noise levels, which degrade the model's prediction quality. Localized smoothing, when applied to softly local models, can circumvent this issue. For each output, we can use low noise levels for the most important parts of the input to retain high prediction quality. Our LP-based collective certificate allows us to still provide strong collective robustness guarantees. We investigate this improved accuracy-robustness trade-off in our experimental evaluation (see § 7).

5. BASE CERTIFICATES

To apply our collective certificate in practice, we require smoothing distributions Ψ^(n)_x and corresponding per-prediction base certificates that comply with the interface from Definition 4.1. As base certificates for ℓ2 and ℓ1 perturbations we can reformulate existing anisotropic Gaussian (Fischer et al., 2020; Kumar & Goldstein, 2021) and uniform (Kumar & Goldstein, 2021) smoothing certificates for single-output models: For Ψ^(n)_x = N(x, diag(s^(n))) we have w^(n)_d = 1/(s^(n)_d)² and η^(n) = (Φ⁻¹(q_{n,y_n}))² with q_{n,y_n} = Pr_{z∼Ψ^(n)_x}[g_n(z) = y_n]. For Ψ^(n)_x = U(x, λ^(n)) we have w^(n)_d = 1/λ^(n)_d and η^(n) = Φ⁻¹(q_{n,y_n}). We prove the correctness of these reformulations in § F.

For ℓ0 perturbations of binary data, we can use a distribution F(x, θ) that flips x_d with probability θ_d ∈ [0, 1], i.e. Pr[z_d ≠ x_d] = θ_d for z ∼ F(x, θ). Existing methods (e.g. (Lee et al., 2019)) can be used to derive per-prediction certificates for this distribution, but have exponential runtime in the number of unique values in θ. Thus, they are not suitable for localized smoothing, which uses different θ_d for different parts of the input. We therefore propose a novel, more efficient approach: variance-constrained certification, which smooths the base classifier's softmax scores instead of its predictions and then uses both their expected value and variance to certify robustness (proof in § F.3):

Theorem 5.1 (Variance-constrained certification). Given a function g : X → Δ^{|Y|} mapping from discrete set X to scores from the (|Y| − 1)-dimensional probability simplex, let f(x) = argmax_{y∈Y} E_{z∼Ψ_x}[g(z)_y] with smoothing distribution Ψ_x and probability mass function π_x(z) = Pr_{z̃∼Ψ_x}[z̃ = z]. Given an input x ∈ X and smoothed prediction y = f(x), let μ = E_{z∼Ψ_x}[g(z)_y] and ζ = E_{z∼Ψ_x}[(g(z)_y − ν)²] with ν ∈ R. Assuming ν ≤ μ, then f(x′) = y if

Σ_{z∈X} (π_{x′}(z) / π_x(z)) · π_{x′}(z) < 1 + (1 / (ζ − (μ − ν)²)) · (μ − 1/2)².  (6)

The l.h.s. of Eq. 6 is the expected ratio between the probability mass functions of the smoothing distributions for the perturbed (π_{x′}) and unperturbed (π_x) input. It is equal to 1 if both distributions are the same, i.e. there is no adversarial perturbation, and greater than 1 otherwise. The r.h.s. of Eq. 6 depends on the expected softmax score μ, a variable ν ≤ μ and the expected squared difference ζ between the softmax score and ν. For ν = μ the parameter ζ is the variance of the softmax score. A higher expected value and a lower variance allow us to certify robustness for larger adversarial perturbations. Applying Theorem 5.1 with flipping distribution F(x, θ) to each of the D_out softmax vectors of our model's outputs yields ℓ0-norm certificates for binary data that can be computed in linear time (see § F.3.1). In § F.3.2, we also apply it to the sparsity-aware smoothing distribution (Bojchevski et al., 2020), allowing us to differentiate between adversarial deletions and additions of bits. Theorem 5.1 can also be generalized to continuous distributions (see § F.3.3). But, for fair comparison with our baselines, we use the certificates of Eiras et al. (2022) as our base certificates for continuous data. In practice, the smoothed classifier and the base certificates cannot be evaluated exactly. One has to use Monte Carlo sampling to provide guarantees that hold with high probability (see § G).
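For the anisotropic Gaussian case, the interface parameters of Eq. 2 follow directly from the per-dimension noise scales s^(n) and the label probability q. A minimal sketch (the noise scales and probability below are hypothetical):

```python
from statistics import NormalDist

def gaussian_base_certificate(s, q):
    """Interface parameters (Eq. 2, p = 2) for Psi = N(x, diag(s)):
    w_d = 1 / s_d^2 and eta = (Phi^{-1}(q))^2."""
    w = [1.0 / sd**2 for sd in s]
    eta = NormalDist().inv_cdf(q) ** 2
    return w, eta

# low noise (s = 0.25) on the relevant region, high noise (s = 1.0) elsewhere
w, eta = gaussian_base_certificate(s=[0.25, 1.0], q=0.9)
print([round(v, 2) for v in w], round(eta, 3))
# heavily-noised dimensions get small weights: perturbing the input there
# consumes little of the certificate's budget eta
```

This is exactly the soft-locality encoding the collective LP exploits: dimensions smoothed with more noise contribute less to the weighted perturbation sum.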

6. LIMITATIONS

A limitation of our approach is that it assumes soft locality. It can be applied to arbitrary models, but may not necessarily result in better certificates than isotropic smoothing (recall § 4.3). Also, choosing the smoothing distributions requires some assumptions about which parts of the input are most relevant to each prediction. Our experiments show that natural assumptions like homophily can be sufficient, but choosing a distribution may be more challenging for other tasks. A limitation of (most) randomized smoothing methods is that they use sampling to approximate the smoothed classifier. Because we use multiple distributions, we can only use a fraction of the samples per distribution. We can alleviate this problem by sharing smoothing distributions among outputs (see § E.1). Still, future work should try to improve the sample efficiency of randomized smoothing or develop deterministic base certificates (e.g. by generalizing (Levine & Feizi, 2020) to anisotropic distributions), which could then be incorporated into our linear programming framework.

7. EXPERIMENTAL EVALUATION

In this section, we compare our method to all existing collective certificates for ℓp-norm perturbations: Center smoothing using isotropic Gaussian noise (Kumar & Goldstein, 2021), SegCertify (Fischer et al., 2021) and the collective certificate of Schuchardt et al. (2021). To compare SegCertify to the other methods, we report the number of certifiably robust predictions and not just whether all predictions are robust. We write SegCertify* to highlight this. When considering models that are not strictly local (i.e. all outputs depend on all inputs), the certificates of Schuchardt et al. (2021) and Fischer et al. (2021) are identical, i.e. they do not have to be evaluated separately. A more detailed description of the experimental setup, hardware and computational cost can be found in § C.

Metrics.

Evaluating randomized smoothing methods based on certificate strength alone is not sufficient. Different distributions lead to different tradeoffs between prediction quality and certifiable robustness (as discussed in § 4.3). As metrics for prediction quality, we use accuracy and mean intersection over union (mIOU). The main metric for certificate strength is the certified accuracy ξ(ε), i.e. the percentage of predictions that are correct and certifiably robust, given adversarial budget ε. Following Schuchardt et al. (2021), we use the average certifiable radius (ACR) as an aggregate metric, i.e. ACR = Σ_{n=1}^{N−1} ε_n · (ξ(ε_n) − ξ(ε_{n+1})) with budgets ε_1 ≤ ε_2 ≤ · · · ≤ ε_N and ε_1 = 0, ξ(ε_N) = 0.

Evaluation procedure. We assess the accuracy-robustness tradeoff of each method by computing accuracy / mIOU and ACR for a wide range of smoothing distribution parameters. We then eliminate all points that are Pareto-dominated, i.e. for which there exist different parameter values that yield higher accuracy / mIOU and ACR. Finally, we assess whether localized smoothing dominates the baselines, i.e. whether it can be parameterized to achieve strictly better accuracy-robustness tradeoffs.
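The ACR aggregate can be computed from a certified-accuracy curve as follows (the curve below is hypothetical, for illustration only):

```python
def average_certifiable_radius(eps, xi):
    """ACR = sum_{n=1}^{N-1} eps_n * (xi(eps_n) - xi(eps_{n+1})),
    assuming eps[0] = 0 and xi[-1] = 0."""
    assert eps[0] == 0 and xi[-1] == 0
    return sum(e * (a - b) for e, a, b in zip(eps, xi, xi[1:]))

# hypothetical certified accuracies xi(eps) on a small budget grid:
# 80% certified at eps = 0, 40% at eps = 0.5, 0% at eps = 1.0
print(average_certifiable_radius(eps=[0.0, 0.5, 1.0], xi=[0.8, 0.4, 0.0]))
```

Each term weights a budget ε_n by the fraction of predictions whose certificate holds up to ε_n but fails at ε_{n+1}, so the ACR rewards curves that stay high for large budgets.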

7.1. IMAGE SEGMENTATION

Dataset and model. We evaluate our certificate for ℓ2 perturbations on 100 images from the Pascal-VOC (Everingham et al., 2010) 2012 segmentation validation set. Training is performed on 10582 samples extracted from SBD, also known as "Pascal trainaug" (Hariharan et al., 2011). Additional experiments on Cityscapes (Cordts et al., 2016) can be found in § A. To increase batch sizes and thus allow a thorough investigation of different smoothing parameters, all images are downscaled to 50% of their original size, similar to (Fischer et al., 2021). Our base model is a U-Net segmentation model (Ronneberger et al., 2015) with a ResNet-18 backbone. For isotropic randomized smoothing, we use Gaussian noise N(0, σ_iso) with σ_iso ∈ {0.01, 0.02, . . . , 0.5}. To perform localized randomized smoothing, we choose parameters σ_min, σ_max ∈ R_+ and partition all images into regular grids (similar to Fig. 1). To smooth outputs in grid cell (i, j), we sample noise for grid cell (k, l) from N(0, σ′ · 1), with σ′ ∈ [σ_min, σ_max] chosen proportional to the distance of (i, j) and (k, l) (more details in § C.2). All training data is randomly perturbed using samples from the same smoothing distribution that is used for certification.

Figure 4: Comparison of isotropic smoothing to our LP-based certificate with a 3 × 5 grid and U-Net on Pascal-VOC. U-Net is sufficiently local to benefit from localized smoothing (Fig. 4a), but not enough to offset the increased sample complexity (Fig. 4b) for the probabilistic base certificates.

Accuracy-robustness tradeoff under strict locality.

Accuracy-robustness tradeoff under soft locality. Next, we want to verify our claim about the existence of softly local models for which localized smoothing is beneficial. To this end, we randomly smooth the U-Net model itself, without using masking to enforce strict locality. We perform localized smoothing with grid size 3 × 5, various σ_min ∈ {0.01, 0.02, . . . , 0.5}, σ_max ∈ [0.02, 1.0] and 10240 samples per output pixel (i.e. 10240 · 15 = 153600 samples in total). Isotropic smoothing is also performed with 10240 samples per output pixel. Fig. 4a shows that localized smoothing Pareto-dominates SegCertify* for high-accuracy models with mIOU > 35.3%. Importantly, the figure is not to be read like a line graph: Even if the vertical distance between two methods is small, one may significantly outperform the other. For example, σ_iso = 0.1, with an mIOU of 46.34% and an ACR of 0.24 (highlighted with a bold cross), is dominated by (σ_min, σ_max) = (0.09, 0.2) (highlighted with a large circle), which has a larger ACR of 0.25 and an mIOU that is a whole 6.1 p.p. higher.

Benefit of linear programming. Fig. 3 demonstrates how the linear program derived in § 4.1 enables this improved tradeoff. We compare SegCertify* with σ_iso = 0.2 to localized smoothing with (σ_min, σ_max) = (0.15, 1.0). Naïvely combining the base certificates (dashed line) is not sufficient for outperforming the baseline, as they cannot certify robustness beyond ε = 0.45. However, solving the collective LP (solid blue line) extends the maximum certifiable radius to ε = 1.15.
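The distance-based choice of σ′ can be sketched as follows. The exact interpolation used in the experiments is specified in § C.2, so the linear scheme below (and the function name) is an assumption for illustration:

```python
import math

def cell_sigma(out_cell, noise_cell, grid, s_min, s_max):
    """Hypothetical linear interpolation: the noise scale grows with the
    distance between the output cell (i, j) and the noised cell (k, l)."""
    (i, j), (k, l) = out_cell, noise_cell
    d = math.dist((i, j), (k, l))
    d_max = math.dist((0, 0), (grid[0] - 1, grid[1] - 1))
    return s_min + (s_max - s_min) * d / d_max

# 3 x 5 grid as in the Pascal-VOC experiments
print(round(cell_sigma((0, 0), (0, 0), (3, 5), 0.15, 1.0), 2))  # same cell
print(round(cell_sigma((0, 0), (2, 4), (3, 5), 0.15, 1.0), 2))  # far corner
```

The cell being segmented receives the minimum noise σ_min, while the most distant cell receives σ_max, matching the anisotropic distributions depicted in Fig. 1.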

Sample efficiency.

Using the same number of samples per output pixel for both localized and isotropic smoothing neglects that localized smoothing requires sampling from 15 different distributions, i.e. sampling 15 times as many images. In Fig. 4b we allow the baselines to sample the same number of images. Now, localized smoothing is mostly dominated by SegCertify*, except for high-accuracy models with mIOU ∈ [52.4%, 57.8%] or mIOU > 60.8%. We conclude that U-Net is local enough to benefit from localized smoothing, but not enough to offset the practical problem of having to work with fewer Monte Carlo samples (see also the discussion in § 6) in the entire range of possible isotropic smoothing parameters. Note, however, that we can always recover the guarantees of SegCertify* by using a 1 × 1 grid (see § H).

Figure 5: Comparison of our LP-based collective certificate to Bojchevski et al. (2020), using APPNP on Citeseer. We consider both adversarial deletions (Fig. 5a) and additions (Fig. 5b) of attribute bits. Locally smoothed models offer a better accuracy-robustness tradeoff, especially for deletions. Transparent points signal that they are Pareto-dominated by points from the same method.

7.2. NODE CLASSIFICATION ON CITESEER

Dataset and model. Finally, we consider models that are designed with locality in mind: Graph neural networks. We take APPNP (Klicpera et al., 2019) , which aggregates per-node predictions from the entire graph based on personalized pagerank scores, and apply it to the Citeseer (Sen et al., 2008) dataset. To certify its robustness, we perform randomized smoothing with sparsity-aware noise S (x, θ + , θ -), where θ + and θ -control the probability of randomly adding or deleting node attributes, respectively (more details in § F.3.2). As a baseline we apply the tight certificate SparseSmooth of Bojchevski et al. (2020) to distributions S x, 0.01, θ - iso with θ - iso ∈ {0.1, 0.15, . . . , 0.95}. The small addition probability 0.01 is meant to preserve the sparsity of the graph's attribute matrix and was used in most experiments in (Bojchevski et al., 2020) . For localized smoothing, we partition the graph into 5 clusters and define a minimum deletion probability θ - min ∈ {0.1, 0.15, . . . , 0.95}. We then sample each cluster's attributes from S (x, 0.01, θ ′-) with θ ′-∈ θ - min , 0.95 chosen based on cluster affinity. To compute the base certificates, we use the variance-constrained certificate from § F.3.2. In all cases, we take 5 • 10 5 samples (i.e. 10 5 per cluster for localized smoothing). Further discussions, as well as experiments on different models and datasets can be found in § B. Accuracy-robustness tradeoff. Fig. 5 shows the accuracy and ACR pairs achieved by the naïve isotropic smoothing certificate and the LP-based certificate for localized smoothing. Despite having fewer samples per prediction, our method outperforms the baseline, offering higher accuracy certifying larger ACRs, especially for attribute deletions. Notably, in some cases, our approach even improves accuracy by over 7 p.p. percentage points compared to isotropically smoothed models. Similar to the observation made by Bojchevski et al. 
(2020) in their Section K, we also find that increasing the probability of attribute perturbations can improve accuracy to some extent. We posit that localized smoothing can leverage this phenomenon as a form of test-time regularization while preserving the crucial attributes of nearby nodes. In § B.1 we show that this effect stems from the smoothing scheme and is not solely due to using our novel variance-constrained certificate.
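To make the smoothing scheme concrete, the sparsity-aware distribution S(x, θ+, θ−) flips each zero bit to one with probability θ+ and each one bit to zero with probability θ−. The following is a minimal NumPy sketch; the function name and interface are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def sample_sparsity_aware(x, theta_add, theta_del, rng=None):
    """Sample from S(x, theta+, theta-): flip each 0-bit to 1 with
    probability theta+ and each 1-bit to 0 with probability theta-."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=bool)
    # per-bit flip probability: theta- for present attributes, theta+ for absent ones
    flip_prob = np.where(x, theta_del, theta_add)
    flips = rng.random(x.shape) < flip_prob
    return (x ^ flips).astype(np.int8)

# Sparse attribute vector: additions are rare (theta+ = 0.01) to preserve
# sparsity, while deletions are frequent (theta- = 0.6).
x = np.array([1, 0, 0, 1, 0, 0, 0, 1])
z = sample_sparsity_aware(x, theta_add=0.01, theta_del=0.6)
```

With θ+ = 0.01, the expected number of added attributes stays tiny even for high-dimensional inputs, which is why the attribute matrix remains sparse under smoothing.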

8. CONCLUSION

We proposed a novel approach to achieve provable collective robustness in multi-output classifiers that extends beyond strict locality, utilizing our introduced localized randomized smoothing scheme. Our approach involves smoothing different outputs with anisotropic smoothing distributions that match the model's soft locality. We demonstrated how per-output certificates obtained through localized smoothing can be combined into a strong collective robustness certificate using (mixed-integer) linear programming. Our experiments indicate that localized smoothing can achieve superior accuracy-robustness tradeoffs compared to isotropic smoothing methods. However, not all models match our distance-based locality assumption, particularly for image segmentation tasks. Node classification tasks are more amenable to localized smoothing due to their inherent locality. Our results highlight the importance of locality in achieving collective robustness and emphasize the need for future research to develop effective local models for multi-output tasks.

9. REPRODUCIBILITY STATEMENT

We prove all theoretic results that were not already derived in the main text in § D to § G. To ensure reproducibility of the experimental results we provide detailed descriptions of the evaluation process with the respective parameters in § C. An implementation, including configuration files, will be made available at https://www.cs.cit.tum.de/daml/localized-smoothing.

10. ETHICS STATEMENT

In this paper, we propose a method to increase the robustness of machine learning models against adversarial perturbations and to certify their robustness. We see this as an important step towards the general usage of models in practice, as many existing methods are brittle to crafted attacks. Through the proposed method, we hope to contribute to the safe usage of machine learning. However, robust models also have to be viewed with caution: as they are harder to fool, harmful applications like mass surveillance become harder to evade. We believe that further research into the robustness of machine learning models is still necessary, as the positive effects can outweigh the negative ones, but the ethical implications of its usage must be discussed for any specific application area.

Pavel Yakubovskiy. Segmentation models pytorch. https://github.com/qubvel/segmentation_models.pytorch, 2020.

Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, and Jan Hendrik Metzen. Certified defences against adversarial patch attacks on semantic segmentation.

Figure 6: … to 153600 (same as for the baselines) closes the gap between localized randomized smoothing and SegCertify. Still, localized smoothing only offers stronger certificates for models with mIOU ≤ 0.21 (compared to mIOU ≤ 0.11 when using fewer samples).

A IMAGE SEGMENTATION ON CITYSCAPES

In the following, we apply our approach to DeepLabv3 (Chen et al., 2017) models trained on the Cityscapes (Cordts et al., 2016) training set. We evaluate the certificates on 50 images from the validation set. For localized smoothing, we partition the image into a grid of shape 4 × 6. To limit the number of LP variables despite the increased resolution, we quantize the base certificate parameters η^(n) into 2048 bins (see § E.2). Different from our experiments on Pascal-VOC, and due to the increased computational cost of using higher-dimensional images, the locally smoothed models are not trained on the localized smoothing distribution with parameters (σ_min, σ_max). Instead, we use a model trained with isotropic Gaussian noise with standard deviation σ_iso = σ_min. Fig. 6a shows that, even when allowing 153600 samples per output pixel for both localized smoothing and the baselines (i.e. localized smoothing gets to sample 24 times as many images), most choices of (σ_min, σ_max) do not offer higher accuracy and robustness than SegCertify*, except those leading to a small mIOU below 0.21. Fig. 6b shows that reducing the number of samples per output pixel for localized smoothing to 6400 = 153600 / 24 further weakens the certificate. There, localized smoothing only offers stronger certificates for models with an mIOU below 0.11. There are three possible explanations for why localized smoothing does not outperform SegCertify*. The first one is that we do not train on the same distribution that we use for certification, so our models are less accurate or less consistent in their predictions, which reduces mIOU or certified robustness. The second one is that our simplistic choice of localized smoothing based on grid cell distance (see § C.2) does not match the actual locality structure of DeepLabv3.
The last one is that DeepLabv3, which uses dilated convolutions to increase the receptive field size in each layer, is just inherently less local than the U-Net architecture used in our experiments on Pascal-VOC. Nevertheless, it should be noted that we can always parameterize localized smoothing to obtain the same results as SegCertify * (see Appendix H).

B ADDITIONAL EXPERIMENTS ON NODE CLASSIFICATION

In the following, we perform additional experiments on graph neural networks for node classification, including a different model and an additional dataset. Unless otherwise stated, all details of the experimental setup are identical to § 7.2. In particular, we use the sparsity-aware smoothing distribution S(x, 0.01, θ−), where the probability of deleting bits θ− is either constant across the entire graph (for the isotropic randomized smoothing baseline) or adjusted per output and cluster based on cluster affinity (for localized randomized smoothing).

Figure 7: Analysis of our LP-based collective certificate using APPNP on Citeseer. We use sparsity-aware smoothing with S(x, 0.01, θ−) to certify robustness to deletions. In Fig. 7a we use the certificate of Bojchevski et al. (2020) as the baseline (identical to Fig. 5a). In Fig. 7b we use variance-constrained certification (see Theorem 5.1) as the baseline. In both cases, there are locally smoothed models with a higher accuracy than any of the isotropically smoothed models and significantly larger average certifiable radii.

B.1 COMPARISON TO THE NAÏVE VARIANCE-CONSTRAINED ISOTROPIC SMOOTHING CERTIFICATE

In Fig. 5 of § 7.2, we observed that locally smoothed models surprisingly did not only achieve up to three times higher average certifiable radii, but simultaneously had higher accuracy than any of the isotropically smoothed models. One potential explanation is that we used variance-constrained certification (see Theorem 5.1) (i.e. smoothing the models' softmax scores instead of their predicted labels) for localized smoothing, but not for the isotropic smoothing baseline. This might result in two substantially different models. To investigate this, we repeat the experiment from Fig. 5a, using variance-constrained certification for both localized smoothing and the isotropic smoothing baseline. Fig. 7 shows that, no matter which smoothing paradigm we use for our isotropic smoothing baseline, there is a ca. 7 p.p. difference in accuracy between the most accurate isotropically smoothed model and the most accurate locally smoothed model. Interestingly, even variance-constrained smoothing with isotropic noise (green crosses in Fig. 7b) is sufficient for outperforming the isotropic smoothing certificate of Bojchevski et al. (2020) (orange stars in Fig. 7a). This showcases that variance-constrained certification is not only a very efficient, but also a very effective way of certifying robustness on discrete data (even when entirely ignoring the collective robustness aspect).

B.2 NODE CLASSIFICATION USING GRAPH CONVOLUTIONAL NETWORKS

So far, we have only used APPNP models as our base classifier. Now, we repeat our experiments using 6-layer Graph Convolutional Networks (GCN) (Kipf & Welling, 2017). In each layer, GCNs first apply a linear layer to each node's latent vector and then average over each node's 1-hop neighborhood. Thus, a 6-layer GCN classifies each node using attributes from all nodes in its 6-hop neighborhood, which covers most or all of the Citeseer graph. Aside from using GCN instead of APPNP as the base model, we leave the experimental setup from § 7.2 unchanged. Note that GCNs are typically used with fewer layers. However, these shallow models are strictly local and it has already been established that the certificate of Schuchardt et al. (2021), which is subsumed by our certificate (see § I.2), can provide very strong robustness guarantees for them. We therefore increase the number of layers to obtain a model that is not strictly local.

Figure 8: Comparison of our LP-based collective certificate for localized randomized smoothing to SparseSmooth, using a 6-layer GCN on Citeseer. We consider both adversarial deletions (Fig. 8a) and additions (Fig. 8b). Some locally smoothed models have a higher accuracy than any of the isotropically smoothed models. However, our certificate only dominates the best isotropically smoothed models when considering robustness to deletions, not when considering robustness to additions. This can either be attributed to a lower locality in deep GCNs or to variance-constrained certification yielding weak base certificates for additions when θ+ is small.

Fig. 8 shows the results for both robustness to deletions and robustness to additions. Similar to APPNP, some locally smoothed models have an up to 4 p.p. higher accuracy than the most accurate isotropically smoothed model. When considering robustness to deletions, the locally smoothed models Pareto-dominate all of the isotropically smoothed models, i.e. offer better accuracy-robustness tradeoffs.
Some can guarantee average certifiable radii that are at least 50% larger than those of the baseline. When considering robustness to additions, however, some of the isotropically smoothed models have a higher certifiable robustness. We see two potential causes for our method's lower certifiable robustness to additions: The first potential cause is that the GCN may be less local than APPNP, or that it has a different form of locality that does not match our clustering-based localized smoothing distributions. This appears plausible, as GCN averages uniformly over each neighborhood, whereas APPNP aggregates predictions based on PageRank scores. APPNP may thus primarily attend to specific, densely connected nodes, making it more local than GCN. The second potential cause is that the variance-constrained certificate we use as our base certificate may be less effective when certifying robustness to adversarial additions with a very small addition probability like θ+ = 0.01. After all, we have also seen in our experiments with APPNP in § 7.2 that the gap in average certifiable radii between localized and isotropic smoothing was significantly smaller when considering additions. We investigate this second potential cause in more detail in § B.4.

B.3 NODE CLASSIFICATION ON CORA-ML

Next, we repeat our experiments with APPNP on the Cora-ML (McCallum et al., 2000; Bojchevski & Günnemann, 2018) node classification dataset, keeping all other parameters fixed. The results are shown in Fig. 9. Unlike on Citeseer, the locally smoothed models have a slightly reduced accuracy compared to the isotropically smoothed models. This can either be attributed to one smoothing approach having a more desirable regularizing effect on the neural network, or to the fact that we smooth softmax scores instead of predicted labels when constructing the locally smoothed models. Nevertheless, when considering adversarial deletions, localized smoothing makes it possible to achieve average certifiable radii that are at least 50% larger than those of any of the isotropically smoothed models, at the cost of a reduced accuracy (by 8.6 percentage points). Or, for another point of the Pareto front, we increase the certificate by 20% while reducing the accuracy by 2.8 percentage points. As before, the certificates for attribute additions are significantly weaker.

Figure 9: Comparison of our LP-based collective certificate for localized randomized smoothing to SparseSmooth (Bojchevski et al., 2020), using APPNP on Cora-ML. We consider both adversarial deletions (Fig. 9a) and additions (Fig. 9b). Some locally smoothed models have a higher accuracy than any of the isotropically smoothed models. However, our method is only able to dominate all isotropically smoothed models when considering robustness to deletions, not when considering robustness to additions. This can either be attributed to a lower locality in deep GCNs or to variance-constrained certification yielding weak base certificates for additions when θ+ is small.
B.4 EFFECTIVENESS OF VARIANCE-CONSTRAINED BASE CERTIFICATES FOR ADVERSARIAL ADDITIONS

Figure 10: Comparison of our LP-based collective certificate for localized randomized smoothing to SparseSmooth and to a naïve combination of its base certificates, using GCN and adversarial additions on Citeseer. Fig. 10a shows that the LP-based certificate is outperformed by naïve isotropic smoothing. Fig. 10b shows that this is largely due to the variance-constrained base certificates (green crosses) for adversarial additions being much weaker than the isotropic smoothing certificate of Bojchevski et al. (2020) in Fig. 10a.

While our certificates for adversarial deletions have compared favorably to the isotropic smoothing baseline in all previous experiments, our certificates for adversarial additions were comparatively weaker on Cora-ML and when using GCNs as base models. In the following, we investigate to what extent this can be attributed to our use of variance-constrained certification for our base certificates. Fig. 10a shows both our linear programming collective certificate and the naïve isotropic smoothing certificate based on Bojchevski et al. (2020) for GCNs on Citeseer under adversarial additions. In Fig. 10b, we plot not only the LP-based certificates, but also our variance-constrained base certificates (drawn as green crosses). Comparing both figures shows that our base certificates' average certifiable radii are at least 50% smaller than the largest ACR achieved by Bojchevski et al. (2020) in Fig. 10a. While our linear program significantly improves upon them, it is not sufficient to overcome this significant gap. This result is in stark contrast to our results for attribute deletions in § B.1, where the variance-constrained base certificates alone were enough to significantly outperform the certificate of Bojchevski et al. (2020). Now that we have established that the variance-constrained base certificates appear significantly weaker for additions, we can analyze why.
For this, recall that our base certificates are parameterized by a weight vector w (see Definition 4.1), with smaller values corresponding to higher robustness, or by two weight vectors w+, w− quantifying robustness to adversarial additions and deletions, respectively (see § F.3.2). Using our results from § F.3.2, we can plot the weights w+ resulting from smoothing distribution S(x, 0.01, θ−) as a function of θ−. Fig. 11a shows that θ− has to be brought very close to 1 in order to guarantee high robustness to additions, effectively deleting almost all attributes in the graph. Alternatively, one can also increase the addition probability θ+ to perhaps 10% or 20%. But this would destroy the sparsity of the graph's attribute matrix. We can conclude that, while variance-constrained certification can in principle provide strong certificates for attribute deletions, it might be a worse choice than the method of Bojchevski et al. (2020) for very sparse datasets that force the use of very low addition probabilities θ+.

Figure 11: Base certificate weight w+ of the variance-constrained sparsity-aware smoothing certificate for varying distribution parameters. Certifying high robustness to adversarial additions (i.e. obtaining small weights) requires either setting a high probability for random additions or an even higher probability for random deletions.

B.5 BENEFIT OF LINEAR PROGRAMMING CERTIFICATES

As we did for our experiments on image segmentation (see Fig. 3), we can inspect the certified accuracy curves of specific smoothed models in more detail to gain a better understanding of how the collective linear programming certificate enables larger average certifiable radii. We use the same experimental setup as in § 7.2, i.e. APPNP on Citeseer, and certify robustness to deletions. We compare the certifiably most robust isotropically smoothed model (θ−_iso = 0.8, ACR = 5.67) to the locally smoothed model with θ−_min = 0.75, θ−_max = 0.95. For the locally smoothed model, we compute both the LP-based collective certificate and the naïve collective certificate. Fig. 12 shows that even naïvely combining the localized smoothing base certificates obtained via variance-constrained certification (dashed blue line) is sufficient for outperforming the naïve isotropic smoothing certificate. This speaks to its effectiveness as a certificate against adversarial deletions. Combining the base certificates via linear programming (solid blue line) significantly enlarges this gap, leading to even larger maximum and average certifiable radii.


Figure 12: Certified accuracy of APPNP on Citeseer. We compare the naïve isotropic smoothing certificate of the most robust baseline model (θ−_iso = 0.8) to localized smoothing (θ−_min = 0.75). Even naïvely combining the variance-constrained base certificates (dashed blue line) is sufficient for outperforming the SparseSmooth certificate for 15 deletions or less. Combining the base certificates via our LP (solid blue line) further extends the certifiable radius and significantly increases the certified accuracy for perturbations with 5 or more deletions.

C DETAILED EXPERIMENTAL SETUP

In the following, we first explain the metrics we use for measuring the strength of certificates, and how they can be applied to the different types of randomized smoothing certificates used in our experiments. We then discuss the specific parameters and hyperparameters for our semantic segmentation and node classification experiments. We conclude by specifying the used hardware and comparing the computational cost of Monte Carlo sampling to that of solving the collective linear program.

C.1 CERTIFICATE STRENGTH METRICS

We use two metrics for measuring certificate strength: For specific adversarial budgets ϵ, we compute the certified accuracy ξ(ϵ), i.e. the percentage of correct and certifiably robust predictions. As an aggregate metric, we compute the average certifiable radius (ACR), i.e. the lower Riemann integral of ξ(ϵ) evaluated at points ϵ_1, . . . , ϵ_N with ϵ_1 = 0 and ξ(ϵ_N) = 0. For our experiments on image segmentation, we use 81 equidistant points in [0, 4]. For our experiments on node classification, where we certify robustness to a discrete number of perturbations, we use ϵ_n = n, i.e. the natural numbers. In all experiments, we perform Monte Carlo randomized smoothing (see § G). Therefore, we may have to abstain from making predictions. Abstentions are counted as non-robust and incorrect. In the case of center smoothing, either all or no predictions abstain (this is inherent to the method; in our experiments, center smoothing never abstained).
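Since ξ(ϵ) is non-increasing, the lower Riemann sum over each interval uses the value at the right endpoint. A minimal sketch of the ACR computation (hypothetical helper name):

```python
import numpy as np

def average_certifiable_radius(eps, xi):
    """Lower Riemann integral of the (non-increasing) certified-accuracy
    curve xi evaluated at budgets eps, with eps[0] = 0 and xi[-1] = 0."""
    eps = np.asarray(eps, dtype=float)
    xi = np.asarray(xi, dtype=float)
    widths = np.diff(eps)
    # right endpoints give the infimum on each interval for decreasing xi
    return float(np.sum(widths * xi[1:]))

# Certified accuracy 0.8 up to budget 1, 0.4 up to budget 2, then 0:
acr = average_certifiable_radius([0, 1, 2, 3], [0.8, 0.8, 0.4, 0.0])
```

For the node classification experiments, `eps` would simply be the natural numbers 0, 1, 2, . . . up to the first budget with zero certified accuracy.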

C.1.1 COMPUTING CERTIFIED ACCURACY

The three different types of collective certificates considered in our experiments each require a different procedure for computing the certified accuracy. In the following, let Z = {n ∈ {1, . . . , D_out} | f_n(x) = ŷ_n} be the indices of correct predictions, given an input x.

Naïve collective certificate. The naïve collective certificate certifies each prediction independently. Let H^(n) be the set of perturbed inputs y_n is certifiably robust to (see Definition 3.1) and let B_x be the collective perturbation model. Then L = {n ∈ {1, . . . , D_out} | B_x ⊆ H^(n)} is the set of all certifiably robust predictions. The certified accuracy can be computed as |L ∩ Z| / D_out.

Center smoothing. Center smoothing used for collective robustness certification does not determine which predictions are robust, but only the number of robust predictions. We therefore have to make the worst-case assumption that the correct predictions are the first to be changed by the adversary. Let l be the number of certifiably robust predictions. The certified accuracy can then be computed as max(0, |Z| − (D_out − l)) / D_out.

Collective certificate. Let l(T) be the optimal value of our collective certificate for the set of targeted nodes T. Then the certified accuracy can be computed as l(T) / D_out with T = Z.
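The three procedures above can be sketched in a few lines (hypothetical helper names; index sets are represented as Python sets):

```python
def certacc_naive(robust, correct, d_out):
    """Naive certificate: robust predictions L are known individually,
    so the certified accuracy is |L intersect Z| / D_out."""
    return len(robust & correct) / d_out

def certacc_center(num_robust, correct, d_out):
    """Center smoothing: only the *number* l of robust predictions is
    known, so assume the adversary changes correct predictions first."""
    return max(0, len(correct) - (d_out - num_robust)) / d_out

def certacc_collective(l_of_T, d_out):
    """Collective certificate with T = Z: the optimal value l(T) already
    counts robust *correct* predictions."""
    return l_of_T / d_out
```

For example, with D_out = 4, robust predictions {0, 1, 2} and correct predictions {1, 2, 3}, the naïve certified accuracy is 2/4; center smoothing with l = 3 robust predictions also yields max(0, 3 − 1)/4.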

C.2 SEMANTIC SEGMENTATION

Here, we provide all parameters of our experiments on image segmentation.

Models. As base models for the semantic segmentation tasks, we use U-Net (Ronneberger et al., 2015) and DeepLabv3 (Chen et al., 2017) segmentation heads with a ResNet-18 (He et al., 2016) backbone, as implemented by the Pytorch Segmentation Models library (version 0.13) (Yakubovskiy, 2020). We use the library's default parameters. In particular, the inputs to the U-Net segmentation head are the features of the ResNet model after the first convolutional layer and after each ResNet block (i.e. after every fourth of the subsequent layers). The U-Net segmentation head uses (starting with the original resolution) 16, 32, 64, 128 and 256 convolutional filters for processing the features at the different scales. For the DeepLabv3 segmentation head, we use all default parameters from Chen et al. (2017) and an output stride of 16. To avoid dimension mismatches in the segmentation head, all input images are zero-padded to a height and width that is the next multiple of 32.

Data and preprocessing. We evaluate our certificates on the Pascal-VOC 2012 and Cityscapes segmentation validation sets. We do not use the test sets, because evaluating metrics like the certified accuracy requires access to the ground-truth labels. For training the U-Net models on Pascal, we use the 10582 Pascal segmentation masks extracted from the SBD dataset (Hariharan et al., 2011) (referred to as "Pascal trainaug" or "Pascal augmented training set" in other papers). SBD uses a different data split than the official Pascal-VOC 2012 segmentation dataset. We avoid data leakage by removing all training images that appear in the validation set. For training the DeepLabv3 model on Cityscapes, we use the default training set.
We downscale both the training and the validation images and ground-truth masks to 50% of their original height and width, so that we can use larger batch sizes and thus use our compute time to more thoroughly evaluate a larger range of different smoothing distributions. The segmentation masks are downscaled using nearest-neighbor interpolation; the images are downscaled using the INTER_AREA operation implemented in OpenCV (Bradski, 2000).

Training and data augmentation. We initialize our model weights using the weights provided by the Pytorch Segmentation Models library, which were obtained by pre-training on ImageNet. We train our models for 512 epochs, using Dice loss and Adam(lr = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 10^-8, weight decay = 0). We use a batch size of 128 for Pascal-VOC and a batch size of 32 for Cityscapes. Every 8 epochs, we compute the mean IOU on the validation set. After training, we use the model that achieved the highest validation mean IOU. We apply the following train-time augmentations: With 50% probability, each image is randomly scaled by a factor from [1, 2.0] using the ShiftScaleRotate augmentation implemented by the Albumentations library (version 0.5.2) (Buslaev et al., 2020). The images are then cropped to a fixed size of 160 × 256 (for Pascal-VOC) or 384 × 384 (for Cityscapes). Where necessary, the images are padded with zeros. Padded parts of the segmentation mask are ignored by the loss function. After these operations, each input is randomly perturbed using Gaussian noise. For isotropic smoothing, we use a fixed standard deviation σ_iso ∈ {0, 0.01, . . . , 0.5}, i.e. we train 51 different models on different isotropic smoothing distributions. For localized smoothing with grid shape H × W and parameters (σ_min, σ_max), we perform localized smoothing with a single sample per image. Since this generates H · W times as many perturbed images, we perform gradient accumulation, processing 1 / (H · W) of each batch at a time.
All samples are clipped to [0, 1] to retain valid RGB values.

Certification. For Pascal-VOC, we evaluate all certificates on the first 100 images from the validation set that, after downscaling, have a resolution of 166 × 250. For Cityscapes, we use every tenth image from the validation set. For all certificates, we use Monte Carlo randomized smoothing (see discussion in § G). We set the significance parameter α to 0.01, i.e. all certificates hold with probability 0.99. For the center smoothing baseline, we use the default parameters suggested by the authors (∆ = 0.05, β = 2, α1 = α2). For the naïve isotropic randomized smoothing baseline and for localized smoothing, we use Holm correction to account for the multiple comparisons problem, which yields strictly better results than Bonferroni correction (see § G.4). For our localized smoothing distribution, we partition the input image into a regular grid of size H × W (specified in the different paragraphs of § 7.1) and define a minimum standard deviation σ_min and a maximum standard deviation σ_max. Let J^(k,l) be the set of all pixel coordinates in grid cell (k, l). To smooth outputs in grid cell (i, j), we use a smoothing distribution N(0, diag(σ)) with

    ∀k ∈ {1, . . . , H}, l ∈ {1, . . . , W}, d ∈ J^(k,l):  σ_d = σ_min + (σ_max − σ_min) · max(|i − k|, |j − l|) / W,

i.e. we linearly interpolate between σ_min and σ_max based on the l∞ distance of grid cells (i, j) and (k, l). All results are reported for the relaxed linear programming formulation of our collective certificate (see § E.4). The collective linear program is solved using MOSEK (version 9.2.46) (MOSEK ApS, 2019) through the CVXPY interface (version 1.1.13) (Diamond & Boyd, 2016).

C.3 NODE CLASSIFICATION

Model. We test two different models: 2-layer APPNP (Klicpera et al., 2019) and 6-layer GCN (Kipf & Welling, 2017). For both models, we use a hidden size of 64 and dropout with a probability of 0.5. For the propagation step of APPNP, we use 10 iterations and a teleport probability of 0.15.

Data and preprocessing.
We evaluate our approach on the Cora-ML and Citeseer node classification datasets. We perform standard preprocessing, i.e., remove self-loops, make the graph undirected and select the largest connected component. We use the same data split as in (Schuchardt et al., 2021), i.e. 20 nodes per class for the train and validation sets.

Training and data augmentation. All models are trained with a learning rate of 0.001 and weight decay of 0.001. The models we use for sparse smoothing are trained with the noise distribution that is also used for certification. The localized smoothing models are trained with their minimal noise level, i.e., not with localized noise but only with θ+_min and θ−_min.

Certification. We evaluate our certificates on the validation nodes. For all certificates, we use Monte Carlo randomized smoothing (see discussion in § G). We use 1000 samples for making smoothed predictions and 5 · 10^5 samples for certification. We set the significance parameter α to 0.01, i.e. all certificates hold with probability 0.99. For the naïve isotropic randomized smoothing baseline, we use Holm correction to account for the multiple comparisons problem, which yields strictly better results than Bonferroni correction (see § G.4). For our localized smoothing certificates, we use Bonferroni correction. To parameterize the localized smoothing distribution, we first perform Metis clustering (Karypis & Kumar, 1998) to partition the graph into 5 clusters. We create an affinity ranking by counting the number of edges connecting clusters i and j. Specifically, let C be the set of clusters given by the Metis clustering. We count the number of edges between all cluster pairs and denote it by N_i,j for i, j ∈ C. If the number of edges for the pair (i, j) is higher than for all other pairs (k, j), i.e. N_i,j > N_k,j ∀k ∈ C, we can say that, due to the homophily assumption, cluster i is the most important one for cluster j.
We create this ranking for all pairs and use it to select the noise parameter θ′− for smoothing the attributes of cluster j while classifying a node of cluster i: based on cluster j's rank, we pick one of the discrete steps of the linear interpolation between θ_min and θ_max. For example, given 11 clusters, θ_min = 0.0 and θ_max = 1.0: if cluster j is the second most important cluster for i, we take the second value out of {0.0, 0.1, . . . , 1.0}. All results are reported for the relaxed linear programming formulation of our collective certificate (see § E.4). For each cluster, we use 1/5 of the samples, which corresponds to 200 samples for prediction and 10^5 samples for certification. The collective linear program is solved using MOSEK (version 9.2.46) (MOSEK ApS, 2019) through the CVXPY interface (version 1.1.13) (Diamond & Boyd, 2016).
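The affinity-based assignment of deletion probabilities described above can be sketched as follows (a minimal illustration with hypothetical names; `edge_counts[i][j]` stands for N_i,j):

```python
import numpy as np

def cluster_noise_levels(edge_counts, theta_min, theta_max):
    """For each output cluster i, rank all clusters j by the number of
    edges connecting them to i (more edges = more important under the
    homophily assumption) and assign the deletion probability
    theta'-(i, j) by linear interpolation between theta_min (most
    important cluster) and theta_max (least important cluster)."""
    C = len(edge_counts)
    steps = np.linspace(theta_min, theta_max, C)
    theta = np.empty((C, C))
    for i in range(C):
        # descending affinity: most strongly connected cluster first
        ranking = np.argsort(-np.asarray(edge_counts[i]))
        for rank, j in enumerate(ranking):
            theta[i, j] = steps[rank]
    return theta
```

With 11 clusters, θ_min = 0.0 and θ_max = 1.0, `steps` is exactly {0.0, 0.1, . . . , 1.0}, matching the example in the text: the second most important cluster receives the second value, 0.1.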

C.4 HARDWARE AND RUNTIME

The experiments on Pascal-VOC with strictly local models (Fig. 2) were performed using a Xeon E5-2630 v4 CPU @ 2.20GHz, an NVIDIA GTX 1080TI GPU and 128 GB of RAM. All other experiments were performed using an AMD EPYC 7543 CPU @ 2.80GHz, an NVIDIA A100 GPU and 128 GB of RAM. In all cases, the time needed for obtaining the Monte Carlo samples required by both localized and isotropic smoothing was much larger than the cost of solving the collective linear program.
• For the strictly local model in Fig. 2, taking 153600 samples took 294 s on average. Averaged over all images and adversarial budgets, solving each LP only took 0.91 s.
• For the standard U-Net model in Fig. 4, taking 153600 samples took 70.3 s on average. Each LP took 1.8 s on average.
• For the DeepLabv3 model in Fig. 6b, taking 153600 samples took 1204 s on average. Each LP took 2.78 s on average.
• For the APPNP model in Fig. 5, taking 5 · 10^6 samples took 1034 s on average. Each LP took 10.9 s on average.
For graphs, the reported time for solving a single instance of the collective linear program is much higher than for image segmentation, even though the graph datasets require fewer variables. That is because we used a different, not as well vectorized formulation of the linear program in CVXPY. In all cases, the time for calculating the isotropic smoothing certificates and base certificates from the Monte Carlo samples was too small to be measured accurately, since they can be implemented in a few simple vector operations.

D PROOF OF THEOREM 4.2

In the following, we prove Theorem 4.2, i.e. we derive the mixed-integer linear program that underlies our collective certificate and prove that it provides a valid bound on the number of simultaneously robust predictions. The derivation bears some semblance to that of Schuchardt et al. (2021), in that both use standard techniques to model indicator functions using binary variables and that both convert optimization in input space to optimization in adversarial-budget space. Nevertheless, both methods differ in how they encode and evaluate base certificates, ultimately leading to significantly different results (our method encodes each base certificate using only a single linear constraint and does not perform any masking operations).

Theorem 4.2. Given locally smoothed model f, input x ∈ X^(D_in), smoothed prediction y = f(x) and base certificates H^(1), . . . , H^(D_out) complying with interface Eq. 2, the number of simultaneously robust predictions min_{x' ∈ B_x} Σ_{n ∈ T} I[f_n(x') = y_n] is lower-bounded by

    min_{b ∈ R^{D_in}_+, t ∈ {0,1}^{D_out}}  Σ_{n ∈ T} t_n     (8)
    s.t.  ∀n: b^T w^(n) ≥ (1 − t_n) η^(n),   sum{b} ≤ ϵ^p.     (9)

Proof. We begin by inserting the definition of our perturbation model B_x and the base certificates H^(n) into Eq. 1:

    min_{x' ∈ B_x} Σ_{n ∈ T} I[f_n(x') = y_n]  ≥  min_{x' ∈ B_x} Σ_{n ∈ T} I[x' ∈ H^(n)]     (10)
    =  min_{x' ∈ X^{D_in}} Σ_{n ∈ T} I[Σ_{d=1}^{D_in} w^(n)_d · |x'_d − x_d|^p < η^(n)]
       s.t.  Σ_{d=1}^{D_in} |x'_d − x_d|^p ≤ ϵ^p.     (11)

Evidently, input x' only affects the elementwise distances |x'_d − x_d|^p. Rather than optimizing x', we can directly optimize these distances, i.e. determine how much adversarial budget is allocated to each input dimension. For this, we define a vector of variables b ∈ R^{D_in}_+ (or b ∈ {0,1}^{D_in} for binary data). Replacing sums with inner products, we can restate Eq. 11 as

    min_{b ∈ R^{D_in}_+} Σ_{n ∈ T} I[b^T w^(n) < η^(n)]   s.t.  sum{b} ≤ ϵ^p.     (12)

In a final step, we replace the indicator functions in Eq. 12 with a vector of boolean variables t ∈ {0,1}^{D_out}.
min b∈R D in + ,t∈{0,1} D out n∈T t n (13) s.t. ∀n : b T w (n) ≥ (1 -t n )η (n) , sum{b} ≤ ϵ p . ( ) The first constraint in Eq. 5 ensures that t n = 0 ⇐⇒ I b T w (n) ≥ η (n) . Therefore, the optimization problem in Eq. 13 and Eq. 5 is equivalent to Eq. 12, which by transitivity is a lower bound on min x ′ ∈Bx n∈T I [f n (x ′ ) = y n ].
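To make the structure of this program concrete, the following sketch assembles and solves the MILP from Eq. 8 for a toy instance. It assumes SciPy (≥ 1.9) is available; the names `collective_certificate`, `w`, `eta` and `eps_p` are our own shorthands for $w^{(n)}$, $\eta^{(n)}$ and $\epsilon^p$, not part of the paper's code.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

def collective_certificate(w, eta, eps_p):
    """Lower-bound the number of simultaneously robust predictions (Eq. 8).

    w: (D_out, D_in) matrix of base certificate weights w^(n).
    eta: (D_out,) vector of base certificate parameters eta^(n).
    eps_p: scalar adversarial budget epsilon^p.
    """
    d_out, d_in = w.shape
    # Variables z = [b_1..b_Din, t_1..t_Dout]; objective sum_n t_n.
    c = np.concatenate([np.zeros(d_in), np.ones(d_out)])
    # b^T w^(n) >= (1 - t_n) eta^(n)  <=>  w^(n).b + eta^(n) t_n >= eta^(n)
    robustness = LinearConstraint(np.hstack([w, np.diag(eta)]), lb=eta)
    # Total budget constraint: sum{b} <= eps^p.
    budget = LinearConstraint(
        np.concatenate([np.ones(d_in), np.zeros(d_out)])[None, :], ub=eps_p)
    # b continuous and non-negative, t binary.
    integrality = np.concatenate([np.zeros(d_in), np.ones(d_out)])
    bounds = Bounds(lb=0.0,
                    ub=np.concatenate([np.full(d_in, np.inf), np.ones(d_out)]))
    res = milp(c=c, constraints=[robustness, budget],
               integrality=integrality, bounds=bounds)
    return int(round(res.fun))

# Two predictions, each only sensitive to one of two input dimensions.
w = np.array([[1.0, 0.0], [0.0, 1.0]])
eta = np.array([1.0, 1.0])
print(collective_certificate(w, eta, eps_p=1.0))  # budget suffices to break one prediction
```

With budget $\epsilon^p = 1$ the adversary can spend the entire budget on one dimension and break one prediction, so one prediction remains certifiably robust.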

E IMPROVING EFFICIENCY

In this section, we discuss different modifications to our collective certificate that improve its sample efficiency and grant fine-grained control over the size of the collective linear program. We further discuss a linear relaxation of the collective linear program. All of these modifications preserve the soundness of our collective certificate, i.e. we still obtain a provable bound on the number of predictions that can be simultaneously attacked by an adversary. To avoid constant case distinctions, we first present all results for real-valued data, i.e. $\mathbb{X} = \mathbb{R}$, before discussing any additional precautions needed when working with binary data.

E.1 SHARING SMOOTHING DISTRIBUTIONS AMONG OUTPUTS

In principle, our proposed certificate allows a different smoothing distribution $\Psi^{(n)}_x$ to be used for each output $g_n$ of our base model. In practice, where we have to estimate properties of the smoothed classifier using Monte Carlo methods, this is problematic: samples cannot be re-used, and each of the many outputs requires its own round of sampling. We can increase the efficiency of our localized smoothing approach by partitioning our $D_\mathrm{out}$ outputs into $N_\mathrm{out}$ subsets that share the same smoothing distribution. When making smoothed predictions or computing base certificates, we can then reuse the same samples for all outputs within each subset. More formally, we partition our $D_\mathrm{out}$ output dimensions into sets $K^{(1)}, \dots, K^{(N_\mathrm{out})}$ with $\dot\bigcup_{i=1}^{N_\mathrm{out}} K^{(i)} = \{1, \dots, D_\mathrm{out}\}$. We then associate each set $K^{(i)}$ with a smoothing distribution $\Psi^{(i)}_x$. For each base model output $g_n$ with $n \in K^{(i)}$, we then use smoothing distribution $\Psi^{(i)}_x$ to construct the smoothed output $f_n$, e.g. $f_n(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \Pr_{z \sim \Psi^{(i)}_x}[g_n(z) = y]$ (note that for our variance-constrained certificate we smooth the softmax scores instead, see § 5).
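The sample-sharing scheme can be sketched as follows. This is a minimal stdlib-only illustration, not the paper's implementation: `smooth_predictions` is a hypothetical helper, the base model is a toy function, and we use isotropic Gaussian noise per subset for simplicity.

```python
import random

def smooth_predictions(x, base_model, partition, sigmas, n_samples=1000, n_classes=2):
    """Reuse one round of Monte Carlo samples for all outputs in a subset.

    partition: list of output-index subsets K^(1), ..., K^(N_out).
    sigmas: one noise scale per subset.
    base_model(z) returns a list with one class label per output dimension.
    """
    d_out = sum(len(k) for k in partition)
    votes = [[0] * n_classes for _ in range(d_out)]
    for subset, sigma in zip(partition, sigmas):
        # One sampling round per subset K^(i) instead of one per output.
        for _ in range(n_samples):
            z = [x_d + random.gauss(0.0, sigma) for x_d in x]
            labels = base_model(z)
            for n in subset:  # reuse the same sample for every n in K^(i)
                votes[n][labels[n]] += 1
    return [max(range(n_classes), key=lambda y: votes[n][y]) for n in range(d_out)]

# Toy base model: output n predicts whether input dimension n is positive.
model = lambda z: [int(z_d > 0) for z_d in z]
print(smooth_predictions([2.0, -2.0], model, partition=[[0], [1]], sigmas=[0.5, 0.5]))
# -> [1, 0]
```

Only $N_\mathrm{out}$ sampling rounds are performed rather than $D_\mathrm{out}$, which is where the efficiency gain comes from.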

E.2 QUANTIZING CERTIFICATE PARAMETERS

Recall that our base certificates from § 5 are defined by a linear inequality: a prediction $y_n = f_n(x)$ is robust to a perturbed input $x' \in \mathbb{X}^{D_\mathrm{in}}$ if $\sum_{d=1}^{D_\mathrm{in}} w^{(n)}_d \cdot |x'_d - x_d|^p < \eta^{(n)}$, for some $p \geq 0$. The weight vectors $w^{(n)} \in \mathbb{R}^{D_\mathrm{in}}$ only depend on the smoothing distributions. A side effect of sharing the same distribution $\Psi^{(i)}_x$ among all outputs from a set $K^{(i)}$, as discussed in the previous section, is that these outputs also share the same weight vector $w^{(i)} \in \mathbb{R}^{D_\mathrm{in}}$ with $\forall n \in K^{(i)}: w^{(i)} = w^{(n)}$. Thus, for all smoothed outputs $f_n$ with $n \in K^{(i)}$, the smoothed prediction $y_n$ is robust if $\sum_{d=1}^{D_\mathrm{in}} w^{(i)}_d \cdot |x'_d - x_d|^p < \eta^{(n)}$. Evidently, the base certificates for outputs from a set $K^{(i)}$ only differ in their parameter $\eta^{(n)}$.

Recall that in our collective linear program we use a vector of variables $t \in \{0,1\}^{D_\mathrm{out}}$ to indicate which predictions are robust according to their base certificates (see Theorem 4.2). If there are two outputs $f_n$ and $f_m$ with $\eta^{(n)} = \eta^{(m)}$, then $f_n$ and $f_m$ have the same base certificate and their robustness can be modelled by the same indicator variable. Consequently, for each set of outputs $K^{(i)}$, we only need one indicator variable per unique $\eta^{(n)}$. By quantizing the $\eta^{(n)}$ within each subset $K^{(i)}$ (for example by defining equally sized bins between $\min_{n \in K^{(i)}} \eta^{(n)}$ and $\max_{n \in K^{(i)}} \eta^{(n)}$), we can ensure that there is always a fixed number $N_\mathrm{bins}$ of indicator variables per subset. This way, we can reduce the number of indicator variables from $D_\mathrm{out}$ to $N_\mathrm{out} \cdot N_\mathrm{bins}$.

To implement this idea, we define a matrix of thresholds $E \in \mathbb{R}^{N_\mathrm{out} \times N_\mathrm{bins}}$ with $\forall i: \min\{E_{i,:}\} \leq \min\left\{\eta^{(n)} \mid n \in K^{(i)}\right\}$. We then define a function $\xi: \{1, \dots, N_\mathrm{out}\} \times \mathbb{R} \to \mathbb{R}$ with

$$\xi(i, \eta) = \max\left(\left\{E_{i,j} \mid j \in \{1, \dots, N_\mathrm{bins}\} \land E_{i,j} \leq \eta\right\}\right)$$

that quantizes the base certificate parameter $\eta$ from output subset $K^{(i)}$ by mapping it to the next smallest threshold in $E_{i,:}$.
We can then bound the collective robustness of the targeted dimensions $\mathbb{T}$ of our prediction vector $y = f(x)$ as follows:

$$\min_{b, T} \sum_{i=1}^{N_\mathrm{out}} \sum_{j=1}^{N_\mathrm{bins}} T_{i,j} \cdot \left|\left\{n \in \mathbb{T} \cap K^{(i)} \mid \xi\left(i, \eta^{(n)}\right) = E_{i,j}\right\}\right| \quad (17)$$

$$\text{s.t. } \forall i, j: b^T w^{(i)} \geq (1 - T_{i,j}) E_{i,j}, \quad \mathrm{sum}\{b\} \leq \epsilon^p, \quad (18)$$

$$b \in \mathbb{R}^{D_\mathrm{in}}_+, \quad T \in \{0,1\}^{N_\mathrm{out} \times N_\mathrm{bins}}. \quad (19)$$

Constraint Eq. 18 ensures that $T_{i,j}$ can only be set to $0$ if $b^T w^{(i)} \geq E_{i,j}$, i.e. if all predictions from subset $K^{(i)}$ whose base certificate parameter $\eta^{(n)}$ is quantized to $E_{i,j}$ are no longer robust. When this is the case, the objective function decreases by the number of these predictions. For $N_\mathrm{out} = D_\mathrm{out}$, $N_\mathrm{bins} = 1$ and $E_{n,1} = \eta^{(n)}$, we recover our general certificate from Theorem 4.2. Note that, if the quantization maps any parameter $\eta^{(n)}$ to a smaller number, the base certificate $H^{(n)}$ becomes more restrictive, i.e. $y_n$ is considered robust to a smaller set of perturbed inputs. Thus, Eq. 17 is a lower bound on our general certificate from Theorem 4.2.
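The quantization step itself is simple. Below is a stdlib-only sketch for a single output subset; `make_thresholds` and `xi` are hypothetical helper names for $E_{i,:}$ and $\xi$, and we use equally sized bins as suggested above.

```python
def make_thresholds(etas, n_bins):
    """Equally sized bins between the subset's smallest and largest eta (row E_{i,:})."""
    lo, hi = min(etas), max(etas)
    return [lo + j * (hi - lo) / n_bins for j in range(n_bins)]

def xi(thresholds, eta):
    """Quantize eta to the next smallest threshold; rounding down is conservative."""
    return max(e for e in thresholds if e <= eta)

etas = [0.30, 0.52, 0.55, 0.91]       # base certificate parameters in one subset
E = make_thresholds(etas, n_bins=2)   # two thresholds instead of four parameters
quantized = [xi(E, eta) for eta in etas]
print(len(set(quantized)))            # -> 2: four etas collapse onto 2 indicator variables
```

Rounding each $\eta^{(n)}$ down only ever shrinks the certified region, so the resulting bound remains sound.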

E.3 SHARING NOISE LEVELS AMONG INPUTS

Similar to how partitioning the output dimensions allows us to control the number of output variables $t$, partitioning the input dimensions and using the same noise level within each partition allows us to control the number of budget variables $b$. Assume that we have partitioned our output dimensions into $N_\mathrm{out}$ subsets $K^{(1)}, \dots, K^{(N_\mathrm{out})}$, with outputs in each subset sharing the same smoothing distribution $\Psi^{(i)}_x$, as explained in § E.1. Let us now define $N_\mathrm{in}$ input subsets $J^{(1)}, \dots, J^{(N_\mathrm{in})}$ with

$$\dot\bigcup_{l=1}^{N_\mathrm{in}} J^{(l)} = \{1, \dots, D_\mathrm{in}\}. \quad (20)$$

Recall that a prediction $y_n = f_n(x)$ with $n \in K^{(i)}$ is robust to a perturbed input $x' \in \mathbb{X}^{D_\mathrm{in}}$ if $\sum_{d=1}^{D_\mathrm{in}} w^{(i)}_d \cdot |x'_d - x_d|^p < \eta^{(n)}$, and that the weight vectors $w^{(i)}$ only depend on the smoothing distributions. Assume that we choose each smoothing distribution $\Psi^{(i)}_x$ such that $\forall l \in \{1, \dots, N_\mathrm{in}\}, \forall d, d' \in J^{(l)}: w^{(i)}_d = w^{(i)}_{d'}$, i.e. all input dimensions within each set $J^{(l)}$ have the same weight. This can be achieved by choosing $\Psi^{(i)}_x$ such that all dimensions in each input subset $J^{(l)}$ are smoothed with the same noise level (note that we can still use a different smoothing distribution $\Psi^{(i)}_x$ for each set of outputs $K^{(i)}$). For example, one could use a Gaussian distribution with covariance matrix $\Sigma = \mathrm{diag}(\sigma)^2$ with $\forall l \in \{1, \dots, N_\mathrm{in}\}, \forall d, d' \in J^{(l)}: \sigma_d = \sigma_{d'}$. In this case, the evaluation of our base certificates can be simplified: prediction $y_n = f_n(x)$ with $n \in K^{(i)}$ is robust to a perturbed input $x' \in \mathbb{X}^{D_\mathrm{in}}$ if

$$\sum_{d=1}^{D_\mathrm{in}} w^{(i)}_d \cdot |x'_d - x_d|^p < \eta^{(n)} \quad (21)$$

$$\iff \sum_{l=1}^{N_\mathrm{in}} u^{(i)}_l \cdot \sum_{d \in J^{(l)}} |x'_d - x_d|^p < \eta^{(n)}, \quad (22)$$

with $u^{(i)} \in \mathbb{R}^{N_\mathrm{in}}_+$ and $\forall i \in \{1, \dots, N_\mathrm{out}\}, \forall l \in \{1, \dots, N_\mathrm{in}\}, \forall d \in J^{(l)}: u^{(i)}_l = w^{(i)}_d$. That is, we can replace each weight vector $w^{(i)}$, which has one weight per input dimension, with a smaller vector $u^{(i)}$ that has one weight per input subset. Our collective optimization problem becomes

$$\min_{b, T} \sum_{i=1}^{N_\mathrm{out}} \sum_{j=1}^{N_\mathrm{bins}} T_{i,j} \cdot \left|\left\{n \in \mathbb{T} \cap K^{(i)} \mid \xi\left(i, \eta^{(n)}\right) = E_{i,j}\right\}\right| \quad (23)$$

$$\text{s.t. } \forall i, j: b^T u^{(i)} \geq (1 - T_{i,j}) E_{i,j}, \quad \mathrm{sum}\{b\} \leq \epsilon^p, \quad (24)$$

$$b \in \mathbb{R}^{N_\mathrm{in}}_+, \quad T \in \{0,1\}^{N_\mathrm{out} \times N_\mathrm{bins}}. \quad (25)$$

For $N_\mathrm{out} = D_\mathrm{out}$, $N_\mathrm{in} = D_\mathrm{in}$, $N_\mathrm{bins} = 1$ and $E_{n,1} = \eta^{(n)}$, we recover our general certificate from Theorem 4.2. When certifying robustness for binary data, we impose different constraints on $b$: to model that the adversary cannot flip more bits than are present within each subset, we use a budget vector $b \in \mathbb{N}^{N_\mathrm{in}}_0$ with $\forall l \in \{1, \dots, N_\mathrm{in}\}: b_l \leq |J^{(l)}|$, instead of a continuous budget vector $b \in \mathbb{R}^{N_\mathrm{in}}_+$.
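As a concrete sketch of this reduction, the following stdlib-only snippet collapses a per-dimension weight vector $w^{(i)}$ into the per-subset vector $u^{(i)}$. The helper name `reduce_weights` is our own; it assumes, as required above, that noise levels are shared within each input subset.

```python
def reduce_weights(w, input_partition):
    """Collapse a per-dimension weight vector w^(i) into a per-subset vector u^(i).

    Assumes all dimensions within each subset J^(l) share the same weight.
    """
    u = []
    for subset in input_partition:
        weights = {w[d] for d in subset}
        assert len(weights) == 1, "noise levels must be shared within each subset"
        u.append(weights.pop())
    return u

w = [0.5, 0.5, 0.5, 2.0, 2.0]   # one weight per input dimension
J = [[0, 1, 2], [3, 4]]          # input subsets J^(1), J^(2)
print(reduce_weights(w, J))      # -> [0.5, 2.0]
```

The number of budget variables in the linear program then drops from $D_\mathrm{in} = 5$ to $N_\mathrm{in} = 2$.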

E.4 LINEAR RELAXATION

Combining the previous steps allows us to reduce the number of problem variables and linear constraints from $D_\mathrm{in} + D_\mathrm{out}$ and $D_\mathrm{out} + 1$ to $N_\mathrm{in} + N_\mathrm{out} \cdot N_\mathrm{bins}$ and $N_\mathrm{out} \cdot N_\mathrm{bins} + 1$, respectively. Still, finding an optimal solution to the mixed-integer linear program may be too expensive. One can obtain a lower bound on the optimal value, and thus a valid, albeit more pessimistic, robustness certificate, by relaxing all discrete variables to be continuous. When using the general certificate from Theorem 4.2, the binary vector $t \in \{0,1\}^{D_\mathrm{out}}$ can be relaxed to $t \in [0,1]^{D_\mathrm{out}}$. When using the certificate with quantized base certificate parameters from § E.2 or § E.3, the binary matrix $T \in \{0,1\}^{N_\mathrm{out} \times N_\mathrm{bins}}$ can be relaxed to $T \in [0,1]^{N_\mathrm{out} \times N_\mathrm{bins}}$. Conceptually, this means that predictions can be partially certified, i.e. $t_n \in (0,1)$ or $T_{i,j} \in (0,1)$. In particular, a prediction can be partially certified even if we know that it is impossible to attack under the collective perturbation model $B_x = \left\{x' \in \mathbb{X}^{D_\mathrm{in}} \mid ||x' - x||_p \leq \epsilon\right\}$. Just like Schuchardt et al. (2021), who encountered the same problem with their collective certificate, we circumvent this issue by first computing a set $\mathbb{L} \subseteq \mathbb{T}$ of all targeted predictions in $\mathbb{T}$ that are guaranteed to always be robust under the collective perturbation model:

$$\mathbb{L} = \left\{n \in \mathbb{T} \mid \max_{x' \in B_x} \sum_{d=1}^{D_\mathrm{in}} w^{(n)}_d \cdot |x'_d - x_d|^p < \eta^{(n)}\right\} \quad (26)$$

$$= \left\{n \in \mathbb{T} \mid \max_d \left(w^{(n)}_d\right) \cdot \epsilon^p < \eta^{(n)}\right\}. \quad (27)$$

The equality follows from the fact that the most effective way of attacking a single prediction is to allocate all adversarial budget to the least robust dimension, i.e. the dimension with the largest weight. Because we know that all predictions with indices in $\mathbb{L}$ are robust, we do not have to include them in the collective optimization problem and can instead compute

$$|\mathbb{L}| + \min_{x' \in B_x} \sum_{n \in \mathbb{T} \setminus \mathbb{L}} \mathbb{I}\left[x' \in H^{(n)}\right]. \quad (28)$$

The r.h.s. optimization can be solved using the general collective certificate from Theorem 4.2 or any of the more efficient, modified certificates from the previous sections. When using the general collective certificate from Theorem 4.2 with binary data, the budget variables $b \in \{0,1\}^{D_\mathrm{in}}$ can be relaxed to $b \in [0,1]^{D_\mathrm{in}}$. When using the modified collective certificate from § E.3, the budget variables $b \in \mathbb{N}^{N_\mathrm{in}}_0$ can be relaxed to $b \in \mathbb{R}^{N_\mathrm{in}}_+$. The additional constraint $\forall l \in \{1, \dots, N_\mathrm{in}\}: b_l \leq |J^{(l)}|$ can be kept in order to model that the adversary cannot flip (or partially flip) more bits than are present within each input subset $J^{(l)}$.
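The pre-certified set $\mathbb{L}$ has the closed-form test $\max_d w^{(n)}_d \cdot \epsilon^p < \eta^{(n)}$, so it is cheap to compute before solving the relaxed program. A stdlib-only sketch (the helper name `always_robust` is ours):

```python
def always_robust(weights, etas, eps, p):
    """Return the set L of predictions that no budget allocation can break.

    weights: list of per-prediction weight vectors w^(n); etas: parameters eta^(n).
    A prediction is in L if eps^p * max_d w^(n)_d < eta^(n).
    """
    budget = eps ** p
    return {n for n, (w, eta) in enumerate(zip(weights, etas))
            if budget * max(w) < eta}

weights = [[0.1, 0.4], [1.0, 0.2], [0.3, 0.3]]
etas = [1.0, 1.5, 0.5]
print(always_robust(weights, etas, eps=2.0, p=1))  # -> {0}
```

Only the remaining predictions (here indices 1 and 2) need to enter the relaxed linear program, which both shrinks the problem and avoids the partial-certification artifact described above.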

F BASE CERTIFICATES

In the following, we show why the base certificates discussed in § 5 and summarized in Table 1 hold. In § F.3.2 we further present a base certificate (and corresponding collective certificate) that can distinguish between adversarial addition and deletion of bits in binary data.

F.1 GAUSSIAN SMOOTHING FOR l 2 PERTURBATIONS OF CONTINUOUS DATA

Proposition F.1. Given an output $g_n: \mathbb{R}^{D_\mathrm{in}} \to \mathbb{Y}$, let $f_n(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \Pr_{z \sim \mathcal{N}(x, \Sigma)}[g_n(z) = y]$ be the corresponding smoothed classifier with $\Sigma = \mathrm{diag}(\sigma)^2$ and $\sigma \in \mathbb{R}^{D_\mathrm{in}}_+$. Given an input $x \in \mathbb{R}^{D_\mathrm{in}}$ and smoothed prediction $y_n = f_n(x)$, let $q = \Pr_{z \sim \mathcal{N}(x, \Sigma)}[g_n(z) = y_n]$. Then, $\forall x' \in H^{(n)}: f_n(x') = y_n$ with $H^{(n)}$ defined as in Eq. 2, $w_d = \frac{1}{\sigma_d^2}$, $\eta = \left(\Phi^{-1}(q)\right)^2$ and $p = 2$.

Proof. Based on the definition of the base certificate interface, we need to show that $\forall x' \in H: f_n(x') = y_n$ with

$$H = \left\{x' \in \mathbb{R}^{D_\mathrm{in}} \mid \sum_{d=1}^{D_\mathrm{in}} \frac{1}{\sigma_d^2} \cdot |x_d - x'_d|^2 < \left(\Phi^{-1}(q)\right)^2\right\}. \quad (29)$$

Eiras et al. (2022) have shown that, under the same conditions as above but with a general covariance matrix $\Sigma \in \mathbb{R}^{D_\mathrm{in} \times D_\mathrm{in}}_+$, a prediction $y_n$ is certifiably robust to a perturbed input $x'$ if

$$\sqrt{(x - x')^T \Sigma^{-1} (x - x')} < \frac{1}{2}\left(\Phi^{-1}(q) - \Phi^{-1}(q')\right), \quad (30)$$

where $q' = \max_{y'_n \neq y_n} \Pr_{z \sim \mathcal{N}(x, \Sigma)}[g_n(z) = y'_n]$ is the probability of the second most likely prediction under the smoothing distribution. Because the probabilities of all possible predictions have to sum up to $1$, we have $q' \leq 1 - q$. Since $\Phi^{-1}$ is monotonically increasing, we can obtain a lower bound on the r.h.s. of Eq. 30, and thus a more pessimistic certificate, by substituting $1 - q$ for $q'$ (deriving such a "binary certificate" from a "multiclass certificate" is common in randomized smoothing and was already discussed in (Cohen et al., 2019)):

$$\sqrt{(x - x')^T \Sigma^{-1} (x - x')} < \frac{1}{2}\left(\Phi^{-1}(q) - \Phi^{-1}(1 - q)\right). \quad (31)$$

In our case, $\Sigma$ is a diagonal matrix $\mathrm{diag}(\sigma)^2$ with $\sigma \in \mathbb{R}^{D_\mathrm{in}}_+$. Thus, Eq. 31 is equivalent to

$$\sqrt{\sum_{d=1}^{D_\mathrm{in}} (x_d - x'_d) \frac{1}{\sigma_d^2} (x_d - x'_d)} < \frac{1}{2}\left(\Phi^{-1}(q) - \Phi^{-1}(1 - q)\right). \quad (32)$$

Finally, using the fact that $\Phi^{-1}(q) - \Phi^{-1}(1 - q) = 2\Phi^{-1}(q)$ and eliminating the square root shows that we are certifiably robust if

$$\sum_{d=1}^{D_\mathrm{in}} \frac{1}{\sigma_d^2} \cdot |x_d - x'_d|^2 < \left(\Phi^{-1}(q)\right)^2. \quad (33)$$

□

Table 1: Base certificates complying with interface Eq. 2 with parameters $w^{(n)}$ and $\eta^{(n)}$.
Here, $y_n = f_n(x)$ is the prediction of $f_n(x) = \mathrm{argmax}_{y \in \mathbb{Y}} q_{n,y}$. With the $l_0$ certificate, $g_n(z)_y$ refers to the softmax score of class $y$ and $\zeta = \mathrm{Var}_{z \sim F(x, \theta)}[g_n(z)_{y_n}]$ is the variance of $y_n$'s softmax score.

Norm | $\Psi^{(n)}_x$ | $q_{n,y}$ | $w^{(n)}_d$ | $\eta^{(n)}$
$l_2$ | $\mathcal{N}(x, \mathrm{diag}(\sigma)^2)$ | $\Pr_{z \sim \Psi^{(n)}_x}[g_n(z) = y]$ | $\frac{1}{\sigma_d^2}$ | $\left(\Phi^{-1}(q_{n,y_n})\right)^2$
$l_1$ | $U(x, \lambda)$ | $\Pr_{z \sim \Psi^{(n)}_x}[g_n(z) = y]$ | $\frac{1}{\lambda_d}$ | $\Phi^{-1}(q_{n,y_n})$
$l_0$ | $F(x, \theta)$ | $\mathbb{E}_{z \sim \Psi^{(n)}_x}[g_n(z)_y]$ | $\ln\left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right)$ | $\ln\left(1 + \frac{1}{\zeta}\left(q_{n,y_n} - \frac{1}{2}\right)^2\right)$

F.2 UNIFORM SMOOTHING FOR l 1 PERTURBATIONS OF CONTINUOUS DATA

An alternative base certificate for $l_1$ perturbations is again due to Eiras et al. (2022). Using uniform instead of Gaussian noise allows us to collectively certify robustness to $l_1$-norm-bounded perturbations. In the following, $U(x, \lambda)$ with $x \in \mathbb{R}^D$, $\lambda \in \mathbb{R}^D_+$ refers to a vector-valued random distribution in which the $d$-th element is uniformly distributed in $[x_d - \lambda_d, x_d + \lambda_d]$.

Proposition F.2. Given an output $g_n: \mathbb{R}^{D_\mathrm{in}} \to \mathbb{Y}$, let $f_n(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \Pr_{z \sim U(x, \lambda)}[g_n(z) = y]$ be the corresponding smoothed classifier with $\lambda \in \mathbb{R}^{D_\mathrm{in}}_+$. Given an input $x \in \mathbb{R}^{D_\mathrm{in}}$ and smoothed prediction $y_n = f_n(x)$, let $q = \Pr_{z \sim U(x, \lambda)}[g_n(z) = y_n]$. Then, $\forall x' \in H^{(n)}: f_n(x') = y_n$ with $H^{(n)}$ defined as in Eq. 2, $w_d = 1 / \lambda_d$, $\eta = \Phi^{-1}(q)$ and $p = 1$.

Proof. Based on the definition of $H^{(n)}$, we need to prove that $\forall x' \in H: f_n(x') = y_n$ with

$$H = \left\{x' \in \mathbb{R}^{D_\mathrm{in}} \mid \sum_{d=1}^{D_\mathrm{in}} \frac{1}{\lambda_d} \cdot |x_d - x'_d| < \Phi^{-1}(q)\right\}. \quad (34)$$

Eiras et al. (2022) have shown that, under the same conditions as above, a prediction $y_n$ is certifiably robust to a perturbed input $x'$ if

$$\sum_{d=1}^{D_\mathrm{in}} \left|\frac{1}{\lambda_d} \cdot (x_d - x'_d)\right| < \frac{1}{2}\left(\Phi^{-1}(q) - \Phi^{-1}(q')\right), \quad (35)$$

where $q' = \max_{y'_n \neq y_n} \Pr_{z \sim U(x, \lambda)}[g_n(z) = y'_n]$ is the probability of the second most likely prediction under the smoothing distribution. As in our previous proof for Gaussian smoothing, we can obtain a more pessimistic certificate by substituting $1 - q$ for $q'$.
Since $\Phi^{-1}(q) - \Phi^{-1}(1 - q) = 2\Phi^{-1}(q)$ and all $\lambda_d$ are non-negative, we know that our prediction is certifiably robust if $\sum_{d=1}^{D_\mathrm{in}} \frac{1}{\lambda_d} \cdot |x_d - x'_d| < \Phi^{-1}(q)$. □
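Both propositions plug directly into the interface of Eq. 2. The following stdlib-only sketch evaluates the $l_2$ base certificate from Proposition F.1 for an anisotropic noise vector; `gaussian_base_certificate` and `is_certified` are hypothetical helper names, and `statistics.NormalDist` supplies the standard-normal inverse CDF.

```python
from statistics import NormalDist

phi_inv = NormalDist().inv_cdf  # standard-normal inverse CDF

def gaussian_base_certificate(sigma, q):
    """w_d = 1/sigma_d^2, eta = Phi^{-1}(q)^2, p = 2 (Proposition F.1)."""
    return [1.0 / s ** 2 for s in sigma], phi_inv(q) ** 2, 2

def is_certified(x, x_prime, w, eta, p):
    """Check membership in H^(n): sum_d w_d |x'_d - x_d|^p < eta."""
    return sum(w_d * abs(a - b) ** p for w_d, a, b in zip(w, x, x_prime)) < eta

# Anisotropic noise: dimension 0 is smoothed more strongly than dimension 1.
w, eta, p = gaussian_base_certificate(sigma=[2.0, 0.5], q=0.9)
print(is_certified([0.0, 0.0], [2.0, 0.0], w, eta, p))  # shift in heavily smoothed dim: True
print(is_certified([0.0, 0.0], [0.0, 2.0], w, eta, p))  # same shift in other dim: False
```

The example illustrates the localized-smoothing intuition: the same perturbation magnitude is certifiable in a strongly smoothed dimension but not in a weakly smoothed one.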

F.3 VARIANCE-CONSTRAINED CERTIFICATION

In the following, we derive the general variance-constrained randomized smoothing certificate from Theorem 5.1, before discussing specific certificates for binary data in § F.3.1 and § F.3.2. Variance smoothing assumes that we make predictions by randomly smoothing a base model's softmax scores. That is, given a base model $g: \mathbb{X} \to \Delta^{|\mathbb{Y}|}$ mapping from an arbitrary discrete input space $\mathbb{X}$ to scores from the $(|\mathbb{Y}|-1)$-dimensional probability simplex $\Delta^{|\mathbb{Y}|}$, we define the smoothed classifier $f(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \mathbb{E}_{z \sim \Psi(x)}[g(z)_y]$. Here, $\Psi(x)$ is an arbitrary distribution over $\mathbb{X}$ parameterized by $x$, e.g. a normal distribution with mean $x$. The smoothed classifier does not return the most likely prediction, but the prediction associated with the highest expected softmax score.

Given an input $x \in \mathbb{X}$, smoothed prediction $y = f(x)$ and a perturbed input $x' \in \mathbb{X}$, we want to determine whether $f(x') = y$. By definition of our smoothed classifier, we know that $f(x') = y$ if $y$ is the label with the highest expected softmax score. In particular, we know that $f(x') = y$ if $y$'s softmax score is larger than all other softmax scores combined, i.e.

$$\mathbb{E}_{z \sim \Psi(x')}[g(z)_y] > 0.5 \implies f(x') = y. \quad (36)$$

Computing $\mathbb{E}_{z \sim \Psi(x')}[g(z)_y]$ exactly is usually not tractable, especially if we later want to evaluate robustness to many $x'$ from a whole perturbation model $\mathbb{B} \subseteq \mathbb{X}$. Therefore, we compute a lower bound on $\mathbb{E}_{z \sim \Psi(x')}[g(z)_y]$. If even this lower bound is larger than $0.5$, we know that prediction $y$ is certainly robust. For this, we define a set of functions $\mathbb{F}$ with $g_y \in \mathbb{F}$ and compute the minimum softmax score across all functions from $\mathbb{F}$:

$$\min_{h \in \mathbb{F}} \mathbb{E}_{z \sim \Psi(x')}[h(z)] > 0.5 \implies f(x') = y. \quad (37)$$

For our variance smoothing approach, we define $\mathbb{F}$ to be the set of all functions that have a larger or equal expected value and a smaller or equal variance under $\Psi(x)$, compared to our base model $g$. Let $\mu = \mathbb{E}_{z \sim \Psi(x)}[g(z)_y]$ be the expected softmax score of our base model $g$ for label $y$.
Let $\zeta = \mathbb{E}_{z \sim \Psi(x)}\left[(g(z)_y - \nu)^2\right]$ be the expected squared distance of the softmax score from a scalar $\nu \in \mathbb{R}$. (Choosing $\nu = \mu$ yields the variance of the softmax score; an arbitrary $\nu$ is only needed for technical reasons related to Monte Carlo estimation, see § G.2.) Then, we define

$$\mathbb{F} = \left\{h: \mathbb{X} \to \mathbb{R} \mid \mathbb{E}_{z \sim \Psi(x)}[h(z)] \geq \mu \land \mathbb{E}_{z \sim \Psi(x)}\left[(h(z) - \nu)^2\right] \leq \zeta\right\}. \quad (38)$$

Clearly, by the definition of $\mu$ and $\zeta$, we have $g_y \in \mathbb{F}$. Note that we do not restrict functions from $\mathbb{F}$ to the domain $[0,1]$, but allow arbitrary real-valued outputs. By evaluating Eq. 37 with $\mathbb{F}$ defined as in Eq. 38, we can determine whether our prediction is robust. To compute the optimal value, we need the following two lemmata.

Lemma F.3. Given a discrete set $\mathbb{X}$ and the set $\Pi$ of all probability mass functions over $\mathbb{X}$, any two probability mass functions $\pi_1, \pi_2 \in \Pi$ fulfill

$$\sum_{z \in \mathbb{X}} \frac{\pi_2(z)}{\pi_1(z)} \pi_2(z) \geq 1. \quad (40)$$

Proof. For a fixed probability mass function $\pi_1$, Eq. 40 is lower-bounded by the minimal expected likelihood ratio that can be achieved by another $\pi \in \Pi$:

$$\sum_{z \in \mathbb{X}} \frac{\pi_2(z)}{\pi_1(z)} \pi_2(z) \geq \min_{\pi \in \Pi} \sum_{z \in \mathbb{X}} \frac{\pi(z)}{\pi_1(z)} \pi(z). \quad (41)$$

The r.h.s. term can be expressed as the constrained optimization problem

$$\min_{\pi} \sum_{z \in \mathbb{X}} \frac{\pi(z)}{\pi_1(z)} \pi(z) \quad \text{s.t. } \sum_{z \in \mathbb{X}} \pi(z) = 1, \quad (42)$$

with the corresponding dual problem

$$\max_{\lambda \in \mathbb{R}} \min_{\pi} \sum_{z \in \mathbb{X}} \frac{\pi(z)}{\pi_1(z)} \pi(z) + \lambda\left(-1 + \sum_{z \in \mathbb{X}} \pi(z)\right). \quad (43)$$

The inner problem is convex in each $\pi(z)$. Taking the gradient w.r.t. $\pi(z)$ for all $z \in \mathbb{X}$ shows that it attains its minimum at $\forall z \in \mathbb{X}: \pi(z) = -\frac{\lambda \pi_1(z)}{2}$. Substituting into Eq. 43 results in

$$\max_{\lambda \in \mathbb{R}} \sum_{z \in \mathbb{X}} \frac{\lambda^2 \pi_1(z)^2}{4 \pi_1(z)} + \lambda\left(-1 - \sum_{z \in \mathbb{X}} \frac{\lambda \pi_1(z)}{2}\right) \quad (44)$$

$$= \max_{\lambda \in \mathbb{R}} -\lambda^2 \sum_{z \in \mathbb{X}} \frac{\pi_1(z)}{4} - \lambda \quad (45)$$

$$= \max_{\lambda \in \mathbb{R}} -\frac{\lambda^2}{4} - \lambda \quad (46)$$

$$= 1. \quad (47)$$

Eq. 46 follows from the fact that $\pi_1$ is a valid probability mass function. Due to weak duality, the optimal dual value $1$ is a lower bound on the optimal value of the primal problem Eq. 42 and thus, by Eq. 41, on Eq. 40. □

Lemma F.4. Given a probability distribution $\mathcal{D}$ over $\mathbb{R}$ and a scalar $\nu \in \mathbb{R}$, let $\mu = \mathbb{E}_{z \sim \mathcal{D}}[z]$ and $\xi = \mathbb{E}_{z \sim \mathcal{D}}\left[(z - \nu)^2\right]$. Then $\xi \geq (\mu - \nu)^2$.

Proof. By Jensen's inequality, $\xi = \mathbb{E}_{z \sim \mathcal{D}}\left[(z - \nu)^2\right] \geq \left(\mathbb{E}_{z \sim \mathcal{D}}[z - \nu]\right)^2 = (\mu - \nu)^2$. □

We can now evaluate the minimization problem from Eq. 37. Let $\pi_x$ and $\pi_{x'}$ be the probability mass functions of $\Psi(x)$ and $\Psi(x')$, respectively. The problem $\min_{h \in \mathbb{F}} \mathbb{E}_{z \sim \Psi(x')}[h(z)]$ has the Lagrange dual

$$\max_{\alpha, \beta \geq 0} \min_h \alpha\mu - \beta\zeta + \sum_{z \in \mathbb{X}} \left(h(z)\pi_{x'}(z) - \alpha h(z)\pi_x(z) + \beta\left(h(z) - \nu\right)^2 \pi_x(z)\right). \quad (59)$$

Because $\beta \geq 0$, each inner optimization problem is convex in $h(z)$. We can thus find the optimal $h^*(z)$ by setting the derivative to zero:

$$\frac{d}{dh(z)}\left(h(z)\pi_{x'}(z) - \alpha h(z)\pi_x(z) + \beta(h(z) - \nu)^2\pi_x(z)\right) \overset{!}{=} 0 \quad (61)$$

$$\iff \pi_{x'}(z) - \alpha\pi_x(z) + 2\beta(h(z) - \nu)\pi_x(z) \overset{!}{=} 0 \quad (62)$$

$$\implies h^*(z) = -\frac{\pi_{x'}(z)}{2\beta\pi_x(z)} + \frac{\alpha}{2\beta} + \nu. \quad (63)$$

Substituting into Eq. 59 and simplifying leaves us with the dual problem

$$\max_{\alpha, \beta \geq 0} \alpha\mu - \beta\zeta - \frac{\alpha^2}{4\beta} + \frac{\alpha}{2\beta} - \alpha\nu + \nu - \frac{1}{4\beta}\sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)}. \quad (64)$$

In the following, let us use $\rho = \sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)}$ as a shorthand for the expected likelihood ratio. The problem is concave in $\alpha$. We can thus find the optimum $\alpha^*$ by setting the derivative to zero, which gives us $\alpha^* = 2\beta(\mu - \nu) + 1$. Because $\beta \geq 0$ and our theorem assumes $\nu \leq \mu$, the value $\alpha^*$ is a feasible solution to the dual problem. Substituting into Eq. 64 and simplifying results in

$$\max_{\beta \geq 0} \beta\left((\mu - \nu)^2 - \zeta\right) + \mu + \frac{1}{4\beta}(1 - \rho). \quad (66)$$

Lemma F.3 shows that the expected likelihood ratio $\rho$ is always greater than or equal to $1$. Lemma F.4 shows that $(\mu - \nu)^2 - \zeta \leq 0$. Therefore, Eq. 66 is concave in $\beta$. The optimal value of $\beta$ can again be found by setting the derivative to zero:

$$\beta^* = \sqrt{\frac{1 - \rho}{4\left((\mu - \nu)^2 - \zeta\right)}}. \quad (67)$$

Because $\rho \geq 1$ and $\zeta \geq (\mu - \nu)^2$, the radicand is non-negative and $\beta^*$ is real-valued. Substituting Eq. 67 into Eq. 66 shows that the maximum of our dual problem is

$$\mu - \sqrt{(\rho - 1)\left(\zeta - (\mu - \nu)^2\right)}. \quad (68)$$

By duality, this is a lower bound on our primal problem $\min_{h \in \mathbb{F}} \mathbb{E}_{z \sim \Psi(x')}[h(z)]$.
We know that our prediction is certifiably robust, i.e. $f(x') = y$, if $\min_{h \in \mathbb{F}} \mathbb{E}_{z \sim \Psi(x')}[h(z)] > 0.5$. So, in particular, our prediction is robust if

$$\mu - \sqrt{(\rho - 1)\left(\zeta - (\mu - \nu)^2\right)} > 0.5 \quad (69)$$

$$\iff \rho < 1 + \frac{1}{\zeta - (\mu - \nu)^2}\left(\mu - \frac{1}{2}\right)^2 \quad (70)$$

$$\iff \sum_{z \in \mathbb{X}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} < 1 + \frac{1}{\zeta - (\mu - \nu)^2}\left(\mu - \frac{1}{2}\right)^2. \quad (71)$$

The last equivalence results from inserting the definition of the expected likelihood ratio $\rho$. With Theorem 5.1 in place, we can certify robustness for arbitrary smoothing distributions, assuming we can compute the expected likelihood ratio. When we are working with discrete data and the smoothing distributions factorize, this can be done efficiently, as the two following base certificates for binary data demonstrate.
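The certification condition in Eq. 70 is a single closed-form threshold on the expected likelihood ratio $\rho$. A stdlib-only sketch (the helper name `variance_constrained_threshold` is ours):

```python
def variance_constrained_threshold(mu, zeta, nu):
    """Largest admissible expected likelihood ratio rho (r.h.s. of Eq. 70)."""
    assert nu <= mu and zeta > (mu - nu) ** 2
    return 1.0 + (mu - 0.5) ** 2 / (zeta - (mu - nu) ** 2)

# High expected score and low variance -> large admissible likelihood ratio.
print(round(variance_constrained_threshold(mu=0.9, zeta=0.01, nu=0.9), 6))  # -> 17.0
```

A higher expected softmax score $\mu$ and a lower variance $\zeta$ both enlarge the threshold, i.e. they let us certify robustness against perturbations that distort the smoothing distribution more strongly.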

F.3.1 BERNOULLI SMOOTHING FOR PERTURBATIONS OF BINARY DATA

We begin by proving the base certificate presented in § 5. Recall that we use a smoothing distribution $F(x, \theta)$ with $\theta \in [0,1]^{D_\mathrm{in}}$ that independently flips the $d$-th bit with probability $\theta_d$, i.e. for $x, z \in \{0,1\}^{D_\mathrm{in}}$ and $z \sim F(x, \theta)$ we have $\Pr[z_d \neq x_d] = \theta_d$.

Corollary F.5. Given an output $g_n: \{0,1\}^{D_\mathrm{in}} \to \Delta^{|\mathbb{Y}|}$ mapping to scores from the $(|\mathbb{Y}|-1)$-dimensional probability simplex, let $f_n(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \mathbb{E}_{z \sim F(x, \theta)}[g_n(z)_y]$ be the corresponding smoothed classifier with $\theta \in [0,1]^{D_\mathrm{in}}$. Given an input $x \in \{0,1\}^{D_\mathrm{in}}$ and smoothed prediction $y_n = f_n(x)$, let $\mu = \mathbb{E}_{z \sim F(x, \theta)}[g_n(z)_{y_n}]$ and $\zeta = \mathrm{Var}_{z \sim F(x, \theta)}[g_n(z)_{y_n}]$. Then, $\forall x' \in H^{(n)}: f_n(x') = y_n$ with $H^{(n)}$ defined as in Eq. 2,

$$w_d = \ln\left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right), \quad \eta = \ln\left(1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2\right), \quad p = 0.$$

Proof. Based on our definition of the base certificate interface (see Definition 4.1), we must show that $\forall x' \in H: f_n(x') = y_n$ with

$$H = \left\{x' \in \{0,1\}^{D_\mathrm{in}} \mid \sum_{d=1}^{D_\mathrm{in}} \ln\left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right) \cdot |x'_d - x_d|^0 < \ln\left(1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2\right)\right\}.$$

Because all bits are flipped independently, our probability mass function $\pi_x(z) = \Pr_{\tilde z \sim \Psi(x)}[\tilde z = z]$ factorizes:

$$\pi_x(z) = \prod_{d=1}^{D_\mathrm{in}} \pi_{x_d}(z_d) \quad \text{with} \quad \pi_{x_d}(z_d) = \begin{cases} \theta_d & \text{if } z_d \neq x_d \\ 1 - \theta_d & \text{else.} \end{cases}$$

Thus, our expected likelihood ratio can be written as

$$\sum_{z \in \{0,1\}^{D_\mathrm{in}}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} = \sum_{z \in \{0,1\}^{D_\mathrm{in}}} \prod_{d=1}^{D_\mathrm{in}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \prod_{d=1}^{D_\mathrm{in}} \sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)}.$$

For each dimension $d$, we can distinguish two cases. If the perturbed and unperturbed input are the same in dimension $d$, i.e. $x'_d = x_d$, then $\frac{\pi_{x'_d}(z)}{\pi_{x_d}(z)} = 1$ and thus

$$\sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \sum_{z_d \in \{0,1\}} \pi_{x'_d}(z_d) = \theta_d + (1 - \theta_d) = 1.$$

If the perturbed and unperturbed input differ in dimension $d$, then

$$\sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}.$$
Therefore, the expected likelihood ratio is

$$\prod_{d=1}^{D_\mathrm{in}} \sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \prod_{d=1}^{D_\mathrm{in}} \left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right)^{|x'_d - x_d|}. \quad (78)$$

Due to Theorem 5.1 (and using $\nu = \mu$ when computing the variance), we know that our prediction is robust, i.e. $f_n(x') = y_n$, if

$$\sum_{z \in \{0,1\}^{D_\mathrm{in}}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} < 1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2 \quad (79)$$

$$\iff \prod_{d=1}^{D_\mathrm{in}} \left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right)^{|x'_d - x_d|} < 1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2 \quad (80)$$

$$\iff \sum_{d=1}^{D_\mathrm{in}} \ln\left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right) \cdot |x'_d - x_d| < \ln\left(1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2\right). \quad (81)$$

Because $x_d$ and $x'_d$ are binary, the last inequality is equivalent to

$$\sum_{d=1}^{D_\mathrm{in}} \ln\left(\frac{(1-\theta_d)^2}{\theta_d} + \frac{\theta_d^2}{1-\theta_d}\right) \cdot |x'_d - x_d|^0 < \ln\left(1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2\right). \quad (82)$$

□

F.3.2 SPARSITY-AWARE SMOOTHING FOR PERTURBATIONS OF BINARY DATA

Sparsity-aware randomized smoothing (Bojchevski et al., 2020) is an alternative smoothing approach for binary data. It uses different probabilities for randomly deleting ($1 \to 0$) and adding ($0 \to 1$) bits in order to preserve data sparsity. For a random variable $z$ distributed according to the sparsity-aware distribution $S(x, \theta^+, \theta^-)$ with $x \in \{0,1\}^{D_\mathrm{in}}$ and addition and deletion probabilities $\theta^+, \theta^- \in [0,1]^{D_\mathrm{in}}$, we have

$$\Pr[z_d = 0] = \left(1 - \theta^+_d\right)^{1 - x_d} \cdot \left(\theta^-_d\right)^{x_d}, \quad \Pr[z_d = 1] = \left(\theta^+_d\right)^{1 - x_d} \cdot \left(1 - \theta^-_d\right)^{x_d}. \quad (83)$$

The Bernoulli smoothing distribution we discussed in the previous section is a special case of sparsity-aware smoothing with $\theta^+ = \theta^-$. The runtime of the robustness certificate derived by Bojchevski et al. (2020) increases exponentially with the number of unique values in $\theta^+$ and $\theta^-$, which makes it unsuitable for localized smoothing. Variance-constrained smoothing, on the other hand, allows us to efficiently compute a certificate in closed form.

Corollary F.6. Given an output $g_n: \{0,1\}^{D_\mathrm{in}} \to \Delta^{|\mathbb{Y}|}$ mapping to scores from the $(|\mathbb{Y}|-1)$-dimensional probability simplex, let $f_n(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \mathbb{E}_{z \sim S(x, \theta^+, \theta^-)}[g_n(z)_y]$ be the corresponding smoothed classifier with $\theta^+, \theta^- \in [0,1]^{D_\mathrm{in}}$.
Given an input $x \in \{0,1\}^{D_\mathrm{in}}$ and smoothed prediction $y_n = f_n(x)$, let $\mu = \mathbb{E}_{z \sim S(x, \theta^+, \theta^-)}[g_n(z)_{y_n}]$ and $\zeta = \mathrm{Var}_{z \sim S(x, \theta^+, \theta^-)}[g_n(z)_{y_n}]$. Then, $\forall x' \in H: f_n(x') = y_n$ for

$$H = \left\{x' \in \{0,1\}^{D_\mathrm{in}} \mid \sum_{d=1}^{D_\mathrm{in}} \gamma^+_d \cdot \mathbb{I}[x_d = 0 \neq x'_d] + \gamma^-_d \cdot \mathbb{I}[x_d = 1 \neq x'_d] < \eta\right\}, \quad (84)$$

where $\gamma^+, \gamma^- \in \mathbb{R}^{D_\mathrm{in}}$ with

$$\gamma^+_d = \ln\left(\frac{(\theta^-_d)^2}{1 - \theta^+_d} + \frac{(1 - \theta^-_d)^2}{\theta^+_d}\right), \quad \gamma^-_d = \ln\left(\frac{(1 - \theta^+_d)^2}{\theta^-_d} + \frac{(\theta^+_d)^2}{1 - \theta^-_d}\right),$$

and $\eta = \ln\left(1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2\right)$.

Proof. Just like with the Bernoulli distribution from the previous section, all bits are flipped independently, meaning our probability mass function $\pi_x(z) = \Pr_{\tilde z \sim \Psi(x)}[\tilde z = z]$ factorizes: $\pi_x(z) = \prod_{d=1}^{D_\mathrm{in}} \pi_{x_d}(z_d)$, where $\pi_{x_d}(z_d)$ is given by Eq. 83. As before, our expected likelihood ratio can be written as

$$\sum_{z \in \{0,1\}^{D_\mathrm{in}}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} = \sum_{z \in \{0,1\}^{D_\mathrm{in}}} \prod_{d=1}^{D_\mathrm{in}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \prod_{d=1}^{D_\mathrm{in}} \sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)}.$$

We can now distinguish three cases. If the perturbed and unperturbed input are the same in dimension $d$, i.e. $x'_d = x_d$, then $\frac{\pi_{x'_d}(z)}{\pi_{x_d}(z)} = 1$ and thus $\sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \sum_{z_d \in \{0,1\}} \pi_{x'_d}(z_d) = 1$. If $x'_d = 1$ and $x_d = 0$, i.e. a bit was added, then

$$\sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \frac{\pi_1(0)^2}{\pi_0(0)} + \frac{\pi_1(1)^2}{\pi_0(1)} = \frac{(\theta^-_d)^2}{1 - \theta^+_d} + \frac{(1 - \theta^-_d)^2}{\theta^+_d}.$$

If $x'_d = 0$ and $x_d = 1$, i.e. a bit was deleted, then

$$\sum_{z_d \in \{0,1\}} \frac{\pi_{x'_d}(z_d)^2}{\pi_{x_d}(z_d)} = \frac{\pi_0(0)^2}{\pi_1(0)} + \frac{\pi_0(1)^2}{\pi_1(1)} = \frac{(1 - \theta^+_d)^2}{\theta^-_d} + \frac{(\theta^+_d)^2}{1 - \theta^-_d}.$$

Therefore, the expected likelihood ratio is

$$\prod_{d=1}^{D_\mathrm{in}} \left(\frac{(\theta^-_d)^2}{1 - \theta^+_d} + \frac{(1 - \theta^-_d)^2}{\theta^+_d}\right)^{\mathbb{I}[x_d = 0 \neq x'_d]} \cdot \left(\frac{(1 - \theta^+_d)^2}{\theta^-_d} + \frac{(\theta^+_d)^2}{1 - \theta^-_d}\right)^{\mathbb{I}[x_d = 1 \neq x'_d]} \quad (90, 91)$$

$$= \prod_{d=1}^{D_\mathrm{in}} \exp\left(\gamma^+_d\right)^{\mathbb{I}[x_d = 0 \neq x'_d]} \cdot \exp\left(\gamma^-_d\right)^{\mathbb{I}[x_d = 1 \neq x'_d]}. \quad (92)$$

In the last equation, we have simply used the shorthands $\gamma^+_d$ and $\gamma^-_d$ defined in Corollary F.6.
Due to Theorem 5.1 (and using $\nu = \mu$ when computing the variance), we know that our prediction is robust, i.e. $f_n(x') = y_n$, if

$$\sum_{z \in \{0,1\}^{D_\mathrm{in}}} \frac{\pi_{x'}(z)^2}{\pi_x(z)} < 1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2 \quad (93)$$

$$\iff \prod_{d=1}^{D_\mathrm{in}} \exp\left(\gamma^+_d\right)^{\mathbb{I}[x_d = 0 \neq x'_d]} \cdot \exp\left(\gamma^-_d\right)^{\mathbb{I}[x_d = 1 \neq x'_d]} < 1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2 \quad (94)$$

$$\iff \sum_{d=1}^{D_\mathrm{in}} \gamma^+_d \cdot \mathbb{I}[x_d = 0 \neq x'_d] + \gamma^-_d \cdot \mathbb{I}[x_d = 1 \neq x'_d] < \ln\left(1 + \frac{1}{\zeta}\left(\mu - \frac{1}{2}\right)^2\right). \quad (95)$$

□

Use for collective certification. It should be noted that this certificate does not comply with our interface for base certificates (see Definition 4.1), meaning we cannot directly use it to certify robustness to norm-bounded perturbations via our collective linear program from Theorem 4.2. We can, however, use it to certify collective robustness under the more refined threat model used in (Schuchardt et al., 2021). Let the set of admissible perturbed inputs be

$$B_x = \left\{x' \in \{0,1\}^{D_\mathrm{in}} \mid \sum_{d=1}^{D_\mathrm{in}} \mathbb{I}[x_d = 0 \neq x'_d] \leq \epsilon^+ \land \sum_{d=1}^{D_\mathrm{in}} \mathbb{I}[x_d = 1 \neq x'_d] \leq \epsilon^-\right\},$$

with $\epsilon^+, \epsilon^- \in \mathbb{N}_0$ specifying the number of bits the adversary is allowed to add or delete. We can now follow the procedure outlined in § 3.2 to combine the per-prediction base certificates into a collective certificate for this perturbation model. As discussed there, we can bound the number of predictions that are robust to simultaneous attacks by minimizing the number of predictions that are certifiably robust according to their base certificates:

$$\min_{x' \in B_x} \sum_{n \in \mathbb{T}} \mathbb{I}[f_n(x') = y_n] \geq \min_{x' \in B_x} \sum_{n \in \mathbb{T}} \mathbb{I}\left[x' \in H^{(n)}\right]. \quad (96)$$

Inserting the linear inequalities characterizing our perturbation model and base certificates results in

$$\min_{x' \in \{0,1\}^{D_\mathrm{in}}} \sum_{n \in \mathbb{T}} \mathbb{I}\left[\sum_{d=1}^{D_\mathrm{in}} \gamma^+_d \cdot \mathbb{I}[x_d = 0 \neq x'_d] + \gamma^-_d \cdot \mathbb{I}[x_d = 1 \neq x'_d] < \eta^{(n)}\right] \quad (97)$$

$$\text{s.t. } \sum_{d=1}^{D_\mathrm{in}} \mathbb{I}[x_d = 0 \neq x'_d] \leq \epsilon^+, \quad \sum_{d=1}^{D_\mathrm{in}} \mathbb{I}[x_d = 1 \neq x'_d] \leq \epsilon^-. \quad (98)$$

Instead of optimizing over the perturbed input $x'$, we can define two vectors $b^+, b^- \in \{0,1\}^{D_\mathrm{in}}$ that indicate in which dimensions bits are added or deleted. Using these new variables, Eq. 97 can be rewritten as

$$\min_{b^+, b^- \in \{0,1\}^{D_\mathrm{in}}} \sum_{n \in \mathbb{T}} \mathbb{I}\left[\gamma^{+T} b^+ + \gamma^{-T} b^- < \eta^{(n)}\right] \quad (99)$$

$$\text{s.t. } \mathrm{sum}\{b^+\} \leq \epsilon^+, \quad \mathrm{sum}\{b^-\} \leq \epsilon^-, \quad (100)$$

$$\sum_{d \mid x_d = 1} b^+_d = 0, \quad \sum_{d \mid x_d = 0} b^-_d = 0. \quad (101)$$

The last two constraints ensure that bits can only be deleted where $x_d = 1$ and only be added where $x_d = 0$. Finally, we can use the procedure for replacing indicator functions with indicator variables discussed in § D to restate the above problem as the mixed-integer linear program

$$\min_{b^+, b^- \in \{0,1\}^{D_\mathrm{in}}, \, t \in \{0,1\}^{D_\mathrm{out}}} \sum_{n \in \mathbb{T}} t_n \quad (102)$$

$$\text{s.t. } \forall n: \gamma^{+T} b^+ + \gamma^{-T} b^- \geq (1 - t_n)\eta^{(n)}, \quad (103)$$

$$\mathrm{sum}\{b^+\} \leq \epsilon^+, \quad \mathrm{sum}\{b^-\} \leq \epsilon^-, \quad \sum_{d \mid x_d = 1} b^+_d = 0, \quad \sum_{d \mid x_d = 0} b^-_d = 0. \quad (104)$$

The first constraint ensures that $t_n$ can only be set to $0$ if the l.h.s. is greater than or equal to $\eta^{(n)}$, i.e. only when the base certificate can no longer guarantee robustness. The efficiency of the certificate can be improved by applying any of the techniques discussed in § E.
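The per-dimension costs $\gamma^+_d$, $\gamma^-_d$ and the threshold $\eta$ from Corollary F.6 are all elementary closed-form expressions. A stdlib-only sketch with hypothetical helper names and illustrative parameter values:

```python
import math

def sparsity_aware_weights(theta_add, theta_del):
    """Per-dimension weights gamma^+_d, gamma^-_d from Corollary F.6."""
    gamma_add = [math.log(td ** 2 / (1 - ta) + (1 - td) ** 2 / ta)
                 for ta, td in zip(theta_add, theta_del)]
    gamma_del = [math.log((1 - ta) ** 2 / td + ta ** 2 / (1 - td))
                 for ta, td in zip(theta_add, theta_del)]
    return gamma_add, gamma_del

def eta_vc(mu, zeta):
    """eta = ln(1 + (mu - 1/2)^2 / zeta), the variance-constrained threshold."""
    return math.log(1.0 + (mu - 0.5) ** 2 / zeta)

# Deletions are smoothed strongly (theta_del = 0.6), additions barely (theta_add = 0.01),
# so a single deletion costs little budget while a single addition costs a lot.
g_add, g_del = sparsity_aware_weights(theta_add=[0.01] * 4, theta_del=[0.6] * 4)
eta = eta_vc(mu=0.85, zeta=0.02)
print(g_add[0] < eta, g_del[0] < eta)  # -> False True
```

With these illustrative parameters, flipping one zero-bit to one exceeds the threshold while deleting one bit does not, showing how the certificate distinguishes additions from deletions.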

F.3.3 GAUSSIAN SMOOTHING FOR PERTURBATIONS OF CONTINUOUS DATA

Even though we specifically proposed variance-constrained certification as a means of efficiently certifying anisotropically smoothed classifiers for discrete data, it can be generalized to continuous distributions by replacing sums with integrals and probability mass functions with density functions (the proof is analogous to that in § F.3). In the following, we assume Gaussian smoothing, i.e. $\Psi(x) = \mathcal{N}(x, \Sigma)$ with $\Sigma \in \mathbb{R}^{D \times D}_+$ and density function $\pi_x$. In this case, the expected ratio between $\pi_{x'}$ and $\pi_x$ is the exponential of the squared Mahalanobis distance (see Table 2 of (Gil et al., 2013) with $\alpha = 2$), i.e.

$$\int_{\mathbb{R}^D} \frac{\pi_{x'}(z)}{\pi_x(z)} \pi_{x'}(z) \, dz = \exp\left((x' - x)^T \Sigma^{-1} (x' - x)\right). \quad (105)$$

This leads us to the following corollary of Theorem 5.1:

Corollary F.7. Given a function $h: \mathbb{R}^D \to \Delta^{|\mathbb{Y}|}$ mapping to scores from the $(|\mathbb{Y}|-1)$-dimensional probability simplex, let $f(x) = \mathrm{argmax}_{y \in \mathbb{Y}} \mathbb{E}_{z \sim \mathcal{N}(x, \Sigma)}[h(z)_y]$ with covariance matrix $\Sigma \in \mathbb{R}^{D \times D}_+$. Given an input $x \in \mathbb{X}$ and smoothed prediction $y = f(x)$, let $\mu = \mathbb{E}_{z \sim \mathcal{N}(x, \Sigma)}[h(z)_y]$ and $\zeta = \mathbb{E}_{z \sim \mathcal{N}(x, \Sigma)}\left[(h(z)_y - \nu)^2\right]$ with $\nu \in \mathbb{R}$. Assuming $\nu \leq \mu$, then $f(x') = y$ if

$$(x' - x)^T \Sigma^{-1} (x' - x) < \ln\left(1 + \frac{1}{\zeta - (\mu - \nu)^2}\left(\mu - \frac{1}{2}\right)^2\right). \quad (106)$$

As with Theorem 5.1, the r.h.s. of Eq. 106 depends on the expected softmax score $\mu$, a variable $\nu \leq \mu$ and the expected squared difference $\zeta$ between the softmax score and $\nu$. For $\nu = \mu$, the parameter $\zeta$ is the variance of the softmax score. A higher expected value and a lower variance allow us to certify robustness for larger adversarial perturbations. For comparison, ANCER (Eiras et al., 2022) guarantees robustness for the smoothed prediction $y_n = \mathrm{argmax}_{y \in \mathbb{Y}} \Pr[g(x) = y]$ if

$$(x' - x)^T \Sigma^{-1} (x' - x) < \left(\Phi^{-1}(q_{y_n})\right)^2, \quad (107)$$

where $q_{y_n}$ is the probability of predicting class $y_n$, i.e. $q_{y_n} = \Pr_{z \sim \mathcal{N}(x, \Sigma)}[g(z) = y_n]$. Here, $g: \mathbb{R}^D \to \mathbb{Y}$ directly outputs a class label instead of softmax scores. We see that both the variance-constrained certificate and ANCER yield the same certified ellipsoid, scaled by a different factor.
This factor is the certifiable radius $\eta$, i.e. the r.h.s. term of Eqs. (106) and (107). We also see that both certificates have the same computational complexity: both involve calculating the squared Mahalanobis distance and a constant number of operations for evaluating the certifiable radius. In the following, we briefly assess under which conditions which certificate yields a larger certifiable radius $\eta$. For this evaluation, we assume that $g(x) = \mathrm{argmax}_y h(x)_y$, i.e. $g$ predicts the class with the highest softmax score. We then vary the prediction probability $q_{y_n}$ and the expected softmax score $\mu$ within $[0.5, 1.0]$. For each $\mu$, we calculate the largest possible variance $\zeta$ (using the Bhatia-Davis inequality $\zeta \leq (1 - \mu) \cdot \mu$), which gives us the weakest possible variance-constrained certificate (see Eq. (106)). Fig. 13 shows the difference in certifiable radius $\eta$, with the dashed line indicating parameters for which both certificates are identical. We have omitted all combinations of $q_{y_n}$ and $\mu$ that are not possible, namely $\mu > q_{y_n} + \frac{1}{2}(1 - q_{y_n})$. We see that ANCER is stronger when $q_{y_n}$ is large, i.e. almost all samples from the smoothing distribution are correctly classified, but not necessarily with high confidence. The variance-constrained certificate is stronger when $q_{y_n}$ is smaller and $\mu$ is larger, i.e. some samples are misclassified but the correctly classified ones have high confidence. Note, however, that this is the worst case for the variance-constrained certificate. For $\zeta \to 0$, much larger radii can be certified (see Fig. 14 and Eq. (106)).
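The two radii can be compared directly in a few lines. A stdlib-only sketch (helper names `eta_ancer` and `eta_variance_constrained` are ours; `statistics.NormalDist` provides $\Phi^{-1}$):

```python
import math
from statistics import NormalDist

phi_inv = NormalDist().inv_cdf

def eta_ancer(q):
    """ANCER-style certifiable radius, r.h.s. of Eq. 107."""
    return phi_inv(q) ** 2

def eta_variance_constrained(mu, zeta, nu=None):
    """Variance-constrained radius, r.h.s. of Eq. 106 (nu defaults to mu)."""
    gap = 0.0 if nu is None else (mu - nu) ** 2
    return math.log(1.0 + (mu - 0.5) ** 2 / (zeta - gap))

q, mu = 0.99, 0.8
zeta_worst = mu * (1 - mu)  # Bhatia-Davis upper bound on the variance
print(eta_ancer(q) > eta_variance_constrained(mu, zeta_worst))  # high q, worst-case zeta: ANCER wins
print(eta_variance_constrained(mu, 1e-4) > eta_ancer(q))        # tiny variance: VC wins
```

This reproduces the qualitative picture described above: with the worst-case variance the ANCER radius dominates for large $q_{y_n}$, while a near-zero variance lets the variance-constrained radius grow without bound.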

G MONTE CARLO RANDOMIZED SMOOTHING

To make predictions and certify robustness, randomized smoothing requires computing certain properties of the distribution of a base model's output under an input smoothing distribution. For example, the certificate of Cohen et al. (2019) assumes that the smoothed model f predicts the most likely label output by base model g under smoothing distribution N(0, σ · 1): f(x) = argmax_{y∈Y} Pr_{z∼N(0,σ·1)} [g(x + z) = y]. To certify the robustness of a smoothed prediction y = f(x) for a specific input x, we have to compute the probability q = Pr_{z∼N(0,σ·1)} [g(x + z) = y] to then calculate the maximum certifiable radius σ Φ^{−1}(q) with standard-normal inverse CDF Φ^{−1}. For complicated models like deep neural networks, computing such properties in closed form is usually intractable. Instead, they have to be estimated using Monte Carlo sampling. The result is predictions and certificates that only hold with a certain probability. Randomized smoothing with Monte Carlo sampling usually consists of three distinct steps:

1. First, a small number of samples N_1 from the smoothing distribution is used to generate a candidate prediction ŷ, e.g. the most frequently predicted class.

2. Then, a second round of N_2 samples is taken and a statistical test is used to determine whether the candidate prediction is likely to be the actual prediction of the smoothed classifier f, i.e. whether ŷ = f(x) with a certain probability (1 − α_1). If this is not the case, one has to abstain from making a prediction (or generate a new candidate prediction).

3. To certify the robustness of prediction ŷ, a final round of N_3 samples is taken to estimate all quantities needed for the certificate. In the case of (Cohen et al., 2019), we need to estimate the probability q = Pr_{z∼N(0,σ·1)} [g(x + z) = ŷ] to compute the certificate σ Φ^{−1}(q), whose strength is monotonically increasing in q.
To ensure that the certificate holds with high probability (1 − α_2), we have to compute a probabilistic lower bound q̲ ≤ q. Instead of performing two separate rounds of sampling, one can also re-use the same samples for the abstention test and certification. One particularly simple abstention mechanism is to compute the Monte Carlo randomized smoothing certificate and determine whether ∀x′ ∈ {x} : f(x′) = ŷ holds with high probability, i.e. whether the prediction is robust to the input x′ that results from "perturbing" the clean input x with zero adversarial budget. In the following, we discuss how we perform Monte Carlo randomized smoothing for our base certificates, as well as for the baselines used in our experimental evaluation. In § G.4, we discuss how we account for the multiple comparisons problem, i.e. the fact that we are not just trying to probabilistically certify a single prediction, but multiple predictions at once.
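The three-step procedure above can be sketched as follows for Gaussian smoothing in the style of Cohen et al. (2019). The base classifier, the re-use of the N_2 samples for both the abstention test and the certificate, and the way α is shared between them are our own illustrative choices, not the exact setup of any cited work:

```python
import numpy as np
from scipy.stats import beta, binomtest, norm

rng = np.random.default_rng(0)
sigma = 0.5


def base_model(x):
    # Hypothetical base classifier: thresholds the input mean.
    return int(x.mean() > 0)


def smoothed_certify(x, n1=100, n2=1000, alpha=0.01):
    # Step 1: candidate prediction y-hat from N1 samples.
    preds = [base_model(x + rng.normal(0, sigma, x.shape)) for _ in range(n1)]
    y_hat = int(np.bincount(preds, minlength=2).argmax())
    # Steps 2 and 3 re-use the same N2 samples, as discussed above.
    k = sum(base_model(x + rng.normal(0, sigma, x.shape)) == y_hat
            for _ in range(n2))
    # Step 2: abstain if y-hat is not the majority class with high probability.
    if binomtest(k, n2, 0.5, alternative="greater").pvalue >= alpha:
        return y_hat, None
    # Step 3: Clopper-Pearson lower bound on q and certified radius.
    q_lower = beta.ppf(alpha, k, n2 - k + 1)
    return y_hat, sigma * norm.ppf(q_lower)


y, radius = smoothed_certify(np.full(10, 0.3))
```

For the example input, the smoothed model predicts class 1 and certifies a strictly positive l_2 radius.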

G.1 MONTE CARLO BASE CERTIFICATES FOR CONTINUOUS DATA

For our base certificates for continuous data, we follow the approach discussed in the previous paragraphs (recall that the certificate of Cohen et al. (2019) is a special case of our certificate with Gaussian noise for l_2 perturbations). We are given an input space X^{D_in}, label space Y, base model (or, in the case of multi-output classifiers, base model output) g : X^{D_in} → Y and smoothing distribution Ψ(x) (either multivariate Gaussian or multivariate uniform). To generate a candidate prediction, we apply the base classifier to N_1 samples from the smoothing distribution in order to obtain predictions y^{(1)}, ..., y^{(N_1)} and compute the majority prediction ŷ = argmax_{y∈Y} |{n | y^{(n)} = y}|. Recall that for Gaussian and uniform noise, our certificate guarantees ∀x′ ∈ H : f(x′) = ŷ for H = {x′ ∈ X^{D_in} | Σ_{d=1}^{D_in} w_d · |x′_d − x_d|^p < η}, with η = (Φ^{−1}(q))² or η = Φ^{−1}(q) (depending on the distribution), q = Pr_{z∼Ψ(x)} [g(z) = ŷ] and standard-normal inverse CDF Φ^{−1}. To obtain a probabilistic certificate that holds with high probability 1 − α, we need a probabilistic lower bound on η. Both expressions for η are monotonically increasing in q, i.e. we can bound them by finding a lower bound q̲ on q. For this, we take N_2 more samples from the smoothing distribution and compute a Clopper-Pearson lower confidence bound (Clopper & Pearson, 1934) on q. For abstentions, we use the aforementioned simple mechanism: We test whether x ∈ H. Given the definition of H, this is equivalent to testing whether 0 < Φ^{−1}(q̲) ⟺ Φ(0) < q̲ ⟺ 0.5 < q̲. If q̲ ≤ 0.5, we abstain.
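A minimal sketch of this certification routine (assuming SciPy; the function names are ours):

```python
import numpy as np
from scipy.stats import beta, norm


def clopper_pearson_lower(k, n, alpha):
    # One-sided Clopper-Pearson lower confidence bound on q from k successes.
    return beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0


def eta_from_q(q_lower, gaussian=True):
    # eta = Phi^{-1}(q)^2 for Gaussian noise, Phi^{-1}(q) for uniform noise.
    z = norm.ppf(q_lower)
    return z ** 2 if gaussian else z


def in_certified_region(x, x_prime, w, p, eta):
    # Membership test for H = {x' : sum_d w_d * |x'_d - x_d|^p < eta}.
    return float(np.sum(w * np.abs(x_prime - x) ** p)) < eta


q_low = clopper_pearson_lower(930, 1000, alpha=0.001)
abstain = q_low <= 0.5   # equivalent to testing whether x lies in H
```

With 930 of 1000 samples agreeing with ŷ, the bound q̲ stays well above 0.5 and the prediction is not abstained.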

G.2 MONTE CARLO VARIANCE-CONSTRAINED CERTIFICATION

For variance-constrained certification, we smooth a model's softmax scores. That is, we are given an input space X^{D_in}, label space Y, base model (or, in the case of multi-output classifiers, base model output) g : X^{D_in} → ∆^{|Y|} with (|Y| − 1)-dimensional probability simplex ∆^{|Y|}, and smoothing distribution Ψ(x) (Bernoulli or sparsity-aware noise, in the case of binary data). To generate a candidate prediction, we apply the base classifier to N_1 samples from the smoothing distribution in order to obtain vectors s^{(1)}, ..., s^{(N_1)} with s^{(n)} ∈ ∆^{|Y|}, compute the average softmax scores s̄ = (1 / N_1) Σ_{n=1}^{N_1} s^{(n)} and select the label with the highest score ŷ = argmax_y s̄_y. Recall that our certificate guarantees robustness if the optimal value of the following optimization problem is greater than 0.5:
min_{h : X→R} E_{z∼Ψ(x′)} [h(z)] s.t. E_{z∼Ψ(x)} [h(z)] ≥ µ, E_{z∼Ψ(x)} [(h(z) − ν)²] ≤ ζ,
with µ = E_{z∼Ψ(x)} [g(z)_ŷ], ζ = E_{z∼Ψ(x)} [(g(z)_ŷ − ν)²] and a fixed scalar ν ∈ R. To obtain a probabilistic certificate, we have to compute a probabilistic lower bound on the optimal value of the optimization problem. Because it is a minimization problem, this can be achieved by loosening its constraints, i.e. computing a probabilistic lower bound µ̲ on µ and a probabilistic upper bound ζ̄ on ζ. Like in CDF-smoothing (Kumar et al., 2020), we bound the parameters using CDF-based nonparametric confidence intervals. Let F(s) = Pr_{z∼Ψ(x)} [g(z)_ŷ ≤ s] be the CDF of g(z)_ŷ with z ∼ Ψ(x). Define M thresholds 0 ≤ τ_1 ≤ τ_2 ≤ ... ≤ τ_{M−1} ≤ τ_M ≤ 1. We then take N_2 samples z^{(1)}, ..., z^{(N_2)} from the smoothing distribution to compute the empirical CDF F̂(s) = (1 / N_2) Σ_{n=1}^{N_2} I[g(z^{(n)})_ŷ ≤ s]. We can then use the Dvoretzky-Kiefer-Wolfowitz inequality (Dvoretzky et al., 1956) to compute an upper bound F̄ and a lower bound F̲ on the CDF of g(z)_ŷ:
F̲(s) = max(F̂(s) − υ, 0) ≤ F(s) ≤ min(F̂(s) + υ, 1) = F̄(s),
with υ = √(ln(2/α) / (2 · N_2)), which holds simultaneously for all s with high probability 1 − α.
Using these bounds on the CDF, we can bound µ = E_{z∼Ψ(x)} [g(z)_ŷ] as follows (Anderson, 1969):
µ ≥ τ_M − τ_1 · F̄(τ_1) − Σ_{m=1}^{M−1} (τ_{m+1} − τ_m) · F̄(τ_{m+1}).
The parameter ζ = E_{z∼Ψ(x)} [(g(z)_ŷ − ν)²] can be bounded in a similar fashion. Define ξ_0, ..., ξ_M ∈ R_+ with
ξ_0 = max_{κ∈[0,τ_1]} (κ − ν)², ξ_M = max_{κ∈[τ_M,1]} (κ − ν)², ξ_m = max_{κ∈[τ_m,τ_{m+1}]} (κ − ν)² ∀m ∈ {1, ..., M − 1},
i.e. compute the maximum squared distance to ν within each bin [τ_m, τ_{m+1}]. Then:
ζ ≤ ξ_0 F(τ_1) + ξ_M (1 − F(τ_M)) + Σ_{m=1}^{M−1} ξ_m (F(τ_{m+1}) − F(τ_m)) (113)
= ξ_M + Σ_{m=1}^{M} (ξ_{m−1} − ξ_m) F(τ_m) (114)
≤ ξ_M + Σ_{m=1}^{M} (ξ_{m−1} − ξ_m) (sgn(ξ_{m−1} − ξ_m) F̄(τ_m) + (1 − sgn(ξ_{m−1} − ξ_m)) F̲(τ_m)) (115)
with probability 1 − α, where sgn(·) is 1 for non-negative arguments and 0 otherwise. In the first inequality, we bound the expected squared distance from ν by assuming that the probability mass in each bin is concentrated at the point farthest from ν. The equality results from reordering the telescoping sum. In the second inequality, we upper-bound the CDF where it is multiplied with a non-negative value and lower-bound it where it is multiplied with a negative value. With the probabilistic bounds µ̲ and ζ̄ we can now, in principle, evaluate our robustness certificate, i.e. check whether
Σ_{z∈X} π_{x′}(z)² / π_x(z) < 1 + (µ̲ − 1/2)² / (ζ̄ − (µ̲ − ν)²), (116)
where the π are the probability mass functions of smoothing distributions Ψ(x) and Ψ(x′). But one crucial detail of Theorem 5.1 underlying the certificate is that it only holds for ν ≤ µ. To use the method with Monte Carlo sampling, one has to ensure that ν ≤ µ̲ by first computing µ̲ and then choosing some smaller ν. In our experiments, we use an alternative method that allows us to use arbitrary ν: From our proof of Theorem 5.1 we know that the dual problem of Eq. 108 is
max_{α,β≥0} αµ − βζ − α² / (4β) + α / (2β) − αν + ν − (1 / (4β)) Σ_{z∈X} π_{x′}(z)² / π_x(z).
Instead of trying to find an optimal α (which causes problems in subsequent derivations if ν ≰ µ), we can simply choose α = 1.
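The CDF-based bounds on µ and ζ can be sketched as follows (assuming NumPy; the function names are ours, and the telescoped series is summed over all M thresholds):

```python
import numpy as np


def dkw_bounds(scores, taus, nu, alpha=0.05):
    """Lower bound on mu = E[s] and upper bound on zeta = E[(s - nu)^2]
    for scores in [0, 1], via DKW confidence bands on the CDF."""
    n = len(scores)
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))
    F_hat = np.array([(scores <= t).mean() for t in taus])
    F_up = np.minimum(F_hat + eps, 1.0)    # upper CDF bound
    F_low = np.maximum(F_hat - eps, 0.0)   # lower CDF bound
    # Anderson-style lower bound on the mean.
    mu_low = taus[-1] - taus[0] * F_up[0] - np.sum(np.diff(taus) * F_up[1:])
    # Worst-case squared distance to nu within each bin (max at an endpoint).
    edges = np.concatenate(([0.0], taus, [1.0]))
    xi = np.maximum((edges[:-1] - nu) ** 2, (edges[1:] - nu) ** 2)
    # Upper-bound the CDF for non-negative coefficients, lower-bound otherwise.
    diffs = xi[:-1] - xi[1:]
    zeta_up = xi[-1] + np.sum(diffs * np.where(diffs >= 0, F_up, F_low))
    return mu_low, zeta_up


rng = np.random.default_rng(0)
scores = rng.uniform(0.7, 0.9, 5000)   # softmax scores of class y-hat
mu_low, zeta_up = dkw_bounds(scores, np.linspace(0.02, 0.98, 49), nu=0.7)
# Certificate check (Eq. 116) for an assumed expected likelihood ratio rho:
rho = 1.5
certified = rho < 1 + (mu_low - 0.5) ** 2 / (zeta_up - (mu_low - 0.7) ** 2)
```

For the simulated scores (true mean 0.8), the bounds are conservative but tight enough to certify the assumed likelihood ratio.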
By duality, the result is still a lower bound on the primal problem, i.e. the certificate remains valid. The dual problem becomes
max_{β≥0} µ − βζ + 1 / (4β) − (1 / (4β)) Σ_{z∈X} π_{x′}(z)² / π_x(z).
The problem is concave in β (because the expected likelihood ratio is ≥ 1). Finding the optimal β, comparing the result to 0.5 and solving for the expected likelihood ratio shows that a prediction is robust if
Σ_{z∈X} π_{x′}(z)² / π_x(z) < 1 + (µ − 1/2)² / ζ.
For our abstention mechanism, like in the previous section, we compute the certified region H and then test whether x ∈ H. In the case of Bernoulli smoothing and sparsity-aware smoothing, this corresponds to testing whether
0 < ln(1 + (µ̲ − 1/2)² / ζ̄) (120)
⟺ µ̲ > 1/2.

G.3 MONTE CARLO CENTER SMOOTHING

While we cannot use center smoothing as a base certificate, we benchmark our method against it in our experimental evaluation. The generation of candidate predictions, the abstention mechanism and the certificate are explained in (Kumar & Goldstein, 2021). The authors allow multiple options for generating candidate predictions. We use the "β minimum enclosing ball" with β = 2, which is based on pair-wise distance calculations.

G.4 MULTIPLE COMPARISONS PROBLEM

The first step of our collective certificate is to compute one base certificate for each of the D_out predictions of the multi-output classifier. With Monte Carlo randomized smoothing, we want all of these probabilistic certificates to simultaneously hold with a high probability 1 − α. But as the number of certificates increases, so does the probability of at least one of them being invalid. To account for this multiple comparisons problem, we use Bonferroni correction (Bonferroni, 1936), i.e. compute each Monte Carlo certificate such that it holds with probability 1 − α / D_out. For base certificates that only depend on q_n = Pr_{z∼Ψ^{(n)}(x)} [g_n(z) = ŷ_n], i.e. the probability of the base classifier predicting a particular label ŷ_n under the smoothing distribution, one can also use the strictly better Holm correction (Holm, 1979). This includes our Gaussian and uniform smoothing certificates for continuous data. Holm correction is a procedure that can be used to correct for the multiple comparisons problem when performing multiple arbitrary hypothesis tests. Given N hypotheses, their p-values are ordered ascendingly as p_1, ..., p_N. Starting at i = 1, the i'th hypothesis is rejected if p_i < α / (N + 1 − i), until one reaches an i such that p_i ≥ α / (N + 1 − i). Fischer et al. (2021) proposed to use Holm correction as part of their procedure for certifying that all (non-abstaining) predictions of an image segmentation model are robust to adversarial perturbations. In the following, we first summarize their approach and then discuss how Holm correction can be used for certifying our notion of collective robustness, i.e. certifying the number of robust predictions. As in § G.1, the goal is to obtain a lower bound q̲_n on q_n = Pr_{z∼Ψ^{(n)}(x)} [g_n(z) = ŷ_n] for each of the D_out classifier outputs. Assume we take N_2 samples z^{(1)}, ..., z^{(N_2)} from the smoothing distribution. Let ν_n = Σ_{i=1}^{N_2} I[g_n(z^{(i)}) = ŷ_n] and let π : {1, ..., D_out} → {1, ..., D_out} be a bijection that orders the ν_n in descending order, i.e. ν_{π(1)} ≥ ν_{π(2)} ≥ ... ≥ ν_{π(D_out)}. Instead of using Clopper-Pearson confidence intervals to obtain tight lower bounds on the q_n, Fischer et al. (2021) define a threshold τ ∈ [0.5, 1) and use binomial tests to determine for which n the bound τ ≤ q_n holds with high probability. Let BinP(ν_n, N_2, ≤, τ) be the p-value of the one-sided binomial test, which is monotonically decreasing in ν_n. Following the Holm correction scheme, the authors test whether BinP(ν_{π(k)}, N_2, ≤, τ) < α / (D_out + 1 − k) for k = 1, ..., D_out, until reaching a k* for which the null hypothesis can no longer be rejected, i.e. the p-value is greater than or equal to α / (D_out + 1 − k*). They then know that, with probability 1 − α, the bound τ ≤ q_n holds for all n ∈ {π(k) | k ∈ {1, ..., k*}}. For these outputs, they use the lower bound τ to compute robustness certificates. They abstain with all other outputs. This approach is sensible when one is concerned with the least robust prediction from a set of predictions. But our collective certificate benefits from having tight robustness guarantees for each individual prediction. Holm correction can be used with arbitrary hypothesis tests. For instance, we can use a different threshold τ_n per output g_n, i.e. test whether BinP(ν_{π(k)}, N_2, ≤, τ_{π(k)}) < α / (D_out + 1 − k) for k = 1, ..., D_out. In particular, we can use
τ_n = sup t s.t. BinP(ν_n, N_2, ≤, t) < α / (D_out + 1 − π^{−1}(n)), (124)
i.e. choose the largest threshold such that the null hypothesis can still be rejected. Eq. 124 is the lower Clopper-Pearson confidence bound with significance α / (D_out + 1 − π^{−1}(n)). This means that, instead of performing hypothesis tests, we can obtain probabilistic lower bounds q̲_n ≤ q_n by computing Clopper-Pearson confidence bounds with significance parameters α / D_out, ..., α / 1. The q̲_n can then be used to compute the base certificates.
Due to the definition of the τ_n, all of the null hypotheses are rejected, i.e. we obtain valid probabilistic lower bounds on all q_n. We can thus use the abstention mechanism from § G.1, i.e. only abstain if q̲_n ≤ 0.5.
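The Holm-ordered Clopper-Pearson scheme described above can be sketched as follows (assuming SciPy; the function name is ours, and ties in the vote counts are broken arbitrarily, which is valid for Holm correction):

```python
import numpy as np
from scipy.stats import beta


def holm_clopper_pearson(successes, n_samples, alpha):
    """Simultaneous lower bounds on q_n via Clopper-Pearson bounds with
    Holm-style significance levels alpha/D_out, ..., alpha/1."""
    d = len(successes)
    order = np.argsort(successes)[::-1]      # sort outputs by vote count, descending
    bounds = np.empty(d)
    for rank, n_idx in enumerate(order):     # rank corresponds to k - 1
        level = alpha / (d - rank)           # alpha / (D_out + 1 - k)
        k = int(successes[n_idx])
        bounds[n_idx] = beta.ppf(level, k, n_samples - k + 1) if k > 0 else 0.0
    return bounds


q_low = holm_clopper_pearson(np.array([990, 700, 990]), 1000, alpha=0.01)
abstain = q_low <= 0.5   # abstention mechanism from § G.1
```

The output with the smallest vote count receives the least strict significance level, so even moderately consistent predictions keep useful bounds instead of abstaining.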

H COMPARISON TO THE COLLECTIVE CERTIFICATE OF FISCHER ET AL. (2021)

Our collective certificate based on localized smoothing is designed to bound the number of simultaneously robust predictions. Fischer et al. (2021) designed SegCertify to determine whether all predictions are simultaneously robust. As discussed in § 3.2, their work is based on the naïve collective certification approach applied to isotropic Gaussian smoothing: They first certify each output independently, then count the number of certifiably robust predictions for a specific adversarial budget and test whether the number of certifiably robust predictions equals the overall number of predictions. To obtain better guarantees in practical scenarios, they further propose to

• use Holm correction to address the multiple comparisons problem (see § G.4),

• abstain at a higher rate to avoid "bad components", i.e. predictions y_n that have a low consistency q_n = Pr_{z∼N(x,σ)} [g(z) = y_n] and thus very small certifiable radii.

A more technical summary of their method can be found in § G.4. In the following, we discuss why our certificate can always offer guarantees that are at least as strong as SegCertify, both for our notion of collective robustness (number of robust predictions) and their notion of collective robustness (robustness of all predictions). In short, isotropic smoothing is a special case of localized smoothing, and Holm correction can also be used for our base certificates. Before proceeding, please recall the discussion of Monte Carlo base certificates and Clopper-Pearson confidence intervals in § G.1 and of the multiple comparisons problem in § G.4. A direct consequence of the results in § G.4 is that using Clopper-Pearson confidence intervals and Holm correction yields stronger per-prediction robustness guarantees and lower abstention rates than the method of Fischer et al. (2021).
The Clopper-Pearson-based method only abstains if one cannot guarantee that q_n > 0.5 with high probability, while their method abstains if one cannot guarantee that q_n ≥ τ with τ ≥ 0.5 (or if specific other predictions abstain). For all non-abstaining predictions, the Clopper-Pearson-based certificate is at least as strong as the one obtained using a single threshold τ, as it computes the tightest bound for which the null hypothesis can still be rejected (see Eq. 124). Consequently, when certifying our notion of collective robustness, i.e. determining the number of robust predictions given adversarial budget ϵ, a naïve collective robustness certificate (i.e. counting the number of predictions whose robustness is guaranteed by the base certificates) based on Clopper-Pearson bounds will also be stronger than the method of Fischer et al. (2021). It should, however, be noted that their method could potentially be used with other methods of family-wise error rate correction, although they state that "these methods do not scale to realistic segmentation problems" and do not discuss any further details. Conversely, when certifying their notion of collective robustness, i.e. determining whether all non-abstaining predictions are robust given adversarial budget ϵ, the certificate based on Clopper-Pearson confidence bounds is also at least as strong as that of Fischer et al. (2021). To certify their notion of robustness, they iterate over all predictions and determine whether all non-abstaining predictions are certifiably robust, given ϵ. Naturally, as the Clopper-Pearson-based certificates are stronger, any prediction that is robust according to (Fischer et al., 2021) is also robust according to the Clopper-Pearson-based certificates. The only difference is that, for τ > 0.5, their method will have more abstaining predictions.
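The two notions of collective robustness discussed here can be contrasted with a small sketch (a naïve combination of per-prediction certificates; the names and numbers are illustrative):

```python
import numpy as np


def collective_guarantees(radii, abstained, eps):
    """radii: certified radius per prediction from the base certificates;
    abstained: abstention mask. Returns the number of robust predictions
    (our notion) and whether all non-abstaining predictions are robust
    (the notion certified by SegCertify)."""
    robust = (~abstained) & (radii > eps)
    return int(robust.sum()), bool(robust[~abstained].all())


radii = np.array([1.2, 0.3, 0.9, 0.0])
abstained = np.array([False, False, False, True])
print(collective_guarantees(radii, abstained, eps=0.5))  # → (2, False)
```

Here two of three non-abstaining predictions are certifiably robust, so our notion yields a non-trivial guarantee while the all-predictions notion certifies nothing.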
But, due to the direct correspondence between Clopper-Pearson confidence bounds and binomial tests, we can modify our abstention mechanism to obtain exactly the same set of abstaining predictions: We simply have to use q̲_n ≤ τ instead of q̲_n ≤ 0.5 as our abstention criterion. Finally, it should be noted that our proposed collective certificate based on linear programming is at least as strong as the naïve collective certificate (see Eq. 1.1 and Eq. 1.2 in § 3.2). Thus, letting the set of targeted predictions T be the set of all non-abstaining predictions and checking whether the collective certificate guarantees robustness for all of T will also result in a certificate that is at least as strong as that of Fischer et al. (2021) in their setting.

I COMPARISON TO THE COLLECTIVE CERTIFICATE OF SCHUCHARDT ET AL. (2021)

In the following, we first present the collective certificate for binary graph-structured data proposed by Schuchardt et al. (2021) (see § I.1). We then show that, when using sparsity-aware smoothing distributions (Bojchevski et al., 2020), the family of smoothing distributions used both in our work and that of Schuchardt et al. (2021), our certificate subsumes their certificate. That is, our collective robustness certificate based on localized randomized smoothing can provide the same robustness guarantees (see § I.2).

I.1 THE COLLECTIVE CERTIFICATE

Their certificate assumes the input space to be G = {0, 1}^{N×D} × {0, 1}^{N×N}, the set of undirected attributed graphs with N nodes and D attributes per node. The model is assumed to be a multi-output classifier f : G → Y^N that assigns a label from label set Y to each of the nodes. Given an input graph G = (X, A) and a corresponding prediction y = f(G), they want to certify collective robustness to a set of perturbed graphs B ⊆ G. The perturbation model B is characterized by four scalar parameters r^+_X, r^−_X, r^+_A, r^−_A ∈ N_0, specifying the number of bits the adversary is allowed to add (0 → 1) and delete (1 → 0) in the attribute and adjacency matrix, respectively. It can also be extended to feature additional constraints (e.g. per-node budgets). We discuss how these can be integrated after showing our main result. A formal definition of the perturbation model can be found in Section B of (Schuchardt et al., 2021). The goal of their work is to certify collective robustness for a set of targeted nodes T ⊆ {1, ..., N}, i.e. compute a lower bound on min_{G′∈B} Σ_{n∈T} I[f_n(G′) = y_n]. Their approach to obtaining this lower bound shares the same high-level idea as ours (see § 3.2): combining per-prediction base certificates and leveraging some notion of locality. But while our method uses localized randomized smoothing, i.e. smoothing different outputs with different non-i.i.d. smoothing distributions to obtain base certificates that encode locality, their method uses a-priori knowledge about the strict locality of the classifier f. A model is strictly local if each of its outputs f_n only operates on a well-defined subset of the input data. To encode this strict locality, Schuchardt et al. (2021) associate each output f_n with an indicator vector ψ^{(n)} and an indicator matrix Ψ^{(n)} that fulfill
Σ_{m=1}^{N} Σ_{d=1}^{D} ψ^{(n)}_m · I[X_{m,d} ≠ X′_{m,d}] + Σ_{i=1}^{N} Σ_{j=1}^{N} Ψ^{(n)}_{i,j} · I[A_{i,j} ≠ A′_{i,j}] = 0 ⟹ f_n(X, A) = f_n(X′, A′) (126)
for any perturbed graph G′ = (X′, A′). Eq. 126 expresses that the prediction of output f_n remains unchanged if all inputs in its receptive field remain unchanged. Conversely, it expresses that perturbations outside the receptive field can be ignored. Unlike in our work, Schuchardt et al. (2021) describe their base certificates as sets in adversarial budget space. That is, some certification procedure is applied to each output f_n to obtain a set K^{(n)} ⊆ [r^+_X] × [r^−_X] × [r^+_A] × [r^−_A] of certifiable budgets: If (c^+_X, c^−_X, c^+_A, c^−_A) ∈ K^{(n)}, then prediction y_n is robust to any perturbed input with exactly c^+_X attribute additions, c^−_X attribute deletions, c^+_A edge additions and c^−_A edge deletions. A more detailed explanation can be found in Section 3 of (Schuchardt et al., 2021). Note that the base certificates only depend on the number of perturbations, not their location in the input. Only by combining them using the receptive field indicators from Eq. 126 can one obtain a collective certificate that is better than the naïve collective certificate (i.e. counting how many predictions are certifiably robust to the collective threat model). The resulting collective certificate is given in Eqs. 127 to 130, which are discussed below. For the subsumption proof, we define localized smoothing distributions that retain the original noise level inside each receptive field and flip all attribute bits outside it to 0 or 1 with equal probability. Similarly, we sample random adjacency matrices from distribution S(A, Θ^{+(n)}_A, Θ^{−(n)}_A) with Θ^{+(n)}_A, Θ^{−(n)}_A ∈ [0, 1]^{N×N} and
Θ^{+(n)}_{A,i,j} = Ψ^{(n)}_{i,j} · p^+_A + (1 − Ψ^{(n)}_{i,j}) · 0.5, (138)
Θ^{−(n)}_{A,i,j} = Ψ^{(n)}_{i,j} · p^−_A + (1 − Ψ^{(n)}_{i,j}) · 0.5, (139)
where Ψ^{(n)} is the receptive field indicator matrix defined in Eq. 126. Note that, since we only alter the distribution of bits outside the receptive field, the smoothed prediction y_n = f_n(X, A) will be the same as the one obtained via the smoothing distribution used by Schuchardt et al. (2021).
Applying Corollary F.6 (to the flattened and concatenated attribute and adjacency matrices) shows that smoothed prediction y_n = f_n(X, A) is robust to the perturbed graph (X′, A′) if
Σ_{m=1}^{N} Σ_{d=1}^{D} (τ^+_{X,m,d} · I[X_{m,d} = 0 ≠ X′_{m,d}] + τ^−_{X,m,d} · I[X_{m,d} = 1 ≠ X′_{m,d}])
+ Σ_{i=1}^{N} Σ_{j=1}^{N} (τ^+_{A,i,j} · I[A_{i,j} = 0 ≠ A′_{i,j}] + τ^−_{A,i,j} · I[A_{i,j} = 1 ≠ A′_{i,j}]) < η^{(n)}. (140)
Because we only changed the distribution outside the receptive field, the scalar η^{(n)}, which depends on the output distribution's mean µ and variance σ, will be the same as the one obtained via the smoothing scheme used by Schuchardt et al. (2021). The adjacency weights are
τ^+_{A,i,j} = Ψ^{(n)}_{i,j} · γ^+_A + (1 − Ψ^{(n)}_{i,j}) · ln((1 − 0.5)² / 0.5 + 0.5² / (1 − 0.5)), (143)
τ^−_{A,i,j} = Ψ^{(n)}_{i,j} · γ^−_A + (1 − Ψ^{(n)}_{i,j}) · ln((1 − 0.5)² / 0.5 + 0.5² / (1 − 0.5)), (144)
where the γ are the same weights as those of the base certificate Eq. 131 of Schuchardt et al. (2021), and the τ_X are defined analogously using ψ^{(n)}. Inserting the above values of τ into the base certificate Eq. 140 and using the fact that ln((1 − 0.5)² / 0.5 + 0.5² / (1 − 0.5)) = ln(1) = 0 results in
Σ_{m=1}^{N} Σ_{d=1}^{D} (ψ^{(n)}_m · γ^+_X · I[X_{m,d} = 0 ≠ X′_{m,d}] + ψ^{(n)}_m · γ^−_X · I[X_{m,d} = 1 ≠ X′_{m,d}])
+ Σ_{i=1}^{N} Σ_{j=1}^{N} (Ψ^{(n)}_{i,j} · γ^+_A · I[A_{i,j} = 0 ≠ A′_{i,j}] + Ψ^{(n)}_{i,j} · γ^−_A · I[A_{i,j} = 1 ≠ A′_{i,j}]) < η^{(n)}. (145)
While our collective certificate derived in § 4 only considers one perturbation type, we have already discussed how to certify robustness to perturbation models with multiple perturbation types in § F.3.2: We use a different budget variable per input dimension and perturbation type. Furthermore, the attribute bits of each node share the same noise level. Therefore, we can use the method discussed in § E.3, i.e. use a single budget variable per node instead of one per node and attribute.
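The masking effect of fully randomizing bits outside the receptive field can be illustrated with a small sketch, assuming the per-bit weight takes the form appearing in Eqs. 143 and 144 (the function names are ours):

```python
import numpy as np


def outside_weight(p):
    # Per-bit certificate weight for flip probability p (form suggested by
    # Eqs. 143 and 144); equals ln(1) = 0 at p = 0.5.
    return np.log((1 - p) ** 2 / p + p ** 2 / (1 - p))


def masked_weights(receptive_field, gamma):
    """Bits inside the receptive field keep weight gamma; bits outside it
    are smoothed with p = 0.5 and thus contribute weight 0."""
    rf = np.asarray(receptive_field, dtype=float)
    return rf * gamma + (1 - rf) * outside_weight(0.5)


w = masked_weights([1, 1, 0, 0, 1], gamma=0.8)  # zero weight outside the field
```

Perturbations of the zero-weight bits never consume any of the certified radius η^{(n)}, which is exactly the masking behavior encoded by ψ^{(n)} and Ψ^{(n)} in Eq. 145.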



An implementation will be made available at https://www.cs.cit.tum.de/daml/localized-smoothing.
In practice, all probabilities have to be estimated using Monte Carlo sampling (see discussion in § G).
This term is equivalent to the exponential of the Rényi divergence exp(D_α(Ψ_{x′} || Ψ_x)) with α = 2.
I.e. add up confusion matrices over the entire dataset, compute per-class IOUs and average over all classes.
Note that they never evaluated their approach on image segmentation.
This is, however, not necessary for the previously discussed strictly local model.
This research is funded by the Bavarian Ministry of Economic Affairs, Regional Development and Energy with funds from the Hightech Agenda Bayern. Further, it is supported by the German Research Foundation, grant GU 1409/4-1.




Figure 6: Comparison of our LP-based collective certificate for localized randomized smoothing with a 3 × 5 grid to CenterSmooth and SegCertify*, using DeepLabV3 on Cityscapes. Increasing the number of samples used for certifying each output from 6400 = 153600 / 24

Using(Bojchevski et al., 2020) for the naïve isotropic smoothing baseline.

Robustness to deletions, using S(x, 0.01, θ−). Robustness to additions, using S(x, 0.01, θ−).

Robustness to deletions, using S(x, 0.01, θ−). Robustness to additions, using S(x, 0.01, θ−).

LOWER CERTIFIABLE ROBUSTNESS TO ADDITIONS. Using (Bojchevski et al., 2020) for the naïve isotropic smoothing certificate.

Certificate weight w+ for S(x, 0.01, θ−) for varying θ−. Certificate weight w+ for S(x, θ+, 0.6) for varying θ+.

2.46) (MOSEK ApS, 2019) through the CVXPY interface (version 1.1.13).

C.3 NODE CLASSIFICATION

Here, we provide all parameters of our experiments on node classification.

per input dimension d with a smaller weight vector u^{(i)} featuring one weight u^{(i)}_l per input subset J^{(l)}. For our linear program, this means that we no longer need a budget vector b ∈ R^{D_in}_+ to model the elementwise distance |x′_d − x_d|^p in each dimension d. Instead, we can use a smaller budget vector b ∈ R^{N_in}_+ to model the overall distance within each input subset J^{(l)}, i.e. b_l = Σ_{d∈J^{(l)}} |x′_d − x_d|^p. Combined with the quantization of certificate parameters from the previous section, our optimization problem becomes min over i ∈ {1, ..., N_out}, j ∈ {1, ..., N_bins}

GAUSSIAN SMOOTHING FOR l_2 PERTURBATIONS OF CONTINUOUS DATA

Proposition F.1. Given an output g_n : R^{D_in} → Y, let f_n(x) = argmax_{y∈Y} Pr_{z∼N(x,Σ)} [g_n(z) = y] be the corresponding smoothed output with Σ = diag(σ)² and σ ∈ R^{D_in}_+. Given an input x ∈ R^{D_in} and smoothed prediction

Figure 13: Worst-case difference in certifiable radius η between ANCER (Eiras et al., 2022) and the variance-constrained certificate for anisotropic Gaussian smoothing. The dashed line indicates combinations of prediction probability q_{y_n} and expected softmax score µ for which both certificates are equally strong.

et al. Due to Corollary F.6 and the definition of our smoothing distribution parameters in Eqs. (136) to (139), the scalars τ^+_{X,m,d}, τ^−_{X,m,d}, τ^+_{A,i,j}, τ^−

Our goal is to verify that, if a model is sufficiently local, localized smoothing offers a better accuracy-robustness tradeoff than isotropic smoothing. As an extreme example, we construct a strictly local model from our U-Net segmentation model. This modified model partitions each image into a grid of size 2 × 2. It then iterates over cells (i, j), sets all values outside cell (i, j) to 0 and applies the original model. Finally, it stitches all 4 segmentation masks into a single one. For such a strictly local model, we can apply localized smoothing with the same 2 × 2 grid and σ_max → ∞ to recover the certificate of Schuchardt et al. (2021) (see § I). Fig. 2 compares the resulting trade-off for σ_min = σ_iso to that of both isotropic smoothing baselines, using 153600 Monte Carlo samples. Localized smoothing yields the same mIOUs as SegCertify*, but up to 22.4 p.p. larger ACR. Both approaches Pareto-dominate center smoothing.

In Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022, 2022.
Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, and Qiang Liu. Black-box certification with randomized smoothing: A functional optimization based framework. In Advances in Neural Information Processing Systems, 2020.
Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations, 2019.
Daniel Zügner and Stephan Günnemann. Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.


Proof. Using the definitions of µ and ξ, as well as some simple algebra, we can show: It is well known that the variance fulfills E_{z∼D} [(z − µ)²] = E_{z∼D} [z²] − µ². Because the variance is always non-negative, the above inequality holds. Using the previously described approach and lemmata, we can show the soundness of the following robustness certificate:

Theorem 5.1 (Variance-constrained certification). Given a function g : X → ∆^{|Y|} mapping from discrete set X to scores from the (|Y| − 1)-dimensional probability simplex, let f(x) = argmax_{y∈Y} E_{z∼Ψ_x} [g(z)_y] with smoothing distribution Ψ_x and probability mass function π_x(z) = Pr_{z̃∼Ψ_x} [z̃ = z]. Given an input x ∈ X and smoothed prediction y = f(x), let µ = E_{z∼Ψ_x} [g(z)_y].

Proof. Following our discussion above, we know that f(x′) = y if min_{h∈F} E_{z∼Ψ(x′)} [h(z)] > 0.5, with F defined as in Eq. 39. We can compute a (tight) lower bound on min_{h∈F} E_{z∼Ψ(x′)} [h(z)] by following the functional optimization approach for randomized smoothing proposed by Zhang et al. (2020). That is, we solve a dual problem in which we optimize the value h(z) for each z ∈ X. By the definition of the set F, we write out the optimization problem and form the corresponding dual problem with dual variables α, β ≥ 0. We first move all terms that do not involve h out of the inner optimization problem. Writing out the expectation terms and combining them into one sum (or, in the case of continuous X, one integral), we obtain the dual problem (recall that π_{x′} and π_x refer to the probability mass functions of the smoothing distributions). The inner optimization problem can be solved by finding the optimal h(z) at each point z. The variables defined in Eq. 130 model how the adversary allocates their adversarial budget, i.e. how many attributes are perturbed per node and which edges are modified. Eq. 129 ensures that this allocation is compliant with the collective threat model. Finally, in Eq.
128, the indicator vector ψ^{(n)} and matrix Ψ^{(n)} are used to mask out any allocated perturbation budget that falls outside the receptive field of f_n before evaluating its base certificate. To solve the optimization problem, Schuchardt et al. (2021) replace each of the indicator functions with binary variables and include additional constraints to ensure that they have value 1 if and only if the indicator function would have value 1. To do so, they define one linear constraint per point separating the set of certifiable budgets K^{(n)} from its complement in adversarial budget space (the "Pareto front" discussed in Section 3 of (Schuchardt et al., 2021)). From the above explanation, the main drawbacks of this collective certificate compared to our localized randomized smoothing approach and corresponding collective certificate should be clear. Firstly, if the classifier f is not strictly local, i.e. the receptive field indicators ψ and Ψ only have non-zero entries, then all base certificates are evaluated using the entire collective adversarial budget. The certificate thus degenerates to the naïve collective certificate. Secondly, even if the model is strictly local, each of the outputs may assign varying levels of importance to different parts of its receptive field. Their method is incapable of capturing this additional soft locality. Finally, their means of evaluating the base certificates may involve evaluating a large number of linear constraints. Our method, on the other hand, only requires a single constraint per prediction. Our collective certificate can thus be computed more efficiently.

I.2 PROOF OF SUBSUMPTION

In the following, we show that any robustness certificate obtained by using the collective certificate of Schuchardt et al. (2021) with sparsity-aware randomized smoothing base certificates can also be obtained by using our proposed collective certificate with an appropriately parameterized localized smoothing distribution. The fundamental idea is that, for randomly smoothed models, completely randomizing all input dimensions outside the receptive field is equivalent to masking out any perturbations outside the receptive field.

First, we derive the certificate of Schuchardt et al. (2021) for predictions obtained via sparsity-aware smoothing. Schuchardt et al. (2021) require base certificates that guarantee robustness whenever the budget vector (c_X^+, c_X^-, c_A^+, c_A^-) lies in a set K^(n), where the c indicate the number of added and deleted attribute and adjacency bits. That is, the certificates must only depend on the number of perturbations, not on their location. To achieve this, all entries of the attribute matrix and all entries of the adjacency matrix, respectively, must share the same distribution. For the attribute matrix, they define scalar distribution parameters p_X^+, p_X^- ∈ [0, 1]. Given attribute matrix X ∈ {0, 1}^{N×D}, they then sample random attribute matrices Z_X that are distributed according to the sparsity-aware smoothing distribution S(X, 1 · p_X^+, 1 · p_X^-). Given input adjacency matrix A, random adjacency matrices Z_A are sampled from the distribution S(A, 1 · p_A^+, 1 · p_A^-). Applying Corollary F.6 (to the flattened and concatenated attribute and adjacency matrices) shows that the smoothed prediction y_n = f_n(X, A) is robust to a perturbed graph if the condition in Eq. 131 holds, where µ^(n) is the mean and σ^(n) is the variance of the base classifier's output distribution under the input smoothing distribution. Since the indicator functions for each perturbation type in Eq. 131 share the same weights, Eq. 131 can be rewritten as Eq. 132, where c_X^+, c_X^-, c_A^+, c_A^- are the overall numbers of added and deleted attribute and adjacency bits, respectively. Eq. 132 matches the notion of base certificates defined by Schuchardt et al. (2021), i.e. it corresponds to a set K^(n) in adversarial budget space for which we provably know that prediction y_n is robust. When we insert the base certificate Eq. 132 into objective function Eq. 128, the collective certificate of Schuchardt et al. (2021) becomes equivalent to the optimization problem in Eqs. 133 to 135.

Next, we show that obtaining base certificates through localized randomized smoothing with appropriately chosen parameters and using these base certificates within our proposed collective certificate (see Theorem 4.2) results in the same optimization problem. Instead of using the same smoothing distribution for all outputs, we use different distribution parameters for each one. For the n'th output, we sample random attribute matrices from distribution S(X, Θ_X^{+(n)}, Θ_X^{-(n)}). Note that, in order to avoid having to index flattened vectors, we overload the definition of sparsity-aware smoothing to allow for matrix-valued parameters. For example, the value (Θ_X^{+(n)})_{n,d} indicates the probability of flipping the value of input attribute X_{n,d} from 0 to 1, and the value (Θ_X^{-(n)})_{n,d} indicates the probability of flipping the value of input attribute X_{n,d} from 1 to 0. We choose these parameters based on the receptive field indicator vector ψ^(n) defined in Eq. 126 and the same flip probabilities p_X^+, p_X^- ∈ [0, 1] we used for the certificate of Schuchardt et al. (2021). Due to this parameterization, attribute bits inside the receptive field are randomized using the same distribution as in the certificate of Schuchardt et al. (2021), while attribute bits outside are set to 0 or 1 uniformly at random. Modelling our collective problem in this way, using Eq. 145 as our base certificates and rewriting the first two sums using inner products results in an optimization problem that is identical to that of Schuchardt et al. (2021) from Eqs. 133 to 135. The only difference is in how these problems would be mapped to a mixed-integer linear program. We would directly model the indicator functions in the objective using a single linear constraint per prediction. Schuchardt et al. (2021) would use multiple linear constraints, each corresponding to one point in the adversarial budget space.

To summarize: for randomly smoothed models, masking out perturbations using a priori knowledge about a model's strict locality is equivalent to completely randomizing (here: flipping bits with probability 50%) parts of the input. While Schuchardt et al. (2021) only derived their certificate for binary data, this equivalence can also be exploited for strictly local models on continuous data. Considering our certificates for Gaussian (Proposition F.1) and uniform (Proposition F.2) smoothing, where the base certificate weights are 1/σ² and 1/λ, respectively, it is possible to perform the same masking operation as Schuchardt et al. (2021) by letting σ → ∞ and λ → ∞.

Finally, it should be noted that the certificate by Schuchardt et al. (2021) allows for additional constraints, e.g. on the adversarial budget per node or the number of nodes controlled by the adversary. As all of them can be modelled using linear constraints on the budget variables (see Section C of their paper), they can be just as easily integrated into our mixed-integer linear programming certificate.
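The localized parameterization for binary data can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: flip probabilities equal (p_X^+, p_X^-) inside the receptive field (indicated by a mask psi) and 0.5 outside, so bits outside the field become uniform coin flips. All names and the toy graph size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def localized_flip_probs(psi, D, p_plus, p_minus):
    """Matrix-valued parameters Theta^{+(n)}, Theta^{-(n)}: the original
    flip probabilities inside the receptive field (psi[m] == 1) and
    probability 0.5 outside, which completely randomizes those bits."""
    inside = psi.astype(bool)[:, None]            # shape (N, 1), broadcast over D
    theta_plus = np.where(inside, p_plus, 0.5) * np.ones(D)
    theta_minus = np.where(inside, p_minus, 0.5) * np.ones(D)
    return theta_plus, theta_minus

def sample_sparsity_aware(X, theta_plus, theta_minus, rng):
    """Sample Z_X ~ S(X, theta_plus, theta_minus): each 0-bit flips to 1
    with probability theta_plus, each 1-bit flips to 0 with theta_minus."""
    flip_up = (rng.random(X.shape) < theta_plus) & (X == 0)
    flip_down = (rng.random(X.shape) < theta_minus) & (X == 1)
    return np.where(flip_up, 1, np.where(flip_down, 0, X))

# Toy graph: N = 4 nodes with D = 3 binary attributes each; the receptive
# field of output n covers nodes 0 and 2 (illustrative choice).
X = rng.integers(0, 2, size=(4, 3))
psi = np.array([1, 0, 1, 0])
theta_plus, theta_minus = localized_flip_probs(psi, D=3, p_plus=0.01, p_minus=0.6)
Z_X = sample_sparsity_aware(X, theta_plus, theta_minus, rng)
```

Outside the receptive field every sampled bit is Bernoulli(0.5) regardless of its original value, which is precisely the sense in which complete randomization makes perturbations there invisible to the smoothed classifier.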
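For continuous data, the masking-by-randomization argument can be illustrated with the base-certificate weights 1/σ²: letting σ → ∞ outside the receptive field drives the weight of those dimensions to zero, so perturbing them consumes essentially no certificate budget. The quadratic form below is a schematic stand-in for the certificate condition of Proposition F.1, not its exact statement.

```python
import numpy as np

def weighted_budget(delta, sigma):
    """Schematic Gaussian base-certificate budget: each perturbed dimension
    d contributes delta_d^2 / sigma_d^2, i.e. carries weight 1 / sigma_d^2."""
    return float(np.sum(delta ** 2 / sigma ** 2))

psi = np.array([1, 1, 0, 0])          # receptive field covers dimensions 0 and 1
sigma = np.where(psi == 1, 0.5, 1e9)  # huge sigma approximates sigma -> infinity

# A large perturbation outside the receptive field consumes (almost) no
# budget, while a small perturbation inside consumes a non-trivial amount:
outside = weighted_budget(np.array([0.0, 0.0, 100.0, 100.0]), sigma)
inside = weighted_budget(np.array([0.1, 0.1, 0.0, 0.0]), sigma)
print(outside)  # ≈ 2e-14, effectively masked
print(inside)   # ≈ 0.08
```

In the limit σ → ∞ the outside contribution vanishes exactly, reproducing the masking behaviour of the strictly local certificate.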

