EFFICIENT CONDITIONALLY INVARIANT REPRESENTATION LEARNING

Abstract

We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features φ(X) of data X to estimate a target Y, while being conditionally independent of a distractor Z given Y. Both Z and Y are assumed to be continuous-valued but relatively low dimensional, whereas X and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from Y to kernelized features of Z, which can be done in advance. It is then only necessary to enforce independence of φ(X) from residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if φ(X) ⊥⊥ Z | Y. In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features.

* Equal contribution. † Code for image data experiments is available at github.com/namratadeka/circe

We begin by providing a general-purpose characterization of conditional independence. We then introduce CIRCE, a conditional independence criterion based on this characterization, which is zero if and only if conditional independence holds (under certain required conditions). We provide a finite sample estimate with convergence guarantees, and strategies for efficient estimation from data.

1. INTRODUCTION

We consider a learning setting where we have labels Y that we would like to predict from features X, and we additionally observe some metadata Z that we would like our prediction to be 'invariant' to. In particular, our aim is to learn a representation function φ for the features such that φ(X) ⊥⊥ Z | Y. There are at least three motivating settings where this task arises.

1. Fairness. In this context, Z is some protected attribute (e.g., race or sex) and the condition φ(X) ⊥⊥ Z | Y is the equalized odds condition (Mehrabi et al., 2021).

2. Domain invariant learning. In this case, Z is a label for the environment in which the data was collected (e.g., if we collect data from multiple hospitals, Z_i labels the hospital that the i-th datapoint is from). The condition φ(X) ⊥⊥ Z | Y is sometimes used as a target for invariant learning (e.g., Long et al., 2018; Tachet des Combes et al., 2020; Goel et al., 2021; Jiang & Veitch, 2022). Wang & Veitch (2022) argue that this condition is well-motivated in cases where Y causes X.

3. Causal representation learning. Neural networks may learn undesirable "shortcuts" for their tasks, e.g., classifying images based on the texture of the background. To mitigate this issue, various schemes have been proposed to force the network to use causally relevant factors in its decision (e.g., Veitch et al., 2021; Makar et al., 2022; Puli et al., 2022). The structural causal assumptions used in such approaches imply conditional independence relationships between the features we would like the network to use and observed metadata that we may wish to be invariant to. These approaches then try to learn causally structured representations by enforcing this conditional independence in a learned representation.

In this paper, we are largely agnostic to the motivating application, instead concerning ourselves with how to learn a representation φ that satisfies the target condition.
Our interest is in the (common) case where X is some high-dimensional structured data, e.g., text, images, or video, and we would like to model the relationship between X and (the relatively low-dimensional) Y, Z using a neural network representation φ(X). There are a number of existing techniques for learning conditionally invariant representations using neural networks (e.g., in all the motivating applications mentioned above). Usually, however, they rely on the labels Y being categorical with a small number of categories. We develop a method for conditionally invariant representation learning that is effective even when the labels Y and attributes Z are continuous or moderately high-dimensional. To understand the challenge, it is helpful to contrast with the task of learning a representation φ satisfying the marginal independence φ(X) ⊥⊥ Z. To accomplish this, we might define a neural network to predict Y in the usual manner, interpret the penultimate layer as the representation φ, and then add a regularization term that penalizes some measure of dependence between φ(X) and Z. As φ changes at each step, we would typically compute an estimate based on the samples in each mini-batch (e.g., Beutel et al., 2019; Veitch et al., 2021). The challenge in extending this procedure to conditional invariance is that conditional dependence is considerably harder to measure. More precisely, since conditioning on Y "splits" the available data, we require large samples to assess conditional independence. When regularizing neural network training, however, we only have the samples available in each mini-batch: often not enough for a reliable estimate. The main contribution of this paper is a technique that reduces the problem of learning a conditionally independent representation to the problem of learning a marginally independent representation, following a characterization of conditional independence due to Daudin (1980).
We first construct a particular statistic ζ(Y, Z) such that enforcing the marginal independence φ(X) ⊥⊥ ζ(Y, Z) is (approximately) equivalent to enforcing φ(X) ⊥⊥ Z | Y. Constructing ζ requires only a single kernel ridge regression from Y to features of Z, building on established tools for conditional mean embeddings (Song et al., 2009; Grunewalder et al., 2012; Park & Muandet, 2020; Li et al., 2022). This makes CIRCE a suitable regularizer for any setting where the conditional independence relation φ(X) ⊥⊥ Z | Y should be enforced when learning φ(X). In particular, the learned relationship between Z and Y does not depend on the mini-batch size, sidestepping the tension between small mini-batches and the need for large samples to estimate conditional dependence. Moreover, when sufficiently expressive features (those corresponding to a characteristic kernel) are employed, CIRCE is zero if and only if φ(X) ⊥⊥ Z | Y: this result may be of broader interest, for instance in causal structure learning (Zhang et al., 2011) and hypothesis testing (Fukumizu et al., 2008; Shah & Peters, 2020; Huang et al., 2022). Our paper proceeds as follows: in Section 2, we introduce the relevant characterization of conditional independence from Daudin (1980), followed by our CIRCE criterion; we establish that CIRCE is indeed a measure of conditional independence, and provide a consistent empirical estimate with finite sample guarantees. Next, in Section 3, we review alternative measures of conditional dependence. Finally, in Section 4, we demonstrate CIRCE in two practical settings: a series of counterfactual invariance benchmarks due to Quinzan et al. (2022), and image data extraction tasks on which a "cheat" variable is observed during training.

2.1. CONDITIONAL INDEPENDENCE

We begin with a natural definition of conditional independence for real random variables:

Definition 2.1 (Daudin, 1980). X and Z are Y-conditionally independent, X ⊥⊥ Z | Y, if for all test functions g ∈ L²_{XY} and h ∈ L²_{ZY}, i.e. for all square-integrable functions of (X, Y) and (Z, Y) respectively, we have almost surely in Y that

E_{XZ}[g(X, Y) h(Z, Y) | Y] = E_X[g(X, Y) | Y] E_Z[h(Z, Y) | Y].   (1)

The following classic result provides an equivalent formulation:

Proposition 2.2 (Daudin, 1980). X and Z are Y-conditionally independent if and only if for all test functions g ∈ E₁ = {g ∈ L²_{XY} | E_X[g(X, Y) | Y] = 0} and h ∈ E₂ = {h ∈ L²_{ZY} | E_Z[h(Z, Y) | Y] = 0}, it holds that

E[g(X, Y) h(Z, Y)] = 0.   (2)

Daudin (1980) notes that this condition can be further simplified (see Corollary A.3 for a proof):

Proposition 2.3 (Equation 3.9 of Daudin, 1980). X and Z are Y-conditionally independent if and only if for all g ∈ L²_X and h ∈ E₂ = {h ∈ L²_{ZY} | E_Z[h(Z, Y) | Y] = 0}, it holds that

E[g(X) h(Z, Y)] = 0.   (3)

An equivalent way of writing this last condition (see Lemma B.1 for a formal proof) is: for all g ∈ L²_X and h ∈ L²_{ZY},

E[g(X) (h(Z, Y) − E_{Z′}[h(Z′, Y) | Y])] = 0.   (4)

The reduction to g not depending on Y is crucial for our method: when we are learning the representation φ(X), evaluating the conditional expectations E_X[g(φ(X), Y) | Y] from Proposition 2.2 on every minibatch in gradient descent requires impractically many samples, but E_Z[h(Z, Y) | Y] does not depend on X and so can be pre-computed before training the network.
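As a sanity check on this last characterization, the centered-covariance condition can be verified by Monte Carlo for a simple pair of test functions. This is an illustrative example of ours, not from the paper: we take g(X) = X and h(Z, Y) = Z in a linear-Gaussian model where E[Z | Y] = Y is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Common cause Y; X and Z are noisy functions of Y.
y = rng.standard_normal(n)
n1 = rng.standard_normal(n)
n2 = rng.standard_normal(n)

# Test functions: g(X) = X, h(Z, Y) = Z; below E[h | Y] = E[Z | Y] = Y.
# Case 1: X = Y + n1, Z = Y + n2  =>  X independent of Z given Y.
x = y + n1
z_ind = y + n2
stat_ind = np.mean(x * (z_ind - y))  # estimates E[g(X)(h(Z,Y) - E[h|Y])]

# Case 2: Z shares the noise n1 with X  =>  conditionally dependent given Y.
z_dep = y + n1
stat_dep = np.mean(x * (z_dep - y))

print(stat_ind)  # close to 0
print(stat_dep)  # close to 1 = E[n1^2]
```

The centered statistic vanishes in the conditionally independent case but not in the dependent one, which is exactly the signal CIRCE builds on.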

2.2. CONDITIONAL INDEPENDENCE REGRESSION COVARIANCE (CIRCE)

The characterization (4) of conditional independence is still impractical, as it requires checking all pairs of square-integrable functions g and h. We will now transform this condition into an easy-to-estimate measure that characterizes conditional independence, using kernel methods.

A kernel k(x, x′) is a symmetric positive-definite function k : X × X → R. A kernel can be represented as an inner product k(x, x′) = ⟨ϕ(x), ϕ(x′)⟩_H for a feature vector ϕ(x) ∈ H, where H is a reproducing kernel Hilbert space (RKHS). These are spaces H of functions f : X → R, with the key reproducing property ⟨ϕ(x), f⟩_H = f(x) for any f ∈ H. For M points, we denote by K_{X•} a row vector of ϕ(x_i), such that K_{Xx} is an M × 1 matrix with entries k(x_i, x), and K_{XX} is an M × M matrix with entries k(x_i, x_j). For two separable Hilbert spaces G, F, a Hilbert-Schmidt operator A : G → F is a linear operator with a finite Hilbert-Schmidt norm ∥A∥²_{HS(G,F)} = Σ_{j∈J} ∥A g_j∥²_F, where {g_j}_{j∈J} is an orthonormal basis of G (for finite-dimensional Euclidean spaces, obtained from a linear kernel, A is just a matrix and ∥A∥_{HS} its Frobenius norm). The Hilbert space HS(G, F) includes in particular the rank-one operators ψ ⊗ ϕ for ψ ∈ F, ϕ ∈ G, representing outer products:

[ψ ⊗ ϕ] g = ψ ⟨ϕ, g⟩_G,   ⟨A, ψ ⊗ ϕ⟩_{HS(G,F)} = ⟨ψ, A ϕ⟩_F.

See Gretton (2022, Lecture 5) for further details.

We next introduce a kernelized operator which (for RKHS functions g and h) reproduces the condition in (4), which we call the Conditional Independence Regression CovariancE (CIRCE).

Definition 2.4 (CIRCE operator). Let G be an RKHS with feature map ϕ : X → G, and F an RKHS with feature map ψ : (Z × Y) → F, with both kernels bounded: sup_x ∥ϕ(x)∥ < ∞, sup_{z,y} ∥ψ(z, y)∥ < ∞. Let X, Y, and Z be random variables taking values in X, Y, and Z respectively. The CIRCE operator is

C^c_{XZ|Y} = E[ϕ(X) ⊗ (ψ(Z, Y) − E_{Z′}[ψ(Z′, Y) | Y])] ∈ HS(G, F).
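For finite-dimensional (linear-kernel) feature spaces, the Hilbert-Schmidt norm is the Frobenius norm, and the rank-one operator identities above can be checked directly. A minimal numeric sketch of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A : G -> F as a matrix; psi in F; phi, g in G (linear-kernel case).
A = rng.standard_normal((4, 3))
psi = rng.standard_normal(4)
phi = rng.standard_normal(3)
g = rng.standard_normal(3)

outer = np.outer(psi, phi)  # the rank-one operator psi ⊗ phi

# [psi ⊗ phi] g = psi <phi, g>
assert np.allclose(outer @ g, psi * (phi @ g))

# <A, psi ⊗ phi>_HS = <psi, A phi>  (HS inner product = Frobenius inner product)
assert np.allclose(np.sum(A * outer), psi @ (A @ phi))

# ||A||_HS equals the Frobenius norm for matrices
assert np.isclose(np.sqrt(np.sum(A * A)), np.linalg.norm(A, "fro"))
```

These identities are what allow the CIRCE operator to be evaluated against products g ⊗ h of test functions below.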
For any two functions g ∈ G and h ∈ F, Definition 2.4 gives rise to the same expression as in (4),

⟨C^c_{XZ|Y}, g ⊗ h⟩_{HS} = E[g(X) (h(Z, Y) − E_{Z′}[h(Z′, Y) | Y])].   (8)

The assumption that the kernels are bounded in Definition 2.4 guarantees Bochner integrability (Steinwart & Christmann, 2008, Def. A.5.20), which allows us to exchange expectations with inner products as above: the argument is identical to that of Gretton (2022, Lecture 5) for the case of the unconditional feature covariance. For unbounded kernels, Bochner integrability can still hold under appropriate conditions on the distributions over which we take expectations: e.g., a linear kernel works if the mean exists, and energy distance kernels may have well-defined (conditional) feature covariances when the relevant moments exist (Sejdinovic et al., 2013).

Our goal now is to define a kernel statistic which is zero iff the CIRCE operator C^c_{XZ|Y} is zero. One option would be to seek the functions, subject to a bound such as ∥g∥_G ≤ 1 and ∥h∥_F ≤ 1, that maximize (8); this would correspond to computing the largest singular value of C^c_{XZ|Y}. For unconditional covariances, the equivalent statistic corresponds to the Constrained Covariance, whose computation requires solving an eigenvalue problem (e.g., Gretton et al., 2005a, Lemma 3). We instead follow the same procedure as for unconditional kernel dependence measures, and replace the spectral norm with the Hilbert-Schmidt norm (Gretton et al., 2005b): both are zero when C^c_{XZ|Y} is zero, but as we will see in Section 2.3 below, the Hilbert-Schmidt norm has a simple closed-form empirical expression, requiring no optimization. Next, we show that for rich enough RKHSes G, F (including, for instance, those with a Gaussian kernel), the Hilbert-Schmidt norm of C^c_{XZ|Y} characterizes conditional independence.

Theorem 2.5. For G and F with L₂-universal kernels (see, e.g., Sriperumbudur et al., 2011), ∥C^c_{XZ|Y}∥_{HS} = 0 if and only if X ⊥⊥ Z | Y.
The "if" direction is immediate from the definition of C^c_{XZ|Y}. The "only if" direction uses the fact that the RKHS is dense in L², and therefore if (8) is zero for all RKHS elements, it must be zero for all L² functions. See Appendix B for the proof. Therefore, minimizing an empirical estimate of ∥C^c_{XZ|Y}∥_{HS} will approximately enforce the conditional independence we need.

Definition 2.6. For convenience, we define CIRCE(X, Z, Y) = ∥C^c_{XZ|Y}∥²_{HS}. In the next two sections, we construct a differentiable estimator of this quantity from samples.

2.3. REGULARIZER

To estimate CIRCE, we first need to estimate the conditional expectation μ_{ZY|Y}(y) = E_Z[ψ(Z, y) | Y = y]. We define ψ(Z, Y) = ψ(Z) ⊗ ψ(Y), which for radial basis kernels (e.g., Gaussian, Laplace) is L₂-universal for (Z, Y). Therefore,

μ_{ZY|Y}(y) = E_Z[ψ(Z) | Y = y] ⊗ ψ(y) = μ_{Z|Y}(y) ⊗ ψ(y).

The CIRCE operator can be written as

C^c_{XZ|Y} = E[ϕ(X) ⊗ ψ(Y) ⊗ (ψ(Z) − μ_{Z|Y}(Y))].

We need two datasets to compute the estimator: a holdout set of size M used to estimate conditional expectations, and the main set of size B (e.g., a mini-batch). The holdout dataset is used to estimate the conditional expectation μ_{ZY|Y} with kernel ridge regression. This requires choosing the ridge parameter λ and the kernel parameters for Y. We obtain both of these using leave-one-out cross-validation; we derive a closed-form expression for the error by generalizing the result of Bachmann et al. (2022) to RKHS-valued "labels" for regression (see Theorem C.1). The following theorem defines an empirical estimator of the Hilbert-Schmidt norm of the empirical CIRCE operator, and establishes the consistency of this statistic as the numbers of training samples B, M increase. The proof and a formal description of the conditions may be found in Appendix C.2.

Theorem 2.7. The following estimator of CIRCE for B points and M holdout points (for the conditional expectation),

ĈIRCE = (1 / (B(B − 1))) Tr[K_{XX} (K_{YY} ⊙ K̂^c_{ZZ})],

converges as O_p(1/√B + 1/M^{(β−1)/(2(β+p))}) when the regression in Equation (30) is well-specified. K_{XX} and K_{YY} are kernel matrices of X and Y; elements of K̂^c_{ZZ} are defined as K̂^c_{zz′} = ⟨ψ(z) − μ̂_{Z|Y}(y), ψ(z′) − μ̂_{Z|Y}(y′)⟩; β ∈ (1, 2] characterizes how well-specified the solution is, and p ∈ (0, 1] describes the eigenvalue decay rate of the covariance operator over Y. The notation O_p(A) roughly states that with any constant probability, the estimator is O(A).

Remark.
For the smoothly well-specified case we have β = 2, and for a Gaussian kernel p is arbitrarily close to zero, giving a rate O_p(1/√B + 1/M^{1/4}). The 1/M^{1/4} rate comes from conditional expectation estimation, where it is minimax-optimal for the well-specified case (Li et al., 2022). Using kernels whose eigenvalues decay more slowly than the Gaussian's would slow the convergence rate (see Li et al., 2022, Theorem 2). The algorithm is summarized in Algorithm 1. We can further improve the computational complexity for large training sets with random Fourier features (Rahimi & Recht, 2007); see Appendix D.

Algorithm 1: Estimation of CIRCE
Input: holdout data {(z_i, y_i)}_{i=1}^M, mini-batch {(x_i, z_i, y_i)}_{i=1}^B.
Holdout stage:
  1. Leave-one-out cross-validation (Theorem C.1) for λ (ridge parameter) and σ_y (parameters of the Y kernel):
     λ, σ_y = argmin Σ_{i=1}^M ∥ψ(z_i) − K_{y_i Y}(K_{YY} + λI)^{−1} K_{Z•}∥²_{H_Z} / (1 − (K_{YY}(K_{YY} + λI)^{−1})_{ii})².
  2. W₁ = (K_{YY} + λI)^{−1}, W₂ = W₁ K_{ZZ} W₁.
Mini-batch stage:
  3. Compute kernel matrices K_{xx}, K_{yy}, K_{zz}, K_{yY}, K_{Zz} (x, y, z: mini-batch; Y, Z: holdout).
  4. K̂^c = K_{yy} ⊙ (K_{zz} − K_{yY} W₁ K_{Zz} − (K_{yY} W₁ K_{Zz})ᵀ + K_{yY} W₂ K_{Yy}).
  5. ĈIRCE = (1 / (B(B − 1))) Tr[K_{xx} K̂^c].

We can use our empirical CIRCE as a regularizer for conditionally invariant representation learning, where the goal is to learn representations that are conditionally independent of a known distractor Z. We replace X by an encoder φ_θ(X). If the task is to predict Y using some loss L(φ_θ(X), Y), the CIRCE-regularized loss with regularization weight γ > 0 is

min_θ L(φ_θ(X), Y) + γ ĈIRCE(φ_θ(X), Z, Y).
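The estimation procedure above can be sketched in a few lines of numpy. This is a simplified sketch under our own assumptions: Gaussian kernels throughout, a fixed ridge parameter and bandwidth rather than the leave-one-out selection, and all function names are ours.

```python
import numpy as np

def gauss_K(a, b, sigma=1.0):
    """Gaussian kernel matrix between row-stacked samples a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def circe_estimate(x, z, y, z_hold, y_hold, lam=1e-3, sigma=1.0):
    """CIRCE estimate for a mini-batch (x, z, y); the conditional mean
    embedding of Z given Y is fitted by ridge regression on a holdout set."""
    B, M = len(x), len(y_hold)
    K_YY = gauss_K(y_hold, y_hold, sigma)        # holdout Y kernel
    W1 = np.linalg.inv(K_YY + lam * np.eye(M))
    W2 = W1 @ gauss_K(z_hold, z_hold, sigma) @ W1
    K_xx = gauss_K(x, x, sigma)
    K_yy = gauss_K(y, y, sigma)
    K_zz = gauss_K(z, z, sigma)
    K_yY = gauss_K(y, y_hold, sigma)             # B x M
    K_Zz = gauss_K(z_hold, z, sigma)             # M x B
    A = K_yY @ W1 @ K_Zz                         # <mu_hat(y_i), psi(z_j)>
    Kc = K_yy * (K_zz - A - A.T + K_yY @ W2 @ K_yY.T)
    return np.trace(K_xx @ Kc) / (B * (B - 1))
```

Since Kc is a Schur product of positive semi-definite matrices, the trace is non-negative up to floating-point error; it is near zero when φ(X) ⊥⊥ Z | Y and larger under conditional dependence.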

3. RELATED WORK

We review prior work on kernel-based measures of conditional independence used to determine or enforce X ⊥⊥ Z | Y, including the measures we compare against in our experiments in Section 4. We begin with procedures based on kernel conditional feature covariances. The conditional kernel cross-covariance was first introduced as a measure of conditional dependence by Sun et al. (2007). Following this work, a kernel-based conditional independence test (KCI) was proposed by Zhang et al. (2011). The latter test relies on satisfying Proposition 2.2, leading to a statistic that requires regression of φ(X) on Y in every minibatch (as well as of Z on Y, as in our setting). More recently, Quinzan et al. (2022) introduced a variant of the Hilbert-Schmidt Conditional Independence Criterion (HSCIC; Park & Muandet, 2020) as a regularizer to learn a generalized notion of counterfactually-invariant representations (Veitch et al., 2021). Estimating HSCIC(X, Z | Y) from finite samples requires estimating the conditional mean embeddings μ_{X,Z|Y}, μ_{X|Y} and μ_{Z|Y} via regressions (Grunewalder et al., 2012). HSCIC requires three times as many regressions as CIRCE, of which two must be done online in minibatches to account for the conditional cross-covariance terms involving X. We compare against HSCIC in our experiments, as it is representative of this class of methods and has been employed successfully in a setting similar to ours.

Alternative measures of conditional independence make use of additional normalization over the measures described above. The Hilbert-Schmidt norm of the normalized cross-covariance was introduced as a test statistic for conditional independence by Fukumizu et al. (2008), and was used for structure identification in directed graphical models. Huang et al. (2022) proposed using the ratio of the maximum mean discrepancy (MMD) between P_{X|ZY} and P_{X|Y} to the MMD between the Dirac measure at X and P_{X|Y}, as a measure of the conditional dependence between X and Z given Y. The additional normalization terms in these statistics can result in favourable asymptotic properties when used in statistical testing. This comes at the cost of increased computational complexity, and reduced numerical stability when used as regularizers on minibatches. Another approach, due to Shah & Peters (2020), is the Generalized Covariance Measure (GCM). This is a normalized version of the covariance between residuals from kernel ridge regressions of X on Y and of Z on Y (in the multivariate case, a maximum over covariances between univariate regressions is taken). As with the approaches discussed above, the GCM also involves multiple regressions, one of which (regressing X on Y) cannot be done offline. Since the regressions are univariate, and since GCM simply regresses Z and X on Y (instead of ψ(Z, Y) and ϕ(X) on Y), we anticipate that GCM might provide better regularization than HSCIC on minibatches. This comes at a cost, however: by using regression residuals rather than conditionally centered features, there are instances of conditional dependence that will not be detectable. We investigate this further in our experiments.

4. EXPERIMENTS

We conduct experiments addressing two settings: (1) synthetic data of moderate dimension, to study the effectiveness of CIRCE at enforcing conditional independence in established settings (as envisaged, for instance, in econometrics or epidemiology); and (2) high-dimensional image data, with the goal of learning image representations that are robust to domain shifts. We compare performance over all experiments with HSCIC (Quinzan et al., 2022) and GCM (Shah & Peters, 2020). We report in-domain MSE loss, and measure the level of counterfactual invariance of the predictor using the VCF (Quinzan et al., 2022, eq. 4; lower is better). Given X = (A, Y, Z),

VCF := E_{x∼X}[ V_{z′∼Z}( E_{B*_{z′}|X}[B | X = x] ) ],

where P_{B*_{z′}|X} is the counterfactual distribution of B given X = x and an intervention setting z to z′.

The basic setting is as follows: for the in-domain (train) samples, the observed Y and Z are correlated through the true Y as

Y ∼ P_Y,  ξ_z ∼ N(0, σ_z),  Z = β(Y) + ξ_z,
Y′ = Y + ξ_y,  ξ_y ∼ N(0, σ_y),  Z′ = f_z(Y, Z, ξ_z),  X = f_x(Y′, Z′).

Y and Z are observed; f_z is the structural equation for Z′ (in the simplest case Z′ = Z); f_x is the generative process of X. Y′ and Z′ represent noise added during generation and are unobserved. A regular predictor would take advantage of the association β between Z and Y during training, since this is a less noisy source of information on Y. In the unseen out-of-distribution (OOD) regime, where Y and Z are uncorrelated, such a solution would be incorrect. Therefore, our task is to learn a predictor Ŷ = φ(X) that is conditionally independent of Z, φ(X) ⊥⊥ Z | Y, so that during the OOD/testing phase, when the association between Y and Z ceases to exist, the model performance is not harmed as it would be if φ(X) relied on the "shortcut" Z to predict Y.

Univariate Cases

Multivariate Cases We present results on two multivariate cases: case 1 has high-dimensional Z and case 2 has high-dimensional Y. For each multivariate case, we vary the number of dimensions d ∈ {2, 5, 10, 20}. To visualize the trade-offs between in-domain performance and invariant representation, we plot the Pareto front of MSE loss and VCF. With high-dimensional Z (Figure 2A), CIRCE and HSCIC have a similar trade-off profile; however, it is notable that GCM needs to sacrifice more in-domain performance to achieve the same level of invariance. This may be because the GCM statistic is a maximum over normalized covariances of univariate residuals, which can be less effective in a multivariate setting. With high-dimensional Y (Figure 2B), the regression from Y to ψ(Z) is much harder. We observe that HSCIC becomes less efficient with increasing d until at d = 20 it fails completely, while GCM still sacrifices more in-domain performance than CIRCE.

For all image experiments we use the AdamW optimizer (Loshchilov & Hutter, 2019) and anneal the learning rate with a cosine scheduler (details in Appendix F). We select the hyper-parameters of the optimizer and scheduler via a grid search to minimize the in-domain validation set loss.
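The generative process above can be sketched as follows. This is our own illustrative instantiation, not the paper's exact configuration: we take β(Y) = Y, Z′ = Z, and a simple nonlinear f_x.

```python
import numpy as np

def sample_synthetic(n, beta=lambda y: y, sigma_z=0.1, sigma_y=0.1,
                     correlated=True, rng=None):
    """Draw (X, Y, Z) from the basic setting. With correlated=False,
    Z ignores Y, emulating the OOD regime."""
    rng = rng if rng is not None else np.random.default_rng()
    y = rng.standard_normal(n)                        # Y ~ P_Y
    xi_z = sigma_z * rng.standard_normal(n)
    z = (beta(y) if correlated else 0.0) + xi_z       # Z = beta(Y) + xi_z
    y_noisy = y + sigma_y * rng.standard_normal(n)    # Y' = Y + xi_y
    z_prime = z                                       # simplest case: Z' = Z
    x = np.stack([y_noisy + z_prime**2, z_prime], 1)  # X = f_x(Y', Z')
    return x, y, z

x, y, z = sample_synthetic(10_000, rng=np.random.default_rng(0))
print(np.corrcoef(y, z)[0, 1])  # in-domain: Y and Z strongly correlated
```

An unregularized predictor trained on the correlated samples can read Y off the second coordinate of X, the "shortcut" that disappears when correlated=False.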

4.2.1. DSPRITES

Of the six independent generative factors in dSprites, we choose the y-coordinate of the object as our target Y and the x-coordinate of the object in the image as our distractor variable Z. Our neural network consists of three convolutional layers interleaved with max pooling and leaky ReLU activations, followed by three fully-connected layers with 128, 64 and 1 unit(s) respectively.

Linear dependence. We sample images from the dataset as per the linear relation Z′ = Z = Y + ξ_z. We then translate all sampled images (both in-domain and OOD) vertically by ξ_y, resulting in an observed object coordinate of (Z, Y + ξ_y). In this case, linear residual methods such as GCM are able to handle the dependence adequately, as the residual Z − E[Z | Y] = ξ_z is correlated with Z, which is the observed x-coordinate. As a result, penalizing the cross-covariance between φ(X) − E[φ(X) | Y] and Z − E[Z | Y] will also penalize the network's dependence on the observed x-coordinate to predict Y. In Figure 4 we plot the in-domain and OOD losses over a range of regularization strengths, and demonstrate that GCM is indeed able to perform quite well with a linear function relating Z to Y. CIRCE is comparable to GCM with strong regularization, and outperforms HSCIC. To get the optimal OOD baseline, we train our network on an OOD training set where Y and Z are uncorrelated.

Non-linear dependence. To demonstrate the limitation of GCM, which simply regresses Z on Y instead of ψ(Z, Y) on Y, we next address a more complex nonlinear dependence with β(Y) = 0 and Z′ = Y + αZ². The observed coordinate of the object in the image is (Y + αξ_z², Y + ξ_y). For a small α, the unregularized network will again exploit the shortcut, i.e., the observed x-coordinate, in order to predict Y. The linear residual, if we do not use features of Z, is Z − E[Z | Y] = ξ_z, which is uncorrelated with Y + αξ_z², because E[ξ_z³] = 0 due to the symmetric, zero-mean distribution of ξ_z.
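The uncorrelatedness argument above can be checked numerically. A hypothetical illustration of ours, with standard normal ξ_z and an α value we chose for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
alpha = 0.1

y = rng.standard_normal(n)
xi_z = rng.standard_normal(n)        # symmetric, zero-mean noise
x_coord = y + alpha * xi_z**2        # observed x-coordinate (nonlinear case)

# The linear residual of Z on Y is xi_z; its covariance with the observed
# x-coordinate is alpha * E[xi_z^3] = 0 for symmetric noise.
lin = np.cov(xi_z, x_coord)[0, 1]

# A nonlinear feature of the residual, xi_z^2, does detect the dependence:
# Cov(xi_z^2, x_coord) = alpha * Var(xi_z^2) = 2 * alpha.
nonlin = np.cov(xi_z**2, x_coord)[0, 1]

print(lin, nonlin)
```

The linear residual is blind to the shortcut, while a squared feature of the residual exposes it, which is exactly why a feature map ψ(Z) helps here.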
As a result, penalizing the cross-covariance with the linear residual (as done by GCM) will not penalize solutions that use the observed x-coordinate to predict Y. CIRCE, by contrast, uses a feature map ψ(Z) and can capture higher-order features. Results are shown in Figure 5: we see again that CIRCE performs best, followed by HSCIC, with GCM doing poorly. Curiously, GCM performance does still improve slightly on OOD data as regularization increases; we conjecture that the encoder φ(X) may extract non-linear features of the coordinates. However, GCM is numerically unstable for large regularization weights, which might arise from combining a ratio normalization and a max operation in the statistic.

4.2.2. EXTENDED YALE-B

Finally, we evaluate CIRCE as a regularizer for supervised tasks on the natural image dataset Extended Yale-B Faces. The task here is to estimate the camera pose Y from an image X while being conditionally independent of the illumination Z, which is represented as the azimuth angle of the light source with respect to the subject. Since these are natural images, we use a ResNet-18 (He et al., 2016) model pre-trained on ImageNet (Deng et al., 2009) to extract image features, followed by three fully-connected layers containing 128, 64 and 1 unit(s) respectively. Here we sample the training data according to the non-linear relation Z′ = Z = 0.5(Y + εY²), where ε is either +1 or −1 with equal probability. In this case, E[Z | Y] = 0.5Y + 0.5Y² E[ε | Y] = 0.5Y. (We avoid it here for simplicity.) Note that GCM can in principle find the correct solution using a linear decoder. Results are shown in Figure 6. CIRCE shows a small advantage over HSCIC in OOD performance for the best regularizer choice. GCM suffers from numerical instability in this example, which leads to poor performance.

5. DISCUSSION

We have introduced CIRCE: a kernel-based measure of conditional independence, which can be used as a regularizer to enforce conditional independence between a network's predictions and a pre-specified variable with respect to which invariance is desired. The technique can be used in many applications, including fairness, domain-invariant learning, and causal representation learning. Following an initial regression step (which can be done offline), CIRCE enforces conditional independence via a marginal independence requirement during representation learning, which makes it well suited to minibatch training. By contrast, alternative conditional independence regularizers require an additional regression step on each minibatch, resulting in a higher-variance criterion which can be less effective in complex learning tasks. As future work, it will be of interest to determine when an empirical CIRCE value is statistically significant on a given dataset, so as to employ it as a statistic for a test of conditional dependence.

APPENDICES A CONDITIONAL INDEPENDENCE DEFINITIONS

We first repeat the proof of the main theorem in Daudin (1980), as the missing proofs we need for the alternative definitions of independence rely on the main one.

Theorem A.1 (Theorem 1 of Daudin 1980). Define E₁ = {g : g ∈ L²_{XY}, E[g | Y] = 0} and E₂ = {h : h ∈ L²_{YZ}, E[h | Y] = 0}. Then the following two conditions are equivalent:

(i) E[g₁ h₁] = 0 for all g₁ ∈ E₁, h₁ ∈ E₂;
(ii) E[gh | Y] = E[g | Y] E[h | Y] for all g ∈ L²_{XY}, h ∈ L²_{YZ}.

Proof. Necessary condition: (ii) ⟹ (i). Because E₁ ⊆ L²_{XY} and E₂ ⊆ L²_{YZ}, for g₁ ∈ E₁ and h₁ ∈ E₂ we have

E[g₁ h₁ | Y] = E[g₁ | Y] E[h₁ | Y] = 0  ⟹  E[g₁ h₁] = E_Y[E[g₁ h₁ | Y]] = 0.

Sufficient condition: (i) ⟹ (ii). Let g′ = g − E[g | Y] where g ∈ L²_{XY}, and h′ = h − E[h | Y] where h ∈ L²_{YZ}. Then g′ ∈ E₁ and h′ ∈ E₂, and

E[g′ h′] = E[(g − E[g | Y])(h − E[h | Y])]
         = E[gh − h E[g | Y] − g E[h | Y] + E[g | Y] E[h | Y]]
         = E_Y[E[(gh − h E[g | Y] − g E[h | Y] + E[g | Y] E[h | Y]) | Y]]
         = E_Y[E[gh | Y] − E[g | Y] E[h | Y]] = 0.   (16)

Let B be a Borel set of the image space of Y, and let g* = g I_B, where I_B is the indicator function of B. We have ∫ (g*)² dP = ∫ g² I_B dP = ∫_B g² dP ≤ ∫ g² dP < ∞, therefore g* ∈ L²_{XY}. Using Equation (16),

E_Y[E[g* h | Y] − E[g* | Y] E[h | Y]] = E_Y[E[gh I_B | Y] − E[g I_B | Y] E[h | Y]]
  = ∫_B E[gh | Y] dP − ∫_B E[g | Y] E[h | Y] dP = 0.

So E[gh | Y] = E[g | Y] E[h | Y] almost surely. ∎

Corollary A.2 (Equation 3.8 of Daudin 1980). The following two conditions are equivalent:

(i) E[g h₁] = 0 for all g ∈ L²_{XY}, h₁ ∈ E₂;
(ii) E[gh | Y] = E[g | Y] E[h | Y] for all g ∈ L²_{XY}, h ∈ L²_{YZ}.

Proof. The necessary condition is identical to the previous proof.

Sufficient condition: E[g h₁] = 0 ⟹ E[gh | Y] = E[g | Y] E[h | Y]. Let h′ = h − E[h | Y] where h ∈ L²_{YZ}; then h′ ∈ E₂, and

E[g h′] = E[g(h − E[h | Y])] = E[gh − g E[h | Y]]
        = E_Y[E[(gh − g E[h | Y]) | Y]]
        = E_Y[E[gh | Y] − E[g E[h | Y] | Y]]
        = E_Y[E[gh | Y] − E[g | Y] E[h | Y]] = 0.

Using the same argument as for Theorem A.1, E[gh | Y] = E[g | Y] E[h | Y] almost surely. ∎

Corollary A.3 (Equation 3.9 of Daudin 1980). The following two conditions are equivalent:

(i) E[g′ h₁] = 0 for all g′ ∈ L²_X, h₁ ∈ E₂;
(ii) E[gh | Y] = E[g | Y] E[h | Y] for all g ∈ L²_{XY}, h ∈ L²_{YZ}.

Proof. Necessary condition: as E₂ ⊆ L²_{YZ} and L²_X ⊆ L²_{XY},

E[g′ h₁ | Y] = E[g′ | Y] E[h₁ | Y] = 0.

Sufficient condition: E[g′ h₁] = 0 ⟹ E[gh | Y] = E[g | Y] E[h | Y]. Take a simple function g_a = Σ_{i=1}^n a_i I_{A_i} for integrable Borel sets A_i in XY. As integrable simple functions are dense in L²_{XY}, we only need to prove the condition for all g_a. In our case, the indicator function decomposes as I_{A_i} = I_{A_i^X} I_{A_i^Y}, and therefore, with g_i = a_i I_{A_i^X},

g_a = Σ_i g_i I_{A_i^Y}.

Therefore,

E[g_a h₁] = E[Σ_{i=1}^n I_{A_i^Y} E[g_i h₁ | Y]] = E[Σ_{i=1}^n I_{A_i^Y} · 0] = 0.

As simple functions are dense in L²_{XY}, we immediately have E[g h₁] = 0 for all g ∈ L²_{XY}, h₁ ∈ E₂. Applying Corollary A.2 completes the proof. ∎

Lemma B.1. E₂ = E′₂, where

E′₂ = {h′ = h − E[h | Y] : h ∈ L²_{ZY}}.

Proof. E₂ ⊆ E′₂: any h ∈ E₂ is in L²_{ZY} and has the form h = h − E[h | Y] by construction, because the last term is zero. E′₂ ⊆ E₂: first, any h′ ∈ E′₂ satisfies E[h′ | Y] = 0 by construction. Second,

∫ (h′)² dμ(Z, Y) = ∫ (h − E[h | Y])² dμ(Z, Y)   (17)
  = ∫ (h² − 2 h E[h | Y] + (E[h | Y])²) dμ(Z, Y)   (18)
  = ∫ (h² − (E[h | Y])²) dμ(Z, Y) < +∞,

as h ∈ L²_{ZY} and the second term is non-positive. ∎

Proof of Theorem 2.5.
For the "if" direction, we simply "pull out" the Y expectation in the definition of the CIRCE operator and apply conditional independence:

C^c_{XZ|Y} = E_Y[ E_X[ϕ(X) | Y] ⊗ (E_Z[ψ(Z, Y) | Y] − E_{Z′}[ψ(Z′, Y) | Y]) ] = 0.

For the other direction, first, ∥C^c_{XQ}∥_{HS} = 0 implies that for any g ∈ G and h ∈ F, E[g (h − E[h | Y])] = 0 by Cauchy-Schwarz. Now, we use that an L₂-universal kernel is dense in L² by definition (see Sriperumbudur et al. (2011)). Therefore, for any g ∈ L²_X and h ∈ L²_{ZY}, and any ϵ > 0, we can find g_ϵ ∈ G and h_ϵ ∈ F such that

∥g − g_ϵ∥₂ ≤ ϵ,  ∥h − h_ϵ∥₂ ≤ ϵ.   (21)

For the L² functions, we can now write the conditional independence condition as

E[g (h − E[h | Y])] = E[(g ± g_ϵ)(h ± h_ϵ − E[h ± h_ϵ | Y])]   (22)
  = 0 + E[(g − g_ϵ)(h − h_ϵ − E[h − h_ϵ | Y])]   (23)
  + E[g_ϵ (h − h_ϵ − E[h − h_ϵ | Y])] + E[(g − g_ϵ)(h_ϵ − E[h_ϵ | Y])].   (24)

The first term is zero because ∥C^c_{XQ}∥_{HS} = 0. For the rest, we apply Cauchy-Schwarz:

|E[(g − g_ϵ)(h − h_ϵ)]| ≤ ∥g − g_ϵ∥₂ ∥h − h_ϵ∥₂ ≤ ϵ²,   (25)
|E[(g − g_ϵ) E[h − h_ϵ | Y]]| ≤ ∥g − g_ϵ∥₂ ∥h − h_ϵ∥₂ ≤ ϵ²,   (26)

where in the last inequality we used that E[(E[X | H])²] ≤ E[X²] for conditional expectations. Similarly, also using the reverse triangle inequality,

|E[g_ϵ (h − h_ϵ)]| ≤ ϵ ∥g_ϵ∥₂ ≤ ϵ (∥g∥₂ + ϵ).   (27)

Repeating this calculation for the rest of the terms, we can finally apply the triangle inequality to show that

|E[g (h − E[h | Y])]| ≤ 2ϵ² + 2ϵ(∥g∥₂ + ϵ) + 2ϵ(∥h∥₂ + ϵ)   (28)
  = 2ϵ(3ϵ + ∥g∥₂ + ∥h∥₂).   (29)

As ∥g∥₂ and ∥h∥₂ are fixed and finite, we can make the bound arbitrarily small, and hence E[g (h − E[h | Y])] = 0. ∎

C PROOFS FOR ESTIMATORS

C.1 ESTIMATING THE CONDITIONAL MEAN EMBEDDING

We will construct an estimate of the term E_Z[ψ(Z, Y) | Y], which appears inside CIRCE, as a function of Y. We summarize the established results on conditional feature mean estimation; see Grunewalder et al. (2012); Park & Muandet (2020); Mollenhauer & Koltai (2020); Klebanov et al. (2020); Li et al. (2022) for further details. To learn E[ψ(Q) | Y] for some feature map ψ(q) ∈ H_Q and random variable Q (both to be specified shortly), we can minimize the following loss:

μ̂_Q|Y,λ = argmin_{F ∈ G_QY} Σ_{i=1}^N ∥ψ(q_i) − F(y_i)∥²_{H_Q} + λ∥F∥²_{G_QY},    (30)

where G_QY is the space of functions from Y to H_Q. The above solution is said to be well-specified when there exists a Hilbert-Schmidt operator A* ∈ HS(H_Y, H_Q) such that F*(y) = A* ψ(y) for all y ∈ Y, where H_Y is the RKHS on Y with feature map ψ(y) (Li et al., 2022). We now consider the case relevant to our setting, where Q := (Z, Y). We define ψ(Z, Y) = ψ(Z) ⊗ ψ(Y), which for radial basis kernels (e.g., Gaussian, Laplace) is L²-universal for (Z, Y). We then write E_Z[ψ(Z, y) | Y = y] = E_Z[ψ(Z) | Y = y] ⊗ ψ(y). The conditional feature mean E[ψ(Z) | Y] can be found with kernel ridge regression (Grunewalder et al., 2012; Li et al., 2022):

μ_Z|Y(y) ≡ E[ψ(Z) | Y](y) ≈ K_yY (K_YY + λI)⁻¹ K_Z•,

where K_Z• indicates a "matrix" with rows ψ(z_i), (K_YY)_{i,j} = k(y_i, y_j), and (K_yY)_i = k(y, y_i). Note that we have used the argument of k to identify which feature space it pertains to, i.e., the kernel on Z need not be the same as that on Y. We can find good choices for the Y kernel and the ridge parameter λ by minimizing the leave-one-out cross-validation error. In kernel ridge regression, this is almost computationally free, based on the following version of a classic result for scalar-valued ridge regression. The proof generalizes the proof of Theorem 3.2 of Bachmann et al. (2022) to RKHS-valued outputs.

Theorem C.1 (Leave-one-out for kernel mean embeddings).
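As a concrete illustration, the ridge-regression estimate of the conditional feature mean takes only a few lines. The following is a minimal numpy sketch; the function names, bandwidths, and the use of an explicit finite-dimensional stand-in for ψ(Z) are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def gaussian_kernel(A, B, sigma2):
    """Pairwise Gaussian kernel: k(a, b) = exp(-||a - b||^2 / (2 * sigma2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def fit_conditional_mean(Y_train, lam, sigma2_y):
    """Precompute the ridge inverse (K_YY + lam * I)^{-1} on holdout data."""
    N = len(Y_train)
    K_yy = gaussian_kernel(Y_train, Y_train, sigma2_y)
    return np.linalg.inv(K_yy + lam * np.eye(N))

def conditional_mean(y, Y_train, Psi_Z, W_inv, sigma2_y):
    """mu_{Z|Y}(y) ~= K_yY (K_YY + lam * I)^{-1} Psi_Z; rows of Psi_Z play psi(z_i)."""
    return gaussian_kernel(y, Y_train, sigma2_y) @ W_inv @ Psi_Z
```

With a small ridge parameter and well-separated inputs, the estimate approximately interpolates the training features at the training points, as expected from the closed-form solution.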
Denote the predictor trained on the full dataset as F_S, and the one trained without the i-th point as F_{−i}. For λ > 0 and A ≡ K_YY (K_YY + λI)⁻¹, the leave-one-out (LOO) error for Equation (30) is

(1/N) Σ_{i=1}^N ∥ψ(z_i) − F_{−i}(y_i)∥²_{H_Z} = (1/N) Σ_{i=1}^N ∥ψ(z_i) − F_S(y_i)∥²_{H_Z} / (1 − A_ii)².    (31)

Proof. Denote the full dataset S = {(y_i, z_i)}_{i=1}^N; the dataset missing the i-th point is denoted S_{−i}. Prediction on the full dataset takes the form F_S(Y) = A K_Z•. Consider the prediction obtained without the N-th point (w.l.o.g.), evaluated at y_N: F_{−N}(y_N). Define a new dataset S̃ = S_{−N} ∪ {(y_N, F_{−N}(y_N))} and compute the loss for it:

L_S̃(F_{−N}) = Σ_{i=1}^{N−1} ∥ψ(z_i) − F_{−N}(y_i)∥²_{H_Z} + ∥F_{−N}(y_N) − F_{−N}(y_N)∥²_{H_Z} + λ∥F_{−N}∥²_{G_ZY}    (33)
= L_{S_{−N}}(F_{−N}) ≤ L_{S_{−N}}(F) ≤ L_{S_{−N}}(F) + ∥F_{−N}(y_N) − F(y_N)∥²_{H_Z} = L_S̃(F),

where the first inequality is due to F_{−N} minimizing L_{S_{−N}}, and F is an arbitrary element of G_ZY. Therefore, F_{−N} also minimizes L_S̃. As A in the prediction expression F_S(Y) = A K_Z• depends only on Y, and not on Z, F_{−N} must take the same form as the full prediction:

F_{−N}(Y) = A K̃_Z•,  where K̃_{z_i,•} = ψ(z_i) for i < N, and K̃_{z_N,•} = F_{−N}(y_N).    (35)

This allows us to solve for F_{−N}(y_N):

F_{−N}(y_N) = K_{y_N Y} (K_YY + λI)⁻¹ K̃_Z• = Σ_{i=1}^{N−1} A_{Ni} ψ(z_i) + A_{NN} F_{−N}(y_N)    (36)
= Σ_{i=1}^N A_{Ni} ψ(z_i) − A_{NN} ψ(z_N) + A_{NN} F_{−N}(y_N)    (37)
= F_S(y_N) − A_{NN} ψ(z_N) + A_{NN} F_{−N}(y_N).    (39)

As A_{NN} is a scalar, we can solve for F_{−N}(y_N):

F_{−N}(y_N) = (F_S(y_N) − A_{NN} ψ(z_N)) / (1 − A_{NN}).

Therefore,

ψ(z_N) − F_{−N}(y_N) = ((1 − A_{NN}) ψ(z_N) − F_S(y_N) + A_{NN} ψ(z_N)) / (1 − A_{NN})    (42)
= (ψ(z_N) − F_S(y_N)) / (1 − A_{NN}).

Taking the norm and summing this result over all points (not just the N-th) gives the LOO error.

Proof of Lemma C.2.
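This identity is easy to verify numerically: the shortcut using the full-data predictor and the diagonal of A agrees exactly with retraining without each point. The sketch below is our own toy setup, using an explicit finite-dimensional stand-in for ψ(z):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30
Y = rng.normal(size=(N, 1))
Z = rng.normal(size=(N, 2))   # stand-in for explicit features psi(z_i)
lam = 0.1

def gram(A, B, s2=1.0):
    """Pairwise Gaussian kernel matrix."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s2))

K = gram(Y, Y)
A = K @ np.linalg.inv(K + lam * np.eye(N))   # A = K_YY (K_YY + lam I)^{-1}
F_full = A @ Z                               # F_S(y_i) for every i

# Shortcut from Theorem C.1: ||z_i - F_S(y_i)||^2 / (1 - A_ii)^2
loo_shortcut = ((Z - F_full) ** 2).sum(1) / (1 - np.diag(A)) ** 2

# Brute force: retrain without point i, then predict at y_i
loo_brute = np.empty(N)
for i in range(N):
    keep = np.arange(N) != i
    Ki = gram(Y[keep], Y[keep])
    pred_i = gram(Y[i:i+1], Y[keep]) @ np.linalg.solve(Ki + lam * np.eye(N - 1), Z[keep])
    loo_brute[i] = ((Z[i] - pred_i[0]) ** 2).sum()
```

The two arrays coincide up to floating-point error, which is why LOO model selection is "almost computationally free": one fit of the full model replaces N refits.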
The bias is straightforward:

(1/(B(B−1))) E[Tr(K_XX (K_YY ⊙ K^c_ZZ))]
= (1/(B(B−1))) E[Σ_{i, j≠i} K_{x_i x_j} K_{y_i y_j} K^c_{z_i z_j}] + (1/(B(B−1))) E[Σ_i K_{x_i x_i} K_{y_i y_i} K^c_{z_i z_i}]
= (1/(B(B−1))) Σ_{i, j≠i} E_{xx′yy′zz′}[K_{xx′} K_{yy′} K^c_{zz′}] + O(1/B)
= ∥C^c_XQ∥²_HS + O(1/B).

For the variance, first note that our estimator has bounded differences. Denote K_TT = K_YY ⊙ K^c_ZZ and t = (y, z). If we switch one datapoint (x_i, t_i) to (x′_i, t′_i), and denote the matrices with the switched coordinate by X^i, T^i, then

|Tr(K_XX K_TT) − Tr(K_{X^i X^i} K_{T^i T^i})|
= |K_{x_i x_i} K_{t_i t_i} − K_{x′_i x′_i} K_{t′_i t′_i} + 2 Σ_{j≠i} (K_{x_j x_i} K_{t_j t_i} − K_{x_j x′_i} K_{t_j t′_i})|
≤ (2 + 4(B − 1)) K_{x max} K_{t max}
≤ (4B − 2) K_{x max} K_{y max} K^c_{z max}.

Therefore, for any index i,

(1/(B(B−1))) |Tr(K_XX (K_YY ⊙ K^c_ZZ)) − Tr(K_{X^i X^i} (K_{Y^i Y^i} ⊙ K^c_{Z^i Z^i}))| ≤ ((4B − 2)/(B(B−1))) K_{x max} K_{y max} K^c_{z max}.

We can now use McDiarmid's inequality (McDiarmid, 1989) with

c = c_i = ((4B − 2)/(B(B−1))) K_{x max} K_{y max} K^c_{z max},

meaning that for any ε > 0,

P( |Tr(K_XX K_TT)/(B(B−1)) − E[Tr(K_XX K_TT)/(B(B−1))]| ≥ ε ) ≤ 2 exp(−2ε²/(B c²)) = 2 exp( −2ε² B(B−1)² / ((4B − 2)² K²_{x max} K²_{y max} (K^c_{z max})²) ).
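In practice the O(1/B) bias can be removed entirely by dropping the diagonal terms from the trace. The following small numerical check (with arbitrary placeholder PSD matrices, not real data) confirms that the trace form with the diagonal subtracted equals the explicit double sum over i ≠ j:

```python
import numpy as np

rng = np.random.default_rng(1)
B = 16
Kx = rng.normal(size=(B, B)); Kx = Kx @ Kx.T   # placeholder PSD kernel matrix on X
Ky = rng.normal(size=(B, B)); Ky = Ky @ Ky.T   # placeholder kernel matrix on Y
Kc = rng.normal(size=(B, B)); Kc = Kc @ Kc.T   # placeholder conditionally centered Z kernel

Kt = Ky * Kc                                   # K_YY ⊙ K^c_ZZ (elementwise product)

# Trace form with the diagonal removed (unbiased, U-statistic style):
est_trace = (np.trace(Kx @ Kt) - (np.diag(Kx) * np.diag(Kt)).sum()) / (B * (B - 1))

# Explicit double sum over i != j, for comparison:
est_loop = sum(Kx[i, j] * Kt[j, i]
               for i in range(B) for j in range(B) if j != i) / (B * (B - 1))
```

Both forms compute the same quantity; the trace form is O(B²) in memory and vectorizes well, which matters when the estimator is evaluated on every minibatch.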



If Y is categorical, naively we would measure a marginal independence for each level of Y.
We abuse notation in using ψ to denote feature maps of (Y, Z), Y, and Z; in other words, we use the argument of the feature map to specify the feature space, to simplify notation.
Fukumizu et al. (2008, Section 2.2) show this kernel is characteristic, and Sriperumbudur et al. (2011, Figure 1(3)) show that being characteristic implies L²-universality in this case.
The conditional-independence test statistic used by KCI is (1/B) Tr(K̄_Ẍ|Y K̄_Z|Y), where Ẍ = (X, Y) and K̄ is a centered kernel matrix. Unlike CIRCE, K̄_Ẍ|Y requires regressing Ẍ on Y using kernel ridge regression.
Google and DeepMind did not access or handle the Yale-B Face dataset.



The construction is straightforward: given a fixed feature map ψ(Y, Z) on Y × Z (which may be a kernel feature map or a random Fourier feature map), we define ζ(Y, Z) as the conditionally centered features, ζ(Y, Z) = ψ(Y, Z) − E[ψ(Y, Z) | Y]. We obtain a measure of conditional independence, the Conditional Independence Regression CovariancE (CIRCE), as the Hilbert-Schmidt norm of the kernel covariance between φ(X) and ζ(Y, Z). A key point is that the conditional feature mean E[ψ(Y, Z) | Y] can be estimated offline, in advance of any neural network training, using standard methods.
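To make the two-stage structure concrete, here is a minimal numpy sketch of a plug-in CIRCE estimate with Gaussian kernels: stage one fits E[ψ(Z) | Y] by kernel ridge regression on a holdout set (done once, offline); stage two evaluates an HSIC-style statistic between φ(X) and the conditionally centered features on a batch. Function names, bandwidths, and the ridge value are our own illustrative assumptions, not the authors' code:

```python
import numpy as np

def gauss(A, B, s2=1.0):
    """Pairwise Gaussian kernel matrix."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s2))

def circe_estimate(phi_x, Y, Z, Y_hold, Z_hold, lam=1e-6, s2=0.5):
    """Plug-in CIRCE: Tr(K_XX (K_YY * K^c_ZZ)) / (B(B-1)), diagonal removed.

    K^c uses residuals psi(z) - mu_{Z|Y}(y), with mu fit by kernel ridge
    regression on the holdout set (Y_hold, Z_hold)."""
    B = len(Y)
    # Stage 1 (offline): KRR dual weights mapping holdout Z features to the batch.
    W = np.linalg.solve(gauss(Y_hold, Y_hold, s2) + lam * np.eye(len(Y_hold)),
                        gauss(Y_hold, Y, s2))          # shape (M, B)
    # Stage 2: kernel between conditionally centered Z features.
    Kzh = gauss(Z, Z_hold, s2)                         # shape (B, M)
    Kc = (gauss(Z, Z, s2) - Kzh @ W - W.T @ Kzh.T
          + W.T @ gauss(Z_hold, Z_hold, s2) @ W)
    Kx = phi_x @ phi_x.T                               # linear kernel on phi(X)
    Kt = gauss(Y, Y, s2) * Kc                          # K_YY ⊙ K^c_ZZ
    return (np.trace(Kx @ Kt) - (np.diag(Kx) * np.diag(Kt)).sum()) / (B * (B - 1))
```

When Z is noiselessly predictable from Y (so the residuals vanish), the statistic is near zero; used as a training penalty, it pushes φ(X) toward conditional independence from Z given Y.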

Figure 1: Causal structure for synthetic datasets.

We first evaluate performance on the synthetic datasets proposed by Quinzan et al. (2022): these use the structural causal model (SCM) shown in Figure 1, and comprise 2 univariate and 2 multivariate cases (see Appendix E for details). Given samples of A, Y and Z, the goal is to learn a predictor B = φ(A, Y, Z) that is counterfactually invariant to Z. Achieving this requires enforcing the conditional independence φ(A, Y, Z) ⊥⊥ Z | Y. For all experiments on synthetic data, we used a fully connected network with 9 hidden layers. The inputs of the network were A, Y and Z; the task is to predict B, and the network is trained with the MSE loss. For each test case, we generated 10k examples, of which 8k were used for training and 2k for evaluation. Data were normalized to zero mean and unit standard deviation. The remaining experimental details are provided in Appendix E.

Figure 2: Pareto front of MSE and VCF for multivariate synthetic dataset. A: case 1; B: case 2.

4.2 IMAGE DATA

Figure 4: dSprites (linear). Blue: in-domain test loss; orange: out-of-domain loss (OOD); red: loss for OOD-trained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values.

Figure 5: dSprites (non-linear). Blue: in-domain test loss; orange: out-of-domain loss (OOD); red: loss for OOD-trained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values.

and thus the linear residuals depend on Y. (In experiments, Y and ε are re-scaled to be in the same range.)

Figure 6: Yale-B. Blue: in-domain test loss; orange: out-of-domain loss (OOD); red: loss for OODtrained encoder. Solid lines: median over 10 seeds; shaded areas: min/max values.

B CIRCE DEFINITION

First, we need a more convenient function class:

Lemma B.1. The function class E₂ = {h ∈ L²_ZY : E[h | Y] = 0} coincides with the function class E′₂ = {h − E[h | Y] : h ∈ L²_ZY}.

C.2 CIRCE ESTIMATORS

Lemma C.2. For B points and K^c_{zz′} = ⟨ψ(z) − E[ψ(Z) | Y](y), ψ(z′) − E[ψ(Z) | Y](y′)⟩, the estimator Tr(K_XX (K_YY ⊙ K^c_ZZ)) / (B(B−1)) of ∥C^c_XQ∥²_HS has O(1/B) bias and O_p(1/√B) deviation from the mean for any fixed probability of the deviation.

That is, for any fixed probability, the deviation ε from the mean decays as O(1/√B).

Definition C.3. A (β, p)-kernel for a given data distribution satisfies the following conditions (see Fischer & Steinwart (2020); Li et al. (2022) for precise definitions using interpolation spaces):
(EVD) The eigenvalues μ_i of the covariance operator C_YY decay as μ_i ≤ c · i^{−1/p}.
(EMB) For α ∈ (p, 1], the inclusion map H^α_Y → L^∞(π) is continuous, with norm bounded by A.

Table 1 summarizes the in-domain MSE loss and VCF, comparing CIRCE to the baselines. Without regularization, the MSE loss is low in-domain, but the representation is not invariant to changes of Z. With regularization, all three methods successfully achieve counterfactual invariance in these simple settings, and exhibit similar in-domain performance.

Table 1: MSE loss and VCF for univariate synthetic datasets. Comparison of a representation learned without conditional independence regularization against regularization with GCM, HSCIC and CIRCE.

ACKNOWLEDGMENTS

This work was supported by DeepMind, the Gatsby Charitable Foundation, the Wellcome Trust, the Canada CIFAR AI Chairs program, the Natural Sciences and Engineering Research Council of Canada, SHARCNET, Calcul Québec, the Digital Research Alliance of Canada, and Open Philanthropy. Finally, we thank Alexandre Drouin and Denis Therien for the Bellairs Causality workshop which sparked the project.


(SRC) F* ∈ [G]^β for β ∈ [1, 2] (note that β < 1 would correspond to the misspecified setting).

Lemma C.4. Consider the well-specified case of conditional expectation estimation (see Li et al., 2022). Assume bounded kernels over X, Z, Y and a (β, p)-kernel over Y; let F(y) = E[ψ(Z) | Y](y) with bounded norm ∥F∥ ≤ C_F, and let M points be used to estimate F. Define the conditional expectation estimate as the kernel ridge regression solution of Appendix C.1, with ridge parameter λ_M = Θ(M^{−1/(β+p)}). Then the estimator Tr(K_XX K̂^c_ZZ) / (B(B−1)) deviates from the "true" CIRCE estimator (i.e., the one using the actual conditional expectation) as O_p(1/M^{(β−1)/(2(β+p))}).

Proof. First, decompose the difference:

Tr(K_XX K̂^c_ZZ) − Tr(K_XX K^c_ZZ) = Tr(K_XX (K̂^c_ZZ − K^c_ZZ)),

where in the last line we used that all matrices are symmetric. Let us concentrate on the difference K̂^c_ZZ − K^c_ZZ. As we are working in the well-specified case, by definition the operator F ∈ G, where G is a vector-valued RKHS (Li et al., 2022, Definition 1). This allows us to re-write each entry of the difference in terms of F̂ − F. Using the triangle inequality and then Cauchy-Schwarz, we bound the entries in terms of ∥F̂ − F∥_G with some positive constants C₁, C₂, C₃ (since the kernels over both z and y are bounded, F is bounded too, and hence so is F̂). As all kernels are bounded, the same holds for the difference of the traces, with positive constants C₁ to C₄. Now we can use Theorem 2 of Li et al. (2022) with γ = 1 and λ = Θ(M^{−1/(β+p)}), which shows that ∥F̂ − F∥_G ≤ K M^{−(β−1)/(2(β+p))} with high probability, for some positive constant K; this gives us the claimed rate.

Now we can combine the two lemmas to prove Theorem 2.7.

Proof of Theorem 2.7. Combining Lemma C.2 and Lemma C.4 and using a union bound, we obtain the O_p(1/√B + 1/M^{(β−1)/(2(β+p))}) rate.

Corollary C.5. For B points and M holdout points, the CIRCE estimator Tr(K_XX (K_YY ⊙ K̂^c_ZZ)) / (B(B−1)) satisfies the same guarantee.

Proof. This follows from the previous two proofs.

Corollary C.6. For B points and M holdout points, the centered CIRCE estimator Tr(K_XX (K_YY ⊙ H K̂^c_ZZ H)) / (B(B−1)) satisfies the same guarantee.

Proof. This follows from the previous two proofs and the fact that K^c is a centered matrix, meaning that in expectation H K^c H = K^c. This estimator can be less biased in practice, as K̂^c_ZZ is typically biased due to conditional expectation estimation, and H K̂^c H re-centers it.

D RANDOM FOURIER FEATURES

Random Fourier features (RFF; Rahimi & Recht, 2007) approximate a kernel with an explicit finite-dimensional feature matrix R, so that K ≈ RR⊤. The algorithm to estimate CIRCE with RFF is provided in Algorithm 2. We sample D₀ points every L iterations, but in every batch only use D of them to reduce computational cost.
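As a sanity check on the approximation K ≈ RR⊤, the following sketch draws standard random Fourier features for a Gaussian kernel and compares against the exact kernel matrix; the dimensions, bandwidth and seed are our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 50, 3, 5000
X = rng.normal(size=(n, d))
sigma2 = 1.0

# Exact Gaussian kernel matrix: k(x, x') = exp(-||x - x'||^2 / (2 * sigma2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-d2 / (2 * sigma2))

# RFF: omega ~ N(0, I / sigma2), b ~ Uniform[0, 2*pi),
# feature map R = sqrt(2 / D) * cos(X @ omega + b)
omega = rng.normal(scale=1.0 / np.sqrt(sigma2), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
R = np.sqrt(2.0 / D) * np.cos(X @ omega + b)

K_rff = R @ R.T   # approximates K_exact entrywise, error O(1/sqrt(D))
```

The entrywise error shrinks as O(1/√D), so the number of features trades accuracy against the per-batch cost of the CIRCE estimate.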

E SYNTHETIC DATA AND ADDITIONAL RESULTS

We used Adam (Kingma & Ba, 2015) for optimization with batch size 256, and trained the network for 100 epochs. For the univariate datasets, the learning rate was 1e-4 and the weight decay 0.3; for the multivariate datasets, the learning rate was 3e-4 and the weight decay 0.1. We implemented CIRCE with random Fourier features (Rahimi & Recht, 2007) (see Appendix D) of dimension 512 for Gaussian kernels. We swept over the hyperparameters, including the RBF scale, the regularization weight for ridge regression, and the weight of the conditional independence regularizer. All synthetic datasets use the same causal structure, shown in Figure 1. The hyperparameter sweep is listed in Table 2 and is the same for all test cases.

Algorithm 2 (estimation of CIRCE with random Fourier features) selects λ (the ridge parameter) and σ_y (the parameters of the Y kernel) by leave-one-out validation (Theorem C.1) on the holdout data.

Structural causal model for multivariate case 1:
Structural causal model for multivariate case 2:

For both dSprites and Yale-B, we chose the following training hyperparameters on the validation set, without regularization: weight decay (1e-4, 1e-2), learning rate (1e-4, 1e-3, 1e-2) and length of training (200 or 500 epochs). These parameters are used for all runs (including the regularized ones). For dSprites the batch size was 1024; for Yale-B it was 256. The results for the standard (Corollary C.5) and centered (Corollary C.6) CIRCE estimators were similar for dSprites (we report the standard one), but the centered version was more stable for Yale-B (we report the centered one). This is likely due to the bias arising from conditional expectation estimation. For dSprites, the training set contained 589,824 points, and the holdout set 5,898 points.
For Yale-B, the training set contained 11,405 points, and the holdout set 1,267 points. All kernels were Gaussian: k(x, x′) = exp(−∥x − x′∥²/(2σ²)). For Y, σ² was selected from [1.0, 0.1, 0.01, 0.001] and the ridge regression parameter λ from [0.01, 0.1, 1.0, 10.0, 100.0]. The other two kernels had σ² = 0.01 for the linear and y-cone dependencies; for the non-linear case, the kernel over Z had σ² = 1 due to a different scaling of the distractor in that case. We additionally tested a setting in which the M holdout points used for conditional expectation estimation are not removed from the training data for CIRCE. As shown in Figure 7 for dSprites with non-linear dependence, this has little effect on the performance.

