SORTED EIGENVALUE COMPARISON d_Eig: A SIMPLE ALTERNATIVE TO d_FID

Abstract

For i = 1, 2, let S_i be the sample covariance of Z_i with n_i p-dimensional vectors. First, we theoretically justify an improved Fréchet Inception Distance (d_FID) algorithm that replaces np.trace(sqrtm(S1 @ S2)) with np.sqrt(eigvals(S1 @ S2)).sum(). With the appearance of unsorted eigenvalues in the improved d_FID, we are then motivated to propose sorted eigenvalue comparison (d_Eig) as a simple alternative:
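The replacement described above rests on the identity tr(sqrtm(S1 @ S2)) = Σ_j √λ_j(S1 @ S2), which holds because the product of two PSD covariances is diagonalizable with non-negative real eigenvalues. A minimal sketch of both routes (the function name `frechet_distance_sq` and the `improved` flag are illustrative, not the authors' Figure 1 code):

```python
import numpy as np
from scipy import linalg

def frechet_distance_sq(mu1, S1, mu2, S2, improved=True):
    """Squared Frechet distance between N(mu1, S1) and N(mu2, S2).

    improved=True takes the eigenvalue route: the trace of the matrix
    square root of S1 @ S2 equals the sum of the square roots of its
    eigenvalues, so the expensive sqrtm call can be avoided.
    """
    diff = mu1 - mu2
    if improved:
        # Eigenvalues of S1 @ S2 are real and non-negative in exact
        # arithmetic; drop round-off imaginary parts and clip tiny
        # negative values before taking square roots.
        eigvals = np.linalg.eigvals(S1 @ S2).real
        covmean_trace = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    else:
        # Baseline FID implementation via the matrix square root.
        covmean_trace = np.trace(linalg.sqrtm(S1 @ S2)).real
    return diff @ diff + np.trace(S1) + np.trace(S2) - 2.0 * covmean_trace
```

On well-conditioned sample covariances the two branches agree to floating-point accuracy, while the eigenvalue branch skips the iterative sqrtm computation.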



d_Eig(S_1, S_2)^2 = Σ_{j=1}^p (√λ_{1j} − √λ_{2j})^2, where λ_{ij} is the j-th largest eigenvalue of S_i. Second, we present two main takeaways for the improved d_FID and the proposed d_Eig. (i) d_FID: the error bound for computing the non-negative eigenvalues of the diagonalizable product S_1 S_2 is reduced to O(ε)∥S_1∥∥S_1 S_2∥, along with a ∼25% reduction in run time. (ii) d_Eig: the error bound for computing the non-negative eigenvalues of the sample covariance S_i is further tightened to O(ε)∥S_i∥, with a ∼90% reduction in run time. Taking a statistical viewpoint (random matrix theory) on S_i, we illustrate the asymptotic stability of its largest eigenvalues, i.e., rigidity estimates of O(n^{-2/3}). Last, we discuss limitations and future work for d_Eig.

Figure 1: Python code for the square of the improved d_FID and the proposed d_Eig.

Introduction

In the image domain, it is of great interest to analyze the distribution shift between two collections of data entries (Wiles et al., 2021; Borji, 2019). On the one hand, this is driven by the increasing awareness of the violation of the 'identical distribution' assumption between training and (real-world) test datasets (Wu et al., 2022b). As illustrated, for instance, in the leaderboard of WILDS (Koh et al., 2021; Sagawa et al., 2021), many algorithms suffer from performance degradation and fail to generalize to heterogeneous testing settings. On the other hand, the importance of assessing distribution shift has been recognized with the rise of generative adversarial nets (GANs) (Goodfellow et al., 2014; Heusel et al., 2017).

