LATENTAUGMENT: DYNAMICALLY OPTIMIZED LATENT PROBABILITIES OF DATA AUGMENTATION

Abstract

Although data augmentation is a powerful technique for improving the performance of image classification tasks, it is difficult to identify the best augmentation policy. The optimal augmentation policy, which is a latent variable, cannot be directly observed. To address this problem, this study proposes LatentAugment, which estimates the latent probability of the optimal augmentation. The proposed method is appealing in that it can dynamically optimize the augmentation strategies for each input and model parameter across learning iterations. Theoretical analysis shows that LatentAugment is a general model that includes other augmentation methods as special cases, and that it is simple and computationally efficient in comparison with existing augmentation methods. Experimental results show that the proposed LatentAugment achieves higher test accuracy than previous augmentation methods on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets.



Data augmentation is a widely used technique for generating additional data to improve the performance of computer vision tasks (Shorten & Khoshgoftaar, 2019). Although data augmentation performs well in experimental studies, designing data augmentations requires human expertise with prior knowledge of the dataset, and it is often difficult to transfer augmentation strategies across different datasets (Krizhevsky et al., 2012). Recent studies on data augmentation consider an automated process of searching for augmentation strategies from a dataset. For example, AutoAugment, proposed by Cubuk et al. (2018), uses reinforcement learning to automatically explore data augmentation policies using smaller network models and reduced datasets. Although AutoAugment shows great improvement on image classification tasks across different datasets, it requires thousands of GPU hours to search for augmentation strategies. Furthermore, the data augmentation operations optimized for reduced datasets using smaller network models may not be optimal for full datasets using larger network models.

To address this problem, this study proposes LatentAugment, which estimates the latent probability of the optimal augmentation customized to each input image and network model. An optimal augmentation policy exists for each input image under a specific network model. However, the optimal augmentation policy, which is a latent variable, cannot be directly observed. Although a latent variable itself cannot be observed, we can estimate the probability that the latent variable is the optimal augmentation policy. LatentAugment applies Bayes' rule to estimate the conditional probability of the augmentation policy given the input data and network parameters. Figure 1 shows the concept of the proposed method.

Following the Bayesian data augmentation proposed by Tran et al. (2017), LatentAugment uses the expectation-maximization (EM) algorithm to update the model parameters. In the expectation (E)-step, the expectation of the weighted loss function is calculated using the conditional probabilities of the latent augmentation policies. In the maximization (M)-step, the expected loss function is minimized using standard stochastic gradient descent. The conditional probabilities that an augmentation policy has the highest loss are calculated using the loss function with the updated parameters and input data. The unconditional probabilities of the augmentation policies are generated using a moving average of the conditional probabilities. Note that the conditional probabilities of the latent augmentation policies are dynamically optimized for the input and the updated model parameters over the iterations of the EM algorithm.

Figure 1: An overview of the proposed LatentAugment. The loss functions with augmentation policies are calculated using the input data and the unconditional probabilities of the augmentation policies. The model parameters are updated by the EM algorithm. In the E-step, the expectation of the weighted loss function is calculated using the conditional probability of the highest loss. In the M-step, the expected loss function is minimized using standard stochastic gradient descent. The conditional probabilities of the highest loss are calculated using the loss function with the updated parameters and input data. The unconditional probabilities of the augmentation policies are generated by a moving average of the conditional probabilities.

The contributions of this study can be summarized as follows:

• It provides a theoretical model for LatentAugment. This study shows that LatentAugment can dynamically optimize the augmentation methods for each input and model parameter in the learning iterations by calculating the conditional probabilities of the latent augmentation policies. Furthermore, it shows that LatentAugment is a general augmentation model that includes other augmentation methods, such as Adversarial AutoAugment (Zhang et al., 2019) and uncertainty-based sampling (Wu et al., 2020), as special cases.

• LatentAugment is simple and computationally efficient. It does not require the augmentation policies to be searched before training. Adversarial AutoAugment applies a generative adversarial network (GAN) (Goodfellow et al., 2014) to solve the maximization of the minimum loss function, which incurs an additional training cost for the adversarial network. In contrast, the proposed LatentAugment solves this problem using a simple stochastic gradient descent algorithm without an adversarial network.

• Experimental results show that the proposed LatentAugment improves the test accuracy on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. For example, a test accuracy of 98.72% was achieved with PyramidNet+ShakeDrop (Han et al., 2017; Yamada et al., 2018) on CIFAR-10, which is significantly better than previous augmentation methods.

1. RELATED WORKS

Several studies have been conducted on data augmentation methods in the machine learning literature. Shorten & Khoshgoftaar (2019) provided a comprehensive review of image data augmentation. Recent studies have attempted to automatically identify data augmentation methods. Smart Augmentation (Lemley et al., 2017) merges two or more samples from the same class to improve the generalization of a target network. AutoAugment (AA) (Cubuk et al., 2018) applies a recurrent neural network (RNN) as a sample controller to search for the best data augmentation policy using small proxy tasks of randomly drawn images from the training dataset. After identifying the best policy, fixed policies are applied to the training dataset. Population-based augmentation (PBA) (Ho et al., 2019) generates dynamic augmentation policy schedules instead of a fixed augmentation policy. RandAugment (RA) (Cubuk et al., 2019) significantly reduces the search space and allows training on the target task without a separate proxy task. Fast AutoAugment (Fast AA) (Lim et al., 2019) determines the best augmentation policy using a more efficient search strategy based on density matching. Faster AA (Hataya et al., 2019) uses a differentiable policy search pipeline for data augmentation, which is much faster than previous methods. DADA (Li et al., 2020) also reduces the cost of policy search using a differentiable optimization problem via Gumbel-Softmax, while DeepAA (Zheng et al., 2022) uses a multi-layer data augmentation pipeline. Adversarial AutoAugment (AdvAA) (Zhang et al., 2019) applies an adversarial network to generate data augmentation: while the training network minimizes the loss, the adversarial network maximizes the training loss. Uncertainty-Based Sampling (UBS) (Wu et al., 2020) generates data augmentation of the highest loss without an adversarial network. As shown in the next section, the loss functions of AdvAA and UBS can be regarded as special cases of the LatentAugment proposed in this study. MetaAugment (Zhou et al., 2020) uses an additional augmentation policy network to minimize the weighted losses of augmented training images. DHA (Zhou et al., 2021) uses super and child networks to achieve joint optimization of the data augmentation policy, hyper-parameters, and architecture. In contrast, the proposed LatentAugment does not require any additional network for searching the augmentation policy.

The best augmentation policies are latent variables that cannot be observed. The expectation-maximization (EM) algorithm was proposed to analyze latent variables (Dempster et al., 1977; McLachlan & Krishnan, 2007; Ng et al., 2012). The EM algorithm estimates parameters using an iterative process of expectation and minimization of the loss function. However, when the dataset is large, it might be difficult to calculate the expectation or minimization over the full dataset. To address this difficulty, several approaches have been proposed, including the generalized EM algorithm (Dempster et al., 1977), the Monte Carlo EM algorithm (Wei & Tanner, 1990; Tanner, 1991), the stochastic EM algorithm (Nielsen, 2000), and the generalized Monte Carlo EM (Tran et al., 2017). For application to data augmentation, Bayesian data augmentation (Tran et al., 2017) estimates the parameters using the EM algorithm to generate data augmentation with a Bayesian approach. Bayesian data augmentation requires an adversarial network, whereas LatentAugment does not. Nevertheless, Bayesian data augmentation is the work most closely related to this study in its application of the EM algorithm to data augmentation.

2. METHOD

Consider a classification task with C categories, N training data points X = {x_1, x_2, ..., x_N}, and labels Y = {y_1, y_2, ..., y_N}. Let P(y | x, θ) denote the predicted probability of the output y given the input x and the parameters θ. Each input is transformed using random data augmentation. Let S = {1, ..., S} be the set of augmentation policies, and let z*(x, θ) be the optimal augmentation policy for the input given the parameters. The optimal policy cannot be directly observed; therefore, z*(x, θ) is a latent variable. Let π_z be the unconditional probability that augmentation policy z is applied to the input data. The loss function using the augmented data can be written as

$$\mathcal{L}(\Theta) = -\mathbb{E}_{(x,y)\sim(X,Y)}\left[\log \sum_{z\in\mathcal{S}} \pi_z\, P\big(y \mid o_z(x), \theta\big)\right],$$

where o_z(x) denotes the data augmented using policy z, Θ = {θ, π}, and π = {π_1, ..., π_S}.
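To make the marginal form of this loss concrete, the following PyTorch-style sketch computes the π-weighted mixture over augmentation policies for a mini-batch. It is an illustrative sketch under stated assumptions, not the authors' implementation: `model`, `policies` (a list of callables o_z), and `pi` are hypothetical names, and the model is assumed to output class logits.

```python
import torch
import torch.nn.functional as F

def marginal_aug_loss(model, x, y, policies, pi):
    """Negative log of the pi-weighted mixture of predictive probabilities,
    L(Theta) = -E log sum_z pi_z P(y | o_z(x), theta), averaged over the batch.

    policies: list of callables o_z mapping a batch of images to augmented images (assumed).
    pi:       tensor of unconditional policy probabilities, shape (S,).
    """
    log_probs = []
    for o_z in policies:
        logits = model(o_z(x))                                         # (B, C)
        log_p = F.log_softmax(logits, dim=1)                           # log P(. | o_z(x), theta)
        log_probs.append(log_p.gather(1, y.unsqueeze(1)).squeeze(1))   # (B,)
    log_probs = torch.stack(log_probs, dim=1)                          # (B, S)
    # log sum_z pi_z P(y | o_z(x), theta), computed stably in log space
    mixture = torch.logsumexp(log_probs + torch.log(pi), dim=1)
    return -mixture.mean()
```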

2.1. GENERALIZED EM ALGORITHM

The loss function with the latent variable can be minimized using the expectation-maximization (EM) algorithm. The EM algorithm is an iterative procedure used to compute the maximum likelihood estimate in the presence of latent variables (Ng et al., 2012). In the E-step, the expected loss function is calculated. In the M-step, the parameters are updated by minimizing the expected loss function. Let Θ^(t) = {θ^(t), π^(t)} be the parameters and the unconditional probabilities at iteration t, and let h_z^(t)(x, y, Θ^(t), S) be the conditional probability that policy z is the augmentation policy for the individual data point x given the label y at iteration t. Applying Bayes' rule, we can calculate the conditional probability (McLachlan & Krishnan, 2007):

$$h_z^{(t)}\big(x, y, \Theta^{(t)}, \mathcal{S}\big) = \frac{\pi_z^{(t)}\, P\big(y \mid o_z(x), \theta^{(t)}\big)}{\sum_{k\in\mathcal{S}} \pi_k^{(t)}\, P\big(y \mid o_k(x), \theta^{(t)}\big)}.$$

Using h_z^(t), as shown in Ng et al. (2012), the expected loss function can be written as:

$$\mathcal{E}\big(\Theta \mid \Theta^{(t)}\big) = -\mathbb{E}_{(x,y)\sim(X,Y)}\left[\sum_{z\in\mathcal{S}} h_z^{(t)} \log(\pi_z) + \sum_{z\in\mathcal{S}} h_z^{(t)} \log\big(P(y \mid o_z(x), \theta)\big)\right].$$

In the M-step, the parameters θ and the unconditional probabilities π_z are estimated by minimizing the expected loss function given the conditional probabilities h_z^(t). If solving the minimization problem of E(Θ | Θ^(t)) proves difficult, the generalized EM algorithm proposed by Dempster et al. (1977) can be used to estimate Θ^(t+1) such that E(Θ^(t+1) | Θ^(t)) < E(Θ^(t) | Θ^(t)). Calculating E(Θ | Θ^(t)) requires an expectation over all possible augmentation policies. When the number of augmentation policies S is large, the computational burden of E(Θ | Θ^(t)) cannot be neglected. Alternatively, a subset K of augmentation policies, randomly drawn from the full set S, can be used. Then, the conditional probability h_z^(t) can be written as

$$h_z^{(t)}\big(x, y, \Theta^{(t)}, \mathcal{K}\big) = \frac{\pi_z^{(t)}\, P\big(y \mid o_z(x), \theta^{(t)}\big)}{\sum_{k\in\mathcal{K}} \pi_k^{(t)}\, P\big(y \mid o_k(x), \theta^{(t)}\big)}. \qquad (2)$$

As shown in the Appendix (A.1), if the subset K is generated using simple random draws from the full set S, the expected loss function using the subset is equal to that obtained using the full set.
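A minimal sketch of the E-step quantities above, assuming the same hypothetical `model`, `policies`, and `pi` names as before: it draws a random subset K by simple random sampling and evaluates the Bayes posterior h_z of equation (2) for each example in the batch.

```python
import torch
import torch.nn.functional as F

def conditional_policy_probs(model, x, y, policies, pi, k=6):
    """E-step: h_z = pi_z P(y|o_z(x),theta) / sum_{k in K} pi_k P(y|o_k(x),theta),
    evaluated on a subset K drawn by simple random sampling (equation 2)."""
    subset = torch.randperm(len(policies))[:k].tolist()     # simple random draw of K policy indices
    with torch.no_grad():                                   # h_z is held fixed during the M-step
        loglik = torch.stack(
            [F.log_softmax(model(policies[z](x)), dim=1)
                 .gather(1, y.unsqueeze(1)).squeeze(1) for z in subset],
            dim=1)                                          # (B, K) per-policy log-likelihoods
    log_post = loglik + torch.log(pi[subset])               # log pi_z + log P(y | o_z(x), theta)
    return torch.softmax(log_post, dim=1), subset           # Bayes' rule, normalized over K
```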

2.2. LATENT AUGMENTATION POLICY

The generalized EM algorithm can estimate the parameters by minimizing the expected loss function with latent variables. However, this may cause an overfitting problem. Following AdvAA, an augmentation policy is applied to maximize the loss function using a harder augmentation policy. Let

$$L_z^{(t)} = -\log\Big(\pi_z^{(t)} \cdot P\big(y \mid o_z(x), \theta^{(t)}\big)\Big)$$

be the contribution to the loss function for input (x, y) using augmentation policy z at iteration t. Consider the conditional probability h̃_z^(t) that augmentation policy z has the highest loss in the set K, using the softmin function:

$$\tilde{h}_z^{(t)} = \Pr\big[L_z^{(t)} \geq L_k^{(t)},\ \forall k \in \mathcal{K}\big] = \Pr\big[h_z^{(t)} \leq h_k^{(t)},\ \forall k \in \mathcal{K}\big] = \frac{\exp\big(-h_z^{(t)}/\sigma\big)}{\sum_{k\in\mathcal{K}} \exp\big(-h_k^{(t)}/\sigma\big)}, \qquad (3)$$

where σ is the inverse scale parameter. Note that, from the definition of L_z, the probability of the minimum h_z is equal to that of the maximum L_z. Thus, the softmin function is related to the goal of LatentAugment, the maximization of the minimum loss.

The proposed LatentAugment can be implemented by the EM algorithm with h̃_z^(t). In the E-step, LatentAugment calculates the expected loss function weighted by the probability of the minimum conditional probability h̃_z^(t), instead of h_z^(t):

$$\tilde{\mathcal{E}}\big(\Theta \mid \Theta^{(t)}\big) = -\mathbb{E}_{(x,y)\sim(X,Y)}\, \mathbb{E}_{\mathcal{K}\sim\mathcal{S}}\left[\sum_{z\in\mathcal{K}} \tilde{h}_z^{(t)} \log(\pi_z) + \sum_{z\in\mathcal{K}} \tilde{h}_z^{(t)} \log\big(P(y \mid o_z(x), \theta)\big)\right]. \qquad (4)$$

In the M-step, the parameters are updated by minimizing the expected loss function with h̃_z^(t) fixed:

$$\theta^{(t+1)} = \theta^{(t)} - \eta\, \nabla_\theta \tilde{\mathcal{E}}\big(\Theta \mid \Theta^{(t)}\big), \qquad \pi_z^{(t+1)} = \text{Moving Average of } \frac{\mathbb{E}_{(x,y)\sim(X_B,Y_B)}\big[\tilde{h}_z^{(t)}\big]}{\mathbb{E}_{(x,y)\sim(X_B,Y_B)}\big[\sum_{z\in\mathcal{K}} \tilde{h}_z^{(t)}\big]}, \qquad (5)$$

where (X_B, Y_B) is a mini-batch of the input data. This process is iterated until convergence is achieved. The estimation procedure of LatentAugment is summarized in Algorithm 1.

Algorithm 1 LatentAugment
Input: (X, Y): dataset
Require: B: the number of mini-batches, S: the full set of augmentation policies, S: the number of augmentation policies, and σ: the inverse scale.
Initialize: π_z = 1/S, for z = {1, ..., S}. Initialize the network parameters θ^(0).
for t = 1, ..., B do
  Randomly draw the subset K from S.
  Calculate h̃_z^(t) using equation (3).
  E-step: Calculate Ẽ using equation (4).
  M-step: Update the parameters θ^(t) and π^(t) using equation (5).
end for
Return: θ^(B) and π^(B)
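The following sketch puts the pieces of Algorithm 1 together for a single mini-batch: it forms the softmin weights h̃_z of equation (3), takes one SGD step on the weighted expected loss of equation (4), and updates π by a moving average as in equation (5). Names such as `policies` and `pi_history`, and the exact moving-average bookkeeping for unsampled policies, are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def latent_augment_step(model, optimizer, x, y, policies, pi, pi_history,
                        k=6, sigma=1.0, window=10):
    """One EM iteration of LatentAugment on a mini-batch (illustrative sketch)."""
    subset = torch.randperm(len(policies))[:k].tolist()
    aug = [policies[z](x) for z in subset]                   # apply each sampled policy once

    # E-step: h_z (Eq. 2) and the softmin weights h~_z of the highest-loss policy (Eq. 3)
    with torch.no_grad():
        loglik = torch.stack(
            [F.log_softmax(model(xa), dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
             for xa in aug], dim=1)                          # (B, K)
        h = torch.softmax(loglik + torch.log(pi[subset]), dim=1)
        h_tilde = torch.softmax(-h / sigma, dim=1)           # softmin over h_z

    # M-step for theta: minimize the weighted expected loss (Eq. 4) with h~_z fixed
    loglik = torch.stack(
        [F.log_softmax(model(xa), dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
         for xa in aug], dim=1)
    loss = -(h_tilde * (torch.log(pi[subset]) + loglik)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # M-step for pi: moving average of the batch expectation of h~_z (Eq. 5)
    pi_new = pi.clone()
    pi_new[subset] = h_tilde.mean(dim=0)                     # sampled policies get the batch mean
    pi_history.append(pi_new)
    if len(pi_history) > window:
        pi_history.pop(0)
    pi = torch.stack(pi_history).mean(dim=0)
    return loss.item(), pi / pi.sum()                        # renormalize pi
```

In this sketch, entries of π for policies not sampled at an iteration simply carry over their previous values before the moving average, which is one possible reading of equation (5).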

2.3. ADVANTAGES OF THE LATENTAUGMENT

The proposed LatentAugment has the following advantages over existing augmentation methods:

1. The weighted augmentation policies are optimized for the individual input. Most recent studies, such as AA, use randomly drawn policies; however, they do not apply policies that are appropriate for each input data point. In contrast, LatentAugment tailors the randomly drawn policies to each input by calculating the conditional probabilities given that input.

2. It provides a closed-form solution for the probability of the optimal augmentation policies. LatentAugment can estimate the unconditional probability (π_z) of the optimal augmentation policies using the closed-form solution (5) of the loss minimization. Thus, LatentAugment does not involve the additional cost of searching for these policies.

3. It is simple and computationally efficient. AdvAA proposed the use of a GAN to solve the maximization of the minimum loss function, which requires additional training costs for the adversarial network. In contrast, the proposed LatentAugment solves the max-min problem using the conditional probability (h̃_z^(t)) of the highest loss without an adversarial network, and can be trained using simple stochastic gradient descent.

4. It is a general model that includes other augmentation methods. The proposed LatentAugment is a general augmentation method that includes other methods, such as UBS and AdvAA, as special cases.

The following theorem shows that UBS with a single data point and AdvAA can be considered special cases of LatentAugment:

Theorem 2.1. (Special Case of LatentAugment). Assume that the unconditional probabilities for all augmentation policies are the same (π_z = 1/S, ∀z). If the inverse scale parameter σ → 0, the gradient of the expected loss function of LatentAugment is equal to that of UBS. If σ → ∞, the gradient of the expected loss function of the adversarial network with LatentAugment is equal to that of AdvAA.

The proof can be found in Appendix A.2. Note that the gradient of the expected loss function of LatentAugment with σ → ∞ can be equivalent to that of AdvAA. However, this means that LatentAugment cannot maximize the minimum loss without an additional network, and thus the efficiency advantages of LatentAugment would be lost.
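A quick numerical check of the two limits in Theorem 2.1: with a tiny σ, the softmin weights collapse onto the single lowest-h_z (highest-loss) policy, recovering UBS-style selection, while a large σ gives the uniform 1/K weights that match AdvAA's averaging over all sampled policies. The values of h below are arbitrary illustrative numbers, not results from the paper.

```python
import torch

h = torch.tensor([0.50, 0.30, 0.15, 0.05])    # illustrative conditional probabilities h_z, K = 4

def softmin(h, sigma):
    return torch.softmax(-h / sigma, dim=0)    # h~_z of equation (3)

print(softmin(h, 1e-3))   # sigma -> 0: ~[0, 0, 0, 1], one-hot on the smallest h_z (highest loss), as in UBS
print(softmin(h, 1e3))    # sigma -> infinity: ~[0.25, 0.25, 0.25, 0.25], uniform 1/K weights, as in AdvAA
```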

3. EXPERIMENTS

3.1. EXPERIMENT SETTING

This section describes the experiments investigating the performance of the proposed LatentAugment using the CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), and ImageNet (Russakovsky et al., 2015) datasets. In these experiments, the network models are ResNet-50 (He et al., 2016), Wide-ResNet 40-2 and Wide-ResNet 28-10 (Zagoruyko & Komodakis, 2016), Shake-Shake 26 2×32d, 26 2×96d, and 26 2×112d (Gastaldi, 2017), and PyramidNet with ShakeDrop with a depth of 272 and an alpha of 200 (Han et al., 2017; Yamada et al., 2018). The unconditional probability (π_z) was initialized as 1/S. The range of the magnitude of each transformation was discretized into 10 values, which were drawn uniformly at random. The unconditional probability (π_z) was calculated using a moving average, whose length was fixed at 10 iterations in this experiment. Estimating the expected loss function using the full set S of augmentation policies is computationally prohibitive because of the large size S = 256; instead, a subset K randomly drawn from the full set S was used. In this experiment, the subset size of the augmentation policies was set to six (K = 6), and the inverse scale parameter σ was set to one. The effects of the unconditional probability, subset size, and inverse scale are discussed in later sections of the paper.
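For reference, the settings described above can be collected into a single configuration. The field names below are hypothetical, but the values follow the text.

```python
# Hypothetical configuration names; values taken from the experiment setting described above.
latent_augment_config = dict(
    num_policies=256,       # size S of the full augmentation policy set
    subset_size=6,          # K policies drawn per example
    inverse_scale=1.0,      # sigma in the softmin of equation (3)
    magnitude_bins=10,      # each transformation magnitude discretized into 10 levels
    pi_init=1.0 / 256,      # uniform initialization of the unconditional probabilities
    moving_average_len=10,  # iterations averaged when updating pi
)
```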

3.2. CIFAR-10 RESULTS

The CIFAR-10 dataset has a total of 60,000 images, including 50,000 for the training set and 10,000 for the test set. Each image, with a size of 32×32, belongs to one of 10 classes. The baseline is trained with standard data augmentation using horizontal flips with 50% probability, zero-padding, and random crops. The proposed LatentAugment first applies the baseline preprocessing, then applies LatentAugment using six policies randomly drawn from 256 policies, and finally applies Cutout (DeVries & Taylor, 2017) or Cutmix (Yun et al., 2019).

Table 2 shows the test accuracy for different network models on the CIFAR-10 dataset. For all models, the proposed LatentAugment achieved better performance than existing augmentation methods. For example, LatentAugment achieved improvements of 0.15% and 0.36% over AdvAA and UBS, respectively, on the Wide-ResNet 28-10 model. The test accuracy of the proposed LatentAugment using PyramidNet+ShakeDrop was 98.72%, which was 0.08% and 0.06% better than that of AdvAA and UBS, respectively.

To compare with AA, we tested the proposed model using the same transformations as AA, which uses the policy set with SamplePairing (Inoue, 2018) instead of Mixup (Zhang et al., 2017), and finally applies Cutout instead of Cutmix. The test accuracies of LatentAugment using the same transformations as AA were 96.91±0.05% for the Wide-ResNet 40-2 model and 98.01±0.05% for Wide-ResNet 28-10 (Table 5). Thus, the proposed method outperforms AA even when neither Mixup nor Cutmix is used. Adversarial AutoAugment (AdvAA) also applies the same transformations as AA, although the subset size of AdvAA is 8. To compare with AdvAA, we tested the model using the same subset size. The test accuracy of Wide-ResNet 28-10 with LatentAugment using the same transformations as AA and subset size K = 8 is 98.16±0.07%. Thus, the proposed method is marginally better than AdvAA even when the same transformations and subset sizes are used. See Tables 5 and 6 in the Appendix.

3.3. CIFAR-100 RESULTS

The CIFAR-100 dataset also has a total of 60,000 images, including 50,000 for the training set and 10,000 for the test set. The number of categories is 100. The procedure for the baseline and LatentAugment is the same as that for CIFAR-10. As with CIFAR-10, the proposed LatentAugment achieved better accuracy than existing augmentation methods, except for Shake-Shake (26 2×96d), for which the test accuracy of LatentAugment was slightly lower than that of AdvAA and MetaAugment (MA).

3.4. SVHN RESULTS

The SVHN dataset has 73,257 digit images in the core training set, 531,131 in the additional training set, and 26,032 in the test set. In this experiment, both the core and additional training sets were used. The number of categories is 10. The baseline was trained using normalized data. The proposed method first applies LatentAugment using six policies randomly drawn from 256 policies, then normalizes the data, and finally applies Cutout with a region size of 20×20 pixels, following the method proposed by DeVries & Taylor (2017). LatentAugment using Wide-ResNet 28-10 achieves a 0.03% improvement compared to AA.

3.5. IMAGENET RESULTS

The ImageNet dataset has more than 1.2 million training images, 50,000 validation images, and 100,000 test images. The number of categories is 1,000. Following AA, the baseline augmentation uses standard Inception-style pre-processing, including horizontal flips with 50% probability and random distortions of colors. The proposed LatentAugment first applies the baseline preprocessing, then applies LatentAugment using six policies randomly drawn from 256 policies, and finally applies Cutmix. The proposed method outperformed previous augmentation methods.

3.6. CHOICE OF THE SUBSET SIZE

This experiment used a subset size of K = 6. To determine the optimal size of the subset, this study used the Wide-ResNet 28-10 to evaluate the performance of the proposed LatentAugment with different K, where K ∈ {2, 4, 6, 8}. Figure 2 suggests that the test accuracy of the model rapidly increases up to K = 6. However, no significant improvement was observed when K was 8. In contrast, the computational cost increases with K. Therefore, after comparing the computational cost and performance, all the experiments in this study used K = 6 for LatentAugment. Figure 2 also shows the results of AdvAA and UBS. AdvAA uses instances of K ∈ {2, 4, 8, 16, 32} for each input example, augmented by adversarial policies. The study of UBS reports the experimental results using K = 4 with a single data point and K = 8 with four data points for training. This figure suggests that the proposed LatentAugment is more efficient than AdvAA and UBS, because LatentAugment with K = 4 outperforms AdvAA and UBS with K = 8.

3.7. THE EFFECTS OF THE INVERSE SCALE

LatentAugment requires determining the inverse scale parameter σ, which was set to 1 in the previous sections. This section considers the effect of the inverse scale using different values. Figure 3 shows the test accuracy of LatentAugment with different inverse scale values using the Wide-ResNet 40-2 model on CIFAR-10. It suggests that the test accuracy is maximized at σ = 1, although the effect of the inverse scale is weak except as σ → 0.

3.8. THE EFFECTS OF THE UNCONDITIONAL PROBABILITY

LatentAugment estimates the unconditional probabilities (π_z) as well as the network parameters (θ). As shown in Theorem 2.1, if the unconditional probabilities are fixed at the same value, the derivative of LatentAugment reduces to that of AdvAA or UBS. Table 3 shows the effects of the unconditional probability in LatentAugment using the Wide-ResNet 40-2 model on CIFAR-10. Cell (a), where the unconditional probabilities (π_z) are fixed and the inverse scale parameter (σ) is set to 0, is equivalent to UBS with a single data point of the highest loss. In contrast, cell (d), where π_z is estimated and σ = 1, gives the test accuracy of the proposed LatentAugment, which allows a variable π_z and multiple data points for the expectation of the loss function. The table suggests that an unfixed π_z can slightly improve the test accuracy over a fixed π_z. However, the effect of fixing π_z on test accuracy is weaker than the effect of setting σ to zero. Thus, for better performance of the proposed LatentAugment, using multiple data points for the expected loss weighted by the conditional probability of the highest loss has a more significant effect than using the unfixed unconditional probability.

4. CONCLUSIONS

This study introduces LatentAugment, which estimates the probability of the latent augmentation policy customized to each input image and network model. The proposed method is appealing in that it can dynamically optimize the augmentation methods for each input and model parameter across learning iterations. As shown in the theoretical analysis, LatentAugment is a general model that includes AdvAA and UBS as special cases. Furthermore, the proposed method is simple and computationally efficient in comparison with existing methods that require a generative adversarial network. Experimental results show that the proposed LatentAugment achieves better performance than previous augmentation methods on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets.

Finally, an open question remains regarding the robustness of LatentAugment under the EM algorithm, which typically converges to a local optimum. While we checked the stability of the test accuracy with five runs using different random seeds, the convergence of LatentAugment is an important theme for future research. Applications of LatentAugment to object detection, image generation, and text recognition are also interesting topics. We leave such directions to future work.

This section evaluates the transferability of LatentAugment across different datasets and model architectures. We first take a snapshot of the unconditional probabilities π_z of ResNet-50 on ImageNet using LA, and then apply the fixed π_z to train Wide-ResNet 40-2 models on CIFAR-10 or CIFAR-100 using LA.

This section provides the results of additional experiments using the MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), and Oxford flowers102 (Nilsback & Zisserman, 2008) datasets.

A.7.1 MNIST

MNIST is a large database of handwritten digits. It has a total of 70,000 images, including 60,000 for the training set and 10,000 for the test set. Each example is a 28×28 grayscale image associated with a label from 10 classes. The baseline is trained with standard data augmentation using zero-padding and random crops. The proposed LatentAugment first applies the baseline preprocessing, then applies LatentAugment using six policies randomly drawn from 256 policies, and finally applies Cutout. We use the hyperparameters of WRN40-2 on CIFAR-10 in Table 4.

A.7.2 FASHION MNIST

Fashion MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28×28 grayscale image associated with a label from 10 classes. The baseline is trained with standard data augmentation using horizontal flips with 50% probability, zero-padding, and random crops. The proposed LatentAugment first applies the baseline preprocessing, then applies LatentAugment using six policies randomly drawn from 256 policies, and finally applies Cutmix. We use the hyperparameters of WRN40-2 on CIFAR-10 in Table 4.

A.7.3 OXFORD FLOWERS102

Oxford 102 Flower is an image classification dataset consisting of 102 flower categories. The chosen flowers are those commonly occurring in the United Kingdom. Each class consists of between 40 and 258 images. Baseline augmentation uses the standard Inception-style pre-processing, including horizontal flips with 50% probability and random distortions of colors. The proposed LatentAugment first applies the baseline preprocessing, then applies LatentAugment using six policies randomly drawn from 256 policies, and finally applies Cutmix. We use the hyperparameters of ResNet-50 on ImageNet in Table 4.

Table 8: The test accuracies of the additional experiments using MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), and Oxford flowers102 (Nilsback & Zisserman, 2008).

Convergence of the EM algorithm is usually defined as a sufficiently small change in the loss function (Aitkin & Aitkin, 1996). To confirm convergence, Figure 4 shows the loss functions using the network models of Wide-ResNet 40-2, Wide-ResNet 28-10, PyramidNet, and Shake-Shake on CIFAR-10 and CIFAR-100. The figure indicates the convergence of the estimation using the EM algorithm.


Figure 4 : The loss functions of the different network models on CIFAR-10 and CIFAR-100.



PyTorch code for the experiments in this paper can be downloaded from GitHub (https://github.com/xxxx/xxxxxx).



Figure 2: The test accuracies with different subset sizes (K). It shows the test accuracies of LatentAugment (LA), Uncertainty-Based Sampling (UBS), and Adversarial AutoAugment (AdvAA) with different subset sizes (K) using the Wide-ResNet 28-10 model on CIFAR-10. This figure replicates the results of UBS from Wu et al. (2020) and AdvAA from Zhang et al. (2019).

Figure 3: The test accuracies with the different inverse scale parameter (σ). It shows the test accuracies of LatentAugment with the different inverse scale values using the Wide-ResNet 40-2 model on CIFAR-10.

Table 1: Comparison of the training cost and test accuracy of the proposed LatentAugment (LA) with RandAugment (RA) (Cubuk et al., 2019), Adversarial AutoAugment (AdvAA) (Zhang et al., 2019), and Uncertainty-Based Sampling (UBS) (Wu et al., 2020) using the Wide-ResNet 28-10 model on CIFAR-10. The training cost in required GPU hours is reported relative to RA.

Table 1 compares the training cost in required GPU hours between the proposed LatentAugment and the other methods.



Table 2: Test accuracy (%) on CIFAR-10, CIFAR-100, SVHN, and ImageNet. All experiments in this study replicate the results of Baseline and AutoAugment (AA) (Cubuk et al., 2018), Adversarial AutoAugment (AdvAA) (Zhang et al., 2019), Uncertainty-Based Sampling (UBS) (Wu et al., 2020), and MetaAugment (MA) (Zhou et al., 2020). For the proposed LatentAugment (LA), averages of five runs are reported. Network models are Wide-ResNet 40-2 and Wide-ResNet 28-10 (Zagoruyko & Komodakis, 2016); Shake-Shake 26 2×32d, 26 2×96d, and 26 2×112d (Gastaldi, 2017); PyramidNet with ShakeDrop (Han et al., 2017; Yamada et al., 2018); and ResNet-50 (He et al., 2016). See text for more details.

Table 3: The test accuracies using fixed or unfixed unconditional probabilities with different inverse scale parameters. Averages of five runs are reported.

Table 5: Test accuracies (%) on CIFAR-10 using the same transformations as AutoAugment (AA). For the proposed LatentAugment (LA), averages of five runs are reported.

Table 6: Test accuracies (%) on CIFAR-10 using the same subset size and transformations as Adversarial AutoAugment (AdvAA). For the proposed LatentAugment (LA), an average of five runs is reported.

Table 7 provides the experimental results on transferability. It suggests that LA with policy transfer still performs well.

Table 7: The test accuracies when transferring the unconditional probabilities of the augmentation policies.

A APPENDIX

A.1 RANDOMLY DRAWN SUBSET OF THE AUGMENTATION POLICIES

Let δ_z be the probability that policy z is drawn into the subset. The conditional probability with δ_z can be written as:

$$h_z^{(t)}\big(x, y, \Theta^{(t)}, \mathcal{K}, \delta\big) = \frac{\delta_z\, \pi_z^{(t)}\, P\big(y \mid o_z(x), \theta^{(t)}\big)}{\sum_{k\in\mathcal{K}} \delta_k\, \pi_k^{(t)}\, P\big(y \mid o_k(x), \theta^{(t)}\big)}.$$

The expected loss function using the randomly drawn subset given δ_z is

$$\mathcal{E}\big(\Theta \mid \Theta^{(t)}, \delta\big) = -\mathbb{E}_{(x,y)\sim(X,Y)}\, \mathbb{E}_{\mathcal{K}\sim\mathcal{S}}\left[\sum_{z\in\mathcal{K}} h_z^{(t)} \log(\pi_z) + \sum_{z\in\mathcal{K}} h_z^{(t)} \log\big(P(y \mid o_z(x), \theta)\big)\right].$$

Assume that the policies of the subset are drawn using simple random draws, that is, δ_z is the same for every policy z ∈ S. Under this assumption, the expected loss function using a randomly drawn subset is equal to the expected loss function using the full set:

$$\mathcal{E}\big(\Theta \mid \Theta^{(t)}, \delta\big) = \mathcal{E}\big(\Theta \mid \Theta^{(t)}\big).$$

A.2 PROOF OF THEOREM 2.1 (SPECIAL CASE OF LATENTAUGMENT)

A.2.1 UNCERTAINTY-BASED SAMPLING (UBS)

The loss function of UBS is

$$\mathcal{L}_{\text{UBS}} = -\mathbb{E}_{(x,y)\sim(X,Y)}\, \mathbb{E}_{\mathcal{K}\sim\mathcal{S}}\Big[\log P\big(y \mid o_{z^*}(x), \theta\big)\Big],$$

i.e., the loss of the augmented sample with the highest loss in the subset. If the inverse scale σ → 0, the conditional probability h̃_z can be approximated by the indicator function

$$\tilde{h}_z^{(t)} \to \mathbf{1}\big[z = z^*\big],$$

where z* is the policy satisfying $L_{z^*}^{(t)} \geq L_r^{(t)}$ for all r ∈ K. Therefore, the expected loss function of LatentAugment becomes

$$\tilde{\mathcal{E}}\big(\Theta \mid \Theta^{(t)}\big) \to -\mathbb{E}_{(x,y)\sim(X,Y)}\, \mathbb{E}_{\mathcal{K}\sim\mathcal{S}}\Big[\log\big(P(y \mid o_{z^*}(x), \theta)\big)\Big] - \mathbb{E}\big[\log(\pi_{z^*})\big].$$

Note that the first term is the same as the loss function of uncertainty-based sampling evaluated at θ = θ^(t), while the second term is constant when π_z = 1/S for all z. Therefore, ∇_θ Ẽ(Θ | Θ^(t)) → ∇_θ L_UBS, if π_z = 1/S for all z and σ → 0.

A.2.2 ADVERSARIAL AUTOAUGMENT (ADVAA)

The loss function of AdvAA is

$$\mathcal{L}_{\text{AdvAA}} = -\mathbb{E}_{(x,y)\sim(X,Y)}\, \mathbb{E}_{\mathcal{K}\sim A(\mathcal{S},\mu)}\left[\frac{1}{K}\sum_{z\in\mathcal{K}} \log P\big(y \mid o_z(x), \theta\big)\right],$$

where A(S, µ) is the adversarial network over the set of augmentation policies S with parameters µ. If σ → ∞ in LatentAugment, h̃_z → 1/K. Assume π_z = 1/S for all z in LatentAugment. Then, the loss function of the adversarial network with LatentAugment is equal to the loss function of AdvAA plus a constant:

$$\mathbb{E}_{\mathcal{K}\sim A(\mathcal{S},\mu)}\Big[\tilde{\mathcal{E}}\big(\Theta \mid \Theta^{(t)}\big)\Big] = \mathcal{L}_{\text{AdvAA}} + \text{const.}$$

Therefore, ∇_θ E_{K∼A(S,µ)} Ẽ(Θ | Θ^(t)) → ∇_θ L_AdvAA, if π_z = 1/S for all z and σ → ∞.

A.3 COMPUTER RESOURCES

We train the models with LatentAugment using computers with 4 NVIDIA RTX 2080Ti GPUs and 64 GB of memory.

A.4 HYPERPARAMETERS

This section provides a comparison between the proposed LatentAugment (LA) and AutoAugment (AA) or Adversarial AutoAugment (AdvAA) using the same subset size and transformations. To compare with AA, we tested the proposed model using the same transformations as AA, which uses the policy set with SamplePairing (Inoue, 2018) instead of Mixup (Zhang et al., 2017), and finally applies Cutout instead of Cutmix. Table 5 shows the test accuracies of LA for the Wide-ResNet 40-2 and Wide-ResNet 28-10 models using the same transformations as AA. This table also provides the result of UBS using the same transformations as AA, as reported by Wu et al. (2020).

AdvAA applies the same transformations as AA, although the subset size of AdvAA is 8. To compare with AdvAA, we tested the model using the same subset size. Table 6 shows the test accuracy of Wide-ResNet 28-10 with LA using the same transformations as AA and subset size K = 8.

