TOWARDS UNDERSTANDING LABEL SMOOTHING

Abstract

Label smoothing regularization (LSR) has achieved great success in training deep neural networks with stochastic algorithms such as stochastic gradient descent and its variants. However, the theoretical understanding of its power from the viewpoint of optimization remains limited. This study opens the door to a deeper understanding of LSR by initiating such an analysis. In this paper, we analyze the convergence behavior of stochastic gradient descent with label smoothing regularization for solving non-convex problems and show that an appropriate LSR can help speed up convergence by reducing the variance. More interestingly, we propose a simple yet effective strategy, namely the Two-Stage LAbel smoothing algorithm (TSLA), which uses LSR in the early training epochs and drops it in the later training epochs. The improved convergence result of TSLA shows that it benefits from LSR in the first stage and essentially converges faster in the second stage. To the best of our knowledge, this is the first work to explain the power of LSR by establishing the convergence complexity of stochastic methods with LSR in non-convex optimization. We empirically demonstrate the effectiveness of the proposed method in comparison with baselines by training ResNet models on benchmark data sets.

1. INTRODUCTION

In training deep neural networks, one common strategy is to minimize the cross-entropy loss with one-hot label vectors, which may lead to overfitting during training and thus lower generalization accuracy (Müller et al., 2019). To overcome the overfitting issue, several regularization techniques, such as the ℓ1-norm or ℓ2-norm penalty over the model weights, Dropout, which randomly sets the outputs of neurons to zero (Hinton et al., 2012b), batch normalization (Ioffe & Szegedy, 2015), and data augmentation (Simard et al., 1998), are employed to prevent deep learning models from becoming over-confident. However, these regularization techniques operate on the hidden activations or weights of a neural network. As an output regularizer, label smoothing regularization (LSR) (Szegedy et al., 2016) was proposed to improve the generalization and learning efficiency of a neural network by replacing one-hot label vectors with smoothed labels that average the hard targets and the uniform distribution over the labels. Specifically, for a K-class classification problem, the one-hot label is smoothed as y^LS = (1 − θ)y + θŷ, where y is the one-hot label, θ ∈ (0, 1) is the smoothing strength, and ŷ = (1/K)·1 is the uniform distribution over all labels. Extensive experimental results have shown that LSR has achieved significant success in many deep learning applications, including image classification (Zoph et al., 2018; He et al., 2019), speech recognition (Chorowski & Jaitly, 2017; Zeyer et al., 2018), and language translation (Vaswani et al., 2017; Nguyen & Salazar, 2019). Due to the importance of LSR, researchers have tried to explore its behavior in training deep neural networks. Müller et al. (2019) have empirically shown that LSR can help improve model calibration; however, they also found that LSR can impair knowledge distillation, that is, if one trains a teacher model with LSR, then a distilled student model has worse performance. Yuan et al.
(2019a) have proved that LSR provides a virtual teacher model for knowledge distillation. As a widely used trick, Lukasik et al. (2020) have shown that LSR works because it can successfully mitigate label noise. However, to the best of our knowledge, it remains unclear, at least from a theoretical viewpoint, how introducing label smoothing helps improve the training of deep learning models, and to what extent it can help. In this paper, we aim to provide an affirmative answer to this question and to deeply understand why and how LSR works from the viewpoint of optimization. Our theoretical analysis will show that an appropriate LSR can essentially reduce the variance of the stochastic gradient
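To make the smoothing operation concrete, the following is a minimal sketch of the formula y^LS = (1 − θ)y + θŷ in plain Python; the function name `smooth_labels` is our own illustrative choice, not from the paper.

```python
def smooth_labels(y_onehot, theta):
    """Blend a one-hot label vector with the uniform distribution over K classes:
    per component, y_LS[i] = (1 - theta) * y[i] + theta / K."""
    K = len(y_onehot)
    return [(1.0 - theta) * yi + theta / K for yi in y_onehot]

# 4-class example with smoothing strength theta = 0.1:
y = [0.0, 1.0, 0.0, 0.0]            # hard one-hot target for the second class
y_ls = smooth_labels(y, theta=0.1)
# the true class gets 0.9 + 0.1/4 = 0.925; every other class gets 0.1/4 = 0.025
```

Note that the smoothed vector still sums to one, so it remains a valid probability distribution over the K labels.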

