

Abstract

Softmax with the cross entropy loss is the standard configuration for current neural text classification models. The gold score for a target class is supposed to be 1, but it is never reachable under the softmax scheme. This keeps the training process from ever converging and leads to overfitting. Moreover, the "target-approach-1" training goal forces the model to keep learning from all samples, wasting time on samples that have already been classified correctly with high confidence, while the test goal simply requires the target class of each sample to hold the maximum score. To address these weaknesses, we propose Adaptive Sparse softmax (AS-Softmax), which applies a principled, test-matching transformation on top of softmax. For more purposeful learning, we discard the classes whose scores are far smaller than that of the actual class during training. The model can then focus on distinguishing the target class from its strong opponents, which is also the real challenge at test time. In addition, since the training losses of easy samples gradually drop to 0 in AS-Softmax, we develop an adaptive gradient accumulation strategy based on the masked sample ratio to speed up training. We verify the proposed AS-Softmax on a variety of multi-class, multi-label and token classification tasks with class sizes ranging from 5 to 5000+. The results show that AS-Softmax consistently outperforms softmax and its variants, and that the loss of AS-Softmax is strongly correlated with classification performance on validation data. Furthermore, the adaptive gradient accumulation strategy brings about a 1.2× training speedup compared with the standard softmax while maintaining classification effectiveness.
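The core masking idea described above can be sketched in a few lines. This is a minimal illustration under one reading of the abstract, not the paper's exact formulation; in particular, the margin hyperparameter `delta` and the thresholding rule are assumptions.

```python
import math

def as_softmax_loss(logits, target, delta=4.0):
    """Sketch of the AS-Softmax idea: discard classes whose score is far
    below the target class, then compute softmax cross entropy over the
    remaining (target + strong-opponent) classes.
    `delta` is a hypothetical margin hyperparameter, not from the paper."""
    kept = [z for z in logits if z >= logits[target] - delta]  # target always survives
    m = max(kept)                                              # for numerical stability
    log_norm = m + math.log(sum(math.exp(z - m) for z in kept))
    return log_norm - logits[target]                           # -log softmax(target | kept)

# A hard sample keeps a strong opponent and gets a nonzero loss ...
hard = as_softmax_loss([5.0, 4.5, 0.0, -1.0], target=0)
# ... while an easy sample keeps only the target and its loss drops to 0,
# which is what makes the adaptive gradient accumulation strategy possible.
easy = as_softmax_loss([10.0, 0.0, 0.0], target=0)
```

Note that when only the target class survives the mask, the retained distribution puts probability 1 on the target, so the loss is exactly 0 and the sample contributes no gradient.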

1. INTRODUCTION

Softmax is widely used as the last-layer activation function of a neural classification model. It normalizes the output of a network into a probability distribution over the predicted output classes. Softmax is extremely appealing in both academia and industry, as it is simple to evaluate and differentiate. Many research efforts have been devoted to refining softmax. On one hand, several works (Shim et al., 2017; Liu et al., 2017) have explored accelerating the training process by outputting only a subset of classes to reduce computational complexity. On the other hand, improving training effectiveness is also an important direction. For example, Liu et al. (2016) and Liang et al. (2017) enlarged the margin between intra-class and inter-class distances in softmax, while Szegedy et al. (2016) used label smoothing to prevent neural networks from excessively trusting gold labels.

Despite its probability form, every output score of softmax lies strictly within the range (0, 1), unlike the sparse probability distributions of real classification tasks. Consequently, the weights of all classes have to be updated endlessly, which wastes time and leads to overfitting (Sun et al., 2021). In addition, the training goal of softmax with the cross entropy loss is to make the target score approach 1, while at test time we only expect the target score to exceed the scores of the other classes. Thus there is an obvious gap between the two objectives. Take the following two prediction cases in a 5-class classification task, where Class 1 is the gold label:

Case A: {0.4, 0.15, 0.15, 0.15, 0.15}
Case B: {0.49, 0.5, 0.004, 0.003, 0.003}
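The gap can be checked numerically: Case A is classified correctly (the gold Class 1 holds the maximum score) yet receives the larger cross entropy loss, while Case B is misclassified yet receives the smaller one. A minimal sketch, using the case values from the example above:

```python
import math

def cross_entropy(probs, target):
    """Cross entropy loss for a single sample: -log p(target)."""
    return -math.log(probs[target])

case_a = [0.4, 0.15, 0.15, 0.15, 0.15]    # Class 1 wins: classified correctly
case_b = [0.49, 0.5, 0.004, 0.003, 0.003]  # Class 2 wins: misclassified

loss_a = cross_entropy(case_a, 0)  # -ln(0.4)  ~= 0.916
loss_b = cross_entropy(case_b, 0)  # -ln(0.49) ~= 0.713

# The correctly classified case is penalized more than the misclassified one.
assert loss_a > loss_b
```

So from the viewpoint of the test-time criterion, cross entropy over the full softmax distribution orders these two cases the wrong way around, which motivates a training objective that matches the test goal.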

