

Abstract

Softmax with the cross entropy loss is the standard configuration for current neural text classification models. The gold score for a target class is supposed to be 1, but it is never reachable under the softmax scheme. This problem makes the training process continue forever and leads to overfitting. Moreover, the "target-approaches-1" training goal forces the model to keep learning from all samples, wasting time on samples that have already been classified correctly with high confidence, while at test time we simply require the target class of each sample to hold the maximum score. To address these weaknesses, we propose the Adaptive Sparse softmax (AS-Softmax), which designs a reasonable and test-matching transformation on top of softmax. For more purposeful learning, we discard during training the classes whose scores are far smaller than that of the actual class. The model can then focus on learning to distinguish the target class from its strong opponents, which is also the main challenge at test time. In addition, since the training losses of easy samples gradually drop to 0 under AS-Softmax, we develop an adaptive gradient accumulation strategy based on the masked sample ratio to speed up training. We verify the proposed AS-Softmax on a variety of multi-class, multi-label and token classification tasks with class sizes ranging from 5 to 5000+. The results show that AS-Softmax consistently outperforms softmax and its variants, and that the loss of AS-Softmax is remarkably correlated with classification performance in validation. Furthermore, the adaptive gradient accumulation strategy brings about a 1.2× training speedup compared with standard softmax while maintaining classification effectiveness.

1. INTRODUCTION

Softmax is widely used as the last-layer activation function of neural classification models. It normalizes the output of a network into a probability distribution over the predicted output classes. Softmax is extremely appealing in both academia and industry because it is simple to evaluate and differentiate, and many research efforts have been devoted to refining it. On one hand, several works (Shim et al., 2017; Liu et al., 2017) have explored accelerating the training process by outputting a subset of classes to reduce computational complexity. On the other hand, improving training effectiveness is also an important direction. For example, Liu et al. (2016) and Liang et al. (2017) enlarged the intra-class and inter-class margins in softmax, while Szegedy et al. (2016) used label smoothing to prevent neural networks from trusting gold labels excessively. Despite its probability form, every output score of softmax lies strictly within the range (0, 1), unlike the sparse probability distributions of real classification tasks. Consequently, the weights of all the classes have to be updated endlessly, which wastes a lot of time and leads to overfitting (Sun et al., 2021). In addition, the training goal of softmax with the cross entropy loss is to make the target score approach 1, whereas in test we only expect the target score to be higher than the scores of all other classes, so there is an obvious gap between the two objectives. Take the following two prediction cases in a 5-class classification task, where Class 1 is the gold label, as an example: Case A: {0.4, 0.15, 0.15, 0.15, 0.15}; Case B: {0.49, 0.5, 0.004, 0.003, 0.003}. In Case A, the score of Class 1 is far larger than that of every other class, leading to a successful prediction. By contrast, Case B contains a strong negative class and its classification result is wrong.
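The bias can be made concrete with a few lines of arithmetic (a minimal check, assuming the standard cross entropy loss -log p_target on the probability vectors above):

```python
import math

# The two 5-class prediction cases above; Class 1 (index 0) is the gold label.
case_a = [0.4, 0.15, 0.15, 0.15, 0.15]
case_b = [0.49, 0.5, 0.004, 0.003, 0.003]

loss_a = -math.log(case_a[0])  # -log(0.40) ~= 0.916
loss_b = -math.log(case_b[0])  # -log(0.49) ~= 0.713

# Case A is classified correctly: its gold class holds the maximum score ...
assert max(range(5), key=lambda i: case_a[i]) == 0
# ... while Case B is misclassified: Class 2 narrowly wins ...
assert max(range(5), key=lambda i: case_b[i]) == 1
# ... yet cross entropy assigns the *larger* loss to the already-correct
# Case A, biasing training effort away from the hard sample.
assert loss_a > loss_b
```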
Unfortunately, during training, since the target score in Case B is higher than that in Case A, the corresponding cross entropy loss biases the model towards improving Case A, although Case B is the hard sample in practice. Our experiments show that the correlation coefficient between the cross entropy loss of softmax and the classification accuracy in validation is sometimes even close to 0 (refer to Table 4). To address the above-mentioned deficiency, we propose the Adaptive Sparse softmax (AS-Softmax), which applies a reasonable and test-matching transformation on top of softmax. Specifically, when computing the softmax activation function, we exclude the classes whose scores are smaller than that of the actual class by a given margin. In this way, the model focuses on learning to distinguish the target class from its strong opponents, matching its use in test. It is also noted that the loss of AS-Softmax can drop to 0 as long as the score of the target class is sufficiently larger than all other classes. In other words, as training progresses, more and more easy samples are dropped from back propagation, so the learning process concentrates on the useful hard training instances, which makes learning more effective. In practice, it is not always the case that more training samples help the final performance of the model: Yao et al. (2022) and Wang et al. (2017) claimed that training the model with representative instances is more efficient, and easy samples can be discarded by a cascade structure at early stages (Yang et al., 2019) to learn better classification models. Moreover, the growing number of removed samples also guides us in developing a novel adaptive gradient accumulation strategy that makes the training process progressively faster.

We test AS-Softmax on a variety of text classification tasks, including multi-class classification (Socher et al., 2013; Larson et al., 2019), multi-label classification (Chalkidis et al., 2021; Kowsari et al., 2017) and token classification (Sang & De Meulder, 2003; Tseng et al., 2015). All the benchmarks are publicly available and the class sizes range from 5 to 5000+. Experiments show that AS-Softmax consistently improves the performance of text classification by 1% to 2% and alleviates overfitting to a large extent. In addition, AS-Softmax with the adaptive gradient accumulation strategy brings about a 1.2× training speedup while maintaining the performance. To conclude, AS-Softmax solves the endless-training and train-test objective mismatch problems of the original softmax function. Its advantages can be summarized as follows:

• With AS-Softmax, easy samples are gradually dropped from back propagation, moderating overfitting and making training more effective.
• The loss of AS-Softmax is highly correlated with the final classification performance.
• Its training can be accelerated via the proposed adaptive gradient accumulation strategy.
• AS-Softmax is fairly easy to implement; we have implemented it as a standard PyTorch module.[1]

[1] Code will be released in the final version.

2. RELATED WORK

The softmax loss function is widely used as an output activation function for modeling categorical probability distributions, and many variants of softmax have been proposed in the literature (De Brebisson & Vincent, 2015; Martins & Astudillo, 2016; Yang et al., 2017; Kanai et al., 2018; Gimpel & Smith, 2010). We briefly introduce them in terms of efficiency and effectiveness.

Enhancing Efficiency. Training a model using the full softmax loss becomes prohibitively expensive in settings where a large number of classes are involved. One important line of efficient training reduces the output dimension to cut computational complexity. For example, hierarchical softmax (Morin & Bengio, 2005) partitioned the classes into a tree based on class similarities. Meanwhile, many works explored efficient subsets instead of all output classes; we refer to these methods uniformly as the sparse softmax family.
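To make the masking idea behind AS-Softmax concrete, the following is a minimal sketch of one plausible formulation: classes whose logits trail the target logit by more than a margin are excluded before normalization, so an easy sample whose opponents are all weak reaches exactly zero loss. The logit-level margin and the function name are our illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def as_softmax_loss(logits, target, margin=1.0):
    """Sketch of an adaptive-sparse softmax loss.

    Classes with logit < logits[target] - margin are masked out before the
    softmax; if no strong opponent survives, the sample contributes zero
    loss and is effectively dropped from back propagation.
    """
    logits = np.asarray(logits, dtype=float)
    keep = logits >= logits[target] - margin  # the target class is always kept
    if keep.sum() == 1:                       # no strong opponents left
        return 0.0                            # easy sample: zero loss
    kept = logits[keep]
    kept = kept - kept.max()                  # subtract max for numerical stability
    probs = np.exp(kept) / np.exp(kept).sum()
    target_pos = np.flatnonzero(keep).tolist().index(target)
    return float(-np.log(probs[target_pos]))

# An easy sample (target far ahead of all opponents) yields zero loss,
# while a sample with a strong opponent still produces a gradient signal.
easy = as_softmax_loss([5.0, 1.0, 0.5], target=0, margin=1.0)
hard = as_softmax_loss([2.0, 1.8, -3.0], target=0, margin=1.0)
```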

Jean et al. (2014), Rawat et al. (2019) and Blanc & Rendle (2018) preserve only a small number of negative classes in training, by importance sampling and kernel-based sampling, respectively. Martins & Astudillo (2016) suggested a new softmax variant, sparsemax, which can output exactly sparse probability distributions.
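Sparsemax is the Euclidean projection of the logits onto the probability simplex, which, unlike softmax, can assign exactly zero probability to weak classes. A minimal sketch of the standard sorting-based projection (our own illustrative implementation):

```python
import numpy as np

def sparsemax(z):
    """Project logits z onto the probability simplex (sparsemax).

    Sorts z, finds the support size k_max for which the threshold tau keeps
    entries positive, and clips everything below tau to exactly zero.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                       # descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = z_sorted + (1.0 - cumsum) / k > 0       # entries in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max           # normalizing threshold
    return np.maximum(z - tau, 0.0)

p = sparsemax([2.0, 0.0, -1.0])  # concentrates all mass on the top class here
```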

