SMOOTH ACTIVATIONS AND REPRODUCIBILITY IN DEEP NETWORKS

Abstract

Deep networks are gradually penetrating almost every domain in our lives due to their remarkable success. However, with substantial accuracy improvements comes the price of irreproducibility. Two identical models, trained on the exact same training dataset, may exhibit large differences in predictions on individual examples even when average accuracy is similar, especially when trained on highly distributed parallel systems. The popular Rectified Linear Unit (ReLU) activation has been key to the recent success of deep networks. We demonstrate, however, that ReLU is also a catalyst of irreproducibility in deep networks. We show that activations smoother than ReLU can not only provide better accuracy, but also better accuracy-reproducibility tradeoffs. We propose a new family of activations, Smooth ReLU (SmeLU), designed to give such better tradeoffs while keeping the mathematical expression simple, and thus implementation cheap. SmeLU is monotonic and mimics ReLU, while providing continuous gradients, yielding better reproducibility. We generalize SmeLU to give even more flexibility, and then demonstrate that SmeLU and its generalized form are special cases of a more general methodology of REctified Smooth Continuous Unit (RESCU) activations. Empirical results demonstrate the superior accuracy-reproducibility tradeoffs of smooth activations, SmeLU in particular.
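To make the idea concrete: a smooth, monotonic, ReLU-like activation with continuous gradients can be obtained by blending the zero and identity regimes of ReLU with a quadratic segment. The sketch below is one standard piecewise-quadratic formulation of such an activation, not necessarily the exact SmeLU expression defined in this paper; the half-width parameter `beta` is a name chosen here for illustration.

```python
import numpy as np

def smooth_relu(x, beta=1.0):
    """Piecewise-quadratic smoothing of ReLU (an illustrative sketch).

    Returns 0 for x <= -beta, x for x >= beta, and the quadratic
    (x + beta)^2 / (4 * beta) in between. Both the function and its
    first derivative are continuous at the two join points, unlike
    ReLU, whose gradient jumps from 0 to 1 at the origin.
    """
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= -beta, 0.0,
        np.where(x >= beta, x, (x + beta) ** 2 / (4.0 * beta)),
    )
```

At `x = -beta` the quadratic and its slope are both 0, and at `x = beta` its value is `beta` with slope 1, so the three pieces join with continuous gradients; as `beta` shrinks toward 0 the function approaches ReLU.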

1. INTRODUCTION

Recent developments in deep learning leave no question about the advantages of deep networks over classical methods, which relied heavily on linear convex optimization. With their unprecedented success, deep models are providing solutions in a continuously increasing number of domains. These solutions, however, while much more accurate than their convex counterparts, are usually irreproducible in the predictions they provide. While the average accuracy of deep models on some validation dataset is usually much higher than that of linear convex models, the predictions on individual examples of two models that were trained to be identical may diverge substantially, exhibiting prediction differences that can be non-negligible fractions of the actual predictions (see, e.g., Chen et al. (2020); Dusenberry et al. (2020)). While irreproducibility may be acceptable for some applications, it can be very detrimental in others, such as medical applications, where two different diagnoses for the same symptoms may be unacceptable. It is also problematic in online and/or reinforcement systems, which rely on their predictions to generate subsequent training examples.

Deep networks express (only) what they learned. Like humans, they may establish different beliefs as a function of the order in which they have seen training data (Achille et al., 2017; Bengio et al., 2009). Due to the huge amounts of data required to train such models, enforcing determinism (Nagarajan et al., 2018) may not be an option. Deep networks may be trained on highly distributed, parallelized systems. Thus, two supposedly identical models, with the same architecture, parameters, training algorithm, and training hyper-parameters, trained on the same training dataset, will exhibit some randomness in the order in which they see the training set and apply updates, even if they are initialized identically. Due to the highly non-convex objective, such models may converge to different optima, which may attain equal average objective values but provide very different predictions on individual examples. Irreproducibility in deep models is not the classical type of epistemic uncertainty widely studied in the literature, nor is it overfitting. It differs from these phenomena in several ways: it does not diminish with more training examples, as classical epistemic uncertainty does, and it does not cause degradation of test accuracy by overfitting to the training examples.
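The effect described above can be reproduced in miniature: two runs of the same tiny ReLU network, with identical initialization, data, and hyper-parameters, but different SGD visiting orders, end up with different per-example predictions. The sketch below is a toy illustration only; the architecture, seeds, and learning rate are arbitrary choices, not the paper's experimental setup.

```python
import numpy as np

def train(order_seed):
    """Train a 1-hidden-layer ReLU net with plain SGD on a fixed toy task.

    Initialization is identical across calls (seed 0); only the order in
    which examples are visited, controlled by order_seed, differs.
    """
    rng = np.random.default_rng(0)                 # same init for every run
    X = np.linspace(-1, 1, 32).reshape(-1, 1)
    y = np.sin(3 * X)
    W1 = rng.normal(size=(1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
    order = np.random.default_rng(order_seed)      # only the order differs
    for _ in range(200):
        for i in order.permutation(len(X)):
            x, t = X[i:i + 1], y[i:i + 1]
            h = np.maximum(x @ W1 + b1, 0.0)       # ReLU hidden layer
            p = h @ W2 + b2
            g = 2.0 * (p - t)                      # grad of squared error
            gW2 = h.T @ g; gb2 = g.sum(0)
            gh = (g @ W2.T) * (h > 0)              # ReLU gradient mask
            gW1 = x.T @ gh; gb1 = gh.sum(0)
            for P, G in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
                P -= 0.05 * G                      # SGD step
    return (np.maximum(X @ W1 + b1, 0.0) @ W2 + b2).ravel()

p1, p2 = train(order_seed=1), train(order_seed=2)
max_diff = np.max(np.abs(p1 - p2))  # nonzero: per-example predictions diverge
```

Even at this scale, the two runs reach different optima of the same non-convex objective, so `max_diff` is nonzero; in large, highly parallelized systems the divergence is typically far more pronounced.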

