SMOOTH ACTIVATIONS AND REPRODUCIBILITY IN DEEP NETWORKS

Abstract

Deep networks are gradually penetrating almost every domain in our lives due to their remarkable success. However, with substantial accuracy improvements comes the price of irreproducibility. Two identical models, trained on the exact same training dataset, may exhibit large differences in predictions on individual examples even when average accuracy is similar, especially when trained on highly distributed parallel systems. The popular Rectified Linear Unit (ReLU) activation has been key to the recent success of deep networks. We demonstrate, however, that ReLU is also a catalyst for irreproducibility in deep networks. We show that not only can activations smoother than ReLU provide better accuracy, they can also provide better accuracy-reproducibility tradeoffs. We propose a new family of activations, Smooth ReLU (SmeLU), designed to give such better tradeoffs while keeping the mathematical expression simple, and thus the implementation cheap. SmeLU is monotonic and mimics ReLU while providing continuous gradients, yielding better reproducibility. We generalize SmeLU to give even more flexibility, and then demonstrate that SmeLU and its generalized form are special cases of a more general methodology of REctified Smooth Continuous Unit (RESCU) activations. Empirical results demonstrate the superior accuracy-reproducibility tradeoffs of smooth activations, SmeLU in particular.

1. INTRODUCTION

Recent developments in deep learning leave no question about the advantages of deep networks over classical methods, which relied heavily on linear convex optimization solutions. With their astonishing, unprecedented success, deep models are providing solutions in a continuously increasing number of domains in our lives. These solutions, however, while much more accurate than their convex counterparts, are usually irreproducible in the predictions they provide. While the average accuracy of deep models on some validation dataset is usually much higher than that of linear convex models, predictions on individual examples by two models that were trained to be identical may diverge substantially, exhibiting prediction differences that may be as high as non-negligible fractions of the actual predictions (see, e.g., Chen et al. (2020); Dusenberry et al. (2020)). Deep networks express (only) what they learned. Like humans, they may establish different beliefs as a function of the order in which they saw the training data (Achille et al., 2017; Bengio et al., 2009). Due to the huge amounts of data required to train such models, enforcing determinism (Nagarajan et al., 2018) may not be an option. Deep networks may be trained on highly distributed, parallelized systems. Thus, two supposedly identical models, with the same architecture, parameters, training algorithm, and training hyper-parameters, trained on the same training dataset, will exhibit some randomness in the order in which they see training examples and apply updates, even if they are initialized identically. Due to the highly non-convex objective, such models may converge to different optima, which may exhibit equal average objective values but provide very different predictions for individual examples. Irreproducibility in deep models is not the classical type of epistemic uncertainty widely studied in the literature, nor is it overfitting. It differs from these phenomena in several ways: it does not diminish with more training examples like classical epistemic uncertainty, and it does not degrade test accuracy the way overfitting to the training examples does.

While irreproducibility may be acceptable for some applications, it can be very detrimental in others, such as medical applications, where two different diagnoses for the same symptoms may be unacceptable. Furthermore, in online and/or reinforcement systems, which rely on their predictions to determine actions that, in turn, determine the subsequent training examples, even small initial irreproducibility can cause large divergence between models that are supposed to be identical. One example is sponsored-advertisement online Click-Through-Rate (CTR) prediction (McMahan et al., 2013). The effect of irreproducibility in CTR prediction can go far beyond changing the predicted CTR of an example, as it may affect actions that take place downstream in a complex system. Reproducibility is a problem even if one trains only a single model, as it may be impossible to determine whether the trained model provides acceptable solutions for applications that cannot tolerate unacceptable ones.

A major factor in the unprecedented success of deep networks in recent years has been the Rectified Linear Unit (ReLU) activation (Nair & Hinton, 2010). The ReLU nonlinearity, together with back-propagation, gives simple updates accompanied by superior accuracy. ReLU thus became the undisputed activation of choice in deep learning. However, is ReLU really the best to use? While it gives better optima than those achieved with simple convex models, it imposes an extremely non-convex objective surface with many such optima. The direction of a gradient update with a gradient-based optimizer is determined by the specific example that generates the update. Thus, the order of seeing examples or applying updates can determine which optimum is reached. The many optima imposed by ReLU thus provide a recipe for irreproducibility. In recent years, different works have started challenging the dominance of ReLU, exploring alternatives. Overviews of various activations were reported in Nwankpa et al. (2018) and Pedamonti (2018). Variations on ReLU were studied in Jin et al. (2015). Activations like SoftPlus (Zheng et al., 2015), the Exponential Linear Unit (ELU) (Clevert et al., 2015), the Scaled Exponential Linear Unit (SELU) (Klambauer et al., 2017; Sakketou & Ampazis, 2019; Wang et al., 2017), and the Continuously differentiable Exponential Linear Unit (CELU) (Barron, 2017) were proposed, as well as the Gaussian Error Linear Unit (GELU) (Hendrycks & Gimpel, 2016). Specifically, the Swish activation (Ramachandran et al., 2017) (which can approximate GELU) was found through automated search to achieve accuracy superior to ReLU. Further activations similar to GELU were proposed recently: Mish (Misra, 2019) and TanhExp (Liu & Di, 2020).

Unlike ReLU, many of these activations are smooth, with continuous gradients. Good properties of smooth activations were studied as early as Mhaskar (1997) (see also Du (2019); Lokhande et al. (2020)). This line of work began suggesting that smooth activations, if configured properly, may be superior to ReLU in accuracy. Recent work by Xie et al. (2020), done subsequently to our work and inspired by the results we report in this paper (Lin & Shamir, 2019), also demonstrated the advantage of smooth activations for adversarial training.

Our Contributions: In this paper, we first demonstrate the advantages of smooth activations for reproducibility in deep networks. We show that not only can smooth activations improve the accuracy of deep networks, they can also achieve superior tradeoffs between reproducibility and accuracy, attaining a lower average Prediction Difference (PD) for the same or better accuracy. Smooth activations like Swish, GELU, Mish, and TanhExp all have a very similar non-monotonic form that provides neither a clear stop region (with a strict 0, not only approaching 0) nor a slope-1 region. While these activations approximate the mathematical form of ReLU, they lack these properties of ReLU. All these activations, including SoftPlus, also require more expensive mathematical expressions, involving exponents, and in some cases logarithms or even numerically computed values (e.g., GELU). This can make deployment harder, especially on simplified hardware that supports only a limited number of operations, and can slow down training due to the heavier computations. Unlike ReLU, which can be transformed into Leaky ReLU, the smooth activations described above cannot be easily transformed into more general forms. In this work, we propose Smooth ReLU (SmeLU), which is mathematically simple and based only on linear and quadratic expressions. It can be deployed more easily on limited hardware and can provide faster training when hardware is limited. SmeLU provides a clearly defined 0 activation region as well as a slope-1 region, is monotonic, and is also extendable to a leaky or more general form. SmeLU retains the good properties of smooth activations, providing better reproducibility as well as better accuracy-reproducibility tradeoffs. Its generalized form allows even further accuracy improvements. The methodology used to construct SmeLU is shown to be even more general, allowing for more complex smooth activations, all clustered under the category of REctified Smooth Continuous Units (RESCUs).

Related Work: Ensembles (Dietterich, 2000) have been used to reduce uncertainty (Lakshminarayanan et al., 2017). They are also useful for reducing irreproducibility. However, they make models more complex, and they can trade off accuracy in favor of reproducibility if one attempts to keep computation costs constant (which requires reducing the capacity of each component of the ensemble). Compression of deep networks into smaller networks that attempt to describe the same information
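As an illustration only, the properties attributed to SmeLU in the text (a strict-zero region, a slope-1 region, monotonicity, and a continuous gradient built from only linear and quadratic pieces) can be realized by a quadratic join between the two linear regions. The exact parameterization appears later in the paper; the half-width hyperparameter `beta` below is an assumption of this sketch:

```python
import numpy as np

def smelu(x, beta=1.0):
    """SmeLU-style activation sketch: hard zero for x <= -beta,
    identity (slope 1) for x >= beta, and a quadratic join in between.

    The quadratic (x + beta)^2 / (4 * beta) matches both value and slope
    at the two boundaries, so the gradient is continuous everywhere.
    """
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= -beta,
        0.0,
        np.where(x >= beta, x, (x + beta) ** 2 / (4.0 * beta)),
    )
```

At `x = -beta` the quadratic and its slope are both 0, and at `x = beta` it equals `beta` with slope 1, so the three pieces join smoothly, unlike ReLU's kink at the origin.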

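The reproducibility metric mentioned in the contributions, average Prediction Difference (PD) between two models trained to be identical, can be sketched as follows. The normalization used here (difference relative to the mean of the pair) is an assumption of this sketch, not necessarily the paper's exact definition:

```python
import numpy as np

def mean_prediction_difference(p1, p2, relative=True):
    """Average per-example prediction difference between two models.

    p1, p2: predictions of the two supposedly identical models on the
    same evaluation examples. With relative=True, each per-example
    difference is normalized by the mean of the pair (an assumed
    normalization), so the result reads as a fraction of the prediction.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    diff = np.abs(p1 - p2)
    if relative:
        diff = 2.0 * diff / (p1 + p2 + 1e-12)  # guard against zero predictions
    return float(diff.mean())
```

For example, two CTR models predicting 0.10 and 0.12 on the same example differ by roughly 18% of their mean prediction, even though the absolute gap of 0.02 looks small.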
