ON DROPOUT, OVERFITTING, AND INTERACTION EFFECTS IN DEEP NEURAL NETWORKS

Abstract

We examine Dropout through the perspective of interactions. Given N variables, there are O(N^2) possible pairwise interactions, O(N^3) possible 3-way interactions, and in general O(N^k) possible interactions of k variables. Conversely, the probability of an interaction of k variables surviving Dropout at rate p is O((1-p)^k). In this paper, we show that these rates cancel, and as a result, Dropout selectively regularizes against learning higher-order interactions. We prove this new perspective analytically for Input Dropout and empirically for Activation Dropout. This perspective on Dropout has several practical implications: (1) higher Dropout rates should be used when we need stronger regularization against spurious high-order interactions, (2) caution must be used when interpreting Dropout-based feature saliency measures, and (3) networks trained with Input Dropout are biased estimators, even with infinite data. We also compare Dropout to regularization via weight decay and early stopping and find that it is difficult to obtain the same regularization against high-order interactions with these methods.

1. INTRODUCTION

We examine Dropout through the perspective of interactions: learned effects that require multiple input variables. Given N variables, there are O(N^2) possible pairwise interactions, O(N^3) possible 3-way interactions, etc. We show that Dropout contributes a regularization effect that helps neural networks (NNs) explore simpler functions of lower-order interactions before considering functions of higher-order interactions. Dropout imposes this regularization by reducing the effective learning rate of an interaction effect according to the number of variables the interaction involves. As a result, Dropout encourages models to learn simpler functions of lower-order additive components. This understanding of Dropout has implications for choosing Dropout rates: higher Dropout rates should be used when we need stronger regularization against spurious high-order interactions. This perspective also issues a caution against using Dropout to measure term saliency, because Dropout regularizes against high-order interaction terms. Finally, this view of Dropout as a regularizer of interaction effects provides insight into the varying effectiveness of Dropout across different architectures and data sets. We also compare Dropout to regularization via weight decay and early stopping and find that it is difficult to obtain the same regularization effect for high-order interactions with these methods.

Why Interaction Effects? When it was introduced, Dropout was motivated as a way to prevent "complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors" (Hinton et al., 2012; Srivastava et al., 2014). Because most "complex co-adaptations" are interaction effects, we examine Dropout under the lens of interactions.
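The claim that Dropout reduces the effective learning rate of a k-variable interaction can be checked with a quick Monte Carlo sketch (an illustration under assumed conditions, not code from the paper): with inputs fixed at 1 and plain, non-inverted Dropout masks, the gradient of a k-way product term w * x1 * ... * xk with respect to its weight w is just the product of the keep-masks, so its expected magnitude shrinks as (1-p)^k.

```python
import numpy as np

# Sketch (assumed setup): the gradient of a k-way product term
# w * x1 * ... * xk with respect to w, under non-inverted Dropout
# masks and inputs fixed at 1, is the product of the Bernoulli
# keep-masks. Its mean -- the term's effective learning-rate
# scaling -- should approach (1 - p)^k.
rng = np.random.default_rng(0)
p = 0.5             # Dropout rate (probability of dropping a variable)
n_samples = 200_000
for k in (1, 2, 3):
    keep = rng.random((n_samples, k)) > p  # keep-mask, P(keep) = 1 - p
    grad = keep.prod(axis=1)               # masked gradient of the k-way term
    print(f"k={k}: mean gradient {grad.mean():.3f} vs (1-p)^k = {(1 - p) ** k:.3f}")
```

The higher the interaction order k, the smaller the fraction of updates in which the term's gradient survives at all, which is exactly the selective pressure against high-order interactions described above.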
This perspective is valuable because (1) modern NNs have so many weights that understanding networks by inspecting their weights is infeasible, whereas interactions are far more tractable because interaction effects live in function space, not weight space; (2) the decomposition we use to calculate interaction effects has convenient properties such as identifiability; and (3) this perspective has practical implications for choosing Dropout rates in NN systems. To preview the experimental results: when NNs are trained on data that has no interactions, the optimal Dropout rate is high, but when NNs are trained on datasets with important 2nd- and 3rd-order interactions, the optimal Dropout rate is 0.
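The two competing rates in the abstract's counting argument can be tabulated directly (an illustrative sketch with hypothetical values N = 20 and p = 0.5, not numbers from the paper): the number of candidate k-way interactions is C(N, k), which grows as O(N^k), while the probability that any particular k-way interaction survives Input Dropout decays as (1-p)^k.

```python
from math import comb

# Sketch (hypothetical N and p): count the candidate k-way interactions
# and the chance a given one survives Input Dropout, i.e. all k of its
# variables are kept, which happens with probability (1 - p)**k.
N, p = 20, 0.5
for k in range(1, 6):
    n_candidates = comb(N, k)    # O(N^k) possible k-way interactions
    survival = (1 - p) ** k      # O((1 - p)^k) survival probability
    print(f"k={k}: {n_candidates:6d} candidates, "
          f"survival probability {survival:.4f}")
```

The table makes the tension concrete: as k grows, there are combinatorially more interactions a network could fit, but each individual one sees its training signal exponentially suppressed by Dropout.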

