SHORTCUT LEARNING THROUGH THE LENS OF EARLY TRAINING DYNAMICS

Anonymous authors
Paper under double-blind review

Abstract

Deep Neural Networks (DNNs) are prone to learning shortcut patterns that harm generalization at deployment. Shortcut learning is particularly concerning when DNNs are applied to safety-critical domains. This paper aims to better understand shortcut learning through the lens of the learning dynamics of the internal neurons during the training process. More specifically, we make the following observations: (1) While previous works treat shortcuts as synonymous with spurious correlations, we emphasize that not all spurious correlations are shortcuts. We show that shortcuts are only those spurious features that are "easier" than the core features. (2) We build upon this premise and use instance difficulty methods (like Prediction Depth (Baldock et al., 2021)) to quantify "easy" and to identify this behavior during the training phase. (3) We empirically show that shortcut learning can be detected by observing the learning dynamics of the DNN's early layers, irrespective of the network architecture used. In other words, easy features learned by the initial layers of a DNN early during training are potential shortcuts. We verify our claims on simulated and real medical imaging data and justify the empirical success of our hypothesis by showing the theoretical connections between Prediction Depth and information-theoretic concepts like V-usable information (Ethayarajh et al., 2021). Lastly, our experiments show the insufficiency of monitoring only accuracy plots during training (as is common in machine learning pipelines), and we highlight the need for monitoring early training dynamics using example difficulty metrics.

1. INTRODUCTION

Shortcuts are spurious features that perform well on standard benchmarks but fail to generalize to real-world settings (Geirhos et al., 2020). Deep neural networks (DNNs) tend to rely on shortcuts even in the presence of core features that generalize well, which poses serious problems when deploying them in safety-critical applications such as finance, healthcare, and autonomous driving (Geirhos et al., 2020; Oakden-Rayner et al., 2020; DeGrave et al., 2021). Previous works view shortcut learning as a distribution shift problem (Kirichenko et al., 2022; Wiles et al., 2021; Bellamy et al., 2022; Adnan et al., 2022). However, we show that not all spurious correlations are shortcuts. Models suffer from shortcut learning only when the spurious features are much easier to learn than signals that generalize well. We show how monitoring example difficulty metrics like Prediction Depth (PD) (Baldock et al., 2021) can reveal valuable insights into shortcut learning quite early during the training process. Early detection of shortcut learning is useful as it can help develop intervention schemes to fix the shortcut early. To the best of our knowledge, we are the first to detect shortcut learning by monitoring the training dynamics of the model.

Geirhos et al. (2020) define shortcuts as spurious correlations that exist in standard benchmarks but fail to hold in more challenging test conditions, like real-world settings. The emphasis on shortcuts being synonymous with spurious correlations has led to the widespread adoption of viewing shortcut learning as a distribution shift problem (Bellamy et al., 2022; Wiles et al., 2021; Adnan et al., 2022; Kirichenko et al., 2022). While distribution shift explains part of the story, we emphasize that what is equally important for shortcut learning is the difficulty of the spurious features themselves (see Fig. 1). Previous works like Shah et al. (2020); Scimeca et al. (2021) hint at this. But we take this line of thought further by viewing shortcut learning as a phenomenon that impacts the dataset difficulty, which can be captured by monitoring early training dynamics.
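To make the Prediction Depth idea concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the probe described by Baldock et al. (2021): a k-NN classifier is fit on each layer's feature representations of a reference set, and an example's depth is the earliest layer after which the probe's prediction agrees with the network's final prediction at every subsequent layer. The feature arrays and the two-layer toy data below are illustrative assumptions; in practice the features would be intermediate activations extracted from a trained network.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feat, k=3):
    """Predict a query's label by majority vote over its k nearest
    reference examples (Euclidean distance) in one layer's feature space."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest_labels = train_labels[np.argsort(dists)[:k]]
    return int(np.argmax(np.bincount(nearest_labels)))

def prediction_depth(layer_feats, ref_layer_feats, ref_labels, final_pred, k=3):
    """Sketch of Prediction Depth: the earliest layer index l such that the
    k-NN probe agrees with the network's final prediction at layer l and at
    all later layers. Lower depth = "easier" example.

    layer_feats:     list of per-layer feature vectors for one example.
    ref_layer_feats: list of per-layer feature matrices for the reference set.
    """
    num_layers = len(layer_feats)
    probe_preds = [knn_predict(ref_layer_feats[l], ref_labels, layer_feats[l], k)
                   for l in range(num_layers)]
    depth = num_layers  # probes never settle on the final prediction
    for l in reversed(range(num_layers)):
        if probe_preds[l] == final_pred:
            depth = l
        else:
            break
    return depth

# Toy two-layer setup: classes overlap at layer 0, separate at layer 1.
ref_labels = np.array([0, 0, 1, 1])
ref_feats = [np.array([[0.0], [0.1], [0.05], [0.15]]),   # layer 0 (mixed)
             np.array([[0.0], [0.1], [1.0], [1.1]])]     # layer 1 (separated)

# A "hard" example: the layer-0 probe disagrees with the final prediction.
hard = [np.array([0.12]), np.array([0.05])]
# An "easy" example: probes agree with the final prediction at every layer.
easy = [np.array([0.02]), np.array([0.05])]

print(prediction_depth(hard, ref_feats, ref_labels, final_pred=0))  # deeper
print(prediction_depth(easy, ref_feats, ref_labels, final_pred=0))  # shallower
```

Under this sketch, a shortcut feature would manifest as many examples receiving a low prediction depth early in training, since the probe at the first layers already matches the network's final output.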

