SHORTCUT LEARNING THROUGH THE LENS OF EARLY TRAINING DYNAMICS

Anonymous authors
Paper under double-blind review

Abstract

Deep Neural Networks (DNNs) are prone to learning shortcut patterns that damage their generalization at deployment. Shortcut learning is particularly concerning when DNNs are applied to safety-critical domains. This paper aims to better understand shortcut learning through the lens of the learning dynamics of the internal neurons during the training process. More specifically, we make the following observations: (1) While previous works treat shortcuts as synonymous with spurious correlations, we emphasize that not all spurious correlations are shortcuts. We show that shortcuts are only those spurious features that are "easier" than the core features. (2) We build upon this premise and use instance difficulty methods (like Prediction Depth (Baldock et al., 2021)) to quantify "easy" and to identify this behavior during the training phase. (3) We empirically show that shortcut learning can be detected by observing the learning dynamics of the DNN's early layers, irrespective of the network architecture used. In other words, easy features learned by the initial layers of a DNN early during training are potential shortcuts. We verify our claims on simulated and real medical imaging data and justify the empirical success of our hypothesis by showing the theoretical connections between Prediction Depth and information-theoretic concepts like V-usable information (Ethayarajh et al., 2021). Lastly, our experiments show the insufficiency of monitoring only accuracy plots during training (as is common in machine learning pipelines), and we highlight the need for monitoring early training dynamics using example difficulty metrics.

1. INTRODUCTION

Shortcuts are spurious features that perform well on standard benchmarks but fail to generalize to real-world settings (Geirhos et al., 2020). Deep neural networks (DNNs) tend to rely on shortcuts even in the presence of core features that generalize well, which poses serious problems when deploying them in safety-critical applications such as finance, healthcare, and autonomous driving (Geirhos et al., 2020; Oakden-Rayner et al., 2020; DeGrave et al., 2021). Previous works view shortcut learning as a distribution shift problem (Kirichenko et al., 2022; Wiles et al., 2021; Bellamy et al., 2022; Adnan et al., 2022). However, we show that not all spurious correlations are shortcuts: models suffer from shortcut learning only when the spurious features are much easier to learn than the signals that generalize well. We show how monitoring example difficulty metrics like Prediction Depth (PD) (Baldock et al., 2021) can reveal valuable insights into shortcut learning quite early in the training process. Early detection of shortcut learning is useful because it enables intervention schemes that fix the shortcut early. To the best of our knowledge, we are the first to detect shortcut learning by monitoring the training dynamics of the model.

Geirhos et al. (2020) define shortcuts as spurious correlations that exist in standard benchmarks but fail to hold in more challenging test conditions, such as real-world settings. The emphasis on shortcuts being synonymous with spurious correlations has led to the widespread view of shortcut learning as a distribution shift problem (Bellamy et al., 2022; Wiles et al., 2021; Adnan et al., 2022; Kirichenko et al., 2022). While distribution shift explains part of the story, we emphasize that the difficulty of the spurious features themselves is equally important for shortcut learning (see Fig-1).

The premises that support our hypothesis are as follows: (P1) Shortcuts are only those spurious features that are "easier" to learn than the core features (see Fig-1). (P2) Initial layers of a DNN tend to learn easy features, whereas the later layers tend to learn the harder ones (Zeiler & Fergus, 2014; Baldock et al., 2021). (P3) Easy features are learned much earlier than the harder ones during training (Mangalam & Prabhu, 2019; Rahaman et al., 2019). Premises (P1-3) lead us to conjecture that: "Easy features learned by the initial layers of a DNN early during the training are potential shortcuts."
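Since our conjecture leans on Prediction Depth, the sketch below illustrates how PD can be estimated with k-NN probes fitted on per-layer representations, in the spirit of Baldock et al. (2021). The function name, the choice of k, and the toy representations are our own illustrative assumptions, not the paper's code: PD is taken as the earliest layer from which every deeper k-NN probe already agrees with the network's final prediction.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def prediction_depth(probe_reps, support_reps, support_labels, final_pred, k=3):
    """Estimate Prediction Depth for one example.

    probe_reps     : per-layer representations of the probe example (one vector per layer)
    support_reps   : per-layer representations of a labeled support set
    support_labels : labels used to fit the k-NN probes
    final_pred     : the network's final prediction for the probe example
    """
    # Fit one k-NN probe per layer and record its prediction for the probe example.
    knn_preds = []
    for layer_support, layer_probe in zip(support_reps, probe_reps):
        knn = KNeighborsClassifier(n_neighbors=k).fit(layer_support, support_labels)
        knn_preds.append(knn.predict(layer_probe.reshape(1, -1))[0])
    # Walk backwards: depth is the first layer from which all deeper
    # probes already agree with the final prediction.
    depth = len(probe_reps)
    for layer in range(len(probe_reps) - 1, -1, -1):
        if knn_preds[layer] == final_pred:
            depth = layer
        else:
            break
    return depth

# Toy demo with 3 "layers": the probe example only moves next to the
# class-1 support cluster from layer 1 onward, so its PD is 1.
rng = np.random.default_rng(0)

def cluster(center):
    return center + 0.1 * rng.standard_normal((5, 2))

support_labels = np.array([0] * 5 + [1] * 5)
support_reps = [np.vstack([cluster(0.0), cluster(10.0)]) for _ in range(3)]
probe_reps = [np.array([0.0, 0.0]), np.array([10.0, 10.0]), np.array([10.0, 10.0])]
depth = prediction_depth(probe_reps, support_reps, support_labels, final_pred=1)
print(depth)  # 1
```

Under this definition, a small PD means the example is resolved by the early layers, i.e., it is "easy"; our hypothesis is that such easy features, when spurious, are the candidate shortcuts.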

We empirically show that our hypothesis holds on simulated and real medical imaging data (section-4.2) regardless of the DNN architecture used. We justify this empirical success by theoretically connecting Prediction Depth with information-theoretic concepts like V-usable information (Ethayarajh et al., 2021) (section-3 and appendix-A.1). Lastly, our experiments highlight that monitoring only accuracy during training, as is common in machine learning pipelines, is insufficient; rather, we need to monitor the learning dynamics of the model using suitable metrics to detect shortcut learning (section-4.3). This could save substantial time and computational cost and help develop reliable models that do not rely on spurious features.

Figure 1: An illustration of how the causal view of shortcut learning is insufficient. In the causal view, training and testing are different graphical models between input (x), output (y), and the spurious feature (s). If x can predict s, and y is not causally related to s on the test data, then s is viewed as a shortcut. The figure shows two scenarios for even-odd classification. Scenario-1 shows a dataset where all even numbers have a spurious composite number (located at the top-left), and odd numbers have a prime number. Scenario-2 shows a dataset where all odd numbers have a spurious white patch. The spurious white patch is an easy feature, so the model uses it as a shortcut, whereas classifying the prime numbers in Scenario-1 is challenging, so the model ignores that spurious feature. This shows that not all spurious correlations are shortcuts.

Not all spurious correlations are shortcuts: Geirhos et al. (2020) define shortcuts as spurious correlations that exist in standard benchmarks but fail to hold in more challenging test conditions. Wiles et al. (2021) view shortcut learning as a distribution shift problem where two or more attributes are correlated during training but are independent in the test data. Bellamy et al. (2022) use causal diagrams to explain shortcuts as spurious correlations that hold during training but not during deployment. All these papers characterize shortcuts purely as a consequence of distribution shift, and methods exist to build models robust to such shifts (Arjovsky et al., 2019; Krueger et al., 2021; Puli et al., 2022). In contrast, we stress that not all spurious correlations are shortcuts; rather, only those spurious features that are easier than the core features are potential shortcuts (see Fig-1). Previous works like Shah et al. (2020) and Scimeca et al. (2021) hint at this by saying that DNNs are biased towards simple solutions, and Dagaev et al. (2021) use the "too-good-to-be-true" prior to emphasize that simple solutions are unlikely to be valid across contexts. Veitch et al. (2021) distinguish various model features using tools from causality and stress-test the models for counterfactual invariance. Other works in natural language inference, visual question answering, and action recognition also assume that simple solutions could be potential shortcuts (Sanh et al., 2020; Li & Vasconcelos, 2019; Clark et al., 2019; Cadene et al., 2019; He et al., 2019). We take this line of thought further by viewing shortcuts as simple solutions or, more explicitly, easy features, which affect the early training dynamics of the model. We suggest using suitable example difficulty metrics to measure this effect.
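To make the "easy spurious feature" intuition of Figure 1's second scenario concrete, the following is a minimal toy construction of our own (hypothetical data and parameters, not the paper's dataset or code): class 1 carries a bright, perfectly label-correlated "patch" pixel on top of a weak core signal, a linear model latches onto the patch, and its accuracy collapses once the patch is removed at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_data(n, rng, with_patch=True):
    """Binary toy 'images' (64 pixels): a weak core mean shift on every
    pixel for class 1, plus, optionally, a bright patch pixel that is
    perfectly correlated with the label and trivial to exploit."""
    y = rng.integers(0, 2, size=n)
    X = rng.standard_normal((n, 64)) + 0.1 * y[:, None]  # hard core feature
    if with_patch:
        X[:, 0] = 5.0 * y  # easy spurious feature: pixel 0 encodes the label
    else:
        X[:, 0] = 0.0      # deployment-time data: patch absent
    return X, y

rng = np.random.default_rng(0)
X_tr, y_tr = make_data(2000, rng, with_patch=True)
X_te, y_te = make_data(1000, rng, with_patch=True)
X_np, y_np = make_data(1000, rng, with_patch=False)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
acc_with_patch = clf.score(X_te, y_te)  # near-perfect: the patch alone separates
acc_no_patch = clf.score(X_np, y_np)    # degrades sharply once the patch is gone
print(acc_with_patch, acc_no_patch)
```

In the opposite construction, where the spurious feature is as hard as (or harder than) the core feature, the model has no incentive to prefer it, which is exactly the distinction between spurious correlations and shortcuts that the paragraph above draws.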

