WHY CONVOLUTIONAL NETWORKS LEARN ORIENTED BANDPASS FILTERS: THEORY AND EMPIRICAL SUPPORT

Abstract

It has been repeatedly observed that convolutional architectures, when applied to image understanding tasks, learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images to which they have been exposed during training: natural images typically are locally composed of oriented contours at various scales, and oriented bandpass filters are matched to such structure. We offer an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a locally systematic way to characterize and operate on an image or other signal. We offer empirical support for the hypothesis that convolutional networks learn such filters at all of their convolutional layers. While previous research has shown evidence of filters having oriented bandpass characteristics at early layers, ours appears to be the first study to document the predominance of such filter characteristics at all layers. Previous studies have missed this observation because they have concentrated on the cumulative compositional effects of filtering across layers, while we examine the filter characteristics that are present at each layer.
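The eigenfunction claim at the heart of the argument can be checked numerically. The following sketch (an illustration of the mathematical fact, not code from this paper; the kernel and frequency are arbitrary choices) verifies that convolving a complex exponential with an arbitrary kernel merely rescales it by a complex constant, namely the kernel's frequency response at that frequency.

```python
import numpy as np

# Numerical check that a complex exponential e^{i*omega*x} is an
# eigenfunction of convolution: convolving it with an arbitrary kernel h
# only rescales it by a complex constant (h's frequency response at omega).

rng = np.random.default_rng(0)
N = 256
k0 = 5                                # frequency bin that fits the periodic domain
omega = 2 * np.pi * k0 / N
x = np.arange(N)
f = np.exp(1j * omega * x)            # complex exponential input

h = rng.standard_normal(8)            # arbitrary real-valued kernel

# Circular convolution via the FFT, so the eigenfunction property holds exactly.
H = np.fft.fft(h, N)                  # kernel's frequency response (zero-padded)
g = np.fft.ifft(np.fft.fft(f) * H)    # g = h (*) f, circular convolution

lam = H[k0]                           # eigenvalue: frequency response at omega
assert np.allclose(g, lam * f)        # output is the input scaled by lam
```

The same identity underlies the paper's argument: convolution cannot create new frequencies, so its natural "basis" operators are frequency-selective, and enforcing locality via a window turns them into bandpass filters.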

1. INTRODUCTION

1.1 MOTIVATION

Convolutional networks (ConvNets) in conjunction with deep learning have shown state-of-the-art performance in application to computer vision, ranging across both classification (e.g., Krizhevsky et al. (2012); Tran et al. (2015); Ge et al. (2019)) and regression (e.g., Szegedy et al. (2013); Eigen & Fergus (2015); Zhou et al. (2017)) tasks. However, understanding of how these systems achieve their remarkable results lags behind their performance. This state of affairs is unsatisfying not only from a scientific point of view, but also from an applications point of view: as these systems move beyond the lab into real-world applications, better theoretical understanding can help establish performance bounds and increase confidence in deployment. Visualization studies of filters learned during training have been one of the key tools marshalled to lend insight into the internal representations maintained by ConvNets in application to computer vision, e.g., Zeiler & Fergus (2014); Yosinski et al. (2015); Mahendran & Vedaldi (2015); Shang et al. (2016); Feichtenhofer et al. (2018). Here, an interesting repeated observation is that early layers in the studied networks tend to learn oriented bandpass filters, both in two image spatial dimensions, (x, y), in application to single image analysis, as well as in three spatiotemporal dimensions, (x, y, t), in application to video. An example is shown in Fig. 1. Emergence of such filters seems reasonable, because local orientation captures the first-order correlation structure of the data, which provides a reasonable building block for inferring more complex structure (e.g., local measurements of oriented structure can be assembled into intersections to capture corner structure, etc.). Notably, however, more rigorous analyses of exactly why oriented bandpass filters might be learned have been limited. This state of affairs motivates the current paper in its argument that the analytic structure of ConvNets constrains them to learn oriented bandpass filters.

Indeed, earlier studies with architectures that also constrained their filters to be convolutional in nature, albeit using a Hebbian learning strategy MacKay (2003) rather than the currently dominant back-propagation approach Rumelhart et al. (1986), also yielded filters that visualized as having oriented bandpass characteristics Linsker (1986). Interestingly, biological vision systems also are known to show the presence of oriented bandpass filters at their earlier layers of processing in visual cortex; see Hubel & Wiesel (1962) for pioneering work along these lines and DeValois & DeValois (1988) for more general review. The presence of oriented bandpass filters in biological systems often has been attributed to their being well matched to the statistics of natural images Field (1987); Olshausen & Field (1996); Karklin & Lewicki (2009); Simoncelli & Olshausen (2001), e.g., the dominance of oriented contours at multiple scales. Similar arguments have been made regarding why such filters are learned by ConvNets. Significantly, however, studies have shown that even when trained with images composed of random noise patterns, convolutional architectures still learn oriented bandpass filters Linsker (1986). These latter results suggest that the emergence of such filter tunings cannot be attributed solely to systems being driven to learn filters matched to their training data. Similarly, recent work showed that randomly initialized networks serve well in image restoration problems Ulyanov et al. (2018).

Figure 1: Visualization of pointspread functions (convolutional kernels) previously observed to be learned in the early layers of ConvNets. Brightness corresponds to pointwise function values. The majority of the plots show characteristics of oriented bandpass filters in two spatial dimensions, i.e., oscillating values along one direction, while remaining relatively constant in the orthogonal direction, even as there is an overall amplitude fall-off with distance from the center. The specific examples derive from the early layers of a ResNet-50 architecture He et al. (2016) trained on ImageNet Russakovsky et al. (2015).
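The qualitative traits described in the caption of Fig. 1 are exactly those of a Gaussian-windowed complex exponential, i.e., a Gabor filter. The following sketch constructs one; all parameter values (size, scale, frequency, orientation) are illustrative assumptions, not values taken from any trained network.

```python
import numpy as np

# Sketch: windowing a 2D complex exponential with a Gaussian yields a Gabor
# filter. Its real part oscillates along one direction, stays roughly constant
# along the orthogonal direction, and falls off in amplitude away from the
# center -- the characteristics described for the kernels in Fig. 1.

size, sigma, freq, theta = 15, 3.0, 0.25, np.pi / 4  # illustrative choices
half = size // 2
y, x = np.mgrid[-half:half + 1, -half:half + 1]

u = x * np.cos(theta) + y * np.sin(theta)                 # axis of oscillation
window = np.exp(-(x**2 + y**2) / (2 * sigma**2))          # locality-enforcing window
kernel = window * np.exp(1j * 2 * np.pi * freq * u)       # windowed eigenfunction

gabor = kernel.real  # what a visualization like Fig. 1 would display
```

Along the direction orthogonal to `u` the exponential is constant and only the Gaussian window varies, which accounts for the smooth fall-off seen in the visualized kernels.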

Some recent multilayer convolutional architectures have specified their earliest layers to have oriented bandpass characteristics, e.g., Bruna & Mallat (2013); Jacobsen et al. (2016); Hadji & Wildes (2017); indeed, some have specified such filters across all layers Bruna & Mallat (2013); Hadji & Wildes (2017). These design decisions have been variously motivated in terms of being well matched to primitive image structure Hadji & Wildes (2017), providing useful building blocks for learning higher-order structures Jacobsen et al. (2016), and capturing invariances Bruna & Mallat (2013). Other work has noted that purely mathematical considerations show that ConvNets are well suited to realizing filter designs for capturing multiscale, windowed spectra Bruna et al. (2016); however, it did not explicitly establish the relationship to eigenfunctions of convolution, nor offer an explanation for why deep learning yields oriented bandpass filters when applied to ConvNets. It also did not provide empirical investigation of exactly what filter characteristics are learned at each convolutional layer of ConvNets.
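One simple way such a per-layer empirical investigation can proceed is to examine each learned kernel's 2D DFT magnitude: an oriented bandpass filter concentrates its energy at a nonzero radial frequency along a particular orientation. The following is a hypothetical analysis sketch (the paper's actual measurement protocol is not reproduced here; the `dominant_frequency` helper and the synthetic test kernel are this sketch's own constructions).

```python
import numpy as np

# Hypothetical sketch: characterize a convolutional kernel by the location of
# the peak of its zero-padded 2D DFT magnitude. A bandpass kernel peaks at a
# nonzero radius; a lowpass (blurring) kernel peaks at the origin.

def dominant_frequency(kernel, pad=64):
    """Return (radius, angle) of the peak of the kernel's 2D DFT magnitude,
    with frequencies in cycles per sample."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel, s=(pad, pad))))
    iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    fy, fx = (iy - pad // 2) / pad, (ix - pad // 2) / pad
    return np.hypot(fy, fx), np.arctan2(fy, fx)

# Synthetic example: a Gaussian-windowed cosine oscillating along x at
# 0.25 cycles/sample, i.e., a horizontally tuned oriented bandpass kernel.
y, x = np.mgrid[-3:4, -3:4]
k = np.exp(-(x**2 + y**2) / 4.0) * np.cos(2 * np.pi * 0.25 * x)
radius, angle = dominant_frequency(k)  # radius near 0.25, angle on the x-axis
```

Applying such a measurement layer by layer, rather than composing filters across layers, is the kind of analysis that distinguishes per-layer filter characteristics from cumulative compositional effects.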

