WHY CONVOLUTIONAL NETWORKS LEARN ORIENTED BANDPASS FILTERS: THEORY AND EMPIRICAL SUPPORT

Abstract

It has been repeatedly observed that convolutional architectures, when applied to image understanding tasks, learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: natural images typically are locally composed of oriented contours at various scales, and oriented bandpass filters are matched to such structure. We offer an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local, systematic way to characterize and operate on an image or other signal. We offer empirical support for the hypothesis that convolutional networks learn such filters at all of their convolutional layers. While previous research has shown evidence of filters having oriented bandpass characteristics at early layers, ours appears to be the first study to document the predominance of such filter characteristics at all layers. Previous studies have missed this observation because they have concentrated on the cumulative compositional effects of filtering across layers, while we examine the filter characteristics that are present at each layer.
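The eigenfunction argument above can be illustrated numerically. The following NumPy sketch (our illustration, not part of the paper) verifies that a complex exponential passed through an arbitrary convolution emerges unchanged up to a complex scale factor (the filter's frequency response), and that windowing such an exponential with a Gaussian yields a localized, Gabor-like bandpass filter:

```python
import numpy as np

# Eigenfunction check: convolving a complex exponential with any filter
# returns the same exponential, scaled by the filter's frequency response.
N = 256
n = np.arange(N)
omega = 2 * np.pi * 8 / N              # a discrete frequency (8 cycles over N samples)
e = np.exp(1j * omega * n)             # complex exponential: the eigenfunction

h = np.array([0.25, 0.5, 0.25])        # an arbitrary small filter
# Circular convolution (via the DFT) keeps the eigen-relation exact on a
# periodic domain: conv(e, h) = H(omega) * e.
out = np.fft.ifft(np.fft.fft(e) * np.fft.fft(h, N))

# The eigenvalue is the filter's frequency response evaluated at omega.
H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))
assert np.allclose(out, H * e)

# Enforcing locality: windowing the globally defined eigenfunction with a
# Gaussian produces an oriented bandpass (Gabor-like) filter, here in 1-D.
g = np.exp(-0.5 * ((n - N // 2) / 16.0) ** 2)
gabor = g * e                          # localized in space, bandpass in frequency
```

The spectrum of `gabor` is a Gaussian bump centered on frequency bin 8, i.e., the windowed eigenfunction is selective for a band of frequencies around `omega` rather than a single global frequency, which is the sense in which locality turns eigenfunctions into bandpass filters.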



Convolutional networks (ConvNets) in conjunction with deep learning have shown state-of-the-art performance in application to computer vision, ranging across both classification (e.g., Krizhevsky et al. (2012); Tran et al. (2015); Ge et al. (2019)) and regression (e.g., Szegedy et al. (2013); Eigen & Fergus (2015); Zhou et al.) tasks. However, understanding of how these systems achieve their remarkable results lags behind their performance. This state of affairs is unsatisfying not only from a scientific point of view, but also from an applications point of view. As these systems move beyond the lab into real-world applications, better theoretical understanding can help establish performance bounds and increase confidence in deployment.

Visualization studies of filters that have been learned during training have been one of the key tools marshalled to lend insight into the internal representations maintained by ConvNets in application to computer vision, e.g., Zeiler & Fergus (2014); Yosinski et al. (2015); Mahendran & Vedaldi (2015); Shang et al. (2016); Feichtenhofer et al. (2018). Here, an interesting repeated observation is that early layers in the studied networks tend to learn oriented bandpass filters, both in two image spatial dimensions, (x, y), in application to single image analysis, as well as in three spatiotemporal dimensions, (x, y, t), in application to video. An example is shown in Fig. 1. Emergence of such filters seems reasonable, because local orientation captures the first-order correlation structure of the data, which provides a reasonable building block for inferring more complex structure (e.g., local measurements of oriented structure can be assembled into intersections to capture corner structure, etc.). Notably, however, more rigorous analyses of exactly why oriented bandpass filters might be

