COMPUTATIONAL SEPARATION BETWEEN CONVOLUTIONAL AND FULLY-CONNECTED NETWORKS

Abstract

Convolutional neural networks (CNNs) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks. Specifically, we exhibit a class of problems that can be solved efficiently using convolutional networks trained with gradient-descent, but that is hard to learn using a polynomial-size fully-connected network.

1. INTRODUCTION

Convolutional neural networks (LeCun et al., 1998; Krizhevsky et al., 2012) achieve state-of-the-art performance on a wide range of computer vision tasks. However, while the empirical success of convolutional networks is indisputable, the advantage of using them is not well understood from a theoretical perspective. Specifically, we consider the following fundamental question: why do convolutional networks (CNNs) perform better than fully-connected networks (FCNs)?

Clearly, when considering expressive power, FCNs have a big advantage. Since convolution is a linear operation, any CNN can be expressed using a FCN, whereas FCNs can express a strictly larger family of functions. So, any advantage of CNNs due to expressivity can be leveraged by FCNs as well, and expressive power does not explain the superiority of CNNs over FCNs. There are several other possible explanations for this superiority: parameter efficiency (and hence lower sample complexity), weight sharing, and a locality prior. The main result of this paper argues that locality is a key factor, by proving a computational separation between CNNs and FCNs based on locality. But before that, let us discuss the other possible explanations.

First, we observe that CNNs seem to be much more efficient in utilizing their parameters. A FCN needs a greater number of parameters than an equivalent CNN: each neuron of a CNN is limited to a small receptive field, and moreover, many of the parameters of the CNN are shared. Classical results in learning theory suggest that using a large number of parameters may result in inferior generalization. So, can the advantage of CNNs be explained simply by counting parameters? To answer this question, we observe the performance of CNN- and FCN-based architectures of various widths and depths trained on the CIFAR-10 dataset. For each architecture, we record the final test accuracy against the number of trainable parameters. The results are shown in Figure 1.
As can be seen, CNNs have a clear advantage over FCNs, regardless of the number of parameters used. As is often observed, a large number of parameters does not hurt the performance of neural networks, and so parameter efficiency cannot explain the advantage of CNNs. This is in line with various theoretical works on the optimization of neural networks, which show that over-parameterization is beneficial for the convergence of gradient-descent (e.g., Du et al. (2018); Soltanolkotabi et al. (2018); Li & Liang (2018)).

The superiority of CNNs can also be attributed to the extensive weight sharing between the different convolutional filters. Indeed, it has been previously shown that weight sharing is important for the optimization of neural networks (Shalev-Shwartz et al., 2017b). Moreover, the translation-invariant nature of CNNs, which relies on weight sharing, is often observed to be beneficial in various signal processing tasks (Kauderer-Abrams, 2017; Kayhan & Gemert, 2020). So, how much does weight sharing contribute to the superiority of CNNs over FCNs? To understand the effect of weight sharing on the behavior of CNNs, it is useful to study locally-connected network (LCN) architectures, which are similar to CNNs but have no weight sharing between the kernels of the network. While CNNs are far more popular in practice (partly because they are much more efficient in terms of model size), LCNs have also been used in different contexts (e.g., Bruna et al. (2013); Chen et al. (2015); Liu et al. (2020)). It has recently been observed that in some cases the performance of LCNs is on par with CNNs (Neyshabur, 2020). So, even if weight sharing explains some of the advantage of CNNs, it clearly does not tell the whole story.

Finally, a key property of CNN architectures is their strong utilization of locality in the data. Each neuron in a CNN is limited to a local receptive field of the input, hence encoding a strong locality bias.
In this work we demonstrate how CNNs can leverage the local structure of the input, giving them a clear advantage in terms of computational complexity. Our results hint that locality is the principal property that explains the advantage of using CNNs.

Our main result is a computational separation between CNNs and FCNs. To show this result, we introduce a family of functions with a very strong local structure, which we call k-patterns. A k-pattern is a function that is determined by k consecutive bits of the input. We show that for inputs of n bits, when the target function is a (log n)-pattern, training a CNN of polynomial size with gradient-descent achieves small error in polynomial time. However, gradient-descent will fail to learn (log n)-patterns when training a FCN of polynomial size.

1.1 RELATED WORK

It has been empirically observed that CNN architectures perform much better than FCNs on computer vision tasks, such as digit recognition and image classification (e.g., Urban et al. (2017); Driss et al. (2017)). While some works have applied various techniques to improve the performance of FCNs (Lin et al. (2015); Fernando et al. (2016); Neyshabur (2020)), there is still a gap between the performance of CNNs and FCNs, where the former give very good performance "out-of-the-box". The focus of this work is to understand, from a theoretical perspective, why CNNs give superior performance when trained on inputs with strong local structure.

Various theoretical works show the advantage of architectures that leverage local and hierarchical structure. The work of Poggio et al. (2015) shows the advantage of using deep hierarchical models over wide and shallow functions. These results are extended in Poggio et al. (2017), showing an exponential gap between deep and shallow networks when approximating locally compositional functions. The works of Mossel (2016); Malach & Shalev-Shwartz (2018) study the learnability of deep hierarchical models.
The work of Cohen et al. (2017) analyzes the expressive efficiency of convolutional networks via hierarchical tensor decomposition. While all these works show that CNNs are indeed powerful due to their hierarchical nature and their efficient utilization of local structure, they do not explain why these models are superior to fully-connected models.

There are a few works that provide a theoretical analysis of CNN optimization. The works of Brutzkus & Globerson (2017); Du et al. (2018) show that gradient-descent can learn a shallow CNN with a single filter, under various distributional assumptions. The work of Zhang et al. (2017) shows learnability of a convex relaxation of convolutional networks. While these works focus on computational properties of learning CNNs, as we do in this work, they do not compare CNNs to FCNs, but focus only on the behavior of CNNs. The works of Cohen & Shashua (2016); Novak et al. (2018) study the implicit bias of simplified CNN models. However, these results are focused on generalization properties of CNNs, and not on the computational efficiency of the optimization.

Figure 2: Example of a k-pattern with k = 5: the input is mapped to $\prod_{j=3}^{7} x_j$.

2. DEFINITIONS AND NOTATIONS

Let $\mathcal{X} = \{\pm 1\}^n$ be our instance space, and let $\mathcal{Y} = \{\pm 1\}$ be the label space. Throughout the paper, we focus on learning a binary classification problem using the hinge-loss: $\ell(\hat{y}, y) = \max\{1 - y\hat{y}, 0\}$. Given some distribution $\mathcal{D}$ over $\mathcal{X}$, some target function $f : \mathcal{X} \to \mathcal{Y}$ and some hypothesis $h : \mathcal{X} \to \mathcal{Y}$, we define the loss of $h$ with respect to $f$ on the distribution $\mathcal{D}$ by:

$$L_{f,\mathcal{D}}(h) = \mathbb{E}_{x \sim \mathcal{D}}\left[\ell(h(x), f(x))\right]$$

The goal of a supervised learning algorithm is, given access to examples sampled from $\mathcal{D}$ and labeled by $f$, to find a hypothesis $h$ that minimizes $L_{f,\mathcal{D}}(h)$. We focus on the gradient-descent (GD) algorithm: given some parametric hypothesis class $\mathcal{H} = \{h_{\mathbf{w}} : \mathbf{w} \in \mathbb{R}^q\}$, gradient-descent starts with some (randomly initialized) hypothesis $h_{\mathbf{w}^{(0)}}$ and, for some learning rate $\eta > 0$, updates:

$$\mathbf{w}^{(t)} = \mathbf{w}^{(t-1)} - \eta \nabla_{\mathbf{w}} L_{f,\mathcal{D}}(h_{\mathbf{w}^{(t-1)}})$$

We compare the behavior of gradient-descent when learning two possible neural network architectures: a convolutional network (CNN) and a fully-connected network (FCN).

Definition 1. A convolutional network $h_{u,W,b}$ is defined as follows:

$$h_{u,W,b}(x) = \sum_{j=1}^{n-k} \left\langle u^{(j)}, \sigma\left(W x_{j \dots j+k-1} + b\right) \right\rangle$$

for activation function $\sigma$, with kernel $W \in \mathbb{R}^{q \times k}$, bias $b \in \mathbb{R}^q$ and readout layer $u^{(1)}, \dots, u^{(n-k)} \in \mathbb{R}^q$. Note that this is a standard depth-2 CNN with kernel size $k$, stride 1 and $q$ filters.

Definition 2. A fully-connected network $h_{u,w,b}$ is defined as follows:

$$h_{u,w,b}(x) = \sum_{i=1}^{q} u_i \sigma\left(\left\langle w^{(i)}, x \right\rangle + b_i\right)$$

for activation function $\sigma$, first layer $w^{(1)}, \dots, w^{(q)} \in \mathbb{R}^n$, bias $b \in \mathbb{R}^q$ and second layer $u \in \mathbb{R}^q$.

We demonstrate the advantage of CNNs over FCNs by observing a problem that can be learned using CNNs, but is hard to learn using FCNs. We call this problem the k-pattern problem:

Definition 3. A function $f : \mathcal{X} \to \mathcal{Y}$ is a k-pattern if, for some $g : \{\pm 1\}^k \to \mathcal{Y}$ and index $j^*$: $f(x) = g(x_{j^* \dots j^*+k-1})$.

Namely, a k-pattern is a function that depends only on a small pattern of consecutive bits of the input.
The k-pattern problem is the problem of learning k-patterns: for some k-pattern $f$ and some distribution $\mathcal{D}$ over $\mathcal{X}$, given access to $\mathcal{D}$ labeled by $f$, find a hypothesis $h$ with $L_{f,\mathcal{D}}(h) \le \epsilon$. We note that a similar problem has been studied in Golovnev et al. (2017), providing results on PAC learnability of a related target class.
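To make these definitions concrete, the following minimal NumPy sketch (our own illustration; the function names, the tanh activation, and all sizes are our choices, not the paper's) implements the CNN of Definition 1 and a k-pattern target of Definition 3:

```python
import numpy as np

def cnn_forward(x, W, b, u, sigma=np.tanh):
    """Depth-2 CNN of Definition 1: h(x) = sum_j <u^(j), sigma(W x_{j..j+k-1} + b)>."""
    q, k = W.shape
    n = len(x)
    # j ranges over the n - k windows, mirroring the sum in Definition 1
    return sum(float(u[j] @ sigma(W @ x[j:j + k] + b)) for j in range(n - k))

def make_k_pattern(g, j_star, k):
    """k-pattern of Definition 3: the label depends only on bits j*..j*+k-1."""
    return lambda x: g(x[j_star:j_star + k])

# Example: a 3-pattern computing the parity of bits 2, 3, 4 (0-indexed)
rng = np.random.default_rng(0)
n, k, q = 10, 3, 16
f = make_k_pattern(lambda z: int(np.prod(z)), j_star=2, k=k)
x = rng.choice([-1, 1], size=n)
W, b, u = rng.normal(size=(q, k)), rng.normal(size=q), rng.normal(size=(n - k, q))

y = f(x)                        # label in {+1, -1}
x_flip = x.copy(); x_flip[0] *= -1
assert f(x_flip) == y           # flipping a bit outside the window leaves f unchanged
```

Flipping any bit inside the window $j^* \dots j^*+k-1$ flips this parity label, while bits outside the window are irrelevant, which is exactly the local structure the paper exploits.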

3. CNNS EFFICIENTLY LEARN (log n)-PATTERNS

The main result in this section shows that gradient-descent can learn k-patterns when training convolutional networks for $\mathrm{poly}(2^k, n)$ iterations, and when the network has $\mathrm{poly}(2^k, n)$ neurons:

Theorem 4. Assume we uniformly initialize $W^{(0)} \sim \{\pm 1/k\}^{q \times k}$, $b_i = 1/k - 1$ and $u^{(0,j)} = 0$ for every $j$. Assume the activation $\sigma$ satisfies $|\sigma| \le c$, $|\sigma'| \le 1$, for some constant $c$. Fix some $\delta > 0$, some k-pattern $f$ and some distribution $\mathcal{D}$ over $\mathcal{X}$. Then, if $q > 2^{k+3}\log(2^k/\delta)$, with probability at least $1-\delta$ over the initialization, when training a convolutional network $h_{u,W,b}$ using gradient descent with $\eta = \frac{\sqrt{n}}{\sqrt{q}\,T}$ we have:

$$\frac{1}{T}\sum_{t=1}^{T} L_{f,\mathcal{D}}\left(h_{u^{(t)},W^{(t)},b}\right) \le \frac{2cn^2k^2 2^k}{q} + \frac{2(2^k k)^2}{\sqrt{qn}} + \frac{c^2 n^{1.5}\sqrt{q}}{T}$$

Before we prove the theorem, observe that the above immediately implies that when $k = O(\log n)$, gradient-descent can efficiently learn to solve the k-pattern problem, when training a CNN:

Corollary 5. Let $k = O(\log n)$. Then, running GD on a CNN with $q = O(\epsilon^{-2} n^3 \log^2 n)$ neurons for $T = O(\epsilon^{-2} n^3 \log n)$ iterations, using a sample $S \sim \mathcal{D}$ of size $O(\epsilon^{-2} nkq \log(nkq/\delta))$, learns the k-pattern problem up to accuracy $\epsilon$ w.p. $\ge 1-\delta$.

Proof. Sample $S \sim \mathcal{D}$, and let $\widehat{\mathcal{D}}$ be the uniform distribution over $S$. Then, from Theorem 4 and the choice of $q$ and $T$, there exists $t \in [T]$ with $L_{f,\widehat{\mathcal{D}}}(h_{u^{(t)},W^{(t)},b}) \le \epsilon/2$, i.e. GD finds a hypothesis with train loss at most $\epsilon/2$. Now, using the fact that the VC dimension of depth-2 ReLU networks with $W$ weights is $O(W \log W)$ (see Bartlett et al. (2019)), we can bound the generalization gap by $\epsilon/2$.

To prove Theorem 4, we show that, for a large enough CNN, the k-pattern problem becomes linearly separable after applying the first layer of the randomly initialized CNN:

Lemma 6. Assume we uniformly initialize $W \sim \{\pm 1/k\}^{q \times k}$ and $b_i = 1/k - 1$. Fix some $\delta > 0$. Then if $q > 2^{k+3}\log(2^k/\delta)$, w.p. $\ge 1-\delta$ over the choice of $W$, for every k-pattern $f$ there exist $u^{*(1)}, \dots, u^{*(n-k)} \in \mathbb{R}^q$ with $\left\|u^{*(j^*)}\right\| \le \frac{2^{k+1}k}{\sqrt{q}}$ and $u^{*(j)} = 0$ for $j \ne j^*$, s.t.
$h_{u^*,W,b}(x) = f(x)$.

Proof. Fix some $z \in \{\pm 1\}^k$; then for every $w^{(i)} \sim \{\pm 1/k\}^k$ we have $\Pr\left[\mathrm{sign}(w^{(i)}) = z\right] = 2^{-k}$. Denote by $J_z \subseteq [q]$ the subset of indexes satisfying $\mathrm{sign}(w^{(i)}) = z$ for every $i \in J_z$, and note that $\mathbb{E}_W|J_z| = q2^{-k}$. From the Chernoff bound:

$$\Pr\left[|J_z| \le q2^{-k}/2\right] \le e^{-q2^{-k}/8} \le \delta 2^{-k}$$

by choosing $q > 2^{k+3}\log(2^k/\delta)$. So, using the union bound, w.p. at least $1-\delta$, for every $z \in \{\pm 1\}^k$ we have $|J_z| \ge q2^{-k-1}$. By the choice of $b_i$ we have $\sigma(\langle w^{(i)}, z\rangle + b_i) = \frac{1}{k}\mathbb{1}\{\mathrm{sign}(w^{(i)}) = z\}$. Now, fix some k-pattern $f$, where $f(x) = g(x_{j^*\dots j^*+k-1})$. For every $i \in J_z$ we choose $u^{*(j^*)}_i = \frac{k}{|J_z|}g(z)$, and $u^{*(j)} = 0$ for every $j \ne j^*$. Therefore, we get:

$$h_{u^*,W,b}(x) = \sum_{j=1}^{n-k}\left\langle u^{*(j)}, \sigma(Wx_{j\dots j+k-1}+b)\right\rangle = \sum_{z\in\{\pm1\}^k}\sum_{i\in J_z} u^{*(j^*)}_i \sigma\left(\left\langle w^{(i)}, x_{j^*\dots j^*+k-1}\right\rangle + b_i\right) = \sum_{z\in\{\pm1\}^k}\mathbb{1}\{z = x_{j^*\dots j^*+k-1}\}g(z) = g(x_{j^*\dots j^*+k-1}) = f(x)$$

Note that by definition of $u^{*(j^*)}$ we have $\left\|u^{*(j^*)}\right\|^2 = \sum_{z\in\{\pm1\}^k}\sum_{i\in J_z}\frac{k^2}{|J_z|^2} \le \frac{4(2^k k)^2}{q}$.

Comment 7. Admittedly, the initialization assumed above is non-standard, but is favorable for the analysis. A similar result can be shown for more natural initializations (e.g., the normal distribution), using known results from random-features analysis (for example, Bresler & Nagaraj (2020)).

From Lemma 6 and known results on learning linear classifiers with gradient-descent, solving the k-pattern problem can be achieved by optimizing the second layer of a randomly initialized CNN. However, since in gradient-descent we optimize both layers of the network, we need a more refined analysis to show that full gradient-descent learns to solve the problem. We follow the scheme introduced in Daniely (2017), adapting it to our setting. We start by showing that the first layer of the network does not deviate from the initialization during training:

Lemma 8.
We have $\left\|u^{(T,j)}\right\| \le \eta T\sqrt{q}$ for all $j \in [n-k]$, and $\left\|W^{(0)} - W^{(T)}\right\| \le c\eta^2 T^2 n\sqrt{qk}$.

We can now bound the difference in the loss when the weights of the first layer change during the training process:

Lemma 9. For every $u^*$ we have:

$$L_{f,\mathcal{D}}\left(h_{u^*,W^{(T)},b}\right) - L_{f,\mathcal{D}}\left(h_{u^*,W^{(0)},b}\right) \le c\eta^2 T^2 nk\sqrt{q}\sum_{j=1}^{n-k}\left\|u^{*(j)}\right\|$$

The proofs of Lemma 8 and Lemma 9 are given in the appendix. Finally, we use the following result on the convergence of online gradient-descent to show that gradient-descent converges to a good solution. The proof of the theorem is given in Shalev-Shwartz et al. (2011), with an adaptation to a similar setting in Daniely & Malach (2020).

Theorem 10 (Online Gradient Descent). Fix some $\eta$, and let $f_1, \dots, f_T$ be some sequence of convex functions. Fix some $\theta_1$, and update $\theta_{t+1} = \theta_t - \eta\nabla f_t(\theta_t)$. Then for every $\theta^*$ the following holds:

$$\frac{1}{T}\sum_{t=1}^T f_t(\theta_t) \le \frac{1}{T}\sum_{t=1}^T f_t(\theta^*) + \frac{1}{2\eta T}\|\theta^*\|^2 + \|\theta_1\|\frac{1}{T}\sum_{t=1}^T\|\nabla f_t(\theta_t)\| + \eta\frac{1}{T}\sum_{t=1}^T\|\nabla f_t(\theta_t)\|^2$$

Proof of Theorem 4. From Lemma 6, with probability at least $1-\delta$ over the initialization, there exist $u^{*(1)}, \dots, u^{*(n-k)} \in \mathbb{R}^q$ with $\left\|u^{*(j^*)}\right\| \le \frac{2^{k+1}k}{\sqrt{q}}$ and $u^{*(j)} = 0$ for $j \ne j^*$, such that $h_{u^*,W^{(0)},b}(x) = f(x)$, and so $L_{f,\mathcal{D}}(h_{u^*,W^{(0)},b}) = 0$. Using Theorem 10, since $L_{f,\mathcal{D}}(h_{u,W,b})$ is convex with respect to $u$, we have:

$$\frac{1}{T}\sum_{t=1}^T L_{f,\mathcal{D}}\left(h_{u^{(t)},W^{(t)},b}\right) \le \frac{1}{T}\sum_{t=1}^T L_{f,\mathcal{D}}\left(h_{u^*,W^{(t)},b}\right) + \frac{1}{2\eta T}\sum_{j=1}^{n-k}\left\|u^{*(j)}\right\|^2 + \eta\frac{1}{T}\sum_{t=1}^T\left\|\frac{\partial}{\partial u}L_{f,\mathcal{D}}\left(h_{u^{(t)},W^{(t)},b}\right)\right\|^2 \le \frac{1}{T}\sum_{t=1}^T L_{f,\mathcal{D}}\left(h_{u^*,W^{(t)},b}\right) + \frac{2(2^k k)^2}{q\eta T} + c^2\eta nq = (*)$$

Using Lemma 9 we have:

$$(*) \le \frac{1}{T}\sum_{t=1}^T L_{f,\mathcal{D}}\left(h_{u^*,W^{(0)},b}\right) + c\eta^2 T^2 nk\sqrt{q}\sum_{j=1}^{n-k}\left\|u^{*(j)}\right\| + \frac{2(2^k k)^2}{q\eta T} + c^2\eta nq \le 2c\eta^2 T^2 nk^2 2^k + \frac{2(2^k k)^2}{q\eta T} + c^2\eta nq$$

Now, choosing $\eta = \frac{\sqrt{n}}{\sqrt{q}\,T}$ gives the required bound.
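The construction in the proof of Lemma 6 is concrete enough to check numerically. The sketch below (our own illustration; we take $\sigma$ to be the ReLU, for which $\sigma(\langle w^{(i)}, z\rangle + b_i) = \frac{1}{k}\mathbb{1}\{\mathrm{sign}(w^{(i)}) = z\}$ as the proof uses) samples $W \in \{\pm 1/k\}^{q\times k}$, sets $b_i = 1/k - 1$, builds $u^*$ exactly as in the proof, and verifies that the network reproduces an arbitrary pattern $g$ on the window at $j^*$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k, q, j_star = 12, 3, 512, 4
relu = lambda t: np.maximum(t, 0.0)

# Initialization of Lemma 6
W = rng.choice([-1.0 / k, 1.0 / k], size=(q, k))
b = 1.0 / k - 1.0                       # b_i = 1/k - 1 for every i

# An arbitrary pattern g : {±1}^k -> {±1}, as a lookup table
g = {z: rng.choice([-1.0, 1.0]) for z in itertools.product([-1, 1], repeat=k)}

# |J_z| = number of rows of W whose sign pattern equals z
signs = [tuple(int(s) for s in np.sign(row)) for row in W]
counts = {z: signs.count(z) for z in g}
assert all(c > 0 for c in counts.values())  # w.h.p. every pattern appears (q >> 2^k)

# u* from the proof: u*_i = (k / |J_z|) g(z) for i in J_z; zero readout elsewhere
u_star = np.array([k / counts[z] * g[z] for z in signs])

# Since u*(j) = 0 for j != j*, the network output reduces to the j* term
def h(x):
    return float(u_star @ relu(W @ x[j_star:j_star + k] + b))

for _ in range(100):
    x = rng.choice([-1.0, 1.0], size=n)
    assert abs(h(x) - g[tuple(int(v) for v in x[j_star:j_star + k])]) < 1e-9
```

The final loop confirms $h_{u^*,W,b}(x) = g(x_{j^*\dots j^*+k-1})$ exactly, and the norm of `u_star` can be checked against the bound $2^{k+1}k/\sqrt{q}$ of the lemma.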

3.1. ANALYSIS OF LOCALLY-CONNECTED NETWORKS

The above result shows that polynomial-size CNNs can learn $(\log n)$-patterns in polynomial time. As discussed in the introduction, the success of CNNs can be attributed to either the weight sharing or the locality bias of the architecture. While weight sharing may contribute to the success of CNNs in some cases, we note that it gives no benefit when learning k-patterns. Indeed, we can show a similar positive result for locally-connected networks (LCN), which have no weight sharing. Consider the following definition of an LCN with one hidden layer:

Definition 11. A locally-connected network $h_{u,W,b}$ is defined as follows:

$$h_{u,W,b}(x) = \sum_{j=1}^{n-k}\left\langle u^{(j)}, \sigma\left(W^{(j)}x_{j\dots j+k-1} + b^{(j)}\right)\right\rangle$$

for some activation function $\sigma$, with kernels $W^{(1)}, \dots, W^{(n-k)} \in \mathbb{R}^{q\times k}$, biases $b^{(1)}, \dots, b^{(n-k)} \in \mathbb{R}^q$ and readout layer $u^{(1)}, \dots, u^{(n-k)} \in \mathbb{R}^q$.

Note that the only difference from Definition 1 is the fact that the weights of the first layer are not shared. It is easy to verify that Theorem 4 can be modified to show a similar positive result for LCN architectures. Specifically, we note that in Lemma 6, which is the core of the theorem, we do not use the fact that the weights in the first layer are shared. So, LCNs are "as good as" CNNs for solving the k-pattern problem. This of course does not resolve the question of comparing between LCN and CNN architectures, which we leave for future work.
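The positive result is easy to reproduce in a small simulation. The sketch below (our own; all hyperparameters are illustrative) draws a random LCN first layer with the initialization of Theorem 4 and, following the random-features view of Lemma 6, trains only the readout $u$ with full-batch subgradient descent on the hinge loss, on a k-pattern target; the theorem itself handles full GD on both layers, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, q, m = 10, 3, 256, 100
relu = lambda t: np.maximum(t, 0.0)

# Target: a k-pattern (parity of the window starting at j* = 4)
j_star = 4
X = rng.choice([-1.0, 1.0], size=(m, n))
y = np.prod(X[:, j_star:j_star + k], axis=1)

# LCN first layer: an independent random kernel W^(j) for every position j
Ws = [rng.choice([-1.0 / k, 1.0 / k], size=(q, k)) for _ in range(n - k)]
b = 1.0 / k - 1.0

def features(X):
    # Phi(x) = concatenation over j of relu(W^(j) x_{j..j+k-1} + b)
    return np.concatenate(
        [relu(X[:, j:j + k] @ Ws[j].T + b) for j in range(n - k)], axis=1)

Phi = features(X)                     # shape (m, (n - k) * q)
u = np.zeros(Phi.shape[1])
eta = 0.01
for _ in range(3000):                 # full-batch subgradient descent, hinge loss
    active = y * (Phi @ u) < 1.0      # examples where the hinge is active
    u += eta * (y[active] @ Phi[active]) / m

acc = float(np.mean(np.sign(Phi @ u) == y))
```

Since the LCN analogue of Lemma 6 guarantees the sample is linearly separable over these random features (with margin), this descent typically drives the training accuracy to 1 within a few thousand steps.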

4. LEARNING (log n)-PATTERNS WITH FCN

In the previous section we showed that patterns of size $\log n$ are efficiently learnable when using CNNs trained with gradient-descent. In this section we show that, in contrast, gradient-descent fails to learn $(\log n)$-patterns using fully-connected networks, unless the size of the network is super-polynomial (namely, unless the network is of size $n^{\Omega(\log n)}$). For this, we show an instance of the k-pattern problem that is hard for fully-connected networks. We take $\mathcal{D}$ to be the uniform distribution over $\mathcal{X}$, and let $f(x) = \prod_{i\in I}x_i$, where $I$ is some set of $k$ consecutive bits. Specifically, we take $I = \{1, \dots, k\}$, although the same proof holds for any choice of $I$. In this case, we show that the initial gradient of the network is very small, when a fully-connected network is initialized from a permutation-invariant distribution.

Theorem 12. Assume $|\sigma| \le c$, $|\sigma'| \le 1$. Let $\mathcal{W}$ be some permutation-invariant distribution over $\mathbb{R}^n$, and assume we initialize $w^{(1)}, \dots, w^{(q)} \sim \mathcal{W}$ and initialize $u$ such that $|u_i| \le 1$ and for all $x$ we have $h_{u,w}(x) \in [-1, 1]$. Then, the following holds:

• $\mathbb{E}_{w\sim\mathcal{W}}\left\|\frac{\partial}{\partial W}L_{f,\mathcal{D}}(h_{u,w,b})\right\|_2^2 \le qn\cdot\min\left\{\binom{n-1}{k},\binom{n-1}{k-1}\right\}^{-1}$

• $\mathbb{E}_{w\sim\mathcal{W}}\left\|\frac{\partial}{\partial u}L_{f,\mathcal{D}}(h_{u,w,b})\right\|_2^2 \le c^2 q\binom{n}{k}^{-1}$

From the above result, if $k = \Omega(\log n)$ then the average norm of the initial gradient is $qn^{-\Omega(\log n)}$. Therefore, unless $q = n^{\Omega(\log n)}$, we get that with overwhelming probability over the randomness of the initialization, the gradient is extremely small. In fact, if we run GD on a finite-precision machine, the true population gradient is effectively zero. A formal argument relating such a bound on the gradient norm to the failure of gradient-based algorithms has been shown in various previous works (e.g., Shamir (2018); Abbe & Sandon (2018); Malach & Shalev-Shwartz (2020)).
The key to proving Theorem 12 is the following observation: since the first layer of the FCN is initialized from a symmetric distribution, if learning some function that relies on $k$ bits of the input is hard, then learning any function that relies on $k$ bits is hard. Using Fourier analysis (e.g., Blum et al. (1994); Kearns (1998); Shalev-Shwartz et al. (2017a)), we can show that learning k-parities (functions of the form $x \mapsto \prod_{i\in I}x_i$) using gradient-descent is hard. Since an arbitrary k-parity is hard, any k-parity, and specifically a parity of $k$ consecutive bits, is also hard. That is, since the first layer is initialized symmetrically, training a FCN on the original input is equivalent to training a FCN on an input where all the input bits are randomly permuted. So, for a FCN, learning a function that depends on consecutive bits is just as hard as learning a function that depends on arbitrary bits (a task that is known to be hard).

Proof of Theorem 12. Denote $\chi_{I'}(x) = \prod_{i\in I'}x_i$, so $f = \chi_I$ with $I = \{1, \dots, k\}$. We begin by calculating the gradient with respect to $w^{(i)}_j$:

$$\frac{\partial}{\partial w^{(i)}_j}L_{f,\mathcal{D}}(h_{u,w,b}) = \mathbb{E}_{\mathcal{D}}\left[\frac{\partial}{\partial w^{(i)}_j}\ell(h_{u,w,b}(x), f(x))\right] = -\mathbb{E}_{\mathcal{D}}\left[x_j u_i\sigma'\left(\left\langle w^{(i)}, x\right\rangle + b_i\right)\chi_I(x)\right]$$

Fix some permutation $\pi : [n] \to [n]$. For a vector $x \in \mathbb{R}^n$ we denote $\pi(x) = (x_{\pi(1)}, \dots, x_{\pi(n)})$, and for a subset $I \subseteq [n]$ we denote $\pi(I) = \bigcup_{j\in I}\{\pi(j)\}$. Notice that for all $x, z \in \mathbb{R}^n$ we have $\chi_I(\pi(x)) = \chi_{\pi(I)}(x)$ and $\langle\pi(x), z\rangle = \langle x, \pi^{-1}(z)\rangle$. Denote $\pi(h_{u,w,b})(x) = \sum_{i=1}^q u_i\sigma(\langle\pi(w^{(i)}), x\rangle + b_i)$, and denote by $\pi(\mathcal{D})$ the distribution of $\pi(x)$ where $x \sim \mathcal{D}$. Notice that since $\mathcal{D}$ is the uniform distribution, we have $\pi(\mathcal{D}) = \mathcal{D}$.
From all the above, for every permutation $\pi$ with $\pi(j) = j$ we have:

$$\frac{\partial}{\partial w^{(i)}_j}L_{\chi_{\pi(I)},\mathcal{D}}(h_{u,w,b}) = -\mathbb{E}_{x\sim\mathcal{D}}\left[x_j u_i\sigma'\left(\left\langle w^{(i)}, x\right\rangle + b_i\right)\chi_{\pi(I)}(x)\right] = -\mathbb{E}_{x\sim\pi(\mathcal{D})}\left[x_j u_i\sigma'\left(\left\langle w^{(i)}, \pi^{-1}(x)\right\rangle + b_i\right)\chi_I(x)\right] = -\mathbb{E}_{x\sim\mathcal{D}}\left[x_j u_i\sigma'\left(\left\langle\pi(w^{(i)}), x\right\rangle + b_i\right)\chi_I(x)\right] = \frac{\partial}{\partial w^{(i)}_j}L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b}))$$

Fix some $I \subseteq [n]$ with $|I| = k$ and some $j \in [n]$. Now, let $S_j$ be a set of permutations satisfying:

1. For all $\pi_1, \pi_2 \in S_j$ with $\pi_1 \ne \pi_2$ we have $\pi_1(I) \ne \pi_2(I)$.
2. For all $\pi \in S_j$ we have $\pi(j) = j$.

Note that if $j \notin I$ then the maximal size of such an $S_j$ is $\binom{n-1}{k}$, and if $j \in I$ then the maximal size is $\binom{n-1}{k-1}$. Denote $g_j(x) = x_j u_i\sigma'(\langle w^{(i)}, x\rangle + b_i)$. We denote the inner-product $\langle\psi,\phi\rangle_{\mathcal{D}} = \mathbb{E}_{x\sim\mathcal{D}}[\psi(x)\phi(x)]$ and the induced norm $\|\psi\|_{\mathcal{D}} = \sqrt{\langle\psi,\psi\rangle_{\mathcal{D}}}$. Since $\{\chi_{I'}\}_{I'\subseteq[n]}$ is an orthonormal basis with respect to $\langle\cdot,\cdot\rangle_{\mathcal{D}}$, from Parseval's equality we have:

$$\sum_{\pi\in S_j}\left(\frac{\partial}{\partial w^{(i)}_j}L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b}))\right)^2 = \sum_{\pi\in S_j}\left(\frac{\partial}{\partial w^{(i)}_j}L_{\chi_{\pi(I)},\mathcal{D}}(h_{u,w,b})\right)^2 = \sum_{\pi\in S_j}\left\langle g_j, \chi_{\pi(I)}\right\rangle_{\mathcal{D}}^2 \le \sum_{I'\subseteq[n]}\left\langle g_j, \chi_{I'}\right\rangle_{\mathcal{D}}^2 = \|g_j\|_{\mathcal{D}}^2 \le 1$$

So, from the above we get that, taking $S_j$ of maximal size:

$$\mathbb{E}_{\pi\sim S_j}\left(\frac{\partial}{\partial w^{(i)}_j}L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b}))\right)^2 \le |S_j|^{-1} \le \min\left\{\binom{n-1}{k},\binom{n-1}{k-1}\right\}^{-1}$$

Now, for any permutation-invariant distribution of weights $\mathcal{W}$ we have:

$$\mathbb{E}_{w\sim\mathcal{W}}\left(\frac{\partial}{\partial w^{(i)}_j}L_{\chi_I,\mathcal{D}}(h_{u,w,b})\right)^2 = \mathbb{E}_{w\sim\mathcal{W}}\mathbb{E}_{\pi\sim S_j}\left(\frac{\partial}{\partial w^{(i)}_j}L_{\chi_I,\mathcal{D}}(\pi(h_{u,w,b}))\right)^2 \le |S_j|^{-1}$$

Summing over all neurons and coordinates we get:

$$\mathbb{E}_{w\sim\mathcal{W}}\left\|\frac{\partial}{\partial W}L_{\chi_I,\mathcal{D}}(h_{u,w,b})\right\|_2^2 \le qn\cdot\min\left\{\binom{n-1}{k},\binom{n-1}{k-1}\right\}^{-1}$$

We can use a similar argument to bound the gradient with respect to $u$. We leave the details to the appendix.
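The vanishing-gradient phenomenon behind Theorem 12 is easy to observe numerically. The sketch below is our own construction, not the paper's: it uses a Gaussian first layer, the softplus activation (whose derivative is the logistic sigmoid, so $|\sigma'| \le 1$ as the theorem assumes), a nonzero bias to break sign symmetry, and a readout small enough that $|h(x)| < 1$, so the hinge is active everywhere and the population gradient is exactly $-\mathbb{E}[f(x)\nabla h(x)]$. It computes this gradient exactly by enumerating $\{\pm 1\}^n$, for a low-degree and a high-degree parity:

```python
import numpy as np

def population_grad_norm(n, k, q, seed=0):
    """Exact population gradient (w.r.t. W) of the hinge loss at initialization,
    for the k-parity target f(x) = x_1 * ... * x_k, by enumerating {±1}^n."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(q, n)) / np.sqrt(n)
    b = rng.normal(size=q)
    u = rng.choice([-1.0, 1.0], size=q) * 0.01  # small readout: |h(x)| < 1 here

    # All 2^n inputs in {±1}^n
    bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
    X = 2.0 * bits - 1.0                         # shape (2^n, n)
    y = np.prod(X[:, :k], axis=1)                # k-parity labels

    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))  # derivative of softplus
    S = sigmoid(X @ W.T + b)                      # sigma'(<w_i, x> + b_i)
    # dL/dW_ij = -E[ y * u_i * sigma'(<w_i, x> + b_i) * x_j ]
    G = -((S * (y[:, None] * u[None, :])).T @ X) / len(X)
    return float(np.linalg.norm(G))

g_low = population_grad_norm(n=14, k=2, q=8)
g_high = population_grad_norm(n=14, k=8, q=8)
assert g_high < g_low   # the gradient signal decays sharply as the parity grows
```

In runs of this sketch the degree-8 parity yields a population gradient several orders of magnitude smaller than the degree-2 parity, matching the binomial decay in the theorem.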

5. NEURAL ARCHITECTURE SEARCH

So far, we showed that while the (log n)-pattern problem can be solved efficiently using a CNN, it is hard for a FCN to solve. Since the CNN architecture is designed for processing consecutive patterns of the input, it can easily find the pattern that determines the label. The FCN, however, disregards the order of the input bits, and so it cannot exploit the fact that the bits determining the label are consecutive. In other words, the FCN architecture needs to learn the order of the bits, while the CNN already encodes this order in the architecture. So, a FCN fails to recover the k-pattern since it assumes nothing about the order of the input bits. But is it possible to recover the order of the bits prior to training the network? Can we apply some algorithm that searches for an optimal architecture to solve the k-pattern problem? Such motivation stands behind the thriving research field of Neural Architecture Search algorithms (see Elsken et al. (2018) for a survey). Unfortunately, we claim that if the order of the bits is not known to the learner, no architecture search algorithm can help in solving the k-pattern problem. To see this, it is enough to observe that when the order of the bits is unknown, the k-pattern problem is equivalent to the k-Junta problem: learning a function that depends on an arbitrary (not necessarily consecutive) set of k bits of the input. Learning k-Juntas is a well-studied problem in the literature of learning theory (e.g., Mossel et al. (2003)). The best known algorithm for solving the (log n)-Junta problem runs in time $n^{O(\log n)}$, and no poly-time algorithm is known for this problem. Moreover, if we consider statistical-query algorithms (a wide family of algorithms that only have access to estimates of query functions on the distribution, e.g. Blum et al. (2003)), then existing lower bounds show that the (log n)-Junta problem cannot be solved in polynomial time (Blum et al., 1994).
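For intuition, the naive algorithm for the k-Junta problem illustrates where the $n^{O(k)}$ cost comes from: enumerate all $\binom{n}{k}$ candidate coordinate subsets and, for each, check whether the labels are consistent with some truth table over those $k$ bits. A sketch (our own code; the paper does not give an algorithm):

```python
import itertools
import random

def learn_k_junta(samples, n, k):
    """Brute force over all C(n, k) coordinate subsets: n^O(k) candidates.
    samples: list of (x, y) with x a tuple in {±1}^n and y in {±1}."""
    for subset in itertools.combinations(range(n), k):
        table, consistent = {}, True
        for x, y in samples:
            key = tuple(x[i] for i in subset)
            if table.setdefault(key, y) != y:   # same restriction, two labels
                consistent = False
                break
        if consistent:
            return subset, table                # a junta explaining the sample
    return None

# Example: f depends on the (non-consecutive) bits 1 and 5
random.seed(0)
f = lambda x: x[1] * x[5]
samples = []
for _ in range(200):
    x = tuple(random.choice([-1, 1]) for _ in range(8))
    samples.append((x, f(x)))

subset, table = learn_k_junta(samples, n=8, k=2)
assert all(y == table[tuple(x[i] for i in subset)] for x, y in samples)
```

With $k = \Theta(\log n)$ this enumeration takes $n^{\Theta(\log n)}$ time, which is exactly the super-polynomial barrier discussed above.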

6. EXPERIMENTS

In the previous sections we described a simplistic learning problem that can be solved using CNNs and LCNs, but is hard to solve using FCNs. In this problem, the label is determined by a few consecutive bits of the input. In this section we present experiments that validate our theoretical results. In these experiments, the input to the network is a sequence of n MNIST digits, where each digit is scaled and cropped to a size of 24 × 8. We then train three different network architectures: FCN, CNN and LCN. The CNN and LCN architectures have kernels of size 24 × 24, so that 3 MNIST digits fit in a single kernel. In all architectures we use a single hidden layer with 1024 neurons and the ReLU activation. The networks are trained with the AdaDelta optimizer for 30 epochs.¹

In the first experiment, the label of the example is set to be the parity of the sum of the 3 consecutive digits located in the middle of the sequence. So, as in our theoretical analysis, the label is determined by a small area of consecutive bits of the input. Figure 3 shows the results of this experiment. As can clearly be seen, the CNN and LCN architectures achieve good performance regardless of the choice of n, whereas the performance of the FCN architecture degrades critically for larger n, reaching only chance-level performance when n = 19. We also observe that LCN has a clear advantage over CNN in this task. As noted, our primary focus is on demonstrating the superiority of locality-based architectures, such as CNN and LCN, and we leave the comparison between the two to future work.

Our second experiment is very similar to the first, but instead of taking the label to be the parity of 3 consecutive digits, we calculate the label based on 3 digits that are far apart. Namely, we take the parity of the first, middle and last digits of the sequence. The results of this experiment are shown in Figure 4. As can be seen, for small n, FCN performs much better than CNN and LCN.
This demonstrates that when we break the local structure, the advantage of CNN and LCN disappears, and using FCN becomes a better choice. However, for large n, all architectures perform poorly.



¹ In each epoch we randomly shuffle the sequence of the digits.



Figure 1: Comparison between CNNs and FCNs of various depths (2/4/6) and widths, trained for 125 epochs with the RMSprop optimizer.


Figure 3: Top: Performance of different architectures on a size-n MNIST sequences, where the label is determined by the parity of the central 3 digits. Bottom: MNIST sequences of varying length.

Figure 4: n-sequence MNIST with non-consecutive parity.

ACKNOWLEDGEMENTS

This research is supported by the European Research Council (TheoryDL project). We thank Tomaso Poggio for raising the main question tackled in this paper and for valuable discussions and comments.

