IS ATTENTION BETTER THAN MATRIX DECOMPOSITION?

Abstract

As an essential ingredient of modern deep learning, the attention mechanism, especially self-attention, plays a vital role in discovering global correlations. However, is hand-crafted attention irreplaceable when modeling the global context? Our intriguing finding is that self-attention is no better than matrix decomposition (MD) models developed 20 years ago, in terms of both performance and computational cost, for encoding long-distance dependencies. We model learning the global context as a low-rank completion problem and show that its optimization algorithms can help design global information blocks. This paper then proposes a series of Hamburgers, in which we employ the optimization algorithms for solving MDs to factorize the input representations into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module, self-attention, when the gradients back-propagated through the MDs are carefully handled. Comprehensive experiments are conducted on vision tasks where learning the global context is crucial, including semantic segmentation and image generation, demonstrating significant improvements over self-attention and its variants. Code is available.

1. INTRODUCTION

Since self-attention and the Transformer (Vaswani et al., 2017) showed significant advantages over recurrent neural networks and convolutional neural networks in capturing long-distance dependencies, attention has been widely adopted by computer vision (Wang et al., 2018; Zhang et al., 2019a) and natural language processing (Devlin et al., 2019) for mining global information. However, is hand-crafted attention irreplaceable when modeling the global context? This paper focuses on a new approach to designing global context modules. The key idea is that, if we formulate an inductive bias such as the global context as an objective function, then the optimization algorithm that minimizes this objective defines a computational graph, i.e., the architecture we need in the network.

We particularize this idea by developing a counterpart for the most representative global context module, self-attention. Viewing the extraction of global information as finding a dictionary and the corresponding codes that capture the inherent correlation, we model context discovery as low-rank completion of the input tensor and solve it via matrix decomposition. This paper then proposes a global correlation block, Hamburger, which employs matrix decomposition to factorize the learned representation into sub-matrices so as to recover the clean, low-rank signal subspace. The iterative optimization algorithm that solves the matrix decomposition defines the central computational graph, i.e., Hamburger's architecture. Our work builds Hamburger on classic matrix decomposition models, including Vector Quantization (VQ) (Gray & Neuhoff, 1998), Concept Decomposition (CD) (Dhillon & Modha, 2001), and Non-negative Matrix Factorization (NMF) (Lee & Seung, 1999). Additionally, instead of directly applying the Back-Propagation Through Time (BPTT) algorithm (Werbos, 1990) to differentiate through the iterative optimization, we adopt a truncated BPTT algorithm, i.e., the one-step gradient, to back-propagate the gradient effectively. We illustrate the advantages of Hamburger in vision tasks where learning the global context is crucial, including semantic segmentation and image generation.
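To make the construction concrete, below is a minimal sketch, in PyTorch, of the matrix-decomposition step at Hamburger's core. It is our illustration rather than the authors' released code: it assumes NMF with multiplicative updates as the MD model and realizes the one-step gradient by running all but the last update under torch.no_grad(); the names nmf_update and hamburger_ham, the rank, and the step count are hypothetical choices.

```python
import torch


def nmf_update(x, d, c, eps=1e-6):
    """One multiplicative update of NMF (Lee & Seung, 1999).

    x: (batch, dim, n) non-negative features, d: (batch, dim, rank) dictionary,
    c: (batch, rank, n) codes. The update preserves non-negativity of d and c.
    """
    c = c * (d.transpose(1, 2) @ x) / (d.transpose(1, 2) @ d @ c + eps)
    d = d * (x @ c.transpose(1, 2)) / (d @ c @ c.transpose(1, 2) + eps)
    return d, c


def hamburger_ham(x, rank=8, steps=6):
    """Factorize x ~= d @ c and return the low-rank reconstruction as global context."""
    b, dim, n = x.shape
    d = x.new_empty(b, dim, rank).uniform_(0, 1)  # random dictionary init
    c = x.new_empty(b, rank, n).uniform_(0, 1)    # random codes init
    # Truncated BPTT: the first steps - 1 updates are kept out of the autograd graph ...
    with torch.no_grad():
        for _ in range(steps - 1):
            d, c = nmf_update(x, d, c)
    # ... and only the final update is differentiated (the one-step gradient).
    d, c = nmf_update(x, d, c)
    return d @ c  # low-rank embedding, same shape as x
```

Since NMF requires non-negative entries, the input x would in practice first pass through a non-negativity constraint such as ReLU; here x is the flattened feature map with spatial positions as its n columns, and the reconstructed low-rank context would typically be fused back into the features, e.g., through a skip connection.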

