REPRESENTATION LEARNING FOR SEQUENCE DATA WITH DEEP AUTOENCODING PREDICTIVE COMPONENTS

Abstract

We propose Deep Autoencoding Predictive Components (DAPC), a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. We encourage this latent structure by maximizing an estimate of the predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step. In contrast to the mutual information lower bound commonly used by contrastive learning, the estimate of predictive information we adopt is exact under a Gaussian assumption. Additionally, it can be computed without negative sampling. To reduce the degeneracy of the latent space extracted by powerful encoders and to retain useful information from the inputs, we regularize predictive information learning with a challenging masked reconstruction loss. We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.

1. INTRODUCTION

Self-supervised representation learning methods aim to learn useful and general representations from large amounts of unlabeled data, which can reduce the sample complexity of downstream supervised learning. These methods have been widely applied to domains such as computer vision (Oord et al., 2018; Hjelm et al., 2018; Chen et al., 2020; Grill et al., 2020), natural language processing (Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020), and speech processing (Schneider et al., 2019; Pascual et al., 2019b; Chung & Glass, 2020; Wang et al., 2020; Baevski et al., 2020). For sequence data, representation learning may force the model to recover the underlying dynamics from the raw data, so that the learnt representations remove irrelevant variability in the inputs, embed rich context information, and become predictive of future states. The effectiveness of the representations depends on the self-supervised task, which injects inductive bias into learning; the design of such tasks has become an active research area. One notable approach to self-supervised learning is maximizing the mutual information between the learnt representations and the inputs, most commonly estimated via contrastive learning. A prominent example of this approach is CPC (Oord et al., 2018), where the representation at each time step is trained to distinguish positive samples, which are inputs from the near future, from negative samples, which are inputs from the distant future or from other sequences. The performance of contrastive learning relies heavily on the nontrivial selection of positive and negative samples, for which no universal principle exists across different scenarios (He et al., 2020; Chen et al., 2020; Misra & Maaten, 2020).
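For reference, the contrastive (InfoNCE) estimator that CPC builds on can be sketched as follows. The pairing of context and future vectors within a batch, the cosine-similarity scoring, and the temperature value are illustrative assumptions here, not CPC's exact design:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(context, future, temperature=0.1):
    """InfoNCE: each context vector is scored against every future vector
    in the batch. The matching (positive) pair sits on the diagonal of the
    score matrix; off-diagonal entries act as negatives.

    context, future: (batch, dim) tensors of paired representations.
    """
    context = F.normalize(context, dim=-1)
    future = F.normalize(future, dim=-1)
    logits = context @ future.t() / temperature   # (batch, batch) similarity scores
    labels = torch.arange(context.shape[0])       # positive index = own row
    return F.cross_entropy(logits, labels)

# toy usage: positives are near-identical copies of the context vectors,
# so the loss should be small
c = torch.randn(8, 16)
f = c + 0.01 * torch.randn(8, 16)
loss = info_nce_loss(c, f)
```

Note that the quality of this bound depends directly on how the off-diagonal "negatives" are drawn, which is exactly the sample-selection sensitivity discussed above.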
Recent works have suggested that the mutual information lower bound used by contrastive learning may be loose and may not be the sole reason for its success (Ozair et al., 2019; Tschannen et al., 2019). In this paper, we leverage an estimate of information specific to sequence data, known as predictive information (PI; Bialek et al., 2001), which measures the mutual information between past and future windows in the latent space. The estimate is exact when the past and future windows have a joint Gaussian distribution, and prior work has shown it to be a good proxy for the true predictive information in practice (Clark et al., 2019). We can thus compute the estimate from sample windows of the latent sequence (without sampling negative examples) and obtain a well-defined objective for learning the encoder. However, using mutual information alone as the learning objective may lead to degenerate representations: PI favors simple structures in the latent space, and a powerful encoder can achieve such structures at the cost of discarding information about the inputs. To address this, we adopt a masked reconstruction task that forces the latent representations to remain informative of the observations. Similar to Wang et al. (2020), we mask input dimensions as well as time segments of the inputs, and use a decoder to reconstruct the masked portion from the learnt representations; we also propose variants of this approach that achieve superior performance. Our method, Deep Autoencoding Predictive Components (DAPC), is designed to capture the above intuitions; from a variational inference perspective, DAPC also has a natural probabilistic interpretation. We demonstrate DAPC on both synthetic and real datasets of various sizes from several domains.
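A minimal sketch of the masking idea follows. The single-block masks, span lengths, and L2 loss on masked entries are illustrative assumptions; the scheme of Wang et al. (2020) and our variants differ in details:

```python
import numpy as np

def mask_inputs(x, time_span=4, dim_span=8, rng=None):
    """Zero out one random time segment and one random block of input
    dimensions, as in masked-reconstruction pretraining.

    x: (T, n) input sequence. Returns (masked_x, mask), where mask is 1
    on the zeroed entries that the decoder must reconstruct.
    """
    rng = rng or np.random.default_rng(0)
    T, n = x.shape
    mask = np.zeros_like(x)
    t0 = rng.integers(0, T - time_span + 1)
    d0 = rng.integers(0, n - dim_span + 1)
    mask[t0:t0 + time_span, :] = 1.0   # masked time segment
    mask[:, d0:d0 + dim_span] = 1.0    # masked input dimensions
    return x * (1.0 - mask), mask

def masked_reconstruction_loss(x, x_hat, mask):
    """Mean squared reconstruction error on the masked entries only."""
    return np.sum(mask * (x - x_hat) ** 2) / np.sum(mask)

# toy usage on a random "feature sequence"
x = np.random.randn(100, 40)
masked_x, mask = mask_inputs(x)
```

Restricting the loss to masked entries keeps the task nontrivial: the decoder cannot simply copy visible inputs and must rely on context carried by the latent representations.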
Experimental results show that DAPC can recover meaningful low-dimensional dynamics from high-dimensional, noisy, and nonlinear systems, extract predictive features for forecasting tasks, and obtain state-of-the-art accuracies for automatic speech recognition (ASR) at a much lower cost, by pretraining encoders that are later finetuned with a limited amount of labeled data.

2. METHOD

The main intuition behind DAPC is to maximize the predictive information of the latent representation sequence. To ensure that learning is tractable and non-degenerate, we make a Gaussian assumption and regularize learning with masked reconstruction. In the following subsections, we elaborate on how we estimate the predictive information and how we design the masked reconstruction task. We also provide a probabilistic interpretation of DAPC that connects it to deep generative models.

2.1. PREDICTIVE INFORMATION

Given a sequence of observations X = {x_1, x_2, ...} where x_i ∈ R^n, we extract the corresponding latent sequence Z = {z_1, z_2, ...} where z_i ∈ R^d with an encoder function e(X), e.g., recurrent neural networks or transformers (Vaswani et al., 2017).[1] Let T > 0 be a fixed window size, and for any time step t denote

    Z_t^{past} = {z_{t-T+1}, ..., z_t},    Z_t^{future} = {z_{t+1}, ..., z_{t+T}}.    (1)

The predictive information (PI) is defined as the mutual information (MI) between Z_t^{past} and Z_t^{future}:

    MI(Z_t^{past}, Z_t^{future}) = H(Z_t^{past}) + H(Z_t^{future}) - H(Z_t^{past}, Z_t^{future}),

where H(·) denotes entropy.

Figure 1: The overall framework of DAPC.

[1] In this work, the latent sequence has the same length as the input sequence, but this is not a restriction; one can use a different time resolution for the latent sequence, with sub-sampling strategies such as that of Chan et al. (2016).
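Under the Gaussian assumption, each entropy term has the closed form H = (1/2) log det(2πeΣ), so the PI estimate reduces to a difference of log-determinants of empirical covariance blocks (the 2πe factors cancel). The following NumPy sketch illustrates this; the window layout, covariance estimation from overlapping windows, and the synthetic test sequences are our assumptions, not the paper's exact implementation:

```python
import numpy as np

def gaussian_pi_estimate(Z, T):
    """Gaussian estimate of predictive information for a latent sequence.

    Z: (L, d) latent sequence; T: past/future window size.
    Stacks every length-2T window of Z into a flat vector, estimates the
    joint covariance, and returns
        1/2 [ logdet(Sigma_past) + logdet(Sigma_future) - logdet(Sigma_joint) ].
    """
    L, d = Z.shape
    # flattened windows [z_{t-T+1..t}, z_{t+1..t+T}] for every valid center t
    windows = np.stack([Z[s:s + 2 * T].reshape(-1) for s in range(L - 2 * T + 1)])
    Sigma = np.cov(windows, rowvar=False)            # (2Td, 2Td) joint covariance
    k = T * d                                        # size of the past block
    _, ld_past = np.linalg.slogdet(Sigma[:k, :k])
    _, ld_future = np.linalg.slogdet(Sigma[k:, k:])
    _, ld_joint = np.linalg.slogdet(Sigma)
    return 0.5 * (ld_past + ld_future - ld_joint)

rng = np.random.default_rng(0)

# white noise: past and future windows are independent, so PI should be near zero
noise = rng.standard_normal((2000, 2))
pi_noise = gaussian_pi_estimate(noise, T=3)

# strongly autocorrelated AR(1) sequence: PI should be clearly positive
ar = np.zeros((2000, 2))
for t in range(1, 2000):
    ar[t] = 0.95 * ar[t - 1] + 0.1 * rng.standard_normal(2)
pi_ar = gaussian_pi_estimate(ar, T=3)
```

By Fischer's inequality the estimate is always nonnegative, and because it is a smooth function of the latent covariances, it can serve directly as a differentiable training objective for the encoder.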


* Work done during an internship at Salesforce Research. † Work done while Weiran Wang was with Salesforce Research.

