JOINTLY LEARNING VISUAL AND AUDITORY SPEECH REPRESENTATIONS FROM RAW DATA

Abstract

We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features. Code and models are available at https://github.com/ahaliassos/raven.

1. INTRODUCTION

The sound of someone articulating words coincides with the sight of movements in and around their mouth. Both a recording of a speech waveform and a corresponding silent video of mouth motion provide rich, but not identical, information on which words were uttered. Despite the difficulty of interpreting lip movements compared with an audio waveform, the task of visual speech recognition (VSR; also known as lipreading) has important applications, ranging from recognising utterances in a noisy environment (Ma et al., 2021b; Afouras et al., 2018a; Martinez et al., 2020; Makino et al., 2019) and aiding people suffering from aphonia (an inability to speak), to transcribing archival silent films and detecting DeepFake videos (Haliassos et al., 2021). Auditory (also known as automatic) speech recognition (ASR) and VSR benefit greatly from the combination of high-capacity neural networks and large datasets. Rapid advances in modern hardware are enabling the use of ever-growing, data-hungry networks, but the effort required for transcription hinders the scaling of labelled data along with the models. One way to leverage unlabelled videos for VSR is to use an external ASR model for pseudo-labelling (Afouras et al., 2020; Ma et al., 2022). However, this requires a large amount of labelled data to train a strong ASR model in the first place, and supervised VSR training with long sequences often poses optimisation problems, requiring costly curriculum learning strategies (Chung et al., 2017; Ma et al., 2022) or pre-training the feature extractor with isolated words (Afouras et al., 2018a; Ma et al., 2021b). A solution is to first learn, in a self-supervised way, general representations from large corpora of unlabelled data, and then fine-tune them on smaller labelled datasets (Mohamed et al., 2022).
The fine-grained correspondence between the (synchronised) visual and auditory modalities provides a natural source of self-supervision, and can produce highly semantic representations invariant to noise not shared between the modalities. However, approaches leveraging this correspondence either (1) only work for word-level samples rather than continuous speech (Chung & Zisserman, 2016; Chung et al., 2019; 2020); (2) use handcrafted features (e.g., spectrograms or MFCCs) as their inputs or targets (Ma et al., 2021a; Shi et al., 2022), which contain inductive biases that may influence the learned representations; (3) use multi-stage pre-training procedures (Ma et al., 2021a; Shi et al., 2022; Pan et al., 2022); and/or (4) use separate pre-training strategies for VSR and ASR (Shi et al., 2022), complicating the process of obtaining representations suitable for both tasks.

In this paper, we present a single-stage self-supervised approach that jointly learns visual and auditory speech representations from raw video and audio only. We dub our approach RAVEn (Raw Audio-Visual Speech Encoders). It involves a pair of student-teacher networks for each modality, whereby the students encode temporally-masked inputs and, through the use of lightweight Transformer-based predictors, regress outputs of momentum-based teachers (Grill et al., 2020; Caron et al., 2021) that are presented with unmasked inputs. Further, given that audio contains more information relevant to speech than video, we propose a learning strategy that accounts for the expected difference in the quality of targets between the modalities: while the audio student predicts outputs from both video and audio teachers (cross- and within-modal learning), the video student predicts only auditory targets (cross-modal learning). As we show, this setup leads to better downstream performance for both VSR and ASR than other strategies.
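The asymmetric objective described above can be illustrated with a minimal numpy sketch. The negative-cosine regression loss, the EMA momentum coefficient, and the predictor heads (modelled here as plain callables `pred_v2a`, `pred_a2a`, `pred_a2v`) are illustrative assumptions, not the exact implementation:

```python
import numpy as np

def ema_update(teacher, student, m=0.999):
    # Slowly-evolving momentum (EMA) teacher: teacher <- m*teacher + (1-m)*student.
    return {k: m * teacher[k] + (1 - m) * student[k] for k in teacher}

def regression_loss(pred, target):
    # Negative cosine similarity between predictions and teacher targets,
    # averaged over time steps (an assumed loss form for this sketch).
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    target = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return -np.mean(np.sum(pred * target, axis=-1))

def raven_objective(video_feats, audio_feats, video_targets, audio_targets,
                    pred_v2a, pred_a2a, pred_a2v):
    # Video student: cross-modal only (regresses the audio teacher's targets).
    loss_video = regression_loss(pred_v2a(video_feats), audio_targets)
    # Audio student: both within-modal and cross-modal (regresses both teachers).
    loss_audio = (regression_loss(pred_a2a(audio_feats), audio_targets)
                  + regression_loss(pred_a2v(audio_feats), video_targets))
    return loss_video + loss_audio
```

Here `*_feats` are student encodings of masked inputs and `*_targets` are teacher encodings of the corresponding unmasked inputs; gradients would flow only through the students, with teachers updated by `ema_update`.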
We conduct experiments with models and datasets of different sizes. We find that, when fine-tuning our pre-trained models for VSR and ASR with only 30 hours of labelled data from LRS3 (Afouras et al., 2018b), RAVEn surpasses recent self-supervised methods by a large margin in most settings. Coupling pre-training with self-training reaches 23.8% WER for VSR on LRS3, even outperforming a method trained on 3000× more transcribed hours (Serdyuk et al., 2021). At the same time, we are better than or on par with the recent AV-HuBERT method (Shi et al., 2022) on ASR, without using a task-dependent pre-training strategy or handcrafted features. Using the full 433-hour LRS3 dataset for fine-tuning pushes the results even further, achieving 23.1% / 1.4% WER for VSR / ASR, respectively. Similarly strong performance is observed on the LRS2 dataset (Appendix B).

2. RELATED WORK

Masked prediction. The pre-training task of predicting missing content given masked inputs has proven successful in various domains, such as natural language processing (Devlin et al., 2019; Radford et al., 2018; 2019; Brown et al., 2020), image recognition (He et al., 2021; Xie et al., 2022; Bao et al., 2021), and speech recognition (Baevski et al., 2020; Hsu et al., 2021; Shi et al., 2022). An important aspect of masked prediction is the nature of the targets. Some works (He et al., 2021; Xie et al., 2022) use pixels as targets; others use pre-trained tokenisers (Bao et al., 2021) or modality-specific handcrafted features (Wei et al., 2021; Hsu et al., 2021; Shi et al., 2022). Our method, in contrast, generates targets from raw video and audio using momentum encoders.

Self-distillation for unsupervised representation learning. RAVEn is partly inspired by the success of self-distillation in self-supervised learning with visual data (Grill et al., 2020; Caron et al., 2021); such works target invariance w.r.t. image-specific augmentations. In contrast, RAVEn does not rely on domain-specific augmentations but rather on a combination of cross-modal learning and masked prediction to drive representation learning for visual and auditory speech signals. data2vec (Baevski et al., 2022) combines masked prediction with a momentum encoder, but, aside from being uni-modal, it differs methodologically in multiple ways. For example, it applies ad-hoc normalisation and averaging techniques to the targets to prevent representation collapse, while our targets are simply the outputs of the encoders, which are regressed via Transformer-based heads.

Self-supervised audiovisual learning. Audiovisual correspondence has been used to learn global representations for action recognition through the use of clustering (Alwassel et al., 2020; Asano et al., 2020), contrastive learning (Arandjelovic & Zisserman, 2017; 2018; Korbar et al., 2018; Patrick et al., 2020; Morgado et al., 2021; Ma et al., 2021c), or representation matching (Recasens et al., 2021). Cross-modal learning has also found uses in biometric matching (Nagrani et al., 2018a; b), emotion recognition (Shukla et al., 2021), and DeepFake detection (Haliassos et al., 2022). We employ cross- and within-modal losses to learn temporally-varying speech representations.
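The masked-prediction recipe with momentum-generated targets discussed above can be sketched as follows. The span-based temporal masking scheme and its parameters (`span`, `p`) are illustrative assumptions:

```python
import numpy as np

def mask_time_steps(x, span=3, p=0.1, rng=None):
    # Zero out random contiguous spans of time steps in a (T, D) sequence.
    # Span length and start probability are illustrative choices.
    rng = rng or np.random.default_rng()
    x = x.copy()
    mask = np.zeros(x.shape[0], dtype=bool)
    for t in range(x.shape[0]):
        if rng.random() < p:
            mask[t:t + span] = True
    x[mask] = 0.0
    return x, mask

def make_targets(teacher_fn, x_raw):
    # The teacher encodes the *unmasked* raw input; its outputs serve
    # directly as regression targets (no handcrafted features or tokeniser).
    return teacher_fn(x_raw)
```

A student would then encode the masked sequence and regress `make_targets(teacher_fn, x_raw)` at each time step, which is what distinguishes target generation from raw signals via momentum encoders from pixel, tokeniser, or handcrafted-feature targets.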

