SLTUNET: A SIMPLE UNIFIED MODEL FOR LANGUAGE TRANSLATION

Abstract

Despite recent successes with neural models for sign language translation (SLT), translation quality still lags behind that of spoken languages because of data scarcity and the modality gap between sign video and text. To address both problems, we investigate strategies for cross-modality representation sharing for SLT. We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation. Jointly modeling different tasks endows SLTUNET with the capability to explore cross-task relatedness that could help narrow the modality gap. In addition, this allows us to leverage knowledge from external resources, such as the abundant parallel data used for spoken-language machine translation (MT). We show in experiments that SLTUNET achieves competitive and even state-of-the-art performance on PHOENIX-2014T and CSL-Daily when augmented with MT data and equipped with a set of optimization techniques. We further use the DGS Corpus for end-to-end SLT for the first time. It covers broader domains with a significantly larger vocabulary, which is more challenging and which we consider to allow for a more realistic assessment of the current state of SLT than the former two. Still, SLTUNET obtains improved results on the DGS Corpus. Code is available at https://github.com/bzhangGo/sltunet.

1. INTRODUCTION

The rapid development of neural networks opens the path towards the ambitious goal of universal translation, which allows converting information between any languages regardless of data modality (text, audio or video) (Zhang, 2022). While translation for spoken languages (in text and speech) has gained wide attention (Aharoni et al., 2019; Inaguma et al., 2019; Jia et al., 2019), the study of sign language translation (SLT) - a task translating from sign language videos to spoken language texts - still lags behind, despite its significance in facilitating communication between Deaf communities and spoken language communities (Camgoz et al., 2018; Yin et al., 2021). SLT presents unique challenges: it demands the capability of both video understanding and sequence generation. Unlike spoken language, sign language is expressed using hand gestures, body movements and facial expressions, and the visual signal varies greatly across signers, creating a tough modality gap for its translation into text. The lack of supervised training data further hinders us from developing neural SLT models of high complexity, due to the danger of overfitting. Addressing these challenges requires us to develop inductive biases (e.g., novel model architectures and training objectives) that enable knowledge transfer and induce universal representations for SLT. In the literature, a promising direction is to design unified models that can support, and be optimized on, multiple tasks with data from different modalities.
Such modeling could offer implicit regularization and facilitate the cross-task and cross-modality transfer learning that helps narrow the modality gap and improve the model's generalization, as in unified vision-language modeling (Jaegle et al., 2022; Bao et al., 2022; Kaiser et al., 2017), unified speech-text modeling (Zheng et al., 2021; Tang et al., 2022; Bapna et al., 2022), multilingual modeling (Devlin et al., 2019; Zhang et al., 2020; Xue et al., 2021), and general data modeling (Liang et al., 2022; Baevski et al., 2022). In SLT, different annotations can be paired into different tasks, including the sign-to-gloss (Sign2Gloss), sign-to-text (Sign2Text), gloss-to-text (Gloss2Text) and text-to-gloss (Text2Gloss) tasks. These
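To illustrate how one annotated corpus can feed all four tasks, the sketch below pairs the sign, gloss and text fields of a single sample into (source, target) training examples, distinguished by a task tag prepended to the source. This is a minimal, hypothetical scheme (the tag names and fields are illustrative assumptions, not necessarily SLTUNET's exact formatting), but it shows the basic mechanism by which a single encoder-decoder can be trained jointly on all SLT-related tasks.

```python
# Illustrative sketch: deriving all four SLT-related tasks from one
# annotated sample by prepending a task tag to the source sequence.
# Field names and tag format are assumptions for illustration only.

TASKS = {
    "sign2gloss": ("sign", "gloss"),
    "sign2text":  ("sign", "text"),
    "gloss2text": ("gloss", "text"),
    "text2gloss": ("text", "gloss"),
}

def make_example(sample: dict, task: str) -> tuple:
    """Build a (source, target) training pair for the given task.

    The task tag tells the shared model which output space to produce,
    so one encoder-decoder can serve all tasks jointly.
    """
    src_field, tgt_field = TASKS[task]
    source = f"<{task}> {sample[src_field]}"
    return source, sample[tgt_field]

# One sample from a (hypothetical) German Sign Language corpus.
sample = {
    "sign":  "[sign video features]",   # placeholder for visual embeddings
    "gloss": "MORGEN REGEN",
    "text":  "Morgen wird es regnen.",
}

# Expand the single sample into one training pair per task.
pairs = [make_example(sample, t) for t in TASKS]
```

In a joint training setup, batches from the different tasks (plus external MT data, tagged the same way) would be mixed, so the shared parameters see supervision from every modality pairing.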

