VISUALLY-AUGMENTED LANGUAGE MODELING

Abstract

Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training over massive text corpora, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VALM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VALM builds on a latent text-image alignment method via an image retrieval module that fetches corresponding images given a textual context. With the visually-augmented context, VALM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in retrieved images. We evaluate VALM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VALM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning about object commonsense, including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.

1. INTRODUCTION

Large-scale pre-trained language models (PLMs) have achieved great success in advancing the state of the art on various natural language understanding and generation tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Brown et al., 2020; Wang et al., 2022). Self-supervised PLM training largely benefits from harvesting local context information in the pre-training corpus. To further strengthen such contextual self-supervision, recent seminal works, e.g. GPT-3 (Brown et al., 2020) and Megatron-LM (Narayanan et al., 2021), focus on increasing the model size and the scale of the pre-training corpus. With billions of parameters, these PLMs exhibit remarkable ability as zero-shot or few-shot learners. More remarkably, PLMs can achieve human-parity performance on various downstream tasks, even without any task-specific supervision. Another major research line of PLMs is to enhance the language model with auxiliary knowledge (Wei et al., 2021), including entity knowledge (Yu et al., 2020), relational knowledge (Zhang et al., 2019; Qin et al., 2021), retrieved text chunks (Lewis et al., 2020; Wu et al., 2022; Borgeaud et al., 2021), etc. Incorporating such knowledge resources into PLMs mitigates the drawbacks of purely local contextual attention, bringing additional relevant global context that benefits both language understanding and generation tasks. Since current unimodal PLMs lack visual knowledge grounding, they inevitably suffer from the hallucination problem, i.e. inconsistent or false statements generated by PLMs with respect to world knowledge (Logan et al., 2019). For instance, a PLM may predict the color of the sky as red merely due to statistical contextual correlations between the tokens "color" and "red" in the pre-training corpus, neglecting commonsense facts.
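To make the grounding idea concrete, the abstract's "visual knowledge fusion layer" can be illustrated with a minimal, hypothetical sketch: each text position attends jointly over the text hidden states and the embeddings of retrieved images, so that visual evidence can directly influence the next-token representation. This is a simplified single-head version without learned projections, causal masking, or the paper's actual retrieval module; all function and variable names here are our own illustration, not the released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fused_attention(text_states, image_embeds):
    """Toy visual-knowledge fusion: attention whose keys/values are the
    concatenation of text hidden states and retrieved image embeddings.

    text_states : (T, d) hidden states of the textual context
    image_embeds: (K, d) embeddings of K retrieved images
    Returns fused states of shape (T, d).
    """
    d = text_states.shape[-1]
    # Visual knowledge is appended to the attendable memory.
    kv = np.concatenate([text_states, image_embeds], axis=0)   # (T+K, d)
    scores = text_states @ kv.T / np.sqrt(d)                   # (T, T+K)
    weights = softmax(scores, axis=-1)                         # rows sum to 1
    return weights @ kv

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 16))    # 5 text positions, dim 16
images = rng.normal(size=(3, 16))  # 3 retrieved image embeddings
fused = fused_attention(text, images)
```

In a real model the image embeddings would come from a frozen image encoder over images retrieved for the current context, and the fusion layer would sit inside a Transformer block; this sketch only shows the core mechanism of attending over both modalities at once.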
In this paper, we propose a novel framework that enables language model pre-training to take full advantage of both the local text context and corresponding visual knowledge. Recent work on joint vision-language model (VLM) pre-training (Su et al., 2020; Tan & Bansal, 2020) relies on explicit alignments between text and images, e.g. supervised image captioning data, which limits cross-modality fusion during fine-tuning or inference over text without accompanying images. As a consequence, as shown later in our experiments (Section 3), those prominent VLMs achieve unsatisfactory performance on visual knowledge-intensive commonsense reasoning tasks. Instead, we design a flexible text-image

