VISUALLY-AUGMENTED LANGUAGE MODELING

Abstract

Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VALM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VALM builds on a novel latent text-image alignment method via an image retrieval module that fetches corresponding images given a textual context. With the visually-augmented context, VALM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in images. We evaluate VALM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VALM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning about object commonsense, including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.

1. INTRODUCTION

Large-scale pre-trained language models (PLMs) have achieved great success in advancing the state of the art on various natural language understanding and generation tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Yang et al., 2019; Brown et al., 2020; Wang et al., 2022). PLM self-supervised training largely benefits from harvesting local context information in the pre-training corpus. To further strengthen such contextual self-supervision, recent seminal works, e.g., GPT-3 (Brown et al., 2020) and Megatron-LM (Narayanan et al., 2021), focus on increasing the model size and the scale of the pre-training corpus. With billions of parameters, these massive PLMs exhibit remarkable ability as zero-shot or few-shot learners. More remarkably, PLMs can achieve human-parity performance on various downstream tasks, even without any task-specific supervision. Another major research line of PLMs is to enhance the language model with auxiliary knowledge (Wei et al., 2021), including entity knowledge (Yu et al., 2020), relational knowledge (Zhang et al., 2019; Qin et al., 2021), text chunks (Lewis et al., 2020; Wu et al., 2022; Borgeaud et al., 2021), etc. The incorporation of various knowledge resources into PLMs mitigates the drawbacks of local contextual attention, bringing additional relevant global context that benefits both language understanding and generation tasks.

Since current unimodal PLMs lack visual knowledge grounding, they inevitably suffer from the hallucination problem, which refers to inconsistent or false statements generated by PLMs with respect to world knowledge (Logan et al., 2019). For instance, a PLM may predict the color of the sky as red only due to the statistical contextual correlation between the tokens "color" and "red" in the pre-training corpus, neglecting the commonsense facts.

In this paper, we propose a novel framework that enables language model pre-training to take full advantage of both local text context and corresponding visual knowledge. Recent work on joint vision-language model (VLM) pre-training (Su et al., 2020; Tan & Bansal, 2020) relies on explicit alignments between text and images, e.g., supervised image captioning data, which limits cross-modality fusion during fine-tuning/inference over text without accompanying images. As a consequence, later in our experiments (Section 3), those prominent VLMs are found to achieve unsatisfactory performance on visual knowledge-intensive commonsense reasoning tasks. Instead, we design a flexible text-image alignment mechanism via an image retrieval module that gathers related images for each token as visual augmentation. To achieve better language-vision grounding, we propose a visual knowledge fusion layer to enable joint attention across the visually-augmented context, including both textual tokens and retrieved images. Based on this, we build up a Visually-augmented Language Model, VALM, with flexible on-the-fly visual knowledge enhancement. We evaluate the effectiveness of the proposed VALM on various commonsense reasoning and language-only benchmarks. Experimental results demonstrate that our model consistently outperforms the unimodal and multimodal baselines in terms of object commonsense reasoning. Remarkably, our method achieves substantial accuracy improvements of +14.50%, +17.80%, and +11.68% on the MEMORYCOLOR, RELATIVESIZE, and OBJECTSHAPE datasets, respectively.
Additional experiments on natural language understanding tasks also validate that the proposed visually-augmented language modeling framework helps improve the fundamental natural language understanding capability of PLMs. Our contributions are summarized as follows:

• We propose a novel visually-augmented causal language model, VALM, which enables the language model to utilize visual knowledge flexibly and effectively. Through the proposed visual knowledge fused language modeling, VALM is capable of accomplishing tasks with a high demand for cross-modality knowledge, such as visual commonsense reasoning.

• We design a framework that constructs flexible on-the-fly text-image alignments and fuses the retrieved images into the context of language modeling. We implement an image retrieval module that queries the token-level representation against a large-scale cached image database and retrieves its nearest neighbors as the augmentation. With the proposed visual knowledge fusion layer, VALM can take full advantage of both language information from the local text context and visual information from the retrieved images.

• Experimental results demonstrate that VALM effectively alleviates the hallucination problem of PLMs by introducing visual knowledge into language model pre-training. VALM achieves significant performance improvements in inferring commonsense object properties.

2. METHODS

We propose a novel multimodal pre-trained language model augmented with retrieved images, named VALM. The architecture of VALM is presented in Figure 1. VALM augments each token in the pre-training text corpus with $k$ retrieved related images, obtained by an image retrieval module. The image retrieval module deploys a pre-trained CLIP model, which unifies the textual query and image candidates into a joint embedding space. VALM constructs a cached large-scale image knowledge base using the image encoder of CLIP, and uses the contextual representation of each token as a textual query to search for its nearest neighbors in the image knowledge base. With the unified text and image embedding space provided by CLIP, the retrieved nearest-neighbor images are taken as the augmented images for each token, yielding text-image alignments. We then propose a visual knowledge fusion layer that enables the learned hidden states to attend to both the text and the augmented images.
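To make the retrieval step concrete, below is a minimal sketch of CLIP-based nearest-neighbor image retrieval, assuming a FAISS index built offline over L2-normalized CLIP image embeddings. The function name `retrieve_images`, the index construction, and the choice of `ViT-B/32` are illustrative assumptions; note that VALM queries with token-level contextual representations, whereas this sketch embeds a whole context string for simplicity.

```python
# Sketch of CLIP-based image retrieval over a cached image knowledge base.
# Assumes the packages `faiss-cpu` (or faiss-gpu) and OpenAI's `clip` are installed.
import faiss
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Cached image knowledge base: a FAISS index over unit-normalized CLIP image
# embeddings, built offline with model.encode_image(...) on the image corpus.
dim = 512  # embedding width of CLIP ViT-B/32
index = faiss.IndexFlatIP(dim)  # inner product == cosine similarity on unit vectors
# index.add(normalized_image_embeddings)  # filled offline (hypothetical array)

def retrieve_images(context_text: str, k: int = 4):
    """Embed a textual context with CLIP and return its k nearest images."""
    tokens = clip.tokenize([context_text]).to(device)
    with torch.no_grad():
        query = model.encode_text(tokens).float()
    query = query / query.norm(dim=-1, keepdim=True)  # unit-normalize the query
    scores, ids = index.search(query.cpu().numpy(), k)
    return scores[0], ids[0]  # similarity scores and image indices
```

Because CLIP places text and images in one embedding space, the same index serves arbitrary textual queries without any paired captioning supervision, which is what makes the on-the-fly alignment flexible.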

2.1. VALM: VISUALLY-AUGMENTED LANGUAGE MODELING

Given an input text sequence $\{x_i\}_{i=1}^{N}$, the embedding layer first encodes the input into the embedding space and outputs the initial hidden state $H^0$ to the successive Transformer decoder layers. The proposed VALM model then encodes $H^0$ into visual knowledge fused contextual representations at different levels, $H = \{H^l\}_{l=1}^{L}$, via $L-1$ Transformer decoder layers and one special visual knowledge fusion layer. Each Transformer decoder layer is identical to Vaswani et al. (2017), outputting contextual representations at different semantic levels given the representation from the previous layer, $H^l = \mathrm{Layer}^l(H^{l-1}),\ l \in [1, L]$. The visual knowledge fusion layer is proposed as a variant of the Transformer decoder layer that incorporates visual knowledge into contextual learning via joint attention on both the text context and the augmented images; it is injected as the second-to-last layer of VALM. The visual knowledge is stored in the corresponding augmented image representations, obtained from the CLIP image encoder of the image retrieval module.
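The sketch below illustrates the joint-attention idea in PyTorch: a decoder-style layer whose attention keys and values cover both the text hidden states and the projected image embeddings. The class name, dimensions, and the projection `img_proj` are illustrative assumptions, the causal mask over text positions is omitted for brevity, and the $k$ retrieved images are shared across the sequence here, whereas VALM augments each token with its own retrieved images.

```python
# Simplified sketch of a visual knowledge fusion layer: text queries jointly
# attend over text and retrieved-image keys/values. Not the paper's exact code.
import torch
import torch.nn as nn

class VisualKnowledgeFusionLayer(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 12, d_image: int = 512):
        super().__init__()
        self.img_proj = nn.Linear(d_image, d_model)  # map CLIP space -> LM hidden space
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, hidden, image_embeds):
        # hidden:       (batch, seq_len, d_model)  text hidden states H^{l-1}
        # image_embeds: (batch, k, d_image)        k retrieved CLIP image embeddings
        keys = torch.cat([hidden, self.img_proj(image_embeds)], dim=1)
        # Joint attention: queries come from the text, while keys/values span
        # both the text context and the augmented images (causal mask omitted).
        attn_out, _ = self.attn(hidden, keys, keys, need_weights=False)
        hidden = self.norm1(hidden + attn_out)
        hidden = self.norm2(hidden + self.ffn(hidden))
        return hidden
```

Placing this layer near the top of the stack lets the model decide, with an already rich textual representation, how much to draw from each retrieved image versus the local text context.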

