PRE-TRAINING TEXT-TO-TEXT TRANSFORMERS FOR CONCEPT-CENTRIC COMMON SENSE

Abstract

Pre-trained language models (PTLMs) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks. However, current pre-training objectives such as masked token prediction (for BERT-style PTLMs) and masked span infilling (for T5-style PTLMs) do not explicitly model the relational commonsense knowledge about everyday concepts, which is crucial to many downstream tasks that require common sense for understanding or generation. To augment PTLMs with concept-centric commonsense knowledge, in this paper we propose both generative and contrastive objectives for learning common sense from text, and use them as intermediate self-supervised learning tasks for incrementally pre-training PTLMs (before task-specific fine-tuning on downstream datasets). Furthermore, we develop a joint pre-training framework that unifies the generative and contrastive objectives so that they can mutually reinforce each other. Extensive experimental results show that our method, the Concept-Aware Language Model (CALM), can pack more commonsense knowledge into the parameters of a pre-trained text-to-text transformer without relying on external knowledge graphs, yielding better performance on both NLU and NLG tasks. We show that, although it is only incrementally pre-trained on a relatively small corpus for a few steps, CALM outperforms baseline methods by a consistent margin and is even comparable with some larger PTLMs, which suggests that CALM can serve as a general, "plug-and-play" method for improving the commonsense reasoning ability of a PTLM.

1. INTRODUCTION

Pre-trained language models (PTLMs) such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2019) have revolutionized the field of NLP, yielding impressive performance on various conventional natural language understanding (NLU) and generation (NLG) tasks. BERT and its variants such as RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019) capture syntactic and semantic knowledge mainly from the pre-training task of masked language modeling, while T5-style models such as BART (Lewis et al., 2019) instead focus on masked span infilling. Though yielding better performance on many downstream tasks, these pre-training objectives do not explicitly guide the models to reason with concept-centric commonsense knowledge from language, including the relations and compositions of everyday concepts. This leaves room for equipping current PTLMs with richer commonsense reasoning ability. For example, consider the multiple-choice question "What do you fill with ink to write notes on a piece of copy paper? (A) fountain pen (B) pencil case (C) printer (D) notepad". The current state-of-the-art question answering model, UnifiedQA (Khashabi et al., 2020), which was fine-tuned on T5-large with multiple datasets, still predicts '(C) printer' as its answer. The model may be overly sensitive to the co-occurrence between phrases in the question, such as 'ink' and 'copy paper', and the answer choice 'printer', but fails to reason with the concept-centric knowledge that a 'fountain pen' is a writing instrument that needs to be filled with 'ink'. Such mistakes in commonsense reasoning have become a bottleneck for current PTLMs (Davis & Marcus, 2015). Toward augmenting PTLMs with more knowledge, prior works mainly focus on training larger models (Brown et al., 2020), adding specific architectures to exploit external knowledge (Peters et al., 2019), or incorporating knowledge bases for pre-training (Xiong et al., 2020).
In this paper, we instead look to explicitly teach pre-trained models to write and reason with common concepts through novel pre-training strategies. We present two kinds of self-supervised pre-training tasks: concept-to-sentence generation (C2S) and concept order recovering (COR). C2S trains the pre-trained model to compose ("write") sentences given a set of concepts, and expects the generated sentences to be fluent and plausible in terms of commonsense. COR aims to teach models to detect and revise a corrupted sentence in which the concepts appear in an incorrect order. As illustrated in Figure 1, both tasks require a pre-trained model to recall relevant commonsense facts about the concepts and to understand the underlying commonsense relations between them. Both proposed objectives explicitly encourage the model to capture relational, concept-centric commonsense knowledge and to perform compositional reasoning. Specifically, we need a generative pre-training objective so that models learn to generate sentences with commonsense knowledge for both C2S and COR. In addition, to teach models to distinguish true sentences from less plausible ones, we need to teach them discriminative commonsense through contrastive self-training. To unify the generative and contrastive objectives within a joint learning framework, so that the model can learn both generative and discriminative commonsense knowledge at the same time, we propose to use the sentences generated by the model itself as distractors and train the model to distinguish them from real sentences. In this way, the model is forced to acquire new commonsense knowledge in order to distinguish distractors generated by itself, which likely exploit the knowledge the model already possesses. The model is thus trained to iteratively improve upon itself in a self-play fashion.
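The construction of the two self-supervised tasks can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt prefixes ("generate a sentence with:", "correct the order:") are assumed placeholders, and the concept set is taken as given (in practice it comes from POS tagging).

```python
import random

def make_c2s_example(sentence, concepts):
    """C2S: given the (shuffled) concept set, the model must compose a
    fluent, commonsense-plausible sentence; the original sentence serves
    as the reconstruction target."""
    shuffled = concepts[:]
    random.shuffle(shuffled)
    return {"input": "generate a sentence with: " + " ".join(shuffled),
            "target": sentence}

def make_cor_example(sentence, concepts):
    """COR: corrupt the sentence by swapping two of its concepts in place;
    the model must detect the wrong ordering and restore the original."""
    words = sentence.split()
    i, j = random.sample(range(len(concepts)), 2)  # two distinct concepts
    a, b = concepts[i], concepts[j]
    corrupted = [b if w == a else a if w == b else w for w in words]
    return {"input": "correct the order: " + " ".join(corrupted),
            "target": sentence}
```

Note that COR only permutes existing concepts, so a corrupted input is always a word-level permutation of the original sentence; the supervision signal lies entirely in the concept ordering.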
We share all parameters between the generator (trained with the generative objective) and the discriminator (trained with the contrastive objective), and train on the multiple objectives distinguished by task-specific prefixes. Compared to previous works (Peters et al., 2019; Li et al., 2019; Xiong et al., 2020) that utilize external knowledge bases such as Wikidata or ConceptNet, our approach directly improves the generative and discriminative commonsense reasoning ability of PTLMs at the same time without relying on external knowledge bases. To evaluate the effectiveness of our method, we apply it in an intermediate-task transfer learning setting (Pruksachatkun et al., 2020) based on the pre-trained T5-base model to train a Concept-Aware Language Model (CALM). While only continually pre-trained on a small dataset for a relatively small number of updates (compared to conventional pre-training), CALM consistently outperforms T5-base on four commonsense-related NLU datasets (i.e., COMMONSENSEQA, OPENBOOKQA, PIQA, and ANLI) and on COMMONGEN, a commonsense-related NLG dataset. Our results and careful ablation studies demonstrate the potential of our method to serve as a "plug-and-play" step for any pre-trained text-to-text transformer before fine-tuning on commonsense-related tasks. To the best of our knowledge, our work is the first to investigate concept-centric self-supervised objectives that improve both the generative and discriminative commonsense reasoning ability of a pre-trained language model.
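The joint framework described above can be sketched as follows. A single shared model plays both roles, with tasks distinguished only by an input prefix; the generator's own C2S output supplies the distractor for the contrastive task. All prefixes and the `generate_fn` callback here are illustrative assumptions, not the paper's exact prompts or decoding setup.

```python
def build_joint_batch(sentence, concepts, generate_fn):
    """Sketch of joint generative/contrastive example construction.
    `generate_fn` stands in for the shared model's own C2S decoding; its
    output is used as the negative example, so the discriminator must beat
    sentences the model itself produced (self-play)."""
    distractor = generate_fn(concepts)  # self-generated negative example
    return [
        # generative task: compose a sentence from the concept set
        {"task": "c2s",
         "input": "generate: " + " ".join(concepts),
         "target": sentence},
        # contrastive task: pick the real sentence over the distractor
        {"task": "contrastive",
         "input": f"which is true: (A) {sentence} (B) {distractor}",
         "target": "A"},
    ]
```

Because the parameters are shared, any commonsense knowledge the generator exploits to produce plausible distractors immediately raises the difficulty of the discriminator's task, which is what drives the iterative self-improvement.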

2. SELF-SUPERVISED OBJECTIVES FOR CONCEPT-CENTRIC LEARNING

In this section, we first describe the proposed generative and contrastive objectives used for improving the commonsense reasoning ability of pre-trained text-to-text transformers. Then, we introduce the joint learning framework, which unifies the proposed self-supervised objectives and learns a unified text-to-text transformer based on pre-trained models such as T5.

2.1. GENERATIVE OBJECTIVES

Similar to many other pre-training tasks such as masked language modeling, we aim to teach models to recover original sentences from corrupted inputs, which is often regarded as a denoising process. We propose two generative self-supervised pre-training objectives: concept-to-sentence generation (C2S) and concept order recovering (COR). Concept Extraction. Given an input x = [x_1, x_2, ..., x_n], we first conduct part-of-speech tagging with spaCy and extract the verbs, nouns, and proper nouns from the sentence to use as the concept set.
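The extraction step above can be sketched as follows. In the paper this is done with spaCy's POS tagger; here the (token, POS) pairs are supplied directly so the filtering logic stays self-contained, and the tag names follow spaCy's Universal POS conventions.

```python
# POS tags that qualify a token as a concept (spaCy's Universal POS names).
CONCEPT_TAGS = {"VERB", "NOUN", "PROPN"}

def extract_concepts(tagged_tokens):
    """Keep verbs, nouns, and proper nouns as the sentence's concept set,
    preserving their original order and dropping duplicates."""
    seen, concepts = set(), []
    for token, pos in tagged_tokens:
        if pos in CONCEPT_TAGS and token not in seen:
            seen.add(token)
            concepts.append(token)
    return concepts
```

With spaCy installed, the tagged input would come from something like `[(t.text, t.pos_) for t in nlp(sentence)]`, where `nlp` is a loaded pipeline such as `en_core_web_sm`.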


* This work was done while Wangchunshu was visiting USC.

