A MINIMALIST DATASET FOR SYSTEMATIC GENERALIZATION OF PERCEPTION, SYNTAX, AND SEMANTICS

Abstract

Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts at three levels: perception, syntax, and semantics. In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models' limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with chain-of-thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependency and semantics. Models exhibit a considerable gap toward human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, chain-of-thought prompting exhibits impressive results and significantly boosts the test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community on systematic generalization.

1. INTRODUCTION

Humans possess a versatile mechanism for learning concepts from data (Firestone & Scholl, 2016). Suppose, for example, that we were tasked with deciphering ancient Egyptian signs based on the examples in Table 1. Given sufficient time, we may comprehend these signs by learning how to recognize them (what each sign looks like) at the perceptual level, how to compose them into valid sequences at the syntactic level, and how to predict the results at the semantic level. Concept learning relies heavily on these three interweaving levels of meaning. This observation is also consistent with the classic view of human cognition, which postulates at least three distinct levels of organization in computational systems (Pylyshyn, 1984). Another appealing characteristic of human concept learning is its systematic compositionality (Chomsky, 1957; Montague, 1970): the algebraic capacity to understand and construct an endless number of novel combinations from a finite set of known components, i.e., "infinite use of finite means" (Chomsky, 1965). As illustrated in Table 1, this form of compositionality is essential to the human ability to make strong generalizations from simple examples to complex ones. The emerging community studying models that capture human-like systematic compositionality has introduced various benchmarks (Lake & Baroni, 2018; Hupkes et al., 2020; Keysers et al., 2020) and methods (Lake, 2019; Gordon et al., 2019; Csordás et al., 2021). Because it is difficult to collect real data with systematic compositionality, the majority of existing benchmarks are derived from artificial domains using synthetic data and tasks, covering only a subset of the concept learning spectrum; see Table 2 for a detailed comparison. When evaluating systematic compositionality, prior datasets frequently conflate syntax and semantics.
For instance, the SCAN dataset (Lake & Baroni, 2018) is a semantic parsing task from natural language commands to action sequences; when a model fails on a longer command than the ones in the training set, the root cause could stem from misinterpreting the complex syntactic relations in a long input sequence (command) or from its inability to generate a long output sequence (actions), e.g., as a result of the EOS decision problem (Newman et al., 2020). In addition, previous benchmarks frequently incorporate simple semantics (e.g., a simple mapping or repetition), resulting in an undesired bias toward syntactic generalization. To expand systematic compositionality to a full-spectrum systematic generalization w.r.t. perception, syntax, and semantics, we draw inspiration from arithmetic and present a new benchmark called HINT, Handwritten arithmetic with INTegers. The HINT task is intuitive: machines accept as input images of handwritten expressions and predict the final results of the expressions, restricted to integers. Since there is no intermediate supervision, the three-level meanings are inherently intertwined during learning, and models are expected to simultaneously acquire all three to make correct predictions. To provide a comprehensive and rigorous test of how models generalize the learned concepts, we introduce a carefully structured evaluation scheme with five subsets, focusing on generalization patterns (i.e., interpolation and extrapolation) at various levels (i.e., perception, syntax, and semantics). In addition, we build a few-shot learning split to determine whether models can rapidly learn new concepts from a few examples and generalize them to more complicated scenarios.
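Setting perception aside, the symbolic core of the task, i.e., syntax (parsing a token sequence into a tree) and semantics (evaluating the tree), can be sketched as follows. The grammar and operator set below are illustrative assumptions for single-digit infix arithmetic, not the dataset's exact specification.

```python
# Illustrative sketch of the symbolic part of the HINT task: given the token
# sequence recovered by perception, syntax builds a parse tree and semantics
# evaluates it. Infix +, -, *, / with parentheses and integer division are
# assumptions made here for illustration.

def parse(tokens):
    """Syntax: recursive-descent parser, expr -> term (('+'|'-') term)*."""
    pos = 0

    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] in "+-":
            op = tokens[pos]; pos += 1
            node = (op, node, term())
        return node

    def term():
        nonlocal pos
        node = factor()
        while pos < len(tokens) and tokens[pos] in "*/":
            op = tokens[pos]; pos += 1
            node = (op, node, factor())
        return node

    def factor():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            node = expr()
            pos += 1  # consume the matching ')'
            return node
        return int(tok)

    return expr()

def evaluate(node):
    """Semantics: realize each operator as a function over integers."""
    if isinstance(node, int):
        return node
    op, left, right = node
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a // b}  # integer division (an assumption)
    return ops[op](evaluate(left), evaluate(right))

# Usage: evaluate(parse(list("2+3*4"))) -> 14 (precedence handled by syntax)
```

In HINT, neither the token sequence nor the parse tree is given as supervision; only the input images and the final integer result are observed, which is what makes the three levels entangled during learning.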
Being minimal yet comprehensive in terms of systematic generalization, HINT is fundamentally more difficult than earlier datasets because: (i) the images are of actual handwriting with considerable visual variation; (ii) the syntactic relations between the tokens in the expressions are more complex, with long-range dependency; and (iii) the semantics of arithmetic concepts are more complex than the simple mappings in prior datasets. To facilitate future research in this direction, we conduct extensive experiments with various sequence-to-sequence (seq2seq) models, including Recurrent Neural Networks (Hochreiter & Schmidhuber, 1997; Chung et al., 2014), Transformers (Vaswani et al., 2017), and GPT-3 (Brown et al., 2020) (with chain-of-thought prompting (Wei et al., 2022)). Our experiments indicate that all models still struggle on HINT; even the state-of-the-art model, Universal Transformer (Dehghani et al., 2018) with relative positional encoding (Shaw et al., 2018; Dai et al., 2019), achieves just 54% accuracy on HINT, although it achieves virtually perfect accuracy on prior datasets such as SCAN (Csordás et al., 2021). An in-depth analysis of the results on each test subset reveals that current models still struggle with extrapolation to long-range syntactic dependency and semantics. In the GPT-3 experiments, chain-of-thought prompting significantly increases the zero-shot test accuracy from 8.6% to 27.6%. By examining the scaling trends of the test accuracy w.r.t. the size of the model and the dataset, we find that it is impractical to solve HINT by simply scaling up the size of the dataset or the model, as is typically done in NLP tasks (Kaplan et al., 2020; Henighan et al., 2020); more data and parameters do not significantly improve the extrapolation over syntax and semantics.
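The zero-shot chain-of-thought setup can be illustrated with a hypothetical prompt template: instead of asking for the answer directly, the prompt appends a reasoning trigger so the model emits intermediate computation steps before the final result. The exact wording used in the paper's GPT-3 experiments is not reproduced here; this is a generic sketch.

```python
# Hypothetical sketch of direct vs. zero-shot chain-of-thought prompting
# for an arithmetic expression; the paper's actual prompts may differ.

def direct_prompt(expression: str) -> str:
    """Ask for the final result directly."""
    return f"Q: What is {expression}?\nA:"

def cot_prompt(expression: str) -> str:
    """Append a reasoning trigger that elicits intermediate steps
    (e.g., resolving operator precedence) before the answer."""
    return f"Q: What is {expression}?\nA: Let's think step by step."
```

The only difference between the two conditions is the trailing reasoning trigger, yet the paper reports that it lifts zero-shot accuracy from 8.6% to 27.6%.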
The few-shot learning experiments demonstrate that, although the top-performing models exhibit decent capabilities for learning new concepts, they are still far from human-level generalization, which requires only a few examples of a new concept in its primitive form and readily generalizes to more complex compositions of the learned concept. In short, we introduce the HINT dataset for investigating systematic generalization across three levels: perception, syntax, and semantics. By benchmarking various seq2seq models on HINT, we uncover their primary weaknesses in systematic generalization. We hope the HINT dataset and our experimental findings will stimulate future developments in systematic generalization.




