AUTOMATED CONCATENATION OF EMBEDDINGS FOR STRUCTURED PREDICTION

Abstract

Pretrained contextualized embeddings are powerful word representations for structured prediction tasks. Recent work has found that better word representations can be obtained by concatenating different types of embeddings. However, the selection of embeddings that forms the best concatenated representation usually varies with the task and the collection of candidate embeddings, and the ever-increasing number of embedding types makes the problem more difficult. In this paper, we propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks, based on a formulation inspired by recent progress in neural architecture search. Specifically, a controller alternately samples a concatenation of embeddings, according to its current belief about the effectiveness of the individual embedding types under consideration for a task, and updates that belief based on a reward. We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model, which takes the sampled concatenation as input and is trained on a task dataset. Empirical results on 6 tasks and 21 datasets show that our approach outperforms strong baselines and achieves state-of-the-art performance with fine-tuned embeddings in the vast majority of evaluations.

1. INTRODUCTION

Recent developments in pretrained contextualized embeddings have significantly improved the performance of structured prediction tasks in natural language processing. Approaches based on contextualized embeddings, such as ELMo (Peters et al., 2018), Flair (Akbik et al., 2018), BERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020), have been consistently raising the state-of-the-art for various structured prediction tasks. Concurrently, research has also shown that word representations based on the concatenation of multiple pretrained contextualized embeddings and traditional non-contextualized embeddings (such as word2vec (Mikolov et al., 2013) and character embeddings (Santos & Zadrozny, 2014)) can further improve performance (Peters et al., 2018; Akbik et al., 2018; Straková et al., 2019; He & Choi, 2020). Given the ever-increasing number of embedding learning methods that operate on different granularities (e.g., word, subword, or character level) and with different model architectures, choosing the best embeddings to concatenate for a specific task becomes non-trivial, and exploring all possible concatenations can be prohibitively expensive in computing resources.

Neural architecture search (NAS) is an active area of research in deep learning that aims to automatically find better model architectures, and it has achieved state-of-the-art performance on various tasks in computer vision, such as image classification (Real et al., 2019), semantic segmentation (Liu et al., 2019a), and object detection (Ghiasi et al., 2019). In natural language processing, NAS has been successfully applied to find better RNN structures (Zoph & Le, 2017; Pham et al., 2018b) and, more recently, better transformer structures (So et al., 2019; Zhu et al., 2020).

In this paper, we propose the Automated Concatenation of Embeddings (ACE) approach to automate the process of finding better concatenations of embeddings for structured prediction tasks, formulated as an NAS problem. In this approach, an iterative search process is guided by a controller based on its belief, which models the effectiveness of the individual embedding candidates under consideration for a specific task. At each step, the controller samples a concatenation of embeddings according to the belief model and feeds the concatenated word representations as input to a task model, which in turn is trained on the task dataset and returns its accuracy as a reward signal to update the belief model (see the sketch at the end of this section). We use the policy gradient algorithm (Williams, 1992) in reinforcement learning (Sutton & Barto, 1992) to solve this optimization problem. To improve the efficiency of the search process, we also design a special reward function that accumulates all the rewards based on the transformation between the current concatenation and all previously sampled concatenations.

Our approach differs from previous work on NAS in the following aspects:

1. Unlike most previous work, we focus on searching for better word representations rather than better model architectures.
2. We design a unique search space for the embedding concatenation search. Instead of using an RNN as in the previous work of Zoph & Le (2017), we design a more straightforward controller to generate the embedding concatenation. We also design a novel reward function in the optimization objective to better evaluate the effectiveness of each concatenated embedding.
3. Our approach is efficient and practical. ACE can find a strong word representation for structured prediction tasks on a single GPU in only a few GPU-hours, whereas many NAS approaches require dozens or even thousands of GPU-hours to find a good neural architecture.
4. The task model produced by ACE achieves high accuracy without retraining, whereas in previous work on NAS the resulting neural network usually has to be retrained from scratch.

Empirical results show that ACE outperforms strong baselines. Furthermore, when ACE is applied to concatenate pretrained contextualized embeddings that have already been fine-tuned on specific tasks, we achieve state-of-the-art or competitive accuracy on 6 structured prediction tasks, including Named Entity Recognition (Sundheim, 1995), Part-Of-Speech tagging (DeRose, 1988), chunking (Tjong Kim Sang & Buchholz, 2000), aspect extraction (Hu & Liu, 2004), syntactic dependency parsing (Tesnière, 1959), and semantic dependency parsing (Oepen et al., 2014), over 21 datasets. We also analyze the advantages of ACE and of our reward function design over the baselines, and show the advantage of ACE over ensemble models.
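To make the search procedure concrete, the following is a minimal, self-contained Python sketch of this loop, not the authors' implementation: the controller is a vector of per-candidate logits (its belief), a binary mask sampled from it selects which embeddings to concatenate, and REINFORCE updates the logits. The candidate list and train_task_model are illustrative placeholders, and the accumulated reward below is only a loose rendering of the idea of crediting candidates that changed between the current and previous samples.

    import numpy as np

    rng = np.random.default_rng(0)
    CANDIDATES = ["word2vec", "char", "ELMo", "Flair", "BERT", "XLM-R"]

    def train_task_model(mask):
        # Placeholder: train the task model on the embeddings selected by
        # `mask` and return development-set accuracy. Faked here so the
        # sketch runs end to end.
        usefulness = np.array([0.2, 0.1, 0.6, 0.5, 0.8, 0.7])
        return float(mask @ usefulness) / (mask.sum() + 1e-9) + rng.normal(0.0, 0.01)

    def sample_mask(logits):
        probs = 1.0 / (1.0 + np.exp(-logits))  # Bernoulli probability per candidate
        mask = (rng.random(len(probs)) < probs).astype(float)
        if mask.sum() == 0:                    # always select at least one embedding
            mask[rng.integers(len(mask))] = 1.0
        return mask, probs

    logits = np.zeros(len(CANDIDATES))         # the controller's belief
    history = []                               # (mask, accuracy) of past samples
    for step in range(30):
        mask, probs = sample_mask(logits)
        acc = train_task_model(mask)
        # Accumulated reward: compare the current sample with every previous
        # one and credit the candidates that changed, in proportion to the
        # accuracy difference and discounted by how many candidates changed.
        reward = np.zeros_like(logits)
        for past_mask, past_acc in history:
            changed = np.abs(mask - past_mask)
            if changed.sum() > 0:
                reward += (acc - past_acc) * changed / changed.sum()
        history.append((mask, acc))
        # REINFORCE: the gradient of log p(mask) w.r.t. the logits is mask - probs.
        logits += reward * (mask - probs)

    best_mask, best_acc = max(history, key=lambda h: h[1])
    print("best concatenation:",
          [c for c, m in zip(CANDIDATES, best_mask) if m],
          "dev acc: %.3f" % best_acc)

Note that, as in the approach described above, the expensive step is train_task_model; the controller update itself is negligible, which is why the cost of the search is dominated by how many concatenations have to be evaluated.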

2.2. NEURAL ARCHITECTURE SEARCH

Recent progress in deep learning has shown that network architecture design is crucial to model performance. However, designing a strong neural architecture for each task requires enormous effort, a high level of expertise, and experience in the task domain. Therefore, the automatic design of neural architectures is desirable. A crucial part of NAS is the design of the search space, which defines the set of candidate architectures over which the search is performed.
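As a toy illustration (the option names below are hypothetical and not drawn from any particular NAS system), a search space can be as simple as a set of discrete choices per architectural decision; even this small space already contains 81 candidates, which is why exhaustive evaluation quickly becomes infeasible and guided search is needed:

    from itertools import product

    # Each field lists the options the search algorithm may pick from; one
    # architecture is one assignment of a value to every field.
    SEARCH_SPACE = {
        "encoder":     ["lstm", "gru", "transformer"],
        "hidden_size": [128, 256, 512],
        "num_layers":  [1, 2, 3],
        "dropout":     [0.0, 0.3, 0.5],
    }

    # Enumerate the full space: 3 * 3 * 3 * 3 = 81 candidate architectures.
    all_architectures = [dict(zip(SEARCH_SPACE, values))
                         for values in product(*SEARCH_SPACE.values())]
    print(len(all_architectures), "candidate architectures")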




