META AUXILIARY LABELS WITH CONSTITUENT-BASED TRANSFORMER FOR ASPECT-BASED SENTIMENT ANALYSIS

Abstract

Aspect-based sentiment analysis (ABSA) is a challenging natural language processing task that can benefit from syntactic information. Previous work exploits dependency parses to improve performance on the task, but this presupposes the availability of good dependency parsers. In this paper, we build a constituent-based transformer for ABSA that induces constituents without a constituent parser. We also apply meta auxiliary learning to generate labels on edges between tokens, supervised by the objective of the ABSA task. Without input from dependency parsers, our models outperform previous work on three Twitter data sets and closely match previous work on two review data sets.

1. INTRODUCTION

Aspect-based Sentiment Analysis (ABSA) is the task of predicting the sentiment polarity expressed towards a given aspect in a sentence. Recent work (Bai et al., 2020; Huang & Carley, 2019; Sun et al., 2019; Wang et al., 2020) used syntactic information from dependency parses to achieve new state-of-the-art results on benchmark ABSA data sets. However, these approaches (i) assumed the existence of good dependency parsers, and (ii) could not further optimize the pre-defined dependency labels for downstream ABSA performance. Motivated by these limitations, we propose to induce the syntactic information needed for ABSA with supervision from the ABSA task itself, relying on inductive biases rather than external parsers.

We first design a Constituent-based Transformer (ConsTrans) that groups tokens into constituents under supervision from the ABSA objective. We argue that the formation of constituents provides a hierarchical structure of the sentence that is well suited to sentiment analysis. For example, in the sentence "Chinese dumplings in this restaurant taste very good" with the aspect term "Chinese dumplings", it is important to correctly associate the phrase "taste very good" with the aspect.

Next, as seen in Figure 1, even though the dependency graph structures of the two sentences are identical, the sentiment towards "Chelsea" is positive in the sentence on the left and negative in the one on the right. The type of syntactic relationship between tokens is therefore useful for identifying the sentiment towards the aspect term. Hence, we extend ConsTrans into a Relational Constituent-based Transformer (RelConsTrans) that learns relation embeddings between every pair of tokens in the input sentence. We find that simply adding relation embeddings fails to outperform ConsTrans. Inspired by Liu et al. (2019), we further extend RelConsTrans to supervise the relation embeddings with an auxiliary label generator (RelConsTransLG). In previous work (e.g., Bai et al., 2020; Huang & Carley, 2019), the dependency parser played the role of the auxiliary label generator, but such parsers were never trained to produce auxiliary labels that improve ABSA. RelConsTransLG instead trains the auxiliary label generator alongside the primary task, so that the generated auxiliary labels directly enhance the performance of ABSA.

We evaluate our models on five data sets: restaurant and laptop reviews (Pontiki et al., 2014), the ACL14 Twitter data (Dong et al., 2014), and Twitter15 and Twitter17 from a multi-modal ABSA data set (Yu & Jiang, 2019). Compared against previous work that relies on dependency parsers, our models outperform it on all three Twitter data sets and match it closely on the two review data sets, even without the use of a constituent or dependency parser.
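To give a concrete sense of what "relation embeddings between every pair of tokens" can look like inside a transformer, the following is a minimal, hypothetical sketch of relation-aware self-attention in the style of pairwise relative embeddings. It is an illustration of the general technique only, not the paper's exact formulation: the function name, the single-head setup, and the untied query/key/value projections are all simplifying assumptions.

```python
import numpy as np

def relational_attention(x, rel_emb):
    """Single-head self-attention where a pairwise relation embedding
    r_ij biases the attention logit between tokens i and j.

    Hypothetical sketch, not the paper's implementation.
    x:       (n, d) token representations
    rel_emb: (n, n, d) relation embedding for each ordered token pair
    """
    n, d = x.shape
    # For brevity, use identity projections; real models learn W_q, W_k, W_v.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                      # content-content term
    # Relation-aware term: query i also scores the relation embedding r_ij.
    rel_scores = np.einsum("id,ijd->ij", q, rel_emb) / np.sqrt(d)
    logits = scores + rel_scores
    # Row-wise softmax over the combined logits.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                 # (n, d) outputs
```

Because the relation term enters the attention logits, a supervision signal on the attention pattern (here, the ABSA objective) can shape the pairwise embeddings end to end.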
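The core idea behind training the auxiliary label generator "alongside the primary task" is that the generator's parameters are updated from the primary-task loss alone, so it learns to emit labels that help the main objective. The toy sketch below illustrates this with scalar and vector parameters and numerical gradients; a real system would use neural networks and automatic differentiation, and nothing here is the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_step(theta_gen, theta_main, x, y, lr=0.1, eps=1e-5):
    """One joint update of a toy label generator and primary model.

    Hypothetical illustration of meta auxiliary learning: the generator
    produces soft "edge labels", the primary model consumes them, and
    BOTH parameter sets are updated from the primary-task loss only.
    """
    def primary_loss(tg, tm):
        edge_labels = softmax(tg * x)   # generator: soft auxiliary labels
        pred = tm @ edge_labels         # primary model uses those labels
        return (pred - y) ** 2          # primary-task objective
    # Numerical gradients for illustration; real code uses autodiff.
    g_gen = (primary_loss(theta_gen + eps, theta_main)
             - primary_loss(theta_gen - eps, theta_main)) / (2 * eps)
    grad_main = np.zeros_like(theta_main)
    for i in range(len(theta_main)):
        d = np.zeros_like(theta_main)
        d[i] = eps
        grad_main[i] = (primary_loss(theta_gen, theta_main + d)
                        - primary_loss(theta_gen, theta_main - d)) / (2 * eps)
    return theta_gen - lr * g_gen, theta_main - lr * grad_main
```

Contrast this with a fixed dependency parser: there, the "generator" is frozen, so its labels can never adapt to what the downstream ABSA model actually needs.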

