NEURAL TOPIC MODELING WITH EMBEDDING CLUSTERING REGULARIZATION

Abstract

Topic models have been prevalent for decades, with applications such as automatic text analysis, owing to their effectiveness and interpretability. However, existing topic models commonly suffer from the notorious topic collapsing issue: the discovered topics semantically collapse towards each other, leading to highly repetitive topics, insufficient topic discovery, and damaged model interpretability. In this paper, we propose a new neural topic model, the Embedding Clustering Regularization Topic Model (ECRTM), to address the topic collapsing issue. Beyond the reconstruction error of existing work, we propose a novel Embedding Clustering Regularization (ECR), which forces each topic embedding to be the center of a separately aggregated word embedding cluster in the semantic space. Instead of letting topic embeddings collapse together, this pushes them away from each other so that they cover different semantics of word embeddings. Thus our ECR enables each produced topic to contain distinct word semantics, which alleviates topic collapsing. By jointly optimizing the ECR objective and the neural topic modeling objective, ECRTM generates diverse and coherent topics together with high-quality topic distributions of documents. Extensive experiments on benchmark datasets demonstrate that ECRTM effectively addresses the topic collapsing issue and consistently surpasses state-of-the-art baselines in terms of topic quality, topic distributions of documents, and downstream classification tasks.
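The abstract describes ECR as forcing each topic embedding to be the center of a separately aggregated word embedding cluster. The paper's exact objective is formalized later; as a loose illustration of the clustering intuition only (a soft k-means-style penalty, not the authors' formulation, with a hypothetical `temperature` parameter), one might write:

```python
import numpy as np

def clustering_regularizer(word_emb, topic_emb, temperature=1.0):
    """Illustrative soft-clustering penalty (NOT the paper's exact ECR).

    word_emb:  (V, D) word embeddings
    topic_emb: (K, D) topic embeddings
    Each word is softly assigned to nearby topic embeddings; the loss is
    the expected squared distance under that assignment, which pulls every
    topic embedding toward the center of its own word cluster.
    """
    # pairwise squared distances between words and topics, shape (V, K)
    dist = ((word_emb[:, None, :] - topic_emb[None, :, :]) ** 2).sum(-1)
    # softmax over topics gives the soft assignment of each word
    logits = -dist / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)
    return float((assign * dist).sum() / word_emb.shape[0])

# toy check: two well-separated word clusters in 2-D
rng = np.random.default_rng(0)
words = np.concatenate([rng.normal(0.0, 0.1, (50, 2)),
                        rng.normal(5.0, 0.1, (50, 2))])
topics_separated = np.array([[0.0, 0.0], [5.0, 5.0]])  # one topic per cluster
topics_collapsed = np.array([[2.5, 2.5], [2.5, 2.5]])  # both topics coincide
```

Under this toy penalty, topics sitting at distinct cluster centers incur a much smaller loss than collapsed topics, matching the behavior ECR is designed to encourage.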

1. INTRODUCTION

Topic models have achieved great success in document analysis by discovering latent semantics. They have various downstream applications (Boyd-Graber et al., 2017), like content recommendation (McAuley & Leskovec, 2013), summarization (Ma et al., 2012), and information retrieval (Wang et al., 2007). Current topic models can be roughly classified into two lines: conventional topic models based on probabilistic graphical models (Blei et al., 2003) or matrix factorization (Kim et al., 2015; Shi et al., 2018), and neural topic models (Miao et al., 2016; 2017; Srivastava & Sutton, 2017).



Figure 1: t-SNE visualization (van der Maaten & Hinton, 2008) of word embeddings (•) and topic embeddings under 50 topics. The plots show that while topic embeddings mostly collapse together in previous state-of-the-art models (ETM (Dieng et al., 2020), NSTM (Zhao et al., 2021b), and WeTe (Wang et al., 2022)), our ECRTM avoids this collapsing by forcing each topic embedding to be the center of a separately aggregated word embedding cluster.
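A joint 2-D projection of word and topic embeddings like the one in Figure 1 can be produced with scikit-learn's t-SNE. A minimal sketch, using random stand-ins for trained embedding matrices (the sizes and hyperparameters here are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.manifold import TSNE

# stand-ins for trained embeddings: V words, K topics, dimension D
rng = np.random.default_rng(0)
word_emb = rng.normal(size=(200, 16))   # hypothetical word embeddings (V=200, D=16)
topic_emb = rng.normal(size=(50, 16))   # hypothetical topic embeddings (K=50)

# project words and topics *jointly* so they share one 2-D space;
# projecting them separately would make their positions incomparable
emb_2d = TSNE(n_components=2, perplexity=30.0,
              init="pca", random_state=0).fit_transform(
    np.vstack([word_emb, topic_emb]))
words_2d, topics_2d = emb_2d[:200], emb_2d[200:]
```

The two slices can then be scattered with different markers (e.g. dots for words, stars for topics) to check visually whether topic embeddings spread across word clusters or collapse onto one region.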

