LEARNING A LATENT SIMPLEX IN INPUT-SPARSITY TIME

Abstract

We consider the problem of learning a latent k-vertex simplex K ⊂ R^d, given access to A ∈ R^{d×n}, which can be viewed as a data matrix whose n columns are obtained by randomly perturbing latent points in the simplex K (potentially beyond K). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models, can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA, 2020) give an algorithm for learning such a latent simplex in time roughly O(k · nnz(A)), where nnz(A) is the number of non-zeros in A. We show that the dependence on k in the running time is unnecessary, given a natural assumption about the mass of the top k singular values of A, which holds in many of these applications. Further, we show this assumption is necessary: otherwise, an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low-rank approximation. At a high level, Bhattacharyya and Kannan provide an adaptive algorithm that makes k matrix-vector product queries to A, where each query is a function of all queries preceding it. Since each matrix-vector product requires nnz(A) time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to A in input-sparsity time and show that the column space thus obtained has small sin Θ (angular) distance to the right top-k singular space of A. Our algorithm then selects k points in the low-rank subspace with the largest inner product (in absolute value) with k carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the Θ(k · nnz(A)) running time.
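The sin Θ (angular) distance invoked above measures how far apart two subspaces are via their largest principal angle: the singular values of QᵀU, for orthonormal bases Q and U, are the cosines of the principal angles. The following sketch (the helper name `sin_theta_distance` is ours, not from the paper) computes this quantity for equal-dimensional subspaces:

```python
import numpy as np

def sin_theta_distance(U, V):
    """Sine of the largest principal angle between the column spans of U and V.

    U, V: matrices whose columns are orthonormal bases of the two subspaces.
    The singular values of U.T @ V are the cosines of the principal angles,
    so the largest angle corresponds to the smallest singular value.
    """
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    return np.sqrt(max(0.0, 1.0 - np.min(s) ** 2))

# Identical subspaces have distance 0; orthogonal ones have distance 1.
I2 = np.eye(4)[:, :2]   # span{e1, e2}
J2 = np.eye(4)[:, 2:]   # span{e3, e4}
```

A small sin Θ distance between the computed column space and the top-k singular space is exactly what licenses working inside the low-rank subspace in later iterations.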

1. INTRODUCTION

We study the problem of learning the k vertices M_{*,1}, ..., M_{*,k} of a latent k-vertex simplex K in R^d, using n data points generated from K and then possibly perturbed by a stochastic, deterministic, or adversarial source before being given to the algorithm. In particular, the observed input points may be so heavily perturbed that the initial points are no longer discernible, and they may lie outside the simplex K. Recent work of Bhattacharyya & Kannan (2020) unifies several stochastic models for unsupervised learning problems, including k-means clustering, topic models (Blei, 2012), and mixed membership stochastic block models (Airoldi et al., 2014), under the problem of learning a latent simplex.

In general, identifying the latent simplex can be computationally intractable. However, many applications do not require the full generality. For example, in a mixture model such as a Gaussian mixture, the data is assumed to be generated from a convex combination of density functions, so it may be possible to efficiently and approximately learn the latent simplex given certain distributional properties of these models. Indeed, Bhattacharyya & Kannan (2020) showed that under certain reasonable geometric assumptions, typically satisfied by real-world instances of Latent Dirichlet Allocation, Stochastic Block Models, and Clustering, there exists an Õ(k · nnz(A))¹ time algorithm for recovering the vertices of the underlying simplex. We show that, given an additional natural assumption, we can remove the dependence on k and obtain a true input-sparsity time algorithm. We begin by defining the model along with our new assumption.

Definition 1.1 (Latent Simplex Model). Let M_{*,1}, M_{*,2}, ..., M_{*,k} ∈ R^d denote the vertices of a k-simplex K. Let P_{*,1}, P_{*,2}, ..., P_{*,n} ∈ R^d be n points in the convex hull of K. Given σ > 0, we observe n points A_{*,1}, A_{*,2}, ..., A_{*,n} ∈ R^d such that ‖A − P‖₂ ≤ σ√n.
Further, we make the following assumptions on the data generation process:

1. Well-Separateness. For all ℓ ∈ [k], the vertex M_{*,ℓ} has non-trivial mass in the orthogonal complement of the span of the remaining vertices, i.e., |Proj(M_{*,ℓ}, Null(M \ M_{*,ℓ}))| ≥ α · max_{ℓ' ∈ [k]} ‖M_{*,ℓ'}‖₂, where Proj(x, U) denotes the orthogonal projection of x onto the subspace U.

2. Proximate Latent Points. For all ℓ ∈ [k], there exists a set S_ℓ ⊆ [n] such that |S_ℓ| ≥ δn and, for all j ∈ S_ℓ, ‖M_{*,ℓ} − P_{*,j}‖₂ ≤ 4σ/δ.

3. Spectrally Bounded Perturbation. The spectrum of A − P is bounded, i.e., for a sufficiently large constant c, σ/√δ ≤ α² · min_{ℓ ∈ [k]} ‖M_{*,ℓ}‖₂ / (ck⁹).

4. Significant Singular Values. Let A = Σ_{i ∈ [d]} σ_i u_i v_iᵀ be the singular value decomposition of A, and let 0 < φ ≤ nnz(A)/(n · poly(k)). We assume that for all i ∈ [k], σ_i > φ · σ_{k+1}, and that ‖A − A_k‖²_F ≤ φ · ‖A − A_k‖²₂.

These assumptions are natural across many interesting applications; see Section 2 for more details. Bhattacharyya & Kannan (2020) introduced the Well-Separateness (1), Proximate Latent Points (2), and Spectrally Bounded Perturbation (3) assumptions. We include an additional Significant Singular Values assumption (4), which is crucial for obtaining a faster running time; we discuss this in more detail below. Our main algorithmic result can then be stated as follows:

Theorem 1.2 (Learning a Latent Simplex in Input-Sparsity Time). Given k ≥ 2 and A ∈ R^{d×n} from the Latent Simplex Model (Definition 1.1), there exists an algorithm that runs in Õ(nnz(A) + (n + d) · poly(k/φ)) time and outputs subsets A_{R_1}, ..., A_{R_k} such that, upon permuting the columns of M, with probability at least 1 − 1/Ω(√k), for all ℓ ∈ [k] we have ‖A_{R_ℓ} − M_{*,ℓ}‖₂ ≤ 300k⁴σ/(α√δ).

Our result implies faster algorithms for various stochastic models that can be formulated as special cases of the Latent Simplex Model, including Latent Dirichlet Allocation for topic modeling, Mixed Membership Stochastic Block Models, and Adversarial Clustering. We summarize the connections to these applications below. We then describe our algorithm and provide an outline of our analysis; we defer all formal proofs to the supplementary material.
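The algorithmic template behind Theorem 1.2 (sketch A on one side to get an approximate top-k column space in nnz(A) time, then repeatedly pick the column that is extreme in a random direction inside that subspace) can be caricatured in a few lines of NumPy. This is an illustrative sketch under our own parameter choices, not the paper's exact procedure: the sketch width `m = 10 * k`, the crude SVD truncation, and the helper names are all assumptions for illustration, and the paper's vertex-selection step is more careful (e.g., it averages over many nearby columns).

```python
import numpy as np

def countsketch(A, m, rng):
    """Right-multiply A by an n x m CountSketch: each column of A is hashed
    into one of m buckets with a random sign. Costs O(nnz(A)) for sparse A
    (written here as a dense loop for clarity)."""
    d, n = A.shape
    h = rng.integers(0, m, size=n)        # hash bucket per column
    s = rng.choice([-1.0, 1.0], size=n)   # random sign per column
    S = np.zeros((d, m))
    for j in range(n):
        S[:, h[j]] += s[j] * A[:, j]
    return S

def approx_top_k_subspace(A, k, rng, m=None):
    """Orthonormal basis whose span approximates A's top-k left singular space.
    The SVD acts on a d x m matrix with m = O(k), so the total cost is
    nnz(A) + d * poly(k), matching the shape of the bound in Theorem 1.2."""
    m = m or 10 * k
    U, _, _ = np.linalg.svd(countsketch(A, m, rng), full_matrices=False)
    return U[:, :k]

def pick_vertices(A, k, rng):
    """Pick k columns of A that are extreme in random directions, working only
    with the k x n coordinate matrix B rather than re-reading A each round."""
    Q = approx_top_k_subspace(A, k, rng)
    B = Q.T @ A                                # k-dim coordinates of every column
    picked = []
    for _ in range(k):
        r = rng.standard_normal(k)
        picked.append(int(np.argmax(np.abs(r @ B))))
    return picked
```

The key point the sketch illustrates is structural: after the one-time nnz(A) sketching pass, every subsequent random-direction query touches only the k × n matrix B, which is how the Θ(k · nnz(A)) cost of k adaptive matrix-vector products is avoided.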

2. CONNECTION TO STOCHASTIC MODELS

We first formalize the connection between the Latent Simplex Model and numerous stochastic models. In particular, we show that topic models like Latent Dirichlet Allocation (LDA) and Stochastic Block Models can be viewed as special cases of the Latent Simplex Model; we defer discussion on Adversarial Clustering to the supplementary material.

2.1. TOPIC MODELS

Probabilistic topic models attempt to identify abstract topics in a collection of documents by discovering latent semantic structure (Blei & Jordan, 2003; Blei & Lafferty, 2006; Hoffman et al., 2010; Zhu et al., 2012; Blei, 2012). Each document in the corpus is represented by a bag-of-words vectorization with the corresponding word frequencies. The standard statistical assumption is that the generative process for the corpus is a joint probability distribution over both the observed and hidden random variables. The hidden random variables can be interpreted as representative documents for each topic. The goal is then to design algorithms that can learn the underlying topics. The topics can
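Under the latent-simplex view, each topic is a distribution over the vocabulary (a vertex M_{*,ℓ}), each document's expected word-frequency vector is a convex combination of the topic vectors (a latent point P_{*,j}), and the observed bag-of-words vector is a stochastic perturbation of it. A minimal generative sketch of this correspondence follows; the vocabulary size, document length, and Dirichlet parameters are illustrative choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n, doc_len = 100, 3, 500, 50   # vocab size, topics, docs, words per doc

# Topic-word distributions: the k vertices M_{*,1}, ..., M_{*,k} of the simplex.
M = rng.dirichlet(np.ones(d) * 0.1, size=k).T    # d x k, each column sums to 1

# Per-document topic weights: each latent point P_{*,j} = M @ W[:, j] lies in K.
W = rng.dirichlet(np.ones(k) * 0.5, size=n).T    # k x n, each column sums to 1
P = M @ W                                        # latent points, d x n

# Observed documents: empirical word frequencies from doc_len multinomial draws,
# a stochastic perturbation of the latent points P.
A = np.column_stack([
    rng.multinomial(doc_len, P[:, j]) / doc_len for j in range(n)
])
```

Recovering the columns of M from A alone is then an instance of learning the latent simplex, with the multinomial sampling noise playing the role of the perturbation A − P.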



¹ Throughout the paper, we use the notation Õ(·) to suppress poly-logarithmic factors.




