LEARNING A LATENT SIMPLEX IN INPUT-SPARSITY TIME

ABSTRACT

We consider the problem of learning a latent k-vertex simplex K ⊂ R^d, given access to A ∈ R^{d×n}, which can be viewed as a data matrix whose n columns are obtained by randomly perturbing latent points in the simplex K (potentially beyond K). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models, can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA, 2020) give an algorithm for learning such a latent simplex in time roughly O(k · nnz(A)), where nnz(A) is the number of non-zeros in A. We show that the dependence on k in the running time is unnecessary given a natural assumption about the mass of the top k singular values of A, which holds in many of these applications. Further, we show this assumption is necessary, as otherwise an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low-rank approximation. At a high level, Bhattacharyya and Kannan give an adaptive algorithm that makes k matrix-vector product queries to A, where each query is a function of all preceding queries. Since each matrix-vector product requires nnz(A) time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to A in input-sparsity time and show that the column space thus obtained has small sin Θ (angular) distance to the top-k right singular space of A. Our algorithm then selects k points in the low-rank subspace with the largest inner product (in absolute value) with k carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the Θ(k · nnz(A)) running time.
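
The following is a minimal Python sketch of this two-step pipeline, not the paper's algorithm: a CountSketch-style sparse embedding stands in for the input-sparsity-time low-rank approximation subroutine, plain Gaussian directions stand in for the carefully chosen random vectors, and `sketch_size` is an illustrative parameter (typically poly(k)).

```python
import numpy as np
from scipy.sparse import csc_matrix


def sketch_basis(A, k, sketch_size, rng):
    """Orthonormal basis approximating the top-k column space of A,
    via a CountSketch-style sparse embedding of the columns.
    Applying S touches each nonzero of A once, so the embedding
    runs in O(nnz(A)) time (plus an SVD of a d x sketch_size matrix)."""
    d, n = A.shape
    assert sketch_size >= k
    # CountSketch S: each column of A is hashed to one of sketch_size
    # buckets with a random sign.
    rows = rng.integers(0, sketch_size, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S = csc_matrix((signs, (rows, np.arange(n))), shape=(sketch_size, n))
    AS = (S @ A.T).T                      # d x sketch_size
    U, _, _ = np.linalg.svd(AS, full_matrices=False)
    return U[:, :k]                       # rank-k subspace basis


def pick_candidate_vertices(A, Q, k, rng):
    """Select k data points whose coordinates in span(Q) have the
    largest |inner product| with k random directions. Plain Gaussian
    directions stand in for the paper's carefully chosen vectors."""
    coords = Q.T @ A                      # subspace coordinates of all points
    chosen = []
    for _ in range(k):
        u = rng.normal(size=coords.shape[0])
        scores = np.abs(u @ coords)
        scores[chosen] = -np.inf          # don't re-select a point
        chosen.append(int(np.argmax(scores)))
    return A[:, chosen]                   # d x k candidate vertex matrix
```

Note that this sketch favors clarity over the precise running-time accounting: forming Q.T @ A directly costs O(k · nnz(A)), whereas the input-sparsity-time guarantee described above requires obtaining the needed projections within an O(nnz(A)) + poly(k) budget.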

1. INTRODUCTION

We study the problem of learning the k vertices M_{*,1}, . . ., M_{*,k} of a latent k-dimensional simplex K in R^d, using n data points generated from K and then possibly perturbed by a stochastic, deterministic, or adversarial source before being given to the algorithm. In particular, the observed input points may be heavily perturbed, so that the initial points are no longer discernible, or they may lie outside the simplex K. Recent work of Bhattacharyya & Kannan (2020) unifies several stochastic models for unsupervised learning problems, including k-means clustering, topic models (Blei, 2012), and mixed membership stochastic block models (Airoldi et al., 2014), under the problem of learning a latent simplex. In general, identifying the latent simplex can be computationally intractable. However, many applications do not require this full generality. For example, in a mixture model such as a Gaussian mixture, the data is assumed to be generated from a convex combination of density functions. Thus, given certain distributional properties in these models, it may be possible to learn the latent simplex efficiently and approximately. Indeed, Bhattacharyya & Kannan (2020) showed that, given certain reasonable geometric assumptions that are typically satisfied by real-world instances of Latent Dirichlet Allocation and Stochastic Block Models, the latent simplex can be learned in time roughly O(k · nnz(A)). A concrete illustration of this data model follows below.
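
The sketch below generates synthetic inputs of this form: latent points are convex combinations of the k vertices, then perturbed. It is a hedged illustration only; the Dirichlet mixing weights and Gaussian noise are illustrative choices (natural for, e.g., topic models), whereas the setting above allows stochastic, deterministic, or adversarial perturbations.

```python
import numpy as np


def generate_latent_simplex_data(d, n, k, noise=0.1, seed=0):
    """Sample n points from a latent k-vertex simplex in R^d and
    perturb them. Dirichlet weights and Gaussian noise are
    illustrative stand-ins for the general perturbation model."""
    rng = np.random.default_rng(seed)
    M = rng.normal(size=(d, k))              # latent vertices M_{*,1..k}
    W = rng.dirichlet(np.ones(k), size=n).T  # k x n, columns on the simplex
    P = M @ W                                # latent points inside K
    A = P + noise * rng.normal(size=(d, n))  # observed points, possibly outside K
    return A, M
```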

