SPARSE ENCODING FOR MORE-INTERPRETABLE FEATURE-SELECTING REPRESENTATIONS IN PROBABILISTIC MATRIX FACTORIZATION

Abstract

Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. These models use sparse transformations to decode their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF: the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.

1. INTRODUCTION

For many inverse problems featuring high-dimensional count matrices, such as those found in healthcare, model interpretability is paramount. Building interpretable, high-performing solutions is technically challenging and requires flexible frameworks. A general approach to these problems is to structure solutions into pipelines; if each step is interpretable, the overall larger model can be interpretable as well. A common first step in modeling high-dimensional data sets is to use dimensionality reduction to find tractable data representations (also called factors or embeddings) that are then fed into downstream analyses. Our goal is to develop a dimensionality reduction scheme for count matrices such that the reduced representation has an innate interpretation in terms of the original data features.

Interpretability versus explainability. We seek latent data representations that are not only post-hoc explainable (Laugel et al., 2019; Caruana et al., 2020), but also intrinsically interpretable (Rudin, 2019). Our definition of intrinsic interpretability requires clarity in the relationship between predictors and predictions, and meaningfulness of interactions and latent variables. Post-hoc explanations are based on subjective examination of a solution through the lens of subject-matter expertise. For black-box models that lack intrinsic interpretability, these explanations are produced using inexact, simpler approximating models (typically local linear regressions), and such explanations can be misleading (Laugel et al., 2019).

Disentangled autoencoders. Disentangled variational autoencoders (Higgins et al., 2016; Tomczak & Welling, 2017; Deng et al., 2017) are deep learning models that are inherently mindful of post-hoc model explainability. Like other autoencoders, these models are encoder-decoder structured (see Definitions 1 and 2), where the encoder generates dimensionally reduced representations.

Definition 1. The encoder transformation maps input data features to latent representations.

Definition 2. The decoder transformation maps latent representations to predictions.

Disentangled autoencoders use a combination of penalties (Higgins et al., 2016; Hoffman et al., 2017) and structural constraints (Ainsworth et al., 2018) to encourage statistical independence in representations, facilitating explanation. These methods arose in computer vision and have demonstrated empirical utility in producing nonlinear factor models whose factors are conceptually sensible. Yet, due to the black-box nature of deep learning, explanations of how the factors are generated from the data, using local saliency maps for instance, are unreliable or imprecise (Laugel et al., 2019; Slack et al., 2020; Arun et al., 2020). In imaging applications, where the features are raw pixels, this type of interpretability is unnecessary. However, when modeling structured data problems, one often wishes to learn the effects of the individual data features.

Probabilistic matrix factorization. Probabilistic matrix factorization methods are related to autoencoders (Mnih & Salakhutdinov, 2008). These methods are often presented in the context of recommender systems, where rows of the input matrix are attributed to users and columns (features) are attributed to items. Probabilistic matrix factorization methods are bi-linear in item- and user-specific effects, de-convolving the two in a manner similar to item response theory (Chang et al., 2019). In applications with non-negative data, sparse non-negative matrix factorization methods further improve interpretability by computing predictions using only additive terms (Lee & Seung, 1999).
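To make Definitions 1 and 2 concrete, the following toy sketch uses plain linear maps (all names and dimensions are hypothetical, and these are generic linear transformations rather than any particular model's learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: U users (rows), I items (features), K latent factors.
U, I, K = 5, 8, 3

Y = rng.poisson(2.0, size=(U, I)).astype(float)  # observed count matrix

# Hypothetical encoder/decoder weight matrices illustrating Defs. 1 and 2.
W_enc = rng.random((I, K))   # encoder: data features -> latent representation
W_dec = rng.random((K, I))   # decoder: latent representation -> predictions

theta = Y @ W_enc            # Def. 1: encode each row into a K-dim representation
Y_hat = theta @ W_dec        # Def. 2: decode representations into predictions

print(theta.shape, Y_hat.shape)  # (5, 3) (5, 8)
```

The point of the distinction is that the two maps are separate objects: a model may specify one explicitly (here, `W_dec`) while leaving the other implicit.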

For count matrices, Gopalan et al. (2014) introduced hierarchical Poisson matrix factorization (HPF).

Suppose $Y = (y_{ui})$ is a $U \times I$ matrix of non-negative integers, where each row corresponds to a user and each column corresponds to an item (feature). Adopting their notation, Gopalan et al. (2014) formulated their model as

$$
y_{ui} \mid \Theta, B \sim \mathrm{Poisson}\Big(\textstyle\sum_k \theta_{uk}\,\beta_{ki}\Big), \qquad
\theta_{uk} \mid \xi_u, a \sim \mathrm{Gamma}(a, \xi_u), \qquad
\beta_{ki} \mid \eta_i, c \sim \mathrm{Gamma}(c, \eta_i), \tag{1}
$$

where $\Theta = (\theta_{uk})$ is a $U \times K$ matrix and $B = (\beta_{ki})$ is the representation decoder matrix. Additional priors $\eta_i \sim \mathrm{Gamma}(c', c'/d')$ and $\xi_u \sim \mathrm{Gamma}(a', a'/b')$ model item- and user-specific variability in the dataset, and $a', b', c', d' \in \mathbb{R}_+$ are hyper-parameters. The row vector $\theta_u = (\theta_{u1}, \ldots, \theta_{uK})$ constitutes a $K$-dimensional representation of the user, and the matrix $B = (\beta_{ki})$ decodes the representation into predictions of the user's counts.

In HPF, the gamma priors on the decoder matrix $B$ enforce non-negativity. Because the gamma distribution can place density near zero, these priors also allow for sparsity, in which only a few of the entries are far from zero. Sparsity, non-negativity, and the simple bi-linear structure of the likelihood combine to yield a simple interpretation of the model: in HPF, a predictive density for each matrix element is formed from a linear combination of a subset of representation elements, where the entries of $B$ determine the relative additive contribution of each element (Fig. 1a). However, the composition of each latent factor in terms of the original items is not explicitly determined but arises from Bayesian inference (Fig. 1c).

Limitations of HPF. Classical non-negative matrix factorization (NMF) is often touted for its column-clustering property (Ding et al., 2005), whereby data features are grouped into coherent factors. The standard HPF of Eq. 1 lacks this property: in HPF, while each prediction is a linear combination of a subset of factors, each factor is not necessarily a linear combination of a subset of features (depicted in Fig. 1c).
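The generative model of Eq. 1 can be sketched by ancestral sampling with NumPy. Two caveats: NumPy parameterizes the gamma distribution by shape and scale, whereas the notation above uses shape and rate (so each rate $r$ becomes scale $1/r$), and the hyper-parameter values below are illustrative rather than those used by Gopalan et al.:

```python
import numpy as np

rng = np.random.default_rng(1)

U, I, K = 100, 20, 5            # users, items, latent factors
a, c = 0.3, 0.3                 # shape hyper-parameters for theta and beta
a_p, b_p = 0.3, 1.0             # a', b' in the prior on xi_u
c_p, d_p = 0.3, 1.0             # c', d' in the prior on eta_i

# Hierarchical priors: xi_u ~ Gamma(a', a'/b'), eta_i ~ Gamma(c', c'/d').
xi = rng.gamma(a_p, 1.0 / (a_p / b_p), size=U)
eta = rng.gamma(c_p, 1.0 / (c_p / d_p), size=I)

# theta_uk ~ Gamma(a, xi_u) and beta_ki ~ Gamma(c, eta_i); the rate varies
# per row (users) and per column (items), broadcast over the factor axis.
theta = rng.gamma(a, 1.0 / xi[:, None], size=(U, K))
beta = rng.gamma(c, 1.0 / eta[None, :], size=(K, I))

# y_ui ~ Poisson(sum_k theta_uk * beta_ki): the bi-linear Poisson likelihood.
rate = theta @ beta
Y = rng.poisson(rate)
print(Y.shape)  # (100, 20)
```

With small shape parameters such as these, most sampled entries of `beta` sit near zero, illustrating how the gamma priors induce the (soft) decoder sparsity discussed above.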
The transformation matrix $B$ defines a decoder (Def. 2), as in a classical autoencoder. A corresponding encoding transformation (Def. 1) does not explicitly appear in the formulation of Eq. 1. Determining the composition of factors is not simply a matter of reading the decoding matrix row-wise. Mathemat-

