SPARSE ENCODING FOR MORE-INTERPRETABLE FEATURE-SELECTING REPRESENTATIONS IN PROBABILISTIC MATRIX FACTORIZATION

Abstract

Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered interpretable generative models: they consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in decoding representations does not necessarily imply sparsity in encoding representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possessed encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Because it lacks encoder sparsity, HPF does not possess the column-clustering property of classical NMF: the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity using a generalized additive model (GAM), thereby allowing each representation coordinate to be related to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application: representing inpatient comorbidities in Medicare patients.
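The decoder/encoder distinction above can be illustrated with a minimal linear sketch (plain NumPy, not the paper's HPF model; the matrix shapes and factor supports are illustrative assumptions). Even when the decoding matrix B is sparse, the least-squares encoder mapping data back to representations is generically dense, so every representation coordinate mixes every original feature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse decoder B: each of 3 factors loads on only a few of 8 features,
# with small overlaps between neighboring factors (at features 3 and 5).
B = np.zeros((3, 8))
B[0, 0:4] = rng.uniform(1, 2, 4)
B[1, 3:6] = rng.uniform(1, 2, 3)
B[2, 5:8] = rng.uniform(1, 2, 3)

# Least-squares encoder: theta = x @ E with E = pinv(B). Because the factor
# supports overlap, (B B^T)^{-1} couples all factors and E is generically dense.
E = np.linalg.pinv(B)  # shape (8, 3)

print("decoder zero fraction:", np.mean(B == 0))   # 14/24, about 0.583
print("encoder zero fraction:", np.mean(np.isclose(E, 0)))
```

In this sketch, inspecting the sparse loading matrix B alone does not reveal which original features drive each representation coordinate; that gap is exactly what enforcing sparsity on the encoder is meant to close.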

1. INTRODUCTION

For many inverse problems featuring high-dimensional count matrices, such as those found in healthcare, model interpretability is paramount. Building interpretable, high-performing solutions is technically challenging and requires flexible frameworks. A general approach is to structure solutions into pipelines; if each step is interpretable, the overall larger model can be made interpretable as well. A common first step in modeling high-dimensional data sets is dimensionality reduction, which finds tractable data representations (also called factors or embeddings) that are then fed into downstream analyses. Our goal is to develop a dimension-reduction scheme for count matrices such that the reduced representation has an innate interpretation in terms of the original data features.

Interpretability versus explainability. We seek latent data representations that are not only post-hoc explainable (Laugel et al., 2019; Caruana et al., 2020) but also intrinsically interpretable (Rudin, 2019). Our definition of intrinsic interpretability requires clarity in the relationship between predictors and predictions, and meaningfulness of interactions and latent variables. Post-hoc explanations are based on subjective examination of a solution through the lens of subject-matter expertise. For black-box models that lack intrinsic interpretability, these explanations are

