EFFICIENT NEURAL REPRESENTATION IN THE COGNITIVE NEUROSCIENCE DOMAIN: MANIFOLD CAPACITY IN ONE-VS-REST RECOGNITION LIMIT

Abstract

Studying the structure of neural representations as manifolds has become a popular approach to understanding information encoding in neural populations. Of particular interest is the connection between object recognition capability and the separability of neural representations for different objects, often called "object manifolds." In learning theory, separability has been studied through the notion of storage capacity, the number of patterns encoded per feature dimension. Chung et al. (2018) extended the notion of capacity from discrete points to manifolds: manifold capacity refers to the maximum number of object manifolds that can be linearly separated with high probability under a random assignment of labels. Despite the use of manifold capacity in analyzing artificial neural networks (ANNs), its application to neuroscience has been limited. Because neural experiments record only a limited number of "features," i.e., neurons, manifold capacity cannot be verified empirically as it can in ANNs. Moreover, random label assignment, while standard in learning theory, bears little relevance to how object recognition tasks are defined in cognitive science. To overcome these limits, we present Sparse Replica Manifold analysis for studying object recognition. Sparse manifold capacity measures how many object manifolds can be separated under one-versus-rest classification, a form of task widely used both in cognitive neuroscience experiments and in machine learning applications. We demonstrate that sparse manifold capacity allows analysis of a wider class of neural data, in particular, empirically measured neural data with a limited number of neurons. Furthermore, sparse manifold capacity requires less computation to evaluate the underlying geometries and enables a connection to a measure of dimension, the participation ratio. We analyze the relationship between capacity and dimension, and demonstrate that both the manifold intrinsic dimension and the ambient space dimension play a role in capacity.
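As a point of reference, the participation ratio mentioned above is defined from the eigenvalues λ_i of the neural covariance matrix as PR = (Σ_i λ_i)² / Σ_i λ_i². Below is a minimal sketch of its computation in NumPy; the data layout and example values are illustrative, not drawn from the experiments in this paper.

import numpy as np

def participation_ratio(responses):
    # responses: (n_samples, n_neurons) array of neural activity.
    # PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues),
    # taken over the eigenvalues of the neural covariance matrix.
    cov = np.cov(responses, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Illustration: activity confined to a 5-dimensional subspace of a
# 100-neuron ambient space yields a PR close to 5.
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 5))       # 5 latent dimensions
mixing = rng.standard_normal((5, 100))       # random embedding into 100 neurons
print(participation_ratio(latent @ mixing))  # approximately 4-5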

1. INTRODUCTION

Studying neural populations as manifolds and analyzing their geometry has become a popular method for uncovering structural properties of neural encoding and for understanding the mechanisms behind the ventral stream, the motor cortex, and cognition (Kriegeskorte & Kievit, 2013; Sengupta et al., 2018; Gallego et al., 2017; Sohn et al., 2019; Ebitz & Hayden, 2021; Kriegeskorte & Wei, 2021; Chung & Abbott, 2021). In the ventral stream, the ability of humans and animals to recognize an object invariantly, despite changes in pose, position, and orientation, has motivated the definition of an object manifold as the underlying representation of neural responses to a distinct object class. A long-standing hypothesis in visual neuroscience posits that the visual cortex untangles these object manifolds for invariant object recognition (DiCarlo & Cox, 2007), relating object recognition to the separation of manifolds by a linear hyperplane. A well-developed theory of linear separability, due to Gardner (1988), studies the separation of points by a perceptron. The theory quantifies a capacity load: the maximum number of points that can be linearly separated under a random dichotomy (a random assignment of binary labels to the points), expressed as the number of points stored per feature dimension. This theory of separation, however, does not connect to the geometry of the underlying representations.
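The flavor of this result can be reproduced in a few lines: for P points drawn in general position in N dimensions with random binary labels, a separating hyperplane exists with probability near one when the load α = P/N is below the critical capacity α_c = 2, and near zero above it. The following sketch is an illustrative simulation, not code from this paper; it uses linear-programming feasibility as a separability test.

import numpy as np
from scipy.optimize import linprog

def separable(P, N, rng):
    # Do P random points in N dimensions with random +/-1 labels admit
    # a separating hyperplane? Equivalent to feasibility of the linear
    # program: find w with y_i <x_i, w> >= 1 for all i.
    X = rng.standard_normal((P, N))
    y = rng.choice([-1.0, 1.0], size=P)
    res = linprog(c=np.zeros(N), A_ub=-(y[:, None] * X), b_ub=-np.ones(P),
                  bounds=[(None, None)] * N, method="highs")
    return res.status == 0  # status 0: a feasible solution was found

rng = np.random.default_rng(0)
N = 50
for alpha in (1.0, 2.0, 3.0):
    P = int(alpha * N)
    rate = np.mean([separable(P, N, rng) for _ in range(20)])
    print(f"alpha = P/N = {alpha}: separable in {rate:.0%} of trials")

At N = 50 this prints roughly 100%, 50%, and 0%, matching the sharp transition at α_c = 2 predicted by the theory.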

