SHALLOW LEARNING IN MATERIO

Abstract

We introduce Shallow Learning In Materio (SLIM) as a resource-efficient method to realize closed-loop higher-order perceptrons. Our SLIM method offers a rebuttal to the Minsky school's long-standing critique of the Rosenblatt school concerning the efficacy of learning representations in shallow perceptrons. As a proof of concept, we devise a physically scalable realization of the parity function. Our findings are relevant to artificial intelligence engineers, as well as neuroscientists and biologists.

1. Introduction

How do we best learn representations? We do not yet fully understand how cognition is manifested in any brain, not even in that of a worm (Rankin, 2004). It is an open question whether the shallow brain of a worm is capable of working memory, but if it is, the mechanism must depart from the mechanistic models of large-scale brains (Eliasmith et al., 2012). Nevertheless, worm-brain-inspired learning combined with "scalable" deep learning architectures has been employed in self-driving cars (Lechner et al., 2020). At present, by scalable we refer to TPU-based architectures (Jouppi et al., 2017) trained by gradient descent (Rumelhart et al., 1986). However, one could envision a super-scalable future that is less synthetic, based instead on self-organized nanomaterial systems (Bose et al., 2015; Chen et al., 2020; Mirigliano et al., 2021) that natively realize higher-order (Lawrence, 2022a) and recurrent neural networks. In this short communication, we lay another brick toward such a future by providing theoretical arguments.
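To make the shallow route concrete, consider the parity function mentioned in the abstract. The sketch below is our own minimal illustration, not the construction of this paper: a single-layer perceptron augmented with higher-order (monomial) features, trained by plain gradient descent. With a ±1 encoding, n-bit parity is exactly the degree-n monomial of the inputs, so a shallow network over the expanded features suffices.

```python
# Minimal sketch (an illustrative assumption, not this paper's method):
# a shallow higher-order perceptron learning 3-bit parity. With +/-1
# encoding, parity equals the product x1*x2*...*xn, i.e. a single
# degree-n monomial feature.
import itertools
import numpy as np

def monomial_features(X):
    """Expand +/-1 inputs into products over all nonempty input subsets."""
    n = X.shape[1]
    feats = [np.prod(X[:, list(s)], axis=1)
             for r in range(1, n + 1)
             for s in itertools.combinations(range(n), r)]
    return np.stack(feats, axis=1)

# All 3-bit patterns in +/-1 encoding; parity is their product.
X = np.array(list(itertools.product([-1, 1], repeat=3)))
y = np.prod(X, axis=1)

# One trainable layer over the higher-order features, fit by
# gradient descent on the mean squared error.
Phi = monomial_features(X)
w = np.zeros(Phi.shape[1])
for _ in range(200):
    w -= 0.1 * Phi.T @ (Phi @ w - y) / len(y)

print(np.all(np.sign(Phi @ w) == y))  # True: parity learned shallowly
```

The representational power here comes from the feature expansion rather than from depth; in the SLIM picture, a nanomaterial system would supply such higher-order terms natively, leaving only a shallow readout to be trained.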

Figure 1: A typology of cognitive material systems.

