CIRCUITNET: A GENERIC NEURAL NETWORK TO REALIZE UNIVERSAL CIRCUIT MOTIF MODELING

Abstract

The successes of artificial neural networks (ANNs) are largely attributed to mimicking structures of the human brain. Recent advances in neuroscience have revealed that neurons interact with each other through various kinds of connectivity patterns to process information; the common connectivity patterns are also called circuit motifs. However, many existing ANNs can only model one or two circuit motifs in their architectures, so their performance may vary drastically across different types of machine learning tasks. In this paper, we propose a new type of neural network inspired by the architectures of neuronal circuits, namely the Circuit Neural Network (CircuitNet). In CircuitNet, a group of densely connected neurons, namely a circuit motif unit (CMU), forms the basic unit of the network and is capable of modeling universal circuit motifs by adjusting the weights within the CMU. Compared with traditional feed-forward networks, CircuitNet can model more types of neuron connections, such as feed-back and lateral motifs. Inspired by the locally dense and globally sparse structure of the human brain, several iterations of signal transmission among different CMUs are achieved by sparse connections through the input ports and output ports of the CMUs. Experiments demonstrate that CircuitNet can outperform popular neural network architectures in function approximation, reinforcement learning, image classification, and time series forecasting tasks.
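The locally-dense, globally-sparse design described above can be illustrated with a minimal sketch. The class and attribute names below (`CMU`, `in_ports`, `out_ports`, `step`) are our own illustrative choices, not the paper's implementation: each hypothetical CMU holds a dense intra-unit weight matrix that can express feed-forward, feed-back, and lateral connections, while different CMUs exchange signals only through a few designated input/output port neurons over several iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

class CMU:
    """Hypothetical circuit motif unit: a small group of densely connected
    neurons. The dense matrix W contains all intra-unit connections, so
    feed-forward, feed-back, and lateral motifs are all special cases of
    particular weight patterns within W."""

    def __init__(self, n_neurons, n_in, n_out):
        # Dense intra-unit connectivity (locally dense).
        self.W = rng.standard_normal((n_neurons, n_neurons)) * 0.1
        self.state = np.zeros(n_neurons)
        # Only a few neurons act as ports to other CMUs (globally sparse).
        self.in_ports = rng.choice(n_neurons, n_in, replace=False)
        self.out_ports = rng.choice(n_neurons, n_out, replace=False)

    def step(self, external):
        # One iteration of signal transmission inside the unit.
        pre = self.W @ self.state
        pre[self.in_ports] += external  # inject signals at input ports only
        self.state = np.tanh(pre)
        return self.state[self.out_ports]  # emit signals from output ports only

# Two CMUs linked sparsely through their ports: the output ports of one
# feed the input ports of the other across several transmission rounds.
a, b = CMU(8, 2, 2), CMU(8, 2, 2)
x = np.ones(2)
for _ in range(3):
    x = b.step(a.step(x))
print(x.shape)  # (2,)
```

Because inter-unit traffic passes only through the port neurons, the number of cross-CMU weights stays small even as each unit remains fully connected internally, mirroring the locally dense and globally sparse structure the abstract attributes to the brain.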

1. INTRODUCTION

In the past decades, artificial neural networks (ANNs) (McCulloch & Pitts, 1943) have been widely used as function estimators to solve regression and classification problems, massively pushing deep learning forward in vastly different fields such as computer vision (He et al., 2016), natural language processing (Vaswani et al., 2017), and deep reinforcement learning (Mnih et al., 2015). The successes of ANNs are largely attributed to mimicking simplified structures of the human brain. For example, the original multi-layer perceptrons (MLPs) are collections of neurons organized as layers (Minsky & Papert, 1969; Rosenblatt, 1958), and signals are controlled and transmitted between layers via linear transformations and non-linear activation functions, just like the synapses in a biological brain. More recently, network architectures have become rather complex, but their basic units, such as convolutional layers and recurrent layers, are still different abstractions of the human nervous system (Fukushima & Miyake, 1982; Hubel & Wiesel, 1968; Lindsay, 2021; Kietzmann et al., 2019; van Bergen & Kriegeskorte, 2020). In recent years, the understanding of both deep learning and neuroscience has advanced greatly, and it is time to rethink how artificial neural network designs can be further inspired by neuroscience. Following previous ANNs, we simplify the signal from a neuron as a real number, and focus on how to model the signal transmission and connectivity patterns among neurons. Recent findings in neuroscience (Luo, 2021; Peters, 1991; Standring, 2021; Swanson, 2012) emphasized the role of specific patterns of synaptic connectivity in neuron communication across different brain regions. These patterns are analogous to the connections of neurons in ANNs, and in the rest of this section we introduce how these findings can inspire ANN design.
One line of advances in neuroscience revealed that neurons interact with each other through various kinds of connectivity patterns, namely circuit motifs (Luo, 2021). There are four most common types of circuit motifs, including feed-forward excitation and inhibition, feed-back inhibition,

