SIMPLICIAL HOPFIELD NETWORKS

Abstract

Hopfield networks are artificial neural networks which store memory patterns in the states of their neurons by choosing recurrent connection weights and update rules such that the energy landscape of the network forms attractors around the memories. How many stable, sufficiently attracting memory patterns can we store in such a network using N neurons? The answer depends on the choice of weights and update rule. Inspired by setwise connectivity in biology, we extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex. Simplicial complexes are higher-dimensional analogues of graphs which naturally represent collections of pairwise and setwise relationships. We show that our simplicial Hopfield networks increase memory storage capacity. Surprisingly, even when connections are limited to a small random subset of equivalent size to an all-pairwise network, our networks still outperform their pairwise counterparts, including in scenarios with non-trivial simplicial topology. We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.

1. INTRODUCTION

Hopfield networks (Hopfield, 1982)[1] store memory patterns in the weights of connections between neurons. In the case of pairwise connections, these weights translate to the synaptic strengths between pairs of neurons in biological neural networks. In such a Hopfield network with N neurons, there will be N^2 of these pairwise connections, forming a complete graph. Each edge is weighted by a procedure which considers P memory patterns and which, based on these patterns, seeks to minimise a defined energy function such that the network's dynamics are attracted to, and ideally settle exactly in, the memory pattern nearest to the current states of the neurons. The network therefore acts as a content-addressable memory: given a partial or noise-corrupted memory, the network can update its states through recurrent dynamics to retrieve the full memory. Since its introduction, the Hopfield network has been extended and studied widely by neuroscientists (Griniasty et al., 1993; Schneidman et al., 2006; Sridhar et al., 2021; Burns et al., 2022), physicists (Amit et al., 1985; Agliari et al., 2013; Leonetti et al., 2021), and computer scientists (Widrich et al., 2020; Millidge et al., 2022). Of particular interest to the machine learning community is the recent development of modern Hopfield networks (Krotov & Hopfield, 2016) and their close correspondence (Ramsauer et al., 2021) to the attention mechanism of Transformers (Vaswani et al., 2017). An early (Amit et al., 1985; McEliece et al., 1987) and ongoing (Hillar & Tran, 2018) theme in the study of Hopfield networks has been their memory storage capacity, i.e., determining the number of memory patterns which can be reliably stored and later recalled via the dynamics.
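The storage and retrieval mechanism described above can be sketched in a few lines. The following is a minimal illustration of a classical binary Hopfield network with Hebbian weights and asynchronous sign updates; all parameter values (network size, pattern count, corruption level, seed) are illustrative assumptions, not settings from this paper:

```python
import numpy as np

# Minimal classical Hopfield network: binary +/-1 states, Hebbian weights.
# All parameters below are illustrative assumptions.
rng = np.random.default_rng(0)
N, P = 100, 5                           # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix W = (1/N) * sum_p x_p x_p^T, with zero diagonal.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    """Hopfield energy E(s) = -1/2 s^T W s."""
    return -0.5 * s @ W @ s

def recall(s, sweeps=10):
    """Asynchronous updates: set each neuron to the sign of its local field.
    Each such flip never increases the energy, so the state descends
    towards an attractor."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Content-addressable retrieval: corrupt a stored pattern, then recover it.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1                       # flip 10% of the bits
recovered = recall(probe)
print(energy(recovered) <= energy(probe),
      np.mean(recovered == patterns[0]))
```

At this low load (P = 0.05N), the dynamics almost always settle exactly in the stored pattern; the interesting regime, discussed next, is what happens as P grows relative to N.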
As discussed in Appendix A.1, this theoretical and computational exercise serves two purposes: (i) improving the memory capacity of such models for theoretical purposes and computational applications; and (ii) gaining an abstract understanding of neurobiological mechanisms and their implications for biological memory systems. Traditional Hopfield networks with binary neuron states, in the limit of N → ∞ and P → ∞, maintain associative memories for up to approximately 0.14N patterns (Amit et al., 1985).
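A rough numerical probe of this capacity threshold is easy to set up. The sketch below measures the fraction of stored patterns that remain fixed points of one synchronous update, at loads below and above 0.14; it is a crude empirical check, not a reproduction of the mean-field analysis, and the parameter choices (N = 200, the two loads, the seed) are illustrative assumptions:

```python
import numpy as np

# Crude capacity probe: what fraction of stored patterns are fixed points
# of the dynamics, at loads below and above the ~0.14N threshold?
# Parameters are illustrative assumptions.
rng = np.random.default_rng(1)
N = 200

def stable_fraction(P):
    patterns = rng.choice([-1, 1], size=(P, N)).astype(float)
    W = (patterns.T @ patterns) / N     # Hebbian weights
    np.fill_diagonal(W, 0.0)
    # A pattern is 'stable' if one synchronous update leaves every bit
    # unchanged, i.e. each local field agrees in sign with the stored bit.
    updated = np.where(patterns @ W >= 0, 1, -1)
    return np.mean(np.all(updated == patterns, axis=1))

low = stable_fraction(int(0.05 * N))    # load 0.05: well below capacity
high = stable_fraction(int(0.30 * N))   # load 0.30: well above capacity
print(low, high)
```

Below capacity, essentially all patterns survive as fixed points; well above it, crosstalk between patterns destabilises almost all of them, which is the breakdown the 0.14N result characterises.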

[1] After the proposal of Marr (1971), many similar models of associative memory were proposed, e.g., those of Nakano (1972), Amari (1972), Little (1974), and Stanley (1976), all before Hopfield (1982). Nevertheless, much of the research literature refers to, and seems more proximally inspired by, Hopfield (1982). Many of these models can also be considered instances of the Lenz-Ising model (Brush, 1967) with infinite-range interactions.

