HOPFIELD NETWORKS IS ALL YOU NEED

Abstract

We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: https://github.com/ml-jku/hopfield-layers

1. INTRODUCTION

The deep learning community has been looking for alternatives to recurrent neural networks (RNNs) for storing information. For example, linear memory networks use a linear autoencoder for sequences as a memory (Carta et al., 2020). Additional memories for RNNs like holographic reduced representations (Danihelka et al., 2016), tensor product representations (Schlag & Schmidhuber, 2018; Schlag et al., 2019) and classical associative memories (extended to fast weight approaches) (Schmidhuber, 1992; Ba et al., 2016a;b; Zhang & Zhou, 2017; Schlag et al., 2021) have been suggested. Most approaches to new memories are based on attention. The neural Turing machine (NTM) is equipped with an external memory and an attention process (Graves et al., 2014). Memory networks (Weston et al., 2014) use arg max attention by first mapping a query and patterns into a space and then retrieving the pattern with the largest dot product. End-to-end memory networks (EMN) make this attention scheme differentiable by replacing the arg max with a softmax (Sukhbaatar et al., 2015a;b). EMN with dot products became very popular and implement key-value attention (Daniluk et al., 2017) for self-attention. An enhancement of EMN is the transformer (Vaswani et al., 2017a;b) and its extensions (Dehghani et al., 2018). The transformer has had a great impact on the natural language processing (NLP) community, in particular via the BERT models (Devlin et al., 2018; 2019). Contributions of this work: (i) introducing novel deep learning layers that are equipped with a memory via modern Hopfield networks, and (ii) introducing a novel energy function and a novel update rule for continuous modern Hopfield networks that are differentiable and typically retrieve patterns after one update. Differentiability is required for gradient descent parameter updates, and retrieval with one update is compatible with activating the layers of deep networks.
We suggest using modern Hopfield networks to store information or learned prototypes in different layers of neural networks. Binary Hopfield networks were introduced as associative memories that can store and retrieve patterns (Hopfield, 1982). A query pattern can retrieve the pattern to which it is most similar or an average over similar patterns. Hopfield networks may seem to be an outdated technique; however, new energy functions have improved their properties. The stability of spurious or metastable states was considerably reduced (Barra et al., 2018). The largest and most impactful successes are reported on increasing the storage capacity of Hopfield networks. In a d-dimensional space, the standard Hopfield model can store d uncorrelated patterns without errors but only Cd/log(d) random patterns with C < 1/2 for a fixed stable pattern or C < 1/4 if all patterns are stable (McEliece et al., 1987). The same bound holds for nonlinear learning rules (Mazza, 1997). Using tricks of the trade and allowing small retrieval errors, the storage capacity is about 0.138d (Crisanti et al., 1986; Hertz et al., 1991; Torres et al., 2002). If the learning rule is not related to the Hebb rule, then up to d patterns can be stored (Abu-Mostafa & St. Jacques, 1985). For Hopfield networks with non-zero diagonal matrices, the storage can be increased to Cd log(d) (Folli et al., 2017). In contrast to the storage capacity, the number of energy minima (spurious states, stable states) of Hopfield networks is exponential in d (Tanaka & Edwards, 1980; Bruck & Roychowdhury, 1990; Wainrib & Touboul, 2013). The standard binary Hopfield network has an energy function that can be expressed as the sum of interaction functions F with F(x) = x^2. Modern Hopfield networks, also called "dense associative memory" (DAM) models, use an energy function with interaction functions of the form F(x) = x^n and thereby achieve a storage capacity proportional to d^{n-1} (Krotov & Hopfield, 2016; 2018).
The energy function of modern Hopfield networks makes them robust against adversarial attacks (Krotov & Hopfield, 2018). Modern binary Hopfield networks with energy functions based on the interaction function F(x) = exp(x) even lead to a storage capacity of 2^{d/2}, where all stored binary patterns are fixed points but the radius of attraction vanishes (Demircigil et al., 2017). However, in order to integrate Hopfield networks into deep learning architectures, it is necessary to make them differentiable, that is, we require continuous Hopfield networks (Hopfield, 1984; Koiran, 1994). Therefore, we generalize the energy function of Demircigil et al. (2017), which builds on exponential interaction functions, to continuous patterns and states and obtain a new modern Hopfield network. We also propose a new update rule which ensures global convergence to stationary points of the energy (local minima or saddle points). We prove that our new modern Hopfield network typically retrieves patterns in one update step (ε-close to the fixed point) with an exponentially small error and has a storage capacity proportional to c^{(d-1)/4} (reasonable settings for c = 1.37 and c = 3.15 are given in Theorem 3). The retrieval of patterns with one update is important for integrating Hopfield networks in deep learning architectures, where layers are activated only once. Surprisingly, our new update rule is also the key-value attention as used in transformer and BERT models (see Fig. 1). Our modern Hopfield networks can be integrated as a new layer in deep learning architectures for pooling, memory, prototype learning, and attention. We test these new layers on different benchmark datasets and tasks like immune repertoire classification.

Figure 1: We generalize the energy of binary modern Hopfield networks to continuous states while keeping fast convergence and storage capacity properties. We also propose a new update rule that minimizes the energy.
The new update rule is the attention mechanism of the transformer. Formulae are modified to express softmax as row vector. "="-sign means "keeps the properties".
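The generalized energy and update rule sketched in Fig. 1 (defined formally in Section 2) can be illustrated in a few lines of numpy; the dimensions, β, and random patterns below are our own toy choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, beta = 8, 5, 1.0           # pattern dimension, number of patterns, inverse temperature
X = rng.standard_normal((d, N))  # stored (key) patterns as columns
M = np.linalg.norm(X, axis=0).max()

def lse(beta, z):
    """log-sum-exp: beta^{-1} log sum_i exp(beta z_i), max-shifted for stability."""
    z = beta * z
    m = z.max()
    return (m + np.log(np.exp(z - m).sum())) / beta

def energy(xi):
    """E = -lse(beta, X^T xi) + 1/2 xi^T xi + beta^{-1} log N + 1/2 M^2."""
    return -lse(beta, X.T @ xi) + 0.5 * xi @ xi + np.log(N) / beta + 0.5 * M**2

def update(xi):
    """New update rule: xi_new = X softmax(beta X^T xi)."""
    z = beta * X.T @ xi
    p = np.exp(z - z.max())
    return X @ (p / p.sum())

xi = rng.standard_normal(d)      # query (state) pattern
e_before, e_after = energy(xi), energy(update(xi))
```

A single update already decreases the energy, and iterating `update` converges to a fixed point (Theorem 1).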

2. MODERN HOPFIELD NETS WITH CONTINUOUS STATES

New energy function for continuous state Hopfield networks. In order to integrate modern Hopfield networks into deep learning architectures, we have to make them continuous. To allow for continuous states, we propose a new energy function that is a modification of the energy of modern Hopfield networks (Demircigil et al., 2017). We also propose a new update rule which can be proven to converge to stationary points of the energy (local minima or saddle points). We have N stored (key) patterns x_i ∈ R^d represented by the matrix X = (x_1, ..., x_N) with the largest pattern norm M = max_i ‖x_i‖. The state (query) pattern is ξ ∈ R^d. For exponential interaction functions, we need the log-sum-exp function (lse) for 0 < β:

lse(β, x) = β^{-1} log( Σ_{i=1}^N exp(β x_i) ) ,

which is convex (see appendix Eq. (461) and Lemma A22). The energy function E of modern Hopfield networks for binary patterns x_i and a binary state pattern ξ is E = -Σ_{i=1}^N F(ξ^T x_i) (Krotov & Hopfield, 2016). Here, F(x) = x^n is the interaction function, where n = 2 gives the classical Hopfield network. The storage capacity is proportional to d^{n-1} (Krotov & Hopfield, 2016). This model was generalized by Demircigil et al. (2017) to the exponential interaction function F(x) = exp(x), which gives the energy E = -exp(lse(1, X^T ξ)). This energy leads to an exponential storage capacity of N = 2^{d/2} for binary patterns. Furthermore, with a single update, the fixed point is recovered with high probability for random patterns. However, this modern Hopfield network still has binary states. We generalize this energy function to continuous-valued patterns while keeping the properties of modern Hopfield networks like the exponential storage capacity and the extremely fast convergence (see Fig. 1). For the new energy we take the logarithm of the negative energy of modern Hopfield networks and add a quadratic term of the current state.
The quadratic term ensures that the norm of the state vector ξ remains finite and the energy is bounded. Classical Hopfield networks do not require bounding the norm of their state vector, since it is binary and has fixed length. We define the novel energy function E as

E = -lse(β, X^T ξ) + 1/2 ξ^T ξ + β^{-1} log N + 1/2 M^2 .  (2)

We have 0 ≤ E ≤ 2M^2 (see appendix Lemma A1). Using p = softmax(β X^T ξ), we define a novel update rule (see Fig. 1):

ξ^{new} = f(ξ) = X p = X softmax(β X^T ξ) .  (3)

The next theorem states that the update rule Eq. (3) converges globally. The proof uses the Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003), which is equivalent to Legendre minimization algorithms (Rangarajan et al., 1996; 1999; Yuille & Rangarajan, 2003).

Theorem 1. The update rule Eq. (3) converges globally: For ξ^{t+1} = f(ξ^t), the energy E(ξ^t) → E(ξ*) for t → ∞ and a fixed point ξ*.

Proof. The update rule in Eq. (3) is the CCCP for minimizing the energy E, which is the sum of the convex function 1/2 ξ^T ξ and the concave function -lse (see details in appendix Theorem A1). Theorem 2 in Yuille & Rangarajan (2002) yields the global convergence property. Also, Theorem 2 in Sriperumbudur & Lanckriet (2009) proves the global convergence of CCCP via a rigorous analysis using Zangwill's global convergence theory of iterative algorithms.

The global convergence theorem only assures that E(ξ^t) → E(ξ*) for t → ∞ but not ξ^t → ξ*. The next theorem strengthens Zangwill's global convergence theorem (Meyer, 1976) and gives convergence results similar to those known for expectation maximization (Wu, 1983).

Theorem 2. For the iteration Eq. (3) we have E(ξ^t) → E(ξ*) = E* as t → ∞, for some stationary point ξ*.
Furthermore, ‖ξ^{t+1} - ξ^t‖ → 0 and either {ξ^t}_{t=0}^∞ converges or, in the other case, the set of limit points of {ξ^t}_{t=0}^∞ is a connected and compact subset of L(E*), where L(a) = {ξ ∈ L | E(ξ) = a} and L is the set of stationary points of the iteration Eq. (3). If L(E*) is finite, then any sequence {ξ^t}_{t=0}^∞ generated by the iteration Eq. (3) converges to some ξ* ∈ L(E*).

For a proof, see appendix Theorem A2. Therefore, all the limit points of any sequence generated by the iteration Eq. (3) are stationary points (local minima or saddle points) of the energy function E. Either the iteration converges or, otherwise, the set of limit points is a connected and compact set. The next theorem gives the results on the storage capacity of our new continuous state modern Hopfield network. We first define what we mean by storing and retrieving patterns with a modern Hopfield network with continuous states.

Definition 1 (Pattern Stored and Retrieved). We assume that around every pattern x_i a sphere S_i is given. We say x_i is stored if there is a single fixed point x_i* ∈ S_i to which all points ξ ∈ S_i converge, and S_i ∩ S_j = ∅ for i ≠ j. We say x_i is retrieved for a given ε if the iteration (update rule) Eq. (3) gives a point x̃_i that is at least ε-close to the single fixed point x_i* ∈ S_i. The retrieval error is ‖x̃_i - x_i‖.

As with classical Hopfield networks, we consider patterns on the sphere, i.e. patterns with a fixed norm. For randomly chosen patterns, the number of patterns that can be stored is exponential in the dimension d of the space of the patterns (x_i ∈ R^d).

Theorem 3. We assume a failure probability 0 < p ≤ 1 and randomly chosen patterns on the sphere with radius M := K √(d-1). We define a := (2/(d-1)) (1 + ln(2 β K^2 p (d-1))), b := (2 K^2 β)/5, and c := b / W_0(exp(a + ln(b))), where W_0 is the upper branch of the Lambert W function (Olver et al., 2010, (4.13)), and ensure c ≥ (2/√p)^{4/(d-1)}. Then with probability 1 - p, the number of random patterns that can be stored is

N ≥ √p c^{(d-1)/4} .  (4)
Therefore, the bound is proven for c ≥ 3.1546 with β = 1, K = 3, d = 20 and p = 0.001 (a + ln(b) > 1.27), and proven for c ≥ 1.3718 with β = 1, K = 1, d = 75, and p = 0.001 (a + ln(b) < -0.94). For a proof, see appendix Theorem A5.

The next theorem states that the update rule typically retrieves patterns after one update. Retrieval of a pattern x_i for fixed point x_i* and query ξ is defined via an ε by ‖f(ξ) - x_i*‖ < ε, that is, the update is ε-close to the fixed point. Retrieval with one update is crucial for integrating modern Hopfield networks into deep learning architectures, where layers are activated only once. First we need the concept of the separation of a pattern. For pattern x_i we define its separation ∆_i from the other patterns by:

∆_i := min_{j, j≠i} ( x_i^T x_i - x_i^T x_j ) = x_i^T x_i - max_{j, j≠i} x_i^T x_j .  (5)

The update rule retrieves patterns with one update for well-separated patterns, that is, patterns with large ∆_i.

Theorem 4. With query ξ, after one update the distance of the new point f(ξ) to the fixed point x_i* is exponentially small in the separation ∆_i. The precise bounds, using the Jacobian J = ∂f(ξ)/∂ξ and its value J^m in the mean value theorem, are:

‖f(ξ) - x_i*‖ ≤ ‖J^m‖_2 ‖ξ - x_i*‖ ,  (6)
‖J^m‖_2 ≤ 2 β N M^2 (N-1) exp(-β (∆_i - 2 max{‖ξ - x_i‖, ‖x_i* - x_i‖} M)) .  (7)

For a given ε and sufficiently large ∆_i, we have ‖f(ξ) - x_i*‖ < ε, that is, retrieval with one update. See proof in appendix Theorem A8. At the same time, the retrieval error decreases exponentially with the separation ∆_i.

Theorem 5 (Exponentially Small Retrieval Error). The retrieval error ‖f(ξ) - x_i‖ of pattern x_i is bounded by

‖f(ξ) - x_i‖ ≤ 2 (N-1) exp(-β (∆_i - 2 max{‖ξ - x_i‖, ‖x_i* - x_i‖} M)) M  (8)

and for ‖x_i - x_i*‖ ≤ 1/(2 β M) together with ‖x_i - ξ‖ ≤ 1/(2 β M) by

‖x_i - x_i*‖ ≤ 2 e (N-1) M exp(-β ∆_i) .  (9)

See proof in appendix Theorem A9.

Metastable states and one global fixed point. So far, we considered patterns x_i that are well separated and the iteration converges to a fixed point which is near a pattern x_i.
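The one-update retrieval for well-separated patterns is easy to observe numerically. Below is a minimal sketch (the dimensions, β, and perturbation size are our own arbitrary choices) that draws random patterns on a sphere, computes the separation of one pattern, and checks that a single update of a nearby query lands very close to that pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, beta = 64, 10, 8.0

# random patterns on the sphere of radius sqrt(d): well separated with high probability
X = rng.standard_normal((d, N))
X = np.sqrt(d) * X / np.linalg.norm(X, axis=0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def update(xi):
    # one step of the update rule: xi_new = X softmax(beta X^T xi)
    return X @ softmax(beta * X.T @ xi)

# separation of pattern 0: Delta_0 = x_0^T x_0 - max_{j != 0} x_0^T x_j
sims = X.T @ X[:, 0]
delta0 = sims[0] - np.max(np.delete(sims, 0))

# query: a perturbed copy of x_0; a single update is already very close
# to x_0 (and hence to the fixed point near it)
xi = X[:, 0] + 0.1 * rng.standard_normal(d)
err = np.linalg.norm(update(xi) - X[:, 0])
```

With a large separation ∆_0 the softmax concentrates almost all weight on x_0, so the one-step error is tiny, as Theorems 4 and 5 predict.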
If no pattern x_i is well separated from the others, then the iteration converges to a global fixed point close to the arithmetic mean of the vectors. In this case the softmax vector p is close to uniform, that is, p_i = 1/N. If some vectors are similar to each other and well separated from all other vectors, then a metastable state near the similar vectors exists. Iterations that start near the metastable state converge to this metastable state, also if initialized by one of the similar patterns. For convergence proofs to one global fixed point and to metastable states see appendix Lemma A7 and Lemma A12, respectively.

Hopfield update rule is attention of the transformer. The Hopfield network update rule is the attention mechanism used in transformer and BERT models (see Fig. 1). To see this, we assume N stored (key) patterns y_i and S state (query) patterns r_i that are mapped to the Hopfield space of dimension d_k. We set x_i = W_K^T y_i, ξ_i = W_Q^T r_i, and multiply the result of our update rule with W_V. The matrices Y = (y_1, ..., y_N)^T and R = (r_1, ..., r_S)^T combine the y_i and r_i as row vectors. We define the matrices X^T = K = Y W_K, Ξ^T = Q = R W_Q, and V = Y W_K W_V = X^T W_V, where W_K ∈ R^{d_y × d_k}, W_Q ∈ R^{d_r × d_k}, W_V ∈ R^{d_k × d_v}. If β = 1/√d_k and softmax ∈ R^N is applied as a row vector, we obtain for the update rule Eq. (3) multiplied by W_V:

Z = softmax(1/√d_k Q K^T) V = softmax(β R W_Q W_K^T Y^T) Y W_K W_V .  (10)

The left part of Eq. (10) is the transformer attention. In the transformer self-attention R = Y, and W_K W_V is replaced by just W_V. Besides the attention mechanism, Hopfield networks allow for other functionalities in deep network architectures, which we introduce via specific layers in the next section. The right part of Eq. (10) serves to explain these specific layers.
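The equivalence in Eq. (10) can be verified numerically. In the sketch below (all dimensions and weight matrices are arbitrary toy values of ours), the left part (transformer attention) and the right part (Hopfield update rule) agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(2)
N, S, d_y, d_r, d_k, d_v = 6, 3, 5, 4, 8, 7

Y = rng.standard_normal((N, d_y))   # stored (key) patterns as rows
R = rng.standard_normal((S, d_r))   # state (query) patterns as rows
W_Q = rng.standard_normal((d_r, d_k))
W_K = rng.standard_normal((d_y, d_k))
W_V = rng.standard_normal((d_k, d_v))

def row_softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

Q, K, V = R @ W_Q, Y @ W_K, Y @ W_K @ W_V
beta = 1.0 / np.sqrt(d_k)

attention = row_softmax(Q @ K.T / np.sqrt(d_k)) @ V                    # left part of Eq. (10)
hopfield = row_softmax(beta * R @ W_Q @ W_K.T @ Y.T) @ Y @ W_K @ W_V   # right part of Eq. (10)
```

Both expressions are the same computation; only the factorization of the matrices differs.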

3. NEW HOPFIELD LAYERS FOR DEEP LEARNING

Modern Hopfield networks with continuous states can be integrated into deep learning architectures, because they are continuous and differentiable with respect to their parameters. Furthermore, they typically retrieve patterns with one update, which conforms to deep learning layers that are activated only once. For these two reasons, modern Hopfield networks can serve as specialized layers in deep networks to equip them with memories. Below, we introduce three types of Hopfield layers: Hopfield, HopfieldPooling, and HopfieldLayer. Possible applications of Hopfield layers in deep network architectures comprise:

• multiple instance learning (MIL) (Dietterich et al., 1997),
• processing of and learning with point sets (Qi et al., 2017a;b; Xu et al., 2018),
• set-based and permutation invariant learning (Guttenberg et al., 2016; Ravanbakhsh et al., 2016; Zaheer et al., 2017; Korshunova et al., 2018; Ilse et al., 2018; Zhai et al., 2020),
• attention-based learning (Vaswani et al., 2017a),
• deep learning with associative memories (Graves et al., 2014; Weston et al., 2014; Ba et al., 2016a;b; Schlag & Schmidhuber, 2018; Schlag et al., 2019),
• natural language processing (Devlin et al., 2018; 2019),
• sequence analysis and time series prediction (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997; Cho et al., 2014), and
• storing and retrieving reference data, e.g. the training data, outliers, high error data points, prototypes or cluster centers, support vectors & border cases.

Hopfield network layers can substitute existing layers like pooling layers, permutation equivariant layers (Guttenberg et al., 2016; Ravanbakhsh et al., 2016), GRU (Cho et al., 2014) & LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997) layers, and attention layers (Vaswani et al., 2017a;b; Bahdanau et al., 2014). (1) Layer Hopfield for networks that associate state (query) patterns R with stored (key) patterns Y; it realizes Eq. (10). The memory of the Hopfield layer can be filled with sets from the input or previous layers, see Fig. 3.
The memory may be filled with a reference set, which is achieved by providing the reference set as additional input. Thus, the layer Hopfield allows the association of two sets. A prominent example of a layer that performs such an association is the transformer attention mechanism, which associates keys and queries, e.g. two point sets that have to be compared. This layer allows for different kinds of sequence-to-sequence learning, point set operations, and retrieval-based methods. The layer Hopfield with skip connections in a ResNet architecture is identical to the popular transformer and BERT models. In the experiments, we analyzed these Hopfield layers in transformer architectures. The layer Hopfield is also used in our experiments in which we compare machine learning methods on small datasets of the UCI benchmark collection.


(2) Layer HopfieldPooling for networks that propagate patterns via the stored (key) patterns Y. This layer performs a pooling or summarization of sets Y obtained from queries in previous layers or the input. The memory of the HopfieldPooling layer is filled with sets from the input or previous layers. The HopfieldPooling layer uses the queries to search for patterns in the memory, the stored set. If several patterns are similar to a particular search pattern (query), then the result is an average over these patterns. (3) Layer HopfieldLayer for networks that propagate state (query) patterns via stored patterns; it is related to a Hopfield network whose hidden neurons have a softmax activation function (Krotov & Hopfield, 2020). The layer HopfieldLayer can substitute a fully connected layer, see Fig. 5. A single HopfieldLayer layer also allows for approaches similar to support vector machines (SVMs), approaches similar to k-nearest neighbor, approaches similar to learning vector quantization, and pattern search. For classification, the raw data y_i = (z_i, t_i) can be the concatenation of input z_i and target t_i. In this case, the matrices W_K and W_V can be designed such that inside the softmax the input z_i is used and outside the softmax the target t_i. Thus, the softmax provides a weighted average of the target vectors based on the similarity between the query and the inputs. Also SVM models, k-nearest neighbor, and learning vector quantization can be considered as weighted averages of the targets. The encoder-decoder attention layer of the transformer is a HopfieldLayer layer, where the memory is filled with the encoder output set. In our experiments with the drug design benchmark datasets, the layer HopfieldLayer has been applied and compared to other machine learning methods.

Additional functionality of new Hopfield layers. The insights about energy, convergence, and storage properties provide all new Hopfield layers with additional functionalities: i) multiple updates to control how precisely fixed points are found, without the need for additional parameters.
ii) variable β to determine the kind of fixed points, such as the size of metastable states. The variable β controls over how many patterns the update averages. As observed in the experiments, β is relevant in combination with the learning rate to steer the learning dynamics. The parameter β governs the fixed point dynamics and can be learned, too. iii) controlling the storage capacity via the dimension of the associative space. The storage capacity can be relevant for tasks with a huge number of instances, as in the immune repertoire classification experiment. iv) pattern normalization controls, like layer normalization, the fixed point dynamics by the norm and shift of the patterns. For more details see appendix, Section A.6.
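The HopfieldLayer-style readout described above (inputs acting inside the softmax, targets read out outside of it) and the role of β can be sketched together in numpy; the function names and toy data below are our own illustration, not the reference implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hopfield_readout(Z_train, T_train, z_query, beta):
    """Weighted average of the targets T_train: similarity between the query
    and the stored inputs Z_train acts inside the softmax, while the targets
    are read out outside of it (a soft nearest-neighbor scheme)."""
    p = softmax(beta * Z_train @ z_query)
    return p @ T_train

# toy memory: two clusters of inputs with one-hot targets
Z_train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
T_train = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
query = np.array([0.95, 0.05])

pred_sharp = hopfield_readout(Z_train, T_train, query, beta=50.0)  # ~ nearest neighbor
pred_flat = hopfield_readout(Z_train, T_train, query, beta=1e-3)   # ~ global average
```

With large β the readout approaches a one-nearest-neighbor decision; with small β it averages over all stored targets, mirroring item ii) above.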

4. EXPERIMENTS

We show that our proposed Hopfield layers can be applied successfully to a wide range of tasks. The tasks are from natural language processing, contain multiple instance learning problems, a collection of small classification tasks, and drug design problems.

Analysis of transformer and BERT models. We analyzed the attention heads of transformer and BERT models (Toneva & Wehbe, 2019a;b; Tay et al., 2020). Heads in the first layers preferably perform global averaging, while heads in higher layers perform partial averaging via metastable states; operating class (III) (medium metastable states) is predominant in the last layers.

Multiple Instance Learning Datasets. For multiple instance learning (MIL) (Dietterich et al., 1997), we integrate our new Hopfield network via the layer HopfieldPooling into deep learning architectures. Recently, deep learning methods have been applied to MIL problems (Ilse et al., 2018), but the performance on many datasets still leaves room for improvement. Thus, MIL datasets still pose an interesting challenge, in which Hopfield layers equipped with memory are a promising approach.

• Immune Repertoire Classification. The first MIL task is immune repertoire classification, where a deep learning architecture with HopfieldPooling (DeepRC) was used (Widrich et al., 2020a;b). Immune repertoire classification (Emerson et al., 2017) typically requires extracting a few patterns from a large set of sequences, the repertoire, that are indicative of the respective immune status. The datasets contain ≈ 300,000 instances per immune repertoire, which represents one of the largest multiple instance learning experiments ever conducted (Carbonneau et al., 2018). Most MIL methods fail due to the large number of instances. This experiment comprises real-world and simulated datasets. Simulated datasets are generated by implanting sequence motifs (Akbar et al., 2019; Weber et al., 2020) with low frequency into simulated or experimentally-observed immune receptor sequences.
The performance of DeepRC was compared with other machine learning methods: (i) known motif, (ii) SVM using k-mers and the MinMax or Jaccard kernel, (iii) K-Nearest Neighbor (KNN) with k-mers, (iv) logistic regression with k-mers, (v) burden test with k-mers, and (vi) logistic multiple instance learning (lMIL). On the real-world dataset, DeepRC achieved an AUC of 0.832 ± 0.022, followed by the SVM with MinMax kernel (AUC 0.825 ± 0.022) and the burden test with an AUC of 0.699 ± 0.041. Across datasets, DeepRC outperformed all competing methods with respect to average AUC (Widrich et al., 2020a;b).

• MIL benchmark datasets. We apply Hopfield layers to further MIL datasets (Ilse et al., 2018; Küçükaşcı & Baydogan, 2018; Cheplygina et al., 2016): Elephant, Fox and Tiger for image annotation (Andrews et al., 2003). These datasets consist of color images from the Corel dataset that have been preprocessed and segmented. An image consists of a set of segments (or blobs), each characterized by color, texture and shape descriptors. The datasets have 100 positive and 100 negative example images. The latter have been randomly drawn from a pool of photos of other animals. Elephant comprises 1,391 instances and 230 features, Fox 1,320 instances and 230 features, and Tiger has 1,220 instances and 230 features.

Table 1: Results on the MIL benchmark datasets. Superscripts a and b mark results taken from (Küçükaşcı & Baydogan, 2018) or (Carbonneau et al., 2016), depending on which reports the higher AUC.

Method | Tiger | Fox | Elephant | UCSB
Hopfield (ours) | 91.3 ± 0.5 | 64.05 ± 0.4 | 94.9 ± 0.3 | 89.5 ± 0.8
Path encoding (Küçükaşcı & Baydogan, 2018) | 91.0 ± 1.0 a | 71.2 ± 1.4 a | 94.4 ± 0.7 a | 88.0 ± 2.2 a
MInD (Cheplygina et al., 2016) | 85.3 ± 1.1 a | 70.4 ± 1.6 a | 93.6 ± 0.9 a | 83.1 ± 2.7 a
MILES (Chen et al., 2006) | 87.2 ± 1.7 b | 73.8 ± 1.6 a | 92.7 ± 0.7 a | 83.3 ± 2.6 a
APR (Dietterich et al., 1997) | 77.8 ± 0.7 b | 54.1 ± 0.9 b | 55.0 ± 1.0 b | -
Citation-kNN (Wang, 2000) | 85.5 ± 0.9 b | 63.5 ± 1.5 b | 89.6 ± 0.9 b | 70.6 ± 3.2 a
DD (Maron & Lozano-Pérez, 1998) | 84 | | |
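For such MIL tasks, the core HopfieldPooling computation, a softmax-weighted average of a bag's instances with respect to a query pattern, can be sketched in a few lines; the toy bags are ours, and the query (trainable in practice) is shown as a fixed vector:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hopfield_pooling(Y, q, beta=2.0):
    """Compress a variable-sized bag Y (instances as rows) into one
    fixed-sized vector: average the instances, weighted by their
    softmax similarity to the (in practice trainable) query q."""
    p = softmax(beta * Y @ q)
    return p @ Y

q = np.array([1.0, 0.0])  # query attending to class-indicative instances
bag_small = np.array([[0.9, 0.1], [0.1, 0.9]])
bag_large = np.array([[1.0, 0.0], [0.0, 1.0], [0.2, 0.8], [0.8, 0.2]])

z_small, z_large = hopfield_pooling(bag_small, q), hopfield_pooling(bag_large, q)
```

Both bags are compressed to the same fixed-sized representation regardless of their size, which is what allows bags with different numbers of instances to be discriminated by a downstream classifier.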
Furthermore, we use the UCSB breast cancer classification dataset (Kandemir et al., 2014), which consists of 2,002 instances across 58 input objects. An instance represents a patch of a histopathological image of cancerous or normal tissue. The layer HopfieldPooling is used, which allows computing a per-input-object representation by extracting an average of instances that are indicative for one of the two classes. The input to the layer HopfieldPooling is a set of embedded instances Y. A trainable but fixed state (query) pattern Q is used for averaging over class-indicative instances. This averaging enables a compression of variable-sized bags to a fixed-sized representation to discriminate the bags. More details in appendix Sec. A.5.2. Our approach has set a new state-of-the-art and has outperformed other methods (Küçükaşcı & Baydogan, 2018; Carbonneau et al., 2016) on the datasets Tiger, Elephant and UCSB Breast Cancer (see Table 1).

UCI Benchmark Collection. So far, deep learning has struggled with small datasets. However, Hopfield networks are promising for handling small datasets, since they can store the training data points or their representations to perform similarity-based, nearest neighbor, or learning vector quantization methods. Therefore, we test the Hopfield layer Hopfield on the small datasets of the UC Irvine (UCI) Machine Learning Repository that have been used to benchmark supervised learning methods (Fernández-Delgado et al., 2014; Wainberg et al., 2016; Khan et al., 2018) and also feed-forward neural networks (Klambauer et al., 2017a; Wu et al., 2018), where our Hopfield networks could exploit their memory. The 121 datasets in the collection vary strongly with respect to their size, number of features, and difficulty (Fernández-Delgado et al., 2014), such that they have been divided into 75 "small datasets" with less than 1,000 samples and 45 "large datasets" with at least 1,000 samples in Klambauer et al.
(2017a). On the 75 small datasets, Random Forests (RFs) and Support Vector Machines (SVMs) are highly accurate, whereas on the large datasets, deep learning methods and neural networks are in the lead (Klambauer et al., 2017a;b; Wu et al., 2018). We applied a modern Hopfield network via the layer HopfieldLayer, where a self-normalizing net (SNN) maps the input vector to Y and R. The output Z of HopfieldLayer enters a softmax output. We compared our modern Hopfield networks against deep learning methods (e.g. SNNs, resnet), RFs, SVMs, boosting, bagging, and many other machine learning methods of Fernández-Delgado et al. (2014). Since for each method multiple variants and implementations had been included, we used method groups and representatives as defined by Klambauer et al. (2017a). For each dataset, a ranking of the methods was calculated, which is presented in Table 2. We found that Hopfield networks outperform all other methods on the small datasets, setting a new state-of-the-art for 10 datasets. The difference is significant except for the first three runner-up methods (Wilcoxon signed rank test). See appendix Section A.5.3 for details.

We have patterns X = (x_1, ..., x_N). The largest norm of a pattern is M = max_i ‖x_i‖. The query or state of the Hopfield network is ξ. The energy function E in the new type of Hopfield models of Krotov and Hopfield is E = -Σ_{i=1}^N F(ξ^T x_i) for binary patterns x_i and binary state ξ with interaction function F(x) = x^n, where n = 2 gives the classical Hopfield model (Krotov & Hopfield, 2016). The storage capacity is proportional to d^{n-1} (Krotov & Hopfield, 2016). This model was generalized by Demircigil et al. (2017) to the exponential interaction function F(x) = exp(x), which gives the energy E = -exp(lse(1, X^T ξ)). This energy leads to an exponential storage capacity of N = 2^{d/2} for binary patterns. Furthermore, with a single update the fixed point is recovered with high probability.
See more details in Section A.3. In contrast to these binary modern Hopfield networks, we focus on modern Hopfield networks with continuous states that can store continuous patterns. We generalize the energy of Demircigil et al. (2017) to continuous states while keeping the lse properties which ensure high storage capacity and fast convergence. Our new energy E for a continuous query or state ξ is defined as

E = -lse(β, X^T ξ) + 1/2 ξ^T ξ + β^{-1} ln N + 1/2 M^2  (13)
  = -β^{-1} ln Σ_{i=1}^N exp(β x_i^T ξ) + β^{-1} ln N + 1/2 ξ^T ξ + 1/2 M^2  (14)
  = -β^{-1} ln [ 1/N Σ_{i=1}^N exp(-1/2 β (M^2 - ‖x_i‖^2)) exp(-1/2 β ‖x_i - ξ‖^2) ] .  (15)

First let us collect and prove some properties of E. The next lemma gives bounds on the energy E.

Lemma A1. The energy E is larger than or equal to zero:

0 ≤ E .  (16)

For ξ in the simplex defined by the patterns, the energy E is upper bounded by:

E ≤ β^{-1} ln N + 1/2 M^2 ,  (17)
E ≤ 2 M^2 .  (18)

Proof. We start by deriving the lower bound of zero. The pattern most similar to the query or state ξ is x_ξ:

x_ξ = x_k , k = arg max_i ξ^T x_i .  (19)

We obtain

E = -β^{-1} ln Σ_{i=1}^N exp(β x_i^T ξ) + β^{-1} ln N + 1/2 ξ^T ξ + 1/2 M^2  (20)
  = -β^{-1} ln ( 1/N Σ_{i=1}^N exp(β x_i^T ξ) ) + 1/2 ξ^T ξ + 1/2 M^2
  ≥ -β^{-1} ln ( 1/N Σ_{i=1}^N exp(β x_i^T ξ) ) + 1/2 ξ^T ξ + 1/2 x_ξ^T x_ξ
  ≥ -β^{-1} ln exp(β x_ξ^T ξ) + 1/2 ξ^T ξ + 1/2 x_ξ^T x_ξ
  = -x_ξ^T ξ + 1/2 ξ^T ξ + 1/2 x_ξ^T x_ξ = 1/2 (ξ - x_ξ)^T (ξ - x_ξ) = 1/2 ‖ξ - x_ξ‖^2 ≥ 0 .

The energy is zero, and therefore the bound attained, if all x_i are equal, that is, x_i = x for all i and ξ = x. For deriving upper bounds on the energy E, we require the query ξ to be in the simplex defined by the patterns, that is,

ξ = Σ_{i=1}^N p_i x_i , Σ_{i=1}^N p_i = 1 , ∀i: 0 ≤ p_i .

The first upper bound is

E = -β^{-1} ln Σ_{i=1}^N exp(β x_i^T ξ) + 1/2 ξ^T ξ + β^{-1} ln N + 1/2 M^2  (22)
  ≤ -Σ_{i=1}^N p_i (x_i^T ξ) + 1/2 ξ^T ξ + β^{-1} ln N + 1/2 M^2
  = -1/2 ξ^T ξ + β^{-1} ln N + 1/2 M^2 ≤ β^{-1} ln N + 1/2 M^2 .
For the first inequality we applied Lemma A19 to $-\mathrm{lse}(\beta, X^T \xi)$ with $z = p$, giving
$$-\mathrm{lse}(\beta, X^T \xi) \leq -\sum_{i=1}^N p_i (x_i^T \xi) + \beta^{-1} \sum_{i=1}^N p_i \ln p_i \leq -\sum_{i=1}^N p_i (x_i^T \xi) ,$$
as the term involving the logarithm is non-positive. Next we derive the second upper bound, for which we need the mean $m_x = \frac{1}{N} \sum_{i=1}^N x_i$ of the patterns. We obtain
$$E = -\beta^{-1} \ln \sum_{i=1}^N \exp(\beta x_i^T \xi) + \tfrac{1}{2} \xi^T \xi + \beta^{-1} \ln N + \tfrac{1}{2} M^2 \tag{25}$$
$$\leq -\sum_{i=1}^N \tfrac{1}{N} x_i^T \xi + \tfrac{1}{2} \xi^T \xi + \tfrac{1}{2} M^2 = -m_x^T \xi + \tfrac{1}{2} \xi^T \xi + \tfrac{1}{2} M^2$$
$$\leq \|m_x\| \|\xi\| + \tfrac{1}{2} \|\xi\|^2 + \tfrac{1}{2} M^2 \leq 2 M^2 ,$$
where for the first inequality we again applied Lemma A19 with $z = (1/N, \ldots, 1/N)$ and $\beta^{-1} \sum_i (1/N) \ln(1/N) = -\beta^{-1} \ln N$. This inequality also follows from Jensen's inequality. The second inequality uses the Cauchy-Schwarz inequality. The last inequality uses $\|\xi\| = \|\sum_i p_i x_i\| \leq \sum_i p_i \|x_i\| \leq \sum_i p_i M = M$ and $\|m_x\| = \|\sum_i (1/N) x_i\| \leq \sum_i (1/N) \|x_i\| \leq \sum_i (1/N) M = M$.

A.1.3 NEW UPDATE RULE

We now introduce an update rule for minimizing the energy function $E$. The new update rule is
$$\xi^{\mathrm{new}} = X p = X \, \mathrm{softmax}(\beta X^T \xi) , \tag{28}$$
where we used $p = \mathrm{softmax}(\beta X^T \xi)$. The new state $\xi^{\mathrm{new}}$ is in the simplex defined by the patterns, no matter what the previous state $\xi$ was. For comparison, the synchronous update rule for the classical Hopfield network with threshold zero is
$$\xi^{\mathrm{new}} = \mathrm{sgn}(X X^T \xi) .$$
Therefore, instead of using the vector $X^T \xi$ as in the classical Hopfield network, its softmax version $\mathrm{softmax}(\beta X^T \xi)$ is used. In the next section (Section A.1.4) we show that the update rule Eq. (28) ensures global convergence: all the limit points of any sequence generated by the update rule are stationary points (local minima or saddle points) of the energy function $E$. In Section A.1.5 we consider the local convergence of the update rule Eq. (28) and see that patterns are retrieved with one update.
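As a concrete illustration, the energy of Eq. (13) and the update rule of Eq. (28) can be sketched in a few lines of NumPy. This is our own toy sketch, not code from the released implementation; the function names and data are assumptions for illustration only:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def energy(xi, X, beta):
    # E of Eq. (13): -lse(beta, X^T xi) + 0.5 ||xi||^2 + beta^{-1} ln N + 0.5 M^2
    N = X.shape[1]
    M = np.max(np.linalg.norm(X, axis=0))
    z = beta * X.T @ xi
    lse = (np.log(np.sum(np.exp(z - z.max()))) + z.max()) / beta
    return -lse + 0.5 * xi @ xi + np.log(N) / beta + 0.5 * M ** 2

def update(xi, X, beta):
    # update rule of Eq. (28): xi_new = X softmax(beta X^T xi)
    return X @ softmax(beta * X.T @ xi)

rng = np.random.default_rng(0)
d, N, beta = 64, 10, 8.0
X = rng.normal(size=(d, N))              # columns are the stored patterns x_i
xi = X[:, 3] + 0.1 * rng.normal(size=d)  # noisy query close to one stored pattern

xi_new = update(xi, X, beta)
print(np.linalg.norm(xi_new - X[:, 3]))  # tiny: the pattern is retrieved after one update
```

For random patterns in high dimension the stored patterns are well separated, so a single application of Eq. (28) already returns the query to the stored pattern, and the energy never increases along the update.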

A.1.4 GLOBAL CONVERGENCE OF THE UPDATE RULE

We are interested in global convergence, that is, convergence from each initial point, of the iteration
$$\xi^{\mathrm{new}} = f(\xi) = X p = X \, \mathrm{softmax}(\beta X^T \xi) , \tag{31}$$
where we used $p = \mathrm{softmax}(\beta X^T \xi)$. We defined the energy function
$$E = -\mathrm{lse}(\beta, X^T \xi) + \tfrac{1}{2} \xi^T \xi + \beta^{-1} \ln N + \tfrac{1}{2} M^2 \tag{33}$$
$$= -\beta^{-1} \ln \sum_{i=1}^N \exp(\beta x_i^T \xi) + \beta^{-1} \ln N + \tfrac{1}{2} \xi^T \xi + \tfrac{1}{2} M^2 . \tag{34}$$
We will show that the update rule in Eq. (31) is the Concave-Convex Procedure (CCCP) for minimizing the energy $E$. The CCCP is proven to converge globally.

Theorem A1 (Global Convergence (Zangwill): Energy). The update rule Eq. (31) converges globally: for $\xi^{t+1} = f(\xi^t)$, the energy $E(\xi^t) \to E(\xi^*)$ for $t \to \infty$ and a fixed point $\xi^*$.

Proof. The Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003) minimizes a function that is the sum of a concave function and a convex function. CCCP is equivalent to Legendre minimization algorithms (Rangarajan et al., 1996; 1999) (Yuille & Rangarajan, 2003). The Jacobian of the softmax is positive semi-definite according to Lemma A22. The Jacobian of the softmax is the Hessian of lse, therefore lse is a convex function and $-\mathrm{lse}$ a concave function. Therefore, the energy function $E(\xi)$ is the sum of the convex function $E_1(\xi) = \tfrac{1}{2} \xi^T \xi + C_1$ and the concave function $E_2(\xi) = -\mathrm{lse}$:
$$E(\xi) = E_1(\xi) + E_2(\xi) , \quad E_1(\xi) = \tfrac{1}{2} \xi^T \xi + \beta^{-1} \ln N + \tfrac{1}{2} M^2 = \tfrac{1}{2} \xi^T \xi + C_1 , \quad E_2(\xi) = -\mathrm{lse}(\beta, X^T \xi) ,$$
where $C_1$ does not depend on $\xi$. The CCCP applied to $E$ is
$$\nabla_\xi E_1(\xi^{t+1}) = -\nabla_\xi E_2(\xi^t) ,$$
which is
$$\nabla_\xi \big( \tfrac{1}{2} \xi^T \xi + C_1 \big)\big|_{\xi^{t+1}} = \nabla_\xi \mathrm{lse}(\beta, X^T \xi^t) .$$
The resulting update rule is
$$\xi^{t+1} = X p^t = X \, \mathrm{softmax}(\beta X^T \xi^t) \quad \text{using} \quad p^t = \mathrm{softmax}(\beta X^T \xi^t) .$$
This is the update rule in Eq. (31). Theorem 2 in Yuille & Rangarajan (2002) and Theorem 2 in Yuille & Rangarajan (2003) state that the update rule Eq. (31) is guaranteed to monotonically decrease the energy $E$ as a function of time.
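The monotone decrease of the energy under the CCCP update of Theorem A1 can be checked numerically. The following is our own sketch with arbitrary toy data, not code from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def energy(xi, X, beta):
    # E of Eq. (33) up to the constants beta^{-1} ln N + M^2 / 2,
    # which do not affect monotonicity
    z = beta * X.T @ xi
    lse = (np.log(np.sum(np.exp(z - z.max()))) + z.max()) / beta
    return -lse + 0.5 * xi @ xi

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 5))    # 5 patterns in 16 dimensions
xi = rng.normal(size=16)        # arbitrary initial state
beta = 1.0

energies = [energy(xi, X, beta)]
for _ in range(20):             # CCCP update, Eq. (31)
    xi = X @ softmax(beta * X.T @ xi)
    energies.append(energy(xi, X, beta))

# E(xi^{t+1}) <= E(xi^t) along the whole trajectory
print(all(b <= a + 1e-9 for a, b in zip(energies, energies[1:])))  # True
```

The small tolerance only absorbs floating-point rounding; mathematically the sequence of energies is non-increasing.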
See also Theorem 2 in Sriperumbudur & Lanckriet (2009). Although the objective converges in all cases, it does not necessarily converge to a local minimum (Lipp & Boyd, 2016). However, the convergence proof of CCCP in Yuille & Rangarajan (2002; 2003) was not as rigorous as required. In Sriperumbudur & Lanckriet (2009) a rigorous analysis of the convergence of CCCP is performed using Zangwill's global convergence theory of iterative algorithms. Sriperumbudur & Lanckriet (2009) consider the minimization problem
$$\min_\xi \ E_1 + E_2 \quad \text{s.t.} \quad c(\xi) \leq 0 , \ d(\xi) = 0 \tag{42}$$
with $E_1$ convex, $-E_2$ convex, $c$ componentwise convex, and $d$ affine, together with the iteration
$$\xi^{t+1} \in \arg\min_\xi \ E_1(\xi) + \xi^T \nabla_\xi E_2(\xi^t) \quad \text{s.t.} \quad c(\xi) \leq 0 , \ d(\xi) = 0 . \tag{43}$$
We define the upper bound $E_C$ on the energy:
$$E_C(\xi, \xi^t) := E_1(\xi) + E_2(\xi^t) + (\xi - \xi^t)^T \nabla_\xi E_2(\xi^t) . \tag{44}$$
$E_C$ is equal to the energy $E(\xi^t)$ for $\xi = \xi^t$:
$$E_C(\xi^t, \xi^t) = E_1(\xi^t) + E_2(\xi^t) = E(\xi^t) . \tag{45}$$
Since $-E_2$ is convex, the first order characterization of convexity holds (Eq. 3.2 in Boyd & Vandenberghe (2009)):
$$-E_2(\xi) \geq -E_2(\xi^t) - (\xi - \xi^t)^T \nabla_\xi E_2(\xi^t) ,$$
that is,
$$E_2(\xi) \leq E_2(\xi^t) + (\xi - \xi^t)^T \nabla_\xi E_2(\xi^t) .$$
Therefore, for $\xi \neq \xi^t$ the function $E_C$ is an upper bound on the energy:
$$E(\xi) \leq E_C(\xi, \xi^t) = E_1(\xi) + E_2(\xi^t) + (\xi - \xi^t)^T \nabla_\xi E_2(\xi^t) \tag{48}$$
$$= E_1(\xi) + \xi^T \nabla_\xi E_2(\xi^t) + C_2 ,$$
where $C_2$ does not depend on $\xi$. Since we do not have constraints, $\xi^{t+1}$ is defined as $\xi^{t+1} \in \arg\min_\xi E_C(\xi, \xi^t)$, hence $E_C(\xi^{t+1}, \xi^t) \leq E_C(\xi^t, \xi^t)$. Combining the inequalities gives:
$$E(\xi^{t+1}) \leq E_C(\xi^{t+1}, \xi^t) \leq E_C(\xi^t, \xi^t) = E(\xi^t) . \tag{50}$$
Since we do not have constraints, $\xi^{t+1}$ is the minimum of
$$E_C(\xi, \xi^t) = E_1(\xi) + \xi^T \nabla_\xi E_2(\xi^t) + C_2 \tag{51}$$
as a function of $\xi$. For a minimum not at the border, the derivative has to be the zero vector:
$$\frac{\partial E_C(\xi, \xi^t)}{\partial \xi} = \xi + \nabla_\xi E_2(\xi^t) = \xi - X \, \mathrm{softmax}(\beta X^T \xi^t) = 0 \tag{52}$$
and the Hessian must be positive semi-definite:
$$\frac{\partial^2 E_C(\xi, \xi^t)}{\partial \xi^2} = I . \tag{53}$$
The Hessian is strictly positive definite everywhere, therefore the optimization problem is strictly convex (if the domain is convex) and there exists only one minimum, which is a global minimum. $E_C$ can even be written as a quadratic form:
$$E_C(\xi, \xi^t) = \tfrac{1}{2} \big( \xi + \nabla_\xi E_2(\xi^t) \big)^T \big( \xi + \nabla_\xi E_2(\xi^t) \big) + C_3 , \tag{54}$$
where $C_3$ does not depend on $\xi$. Therefore, the minimum is
$$\xi^{t+1} = -\nabla_\xi E_2(\xi^t) = X \, \mathrm{softmax}(\beta X^T \xi^t) \tag{55}$$
if it is in the domain, as we assume. Using $M = \max_i \|x_i\|$, $\xi^{t+1}$ is in the sphere $S = \{x \mid \|x\| \leq M\}$, which is a convex and compact set. Hence, if $\xi^0 \in S$, then the iteration is a mapping from $S$ to $S$. Therefore, the point-set-map defined by the iteration Eq. (55) is uniformly compact on $S$ according to Remark 7 in Sriperumbudur & Lanckriet (2009). Theorem 2 and Theorem 4 in Sriperumbudur & Lanckriet (2009) state that all the limit points of the iteration Eq. (55) are stationary points. These theorems follow from Zangwill's global convergence theorem: Convergence Theorem A, page 91 in Zangwill (1969) and page 3 in Wu (1983). The global convergence theorem only assures that for the sequence $\xi^{t+1} = f(\xi^t)$ and a function $\Phi$ we have $\Phi(\xi^t) \to \Phi(\xi^*)$ for $t \to \infty$, but not $\xi^t \to \xi^*$. However, if $f$ is strictly monotone with respect to $\Phi$, then we can strengthen Zangwill's global convergence theorem (Meyer, 1976). We set $\Phi = E$ and show $E(\xi^{t+1}) < E(\xi^t)$ if $\xi^t$ is not a stationary point of $E$, that is, $f$ is strictly monotone with respect to $E$. The following theorem is similar to the convergence results for the expectation maximization (EM) algorithm given in Theorems 1 to 6 in Wu (1983), and is also very similar to Theorem 8 in Sriperumbudur & Lanckriet (2009).

Theorem A2 (Global Convergence: Stationary Points). For the iteration Eq. (55) we have $E(\xi^t) \to E(\xi^*) = E^*$ as $t \to \infty$, for some stationary point $\xi^*$.
Furthermore, $\|\xi^{t+1} - \xi^t\| \to 0$ and either $\{\xi^t\}_{t=0}^\infty$ converges or, in the other case, the set of limit points of $\{\xi^t\}_{t=0}^\infty$ is a connected and compact subset of $\mathcal{L}(E^*)$, where $\mathcal{L}(a) = \{\xi \in \mathcal{L} \mid E(\xi) = a\}$ and $\mathcal{L}$ is the set of stationary points of the iteration Eq. (55). If $\mathcal{L}(E^*)$ is finite, then any sequence $\{\xi^t\}_{t=0}^\infty$ generated by the iteration Eq. (55) converges to some $\xi^* \in \mathcal{L}(E^*)$.

Proof. We have $E(\xi^t) = E_1(\xi^t) + E_2(\xi^t)$. The gradient $\nabla_\xi E_2(\xi^t) = -\nabla_\xi \mathrm{lse}(\beta, X^T \xi^t)$ is continuous. Therefore, Eq. (51) has a minimum in the sphere $S$, which is a convex and compact set. If $\xi^{t+1} \neq \xi^t$, then $\xi^t$ was not the minimum of Eq. (48), as the derivative at $\xi^t$ is not equal to zero. Eq. (53) shows that the optimization problem Eq. (48) is strictly convex, hence it has only one minimum, which is a global minimum; Eq. (54) shows that it is even a quadratic form. Therefore, we have
$$E(\xi^{t+1}) \leq E_C(\xi^{t+1}, \xi^t) < E_C(\xi^t, \xi^t) = E(\xi^t) . \tag{56}$$
Therefore, the point-set-map defined by the iteration Eq. (55) (for definitions see Sriperumbudur & Lanckriet (2009)) is strictly monotonic with respect to $E$. Therefore, we can apply Theorem 3 in Sriperumbudur & Lanckriet (2009) or Theorem 3.1 and Corollary 3.2 in Meyer (1976), which give the statements of the theorem.

We showed global convergence of the iteration Eq. (31): all the limit points of any sequence generated by the iteration Eq. (31) are stationary points (critical points; local minima or saddle points) of the energy function $E$. Local maxima as stationary points are only possible if the iteration exactly hits a local maximum. However, convergence to a local maximum without being exactly there is not possible, because Eq. (56) ensures a strict decrease of the energy $E$. Therefore, local maxima are almost surely not obtained as stationary points. Either the iteration converges or, in the second case, the set of limit points is a connected and compact set.
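Theorem A2 can be illustrated numerically: from arbitrary initial states, the step size $\|\xi^{t+1} - \xi^t\|$ vanishes along the iteration. This is a toy sketch of ours, under the assumption that a few hundred iterations suffice for the chosen data and $\beta$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 8))    # 8 stored patterns in 32 dimensions
beta = 0.5

final_steps = []
for trial in range(5):          # several random initial states
    xi = rng.normal(size=32)
    for _ in range(500):
        xi_next = X @ softmax(beta * X.T @ xi)
        step = np.linalg.norm(xi_next - xi)
        xi = xi_next
    final_steps.append(step)    # ||xi^{t+1} - xi^t|| after 500 updates

print(max(final_steps))         # vanishingly small: a stationary point is reached
```

In practice the iteration settles after very few updates; the long loop here is only a conservative safeguard for the sketch.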
But what happens if $\xi^0$ is in an $\epsilon$-neighborhood around a local minimum $\xi^*$? Will the iteration Eq. (31) converge to $\xi^*$? And what is the rate of convergence? These questions concern local convergence, which is treated in detail in the next section.

A.1.5 LOCAL CONVERGENCE OF THE UPDATE RULE: FIXED POINT ITERATION

For the proof of local convergence to a fixed point we will apply the Banach fixed point theorem. For the rate of convergence we will rely on properties of a contraction mapping.

A.1.5.1 General Bound on the Jacobian of the Iteration. We consider the iteration
$$\xi^{\mathrm{new}} = f(\xi) = X p = X \, \mathrm{softmax}(\beta X^T \xi) \tag{57}$$
using $p = \mathrm{softmax}(\beta X^T \xi)$. The Jacobian $J$ is symmetric and has the following form:
$$J = \frac{\partial f(\xi)}{\partial \xi} = \beta X \big( \mathrm{diag}(p) - p p^T \big) X^T = X J_s X^T ,$$
where $J_s$ is the Jacobian of the softmax. To analyze the local convergence of the iteration, we distinguish between the following three cases (see also Fig. A.1). Here we only provide an informal discussion to give the reader some intuition; a rigorous formulation of the results can be found in the corresponding subsections.

a) If the patterns $x_i$ are not well separated, the iteration goes to a global fixed point close to the arithmetic mean of the patterns. In this case $p$ is close to $p_i = 1/N$: no pattern is well separated from the others, and a single global fixed point $m_x^*$ exists that is close to the arithmetic mean $m_x$ of all patterns.

b) If the patterns $x_i$ are well separated, then the iteration goes to the pattern to which the initial $\xi$ is similar. If the initial $\xi$ is similar to a pattern $x_i$, then it will converge to a vector close to $x_i$, and $p$ will converge to a vector close to $e_i$.

c) If some patterns are similar to each other but well separated from all other patterns, then a so-called metastable state between the similar patterns exists. Iterations that start near the metastable state converge to this metastable state, which is close to the mean $m_x$ of the similar patterns.

We begin with a bound on the Jacobian of the iteration, thereby heavily relying on the Jacobian of the softmax from Lemma A24.

Lemma A2. For $N$ patterns $X = (x_1, \ldots, x_N)$, $p = \mathrm{softmax}(\beta X^T \xi)$, $M = \max_i \|x_i\|$, and $m = \max_i p_i (1 - p_i)$, the spectral norm of the Jacobian $J$ of the fixed point iteration is bounded:
$$\|J\|_2 \leq 2 \beta \|X\|_2^2 \, m \leq 2 \beta N M^2 m . \tag{60}$$
If $p_{\max} = \max_i p_i \geq 1 - \epsilon$, then for the spectral norm of the Jacobian holds
$$\|J\|_2 \leq 2 \epsilon \beta N M^2 - 2 \epsilon^2 \beta N M^2 < 2 \epsilon \beta N M^2 . \tag{61}$$

Proof. With $p = \mathrm{softmax}(\beta X^T \xi)$, the symmetric Jacobian is $J = \beta X (\mathrm{diag}(p) - p p^T) X^T = X J_s X^T$, where $J_s$ is the Jacobian of the softmax. With $m = \max_i p_i (1 - p_i)$, Eq. (476) from Lemma A24 is
$$\|J_s\|_2 = \beta \|\mathrm{diag}(p) - p p^T\|_2 \leq 2 m \beta .$$
Using this bound on $\|J_s\|_2$, we obtain
$$\|J\|_2 \leq \|X^T\|_2 \|J_s\|_2 \|X\|_2 \leq 2 m \beta \|X\|_2^2 .$$
The spectral norm $\|\cdot\|_2$ is bounded by the Frobenius norm $\|\cdot\|_F$, which can be expressed by the norms of the column vectors:
$$\|X\|_2 \leq \|X\|_F = \sqrt{\textstyle\sum_i \|x_i\|^2} .$$
Therefore, we obtain the first statement of the lemma: $\|J\|_2 \leq 2 \beta \|X\|_2^2 \, m \leq 2 \beta N M^2 m$. With $p_{\max} = \max_i p_i \geq 1 - \epsilon$, Eq. (480) in Lemma A24 is $\|J_s\|_2 \leq 2 \epsilon \beta - 2 \epsilon^2 \beta < 2 \epsilon \beta$. Using this inequality, we obtain the second statement of the lemma:
$$\|J\|_2 \leq 2 \epsilon \beta N M^2 - 2 \epsilon^2 \beta N M^2 < 2 \epsilon \beta N M^2 .$$

We now define the "separation" $\Delta_i$ of a pattern $x_i$ from the data $X = (x_1, \ldots, x_N)$, since it plays an important role for the convergence properties of the iteration.

Definition 2 (Separation of Patterns). The separation $\Delta_i$ of pattern $x_i$ from data $X = (x_1, \ldots, x_N)$ is defined as
$$\Delta_i = \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = x_i^T x_i - \max_{j, j \neq i} x_i^T x_j . \tag{70}$$
The pattern is separated from the other data if $0 < \Delta_i$. Using the identity $x_i^T x_j = \tfrac{1}{2} (\|x_i\|^2 + \|x_j\|^2 - \|x_i - x_j\|^2)$, $\Delta_i$ can also be expressed as
$$\Delta_i = \min_{j, j \neq i} \tfrac{1}{2} \big( \|x_i\|^2 - \|x_j\|^2 + \|x_i - x_j\|^2 \big) \tag{71}$$
$$= \tfrac{1}{2} \|x_i\|^2 - \tfrac{1}{2} \max_{j, j \neq i} \big( \|x_j\|^2 - \|x_i - x_j\|^2 \big) .$$
For patterns of equal norm, $\|x_i\| = \|x_j\|$, we have $\Delta_i = \tfrac{1}{2} \min_{j, j \neq i} \|x_i - x_j\|^2$. Analogously, we say for a query $\xi$ and data $X = (x_1, \ldots, x_N)$ that $x_i$ is least separated from $\xi$ while being separated from the other $x_j$ with $j \neq i$ if
$$i = \arg\max_k \min_{j, j \neq k} \big( \xi^T x_k - \xi^T x_j \big) = \arg\max_k \Big( \xi^T x_k - \max_{j, j \neq k} \xi^T x_j \Big) , \tag{72}$$
$$0 \leq c = \max_k \min_{j, j \neq k} \big( \xi^T x_k - \xi^T x_j \big) = \max_k \Big( \xi^T x_k - \max_{j, j \neq k} \xi^T x_j \Big) . \tag{73}$$
Next we consider the case where the iteration has only one stable fixed point.

A.1.5.2 One Stable State: Fixed Point Near the Mean of the Patterns. We start with the case where no pattern is well separated from the others.

• Global fixed point near the global mean: analysis using the data center. We revisit the bound on the Jacobian of the iteration by utilizing properties of pattern distributions. We begin with a probabilistic interpretation in which we consider $p_i$ as the probability of selecting the vector $x_i$. Consequently, we define expectations as $\mathrm{E}_p[f(x)] = \sum_{i=1}^N p_i f(x_i)$. In this setting the matrix
$$X \big( \mathrm{diag}(p) - p p^T \big) X^T \tag{74}$$
is the covariance matrix of the data $X$ when its vectors are selected according to the probability $p$:
$$X \big( \mathrm{diag}(p) - p p^T \big) X^T = X \mathrm{diag}(p) X^T - X p p^T X^T \tag{75}$$
$$= \sum_{i=1}^N p_i x_i x_i^T - \Big( \sum_{i=1}^N p_i x_i \Big) \Big( \sum_{i=1}^N p_i x_i \Big)^T \tag{76}$$
$$= \mathrm{E}_p[x x^T] - \mathrm{E}_p[x] \mathrm{E}_p[x]^T = \mathrm{Var}_p[x] ,$$
therefore we have $J = \beta \, \mathrm{Var}_p[x]$ (Eq. 78). The largest eigenvalue of the covariance matrix (equal to the largest singular value) is the variance in the direction of the eigenvector associated with the largest eigenvalue. We define:
$$m_x = \frac{1}{N} \sum_{i=1}^N x_i , \qquad m_{\max} = \max_{1 \leq i \leq N} \|x_i - m_x\| . \tag{80}$$
$m_x$ is the arithmetic mean (the center) of the patterns, and $m_{\max}$ is the maximal distance of the patterns to the center $m_x$. The variance of the patterns is
$$\mathrm{Var}_p[x] = \sum_{i=1}^N p_i x_i x_i^T - \Big( \sum_{i=1}^N p_i x_i \Big) \Big( \sum_{i=1}^N p_i x_i \Big)^T \tag{81}$$
$$= \sum_{i=1}^N p_i \Big( x_i - \sum_{j=1}^N p_j x_j \Big) \Big( x_i - \sum_{j=1}^N p_j x_j \Big)^T .$$
The maximal distance $m_{\max}$ to the center allows the derivation of a bound on the norm of the Jacobian. The next lemma gives a condition for a global fixed point.

Lemma A3.
The following bound on the norm $\|J\|_2$ of the Jacobian of the fixed point iteration $f$ holds independently of $p$ and the query $\xi$:
$$\|J\|_2 \leq \beta \, m_{\max}^2 . \tag{83}$$
For $\beta m_{\max}^2 < 1$ there exists a unique fixed point (global fixed point) of the iteration $f$ in each compact set.

Proof. In order to bound the variance we compute the vector $a$ that minimizes
$$f(a) = \sum_{i=1}^N p_i \|x_i - a\|^2 = \sum_{i=1}^N p_i (x_i - a)^T (x_i - a) .$$
The solution to
$$\frac{\partial f(a)}{\partial a} = 2 \sum_{i=1}^N p_i (a - x_i) = 0 \tag{84}$$
is $a = \sum_{i=1}^N p_i x_i$. The Hessian of $f$ is positive definite, since
$$\frac{\partial^2 f(a)}{\partial a^2} = 2 \sum_{i=1}^N p_i I = 2 I , \tag{86}$$
and $f$ is a convex function. Hence, the weighted mean
$$\bar{x} := \sum_{i=1}^N p_i x_i \tag{87}$$
minimizes $\sum_{i=1}^N p_i \|x_i - a\|^2$. Therefore, we have
$$\sum_{i=1}^N p_i \|x_i - \bar{x}\|^2 \leq \sum_{i=1}^N p_i \|x_i - m_x\|^2 \leq m_{\max}^2 .$$
Let us quickly recall that the spectral norm of an outer product of two vectors is the product of the Euclidean norms of the vectors:
$$\|a b^T\|_2 = \sqrt{\lambda_{\max}(b a^T a b^T)} = \|a\| \sqrt{\lambda_{\max}(b b^T)} = \|a\| \|b\| ,$$
since $b b^T$ has eigenvector $b / \|b\|$ with eigenvalue $\|b\|^2$ and otherwise zero eigenvalues. We now bound the variance of the patterns:
$$\|\mathrm{Var}_p[x]\|_2 \leq \sum_{i=1}^N p_i \big\| (x_i - \bar{x})(x_i - \bar{x})^T \big\|_2 \tag{90}$$
$$= \sum_{i=1}^N p_i \|x_i - \bar{x}\|^2 \leq \sum_{i=1}^N p_i \|x_i - m_x\|^2 \leq m_{\max}^2 .$$
The bound of the lemma on $\|J\|_2$ follows from Eq. (78). For $\|J\|_2 \leq \beta m_{\max}^2 < 1$ we have a contraction mapping on each compact set, and the Banach fixed point theorem says there is a unique fixed point in the compact set.

Now let us further investigate the tightness of the bound on $\|\mathrm{Var}_p[x]\|_2$ via $\|x_i - \bar{x}\|^2$: we consider the trace, which is the sum $\sum_{k=1}^d e_k$ of the (w.l.o.g. ordered) nonnegative eigenvalues $e_k$ of $\mathrm{Var}_p[x]$. The spectral norm is equal to the largest eigenvalue $e_1$, which is equal to the largest singular value, as we have positive semidefinite matrices. We obtain:
$$\|\mathrm{Var}_p[x]\|_2 = \mathrm{Tr}\Big( \sum_{i=1}^N p_i (x_i - \bar{x})(x_i - \bar{x})^T \Big) - \sum_{k=2}^d e_k \tag{91}$$
$$= \sum_{i=1}^N p_i \mathrm{Tr}\big( (x_i - \bar{x})(x_i - \bar{x})^T \big) - \sum_{k=2}^d e_k = \sum_{i=1}^N p_i \|x_i - \bar{x}\|^2 - \sum_{k=2}^d e_k .$$
Therefore, the tightness of the bound depends on the eigenvalues which are not the largest.
Hence, variation that is not along the direction of largest variation weakens the bound. Next we investigate the location of the fixed points, whose existence is ensured by the global convergence stated in Theorem A2. For $N$ patterns $X = (x_1, \ldots, x_N)$, we consider the iteration
$$\xi^{\mathrm{new}} = f(\xi) = X p = X \, \mathrm{softmax}(\beta X^T \xi) \tag{92}$$
using $p = \mathrm{softmax}(\beta X^T \xi)$. $\xi^{\mathrm{new}}$ is in the simplex of the patterns, that is, $\xi^{\mathrm{new}} = \sum_i p_i x_i$ with $\sum_i p_i = 1$ and $0 \leq p_i$. Hence, after one update $\xi$ is in the simplex of the patterns and stays there. If the center $m_x$ is the zero vector, $m_x = 0$, that is, the data is centered, then the mean is a fixed point of the iteration: for $\xi = m_x = 0$ we have $p = (1/N) \mathbf{1}$ and
$$\xi^{\mathrm{new}} = (1/N) X \mathbf{1} = m_x = \xi . \tag{94}$$
In particular, normalization methods like batch normalization would promote the mean as a fixed point. We consider the differences of dot products: for $x_i$, $x_i^T x_i - x_i^T x_j = x_i^T (x_i - x_j)$; for a fixed point $m_x^*$, $(m_x^*)^T x_i - (m_x^*)^T x_j = (m_x^*)^T (x_i - x_j)$; and for the center $m_x$, $m_x^T x_i - m_x^T x_j = m_x^T (x_i - x_j)$. Using the Cauchy-Schwarz inequality, we get
$$\xi^T (x_i - x_j) \leq \|\xi\| \|x_i - x_j\| \leq \|\xi\| \big( \|x_i - m_x\| + \|x_j - m_x\| \big) \leq 2 m_{\max} \|\xi\| . \tag{96}$$
This inequality gives:
$$\xi^T (x_i - x_j) \leq 2 m_{\max} (m_{\max} + \|m_x\|) , \qquad \xi^T (x_i - x_j) \leq 2 m_{\max} M ,$$
where we used $\|\xi\| = \|\xi - 0\| \leq \|\xi - m_x\| + \|m_x - 0\|$, $\|\xi - m_x\| = \|\sum_i p_i x_i - m_x\| \leq \sum_i p_i \|x_i - m_x\| \leq m_{\max}$, and $M = \max_i \|x_i\|$. In particular
$$\beta \, m_x^T (x_i - x_j) \leq 2 \beta m_{\max} \|m_x\| , \qquad \beta \, (m_x^*)^T (x_i - x_j) \leq 2 \beta m_{\max} \|m_x^*\| \leq 2 \beta m_{\max} (m_{\max} + \|m_x\|) ,$$
$$\beta \, x_i^T (x_i - x_j) \leq 2 \beta m_{\max} \|x_i\| \leq 2 \beta m_{\max} (m_{\max} + \|m_x\|) . \tag{100}$$
Let $i = \arg\max_j \xi^T x_j$, so that the maximal softmax component is component $i$. For the maximal softmax component $i$ we have:
$$[\mathrm{softmax}(\beta X^T \xi)]_i = \frac{1}{1 + \sum_{j \neq i} \exp\big( -\beta (\xi^T x_i - \xi^T x_j) \big)} \tag{101}$$
$$\leq \frac{1}{1 + (N-1) \exp\big( -2 \beta m_{\max} (m_{\max} + \|m_x\|) \big)} = \frac{\exp\big( 2 \beta m_{\max} (m_{\max} + \|m_x\|) \big)}{\exp\big( 2 \beta m_{\max} (m_{\max} + \|m_x\|) \big) + (N-1)} \leq \frac{1}{N} \exp\big( 2 \beta m_{\max} (m_{\max} + \|m_x\|) \big) .$$
Analogously we obtain, for $i = \arg\max_j m_x^T x_j$, a bound on the maximal softmax component if the center is put into the iteration:
$$[\mathrm{softmax}(\beta X^T m_x)]_i \leq \frac{1}{N} \exp(2 \beta m_{\max} \|m_x\|) .$$
Analogously we obtain, for $i = \arg\max_j (m_x^*)^T x_j$, a bound on the maximal softmax component of the fixed point:
$$[\mathrm{softmax}(\beta X^T m_x^*)]_i \leq \frac{1}{N} \exp(2 \beta m_{\max} \|m_x^*\|) \leq \frac{1}{N} \exp\big( 2 \beta m_{\max} (m_{\max} + \|m_x\|) \big) . \tag{103}$$
The two important terms are $m_{\max}$, the spread of the data, and $\|m_x\|$, which tells how well the data is centered. For a contraction mapping we already required $\beta m_{\max}^2 < 1$, therefore the first term in the exponent is $2 \beta m_{\max}^2 < 2$. The second term, $2 \beta m_{\max} \|m_x\|$, is small if the data is centered.

• Global fixed point near the global mean: analysis using softmax values. If $\xi^T x_i \approx \xi^T x_j$ for all $i$ and $j$, then $p_i \approx 1/N$ and we have $m = \max_i p_i (1 - p_i) < 1/N$. For $M \leq 1/\sqrt{2 \beta}$ we then obtain from Lemma A2: $\|J\|_2 < 1$. The local fixed point is $m_x^* \approx m_x = (1/N) \sum_{i=1}^N x_i$ with $p_i \approx 1/N$. We now treat this case more formally. First we discuss conditions that ensure that the iteration is a contraction mapping. We consider the iteration Eq. (57) in the variable $p$:
$$p^{\mathrm{new}} = g(p) = \mathrm{softmax}(\beta X^T X p) . \tag{105}$$
The Jacobian is
$$J(p) = \frac{\partial g(p)}{\partial p} = X^T X J_s \quad \text{with} \quad J_s(p^{\mathrm{new}}) = \beta \big( \mathrm{diag}(p^{\mathrm{new}}) - p^{\mathrm{new}} (p^{\mathrm{new}})^T \big) . \tag{106}$$
The version of the mean value theorem in Lemma A32 states, for $J^m = \int_0^1 J(\lambda p) \, d\lambda = X^T X J_s^m$ with the symmetric matrix $J_s^m = \int_0^1 J_s(\lambda p) \, d\lambda$:
$$p^{\mathrm{new}} = g(p) = g(0) + (J^m)^T p = g(0) + J_s^m X^T X p = (1/N) \mathbf{1} + J_s^m X^T X p . \tag{108}$$
With $m = \max_i p_i (1 - p_i)$, Eq. (476) from Lemma A24 is
$$\|J_s(p)\|_2 = \beta \|\mathrm{diag}(p) - p p^T\|_2 \leq 2 m \beta . \tag{109}$$
First observe that $\lambda p_i (1 - \lambda p_i) \leq p_i (1 - p_i)$ for $p_i \leq 0.5$ and $\lambda \in [0, 1]$, since $p_i (1 - p_i) - \lambda p_i (1 - \lambda p_i) = (1 - \lambda) p_i (1 - (1 + \lambda) p_i) \geq 0$. For $\max_i p_i \leq 0.5$ this observation leads to the following bound for $J_s^m$:
$$\|J_s^m\|_2 \leq 2 m \beta .$$
Eq. (479) in Lemma A24 states that every $\|J_s\|_2$ is bounded by $0.5 \beta$, therefore also the mean:
$$\|J_s^m\|_2 \leq 0.5 \beta .$$
Since $m = \max_i p_i (1 - p_i) < \max_i p_i = p_{\max}$, the previous bounds can be combined as follows:
$$\|J_s^m\|_2 \leq 2 \min\{0.25, p_{\max}\} \, \beta .$$
Consequently,
$$\|J^m\|_2 \leq N M^2 \cdot 2 \min\{0.25, p_{\max}\} \, \beta ,$$
where we used Eq. (170). $\|X^T X\|_2 = \|X X^T\|_2$; $\|X^T X\|_2$ is bounded by $N M^2$, that is, $N$ times the maximal squared norm of the data. Obviously, $g(p)$ is a contraction mapping in compact sets where
$$N M^2 \cdot 2 \min\{0.25, p_{\max}\} \, \beta < 1 . \tag{114}$$
Let $S$ be the sphere around the origin $0$ with radius one. For $p^{\mathrm{new}} = g(p) = (1/N) \mathbf{1} + J^m p$, we have $\|p\| \leq \|p\|_1 = 1$ and $\|p^{\mathrm{new}}\| \leq \|p^{\mathrm{new}}\|_1 = 1$. Therefore, $g$ maps points from $S$ into $S$, and $g$ is a contraction mapping for
$$\|J^m\|_2 \leq N M^2 \cdot 2 \min\{0.25, p_{\max}\} \, \beta = c < 1 . \tag{116}$$
According to the Banach fixed point theorem, $g$ has a fixed point in the sphere $S$. Hölder's inequality gives:
$$\|p\|^2 = p^T p \leq \|p\|_1 \|p\|_\infty = \|p\|_\infty = p_{\max} . \tag{117}$$
Alternatively: $\|p\|^2 = \sum_i p_i^2 = p_{\max} \sum_i \frac{p_i}{p_{\max}} p_i \leq p_{\max} \sum_i p_i = p_{\max}$. Let now $S$ be the sphere around the origin with radius $1/\sqrt{N} + \sqrt{p_{\max}}$, and let $\|J^m(p)\|_2 \leq c < 1$ for $p \in S$. The old $p$ is in the sphere $S$ ($p \in S$), since $p_{\max} < \sqrt{p_{\max}}$ for $p_{\max} < 1$. We have
$$\|p^{\mathrm{new}}\| \leq 1/\sqrt{N} + \|J^m\|_2 \|p\| \leq 1/\sqrt{N} + \sqrt{p_{\max}} .$$
Therefore, $g$ is a mapping from $S$ into $S$ and a contraction mapping. According to the Banach fixed point theorem, a fixed point exists in $S$. For the 1-norm, we use Lemma A24 and $\|p\|_1 = 1$ to obtain from Eq. (115):
$$\|p^{\mathrm{new}} - (1/N) \mathbf{1}\|_1 \leq \|J^m\|_1 \leq 2 \beta m \|X\|_\infty M_1 , \tag{120}$$
$$\|p^{\mathrm{new}} - (1/N) \mathbf{1}\|_1 \leq \|J^m\|_1 \leq 2 \beta m N M_\infty M_1 , \qquad \|p^{\mathrm{new}} - (1/N) \mathbf{1}\|_1 \leq \|J^m\|_1 \leq 2 \beta m N M^2 , \tag{122}$$
where $m = \max_i p_i (1 - p_i)$, $M_1 = \|X\|_1 = \max_i \|x_i\|_1$, $M = \max_i \|x_i\|$, $\|X\|_\infty = \|X^T\|_1 = \max_i \|[X^T]_i\|_1$ (maximal absolute row sum norm), and $M_\infty = \max_i \|x_i\|_\infty$. Let us quickly mention some auxiliary estimates related to $X^T X$:
$$\|X^T X\|_1 = \max_i \sum_{j=1}^N |x_i^T x_j| \leq \max_i \sum_{j=1}^N \|x_i\|_\infty \|x_j\|_1 \leq M_\infty \sum_{j=1}^N M_1 = N M_\infty M_1 , \tag{123}$$
where the first inequality is from Hölder's inequality. We also used
$$\|X^T X\|_1 = \max_i \sum_{j=1}^N |x_i^T x_j| \leq \max_i \sum_{j=1}^N \|x_i\| \|x_j\| \leq M \sum_{j=1}^N M = N M^2 , \tag{124}$$
where the first inequality is from Hölder's inequality (here the same as the Cauchy-Schwarz inequality).
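The contraction condition Eq. (114) can be tried out numerically: for small $\beta$ the iteration $g$ in the variable $p$ contracts and the fixed point lies near the uniform distribution $(1/N)\mathbf{1}$. A toy sketch of ours (data, seed, and $\beta$ are arbitrary choices that satisfy the condition):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(3)
d, N = 16, 6
X = rng.normal(size=(d, N))
M = np.max(np.linalg.norm(X, axis=0))
beta = 0.5 / (N * M ** 2)          # then N M^2 * 2 * 0.25 * beta = 0.25 < 1, Eq. (114)

p = softmax(rng.normal(size=N))    # arbitrary start in the simplex
for _ in range(100):
    p = softmax(beta * X.T @ (X @ p))   # p_new = g(p) = softmax(beta X^T X p), Eq. (105)

# the unique fixed point is close to the uniform distribution (1/N) 1
print(np.max(np.abs(p - 1.0 / N)))
```

With such a small $\beta$, no pattern is well separated on the scale of $\beta^{-1}$, which is exactly the regime of the single global fixed point near the mean.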
See the proof of Lemma A24 for the 1-norm bound on $J_s$. Everything else follows from the fact that the 1-norm is sub-multiplicative as an induced matrix norm. We consider the minimal $\|p\|$: minimizing $\|p\|^2$ subject to $\sum_i p_i = 1$ and $\forall i: p_i \geq 0$ has the solution $p = (1/N) \mathbf{1}$. Therefore, we have $1/\sqrt{N} \leq \|p\|$ and $1/N \leq \|p\|^2$. Using Eq. (119) we obtain
$$1/\sqrt{N} \leq \|p^{\mathrm{new}}\| \leq 1/\sqrt{N} + \sqrt{p_{\max}} . \tag{126}$$
Moreover,
$$\|p^{\mathrm{new}}\|^2 = (p^{\mathrm{new}})^T p^{\mathrm{new}} = 1/N + (p^{\mathrm{new}})^T J^m p \leq 1/N + \|J^m\|_2 \|p\| \leq 1/N + \|J^m\|_2 , \tag{127}$$
since $p^{\mathrm{new}} \in S$ and $p \in S$. For the fixed point we have
$$\|p^*\|^2 = (p^*)^T p^* = 1/N + (p^*)^T J^m p^* \leq 1/N + \|J^m\|_2 \|p^*\|^2 , \tag{128}$$
and hence
$$1/N \leq \|p^*\|^2 \leq \frac{1/N}{1 - \|J^m\|_2} = \frac{1}{N} \Big( 1 + \frac{\|J^m\|_2}{1 - \|J^m\|_2} \Big) .$$
Therefore, for small $\|J^m\|_2$ we have $p^* \approx (1/N) \mathbf{1}$.

A.1.5.3 Many Stable States: Fixed Points Near Stored Patterns. We move on to the next case, where the patterns $x_i$ are well separated. In this case the iteration goes to the pattern to which the initial $\xi$ is most similar. If the initial $\xi$ is similar to a pattern $x_i$, then it will converge to a vector close to $x_i$, and $p$ will converge to a vector close to $e_i$. The main ingredients are again the Banach fixed point theorem and estimates on the Jacobian norm.

• Proof of a fixed point by the Banach fixed point theorem.

→ Mapped vectors stay in a compact environment. We show that if $x_i$ is sufficiently dissimilar to the other $x_j$, then there is a compact environment of $x_i$ (a sphere) which the fixed point iteration maps into itself. The idea of the proof is to define a sphere around $x_i$ for which points from the sphere are mapped by $f$ into the sphere. We first need the following lemma, which bounds the distance $\|x_i - f(\xi)\|$, where $x_i$ is the pattern that is least separated from $\xi$ but separated from the other patterns.

Lemma A4. For a query $\xi$ and data $X = (x_1, \ldots, x_N)$, there exists an $x_i$ that is least separated from $\xi$ while being separated from the other $x_j$ with $j \neq i$:
$$i = \arg\max_k \min_{j, j \neq k} \big( \xi^T x_k - \xi^T x_j \big) = \arg\max_k \Big( \xi^T x_k - \max_{j, j \neq k} \xi^T x_j \Big) , \tag{130}$$
$$0 \leq c = \max_k \min_{j, j \neq k} \big( \xi^T x_k - \xi^T x_j \big) = \max_k \Big( \xi^T x_k - \max_{j, j \neq k} \xi^T x_j \Big) . \tag{131}$$
For this $x_i$, the following holds:
$$\|x_i - f(\xi)\| \leq 2 \epsilon M , \tag{132}$$
where $M = \max_i \|x_i\|$ and $\epsilon = (N-1) \exp(-\beta c)$.

Proof. For the softmax component $i$ we have:
$$[\mathrm{softmax}(\beta X^T \xi)]_i = \frac{1}{1 + \sum_{j \neq i} \exp\big( \beta (\xi^T x_j - \xi^T x_i) \big)} \geq \frac{1}{1 + \sum_{j \neq i} \exp(-\beta c)} \tag{135}$$
$$= \frac{1}{1 + (N-1) \exp(-\beta c)} = 1 - \frac{(N-1) \exp(-\beta c)}{1 + (N-1) \exp(-\beta c)} \geq 1 - (N-1) \exp(-\beta c) = 1 - \epsilon .$$
For the softmax components $k \neq i$ we have
$$[\mathrm{softmax}(\beta X^T \xi)]_k = \frac{\exp\big( \beta (\xi^T x_k - \xi^T x_i) \big)}{1 + \sum_{j \neq i} \exp\big( \beta (\xi^T x_j - \xi^T x_i) \big)} \leq \exp(-\beta c) = \frac{\epsilon}{N-1} . \tag{136}$$
The iteration $f$ can be written as
$$f(\xi) = X \, \mathrm{softmax}(\beta X^T \xi) = \sum_{j=1}^N x_j \, [\mathrm{softmax}(\beta X^T \xi)]_j . \tag{137}$$
We can now bound $\|x_i - f(\xi)\|$:
$$\|x_i - f(\xi)\| = \Big\| x_i - \sum_{j=1}^N [\mathrm{softmax}(\beta X^T \xi)]_j \, x_j \Big\| \tag{138}$$
$$= \Big\| \big( 1 - [\mathrm{softmax}(\beta X^T \xi)]_i \big) x_i - \sum_{j \neq i} [\mathrm{softmax}(\beta X^T \xi)]_j \, x_j \Big\| \leq \epsilon \|x_i\| + \frac{\epsilon}{N-1} \sum_{j \neq i} \|x_j\| \leq \epsilon M + \frac{\epsilon}{N-1} (N-1) M = 2 \epsilon M .$$
We recall the separation $\Delta_i$ of pattern $x_i$ from the data $X = (x_1, \ldots, x_N)$:
$$\Delta_i = \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = x_i^T x_i - \max_{j, j \neq i} x_i^T x_j . \tag{139}$$
The pattern is separated from the other data if $0 < \Delta_i$; as before, $\Delta_i$ can also be expressed as
$$\Delta_i = \min_{j, j \neq i} \tfrac{1}{2} \big( \|x_i\|^2 - \|x_j\|^2 + \|x_i - x_j\|^2 \big) , \tag{140}$$
and for patterns of equal norm $\Delta_i = \tfrac{1}{2} \min_{j, j \neq i} \|x_i - x_j\|^2$. Next we define the sphere where we want to apply the Banach fixed point theorem.

Definition 3 (Sphere $S_i$). The sphere $S_i$ is defined as
$$S_i := \Big\{ \xi \,\Big|\, \|\xi - x_i\| \leq \frac{1}{\beta N M} \Big\} . \tag{141}$$

Lemma A5. With $\xi$ given, if the assumptions

A1: $\xi$ is inside the sphere: $\xi \in S_i$,

A2: the data point $x_i$ is well separated from the other data:
$$\Delta_i \geq \frac{2}{\beta N} + \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big) \tag{142}$$

hold, then $f(\xi)$ is inside the sphere: $f(\xi) \in S_i$. Therefore, with assumption (A2), $f$ is a mapping from $S_i$ into $S_i$.

Proof. We need the separation $\tilde{\Delta}_i$ of $\xi$ from the data:
$$\tilde{\Delta}_i = \min_{j, j \neq i} \big( \xi^T x_i - \xi^T x_j \big) . \tag{143}$$
Using the Cauchy-Schwarz inequality, we obtain for $1 \leq j \leq N$:
$$| \xi^T x_j - x_i^T x_j | \leq \|\xi - x_i\| \|x_j\| \leq \|\xi - x_i\| M . \tag{144}$$
We have the lower bound
$$\tilde{\Delta}_i \geq \min_{j, j \neq i} \Big( x_i^T x_i - \|\xi - x_i\| M - \big( x_i^T x_j + \|\xi - x_i\| M \big) \Big) \tag{145}$$
$$= -2 \|\xi - x_i\| M + \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = \Delta_i - 2 \|\xi - x_i\| M \geq \Delta_i - \frac{2}{\beta N} ,$$
where we used assumption (A1) of the lemma. From the proof of Lemma A4 we have
$$p_{\max} = [\mathrm{softmax}(\beta X^T \xi)]_i \geq 1 - (N-1) \exp(-\beta \tilde{\Delta}_i) = 1 - \tilde{\epsilon} .$$
Lemma A4 states that
$$\|x_i - f(\xi)\| \leq 2 \tilde{\epsilon} M = 2 (N-1) \exp(-\beta \tilde{\Delta}_i) M \leq 2 (N-1) \exp\Big( -\beta \Big( \Delta_i - \frac{2}{\beta N} \Big) \Big) M . \tag{147}$$
We have
$$\|x_i - f(\xi)\| \leq 2 (N-1) \exp\Big( -\beta \Big( \frac{2}{\beta N} + \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big) - \frac{2}{\beta N} \Big) \Big) M \tag{148}$$
$$= 2 (N-1) \exp\Big( -\ln\big( 2 (N-1) N \beta M^2 \big) \Big) M = \frac{1}{N \beta M} ,$$
where we used assumption (A2) of the lemma. Therefore, $f$ maps the sphere $S_i$ into the sphere $S_i$: if $\xi \in S_i$ then $f(\xi) \in S_i$.

• Contraction mapping. For applying the Banach fixed point theorem we need to show that $f$ is a contraction in the compact environment $S_i$.

Lemma A6. Assume that

A1: $\Delta_i \geq \frac{2}{\beta N} + \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big)$, (150)

then $f$ is a contraction mapping in $S_i$.

Proof. The version of the mean value theorem in Lemma A32 states, for $J^m = \int_0^1 J(\lambda \xi + (1-\lambda) x_i) \, d\lambda$:
$$f(\xi) = f(x_i) + J^m (\xi - x_i) .$$
Therefore,
$$\|f(\xi) - f(x_i)\| \leq \|J^m\|_2 \|\xi - x_i\| .$$
We define $\tilde{\xi} = \lambda \xi + (1 - \lambda) x_i$ for some $\lambda \in [0, 1]$. From the proof of Lemma A4 we have
$$p_{\max}(\tilde{\xi}) = [\mathrm{softmax}(\beta X^T \tilde{\xi})]_i \geq 1 - (N-1) \exp(-\beta \tilde{\Delta}_i) = 1 - \tilde{\epsilon} , \qquad \tilde{\epsilon} = (N-1) \exp(-\beta \tilde{\Delta}_i) , \tag{152}$$
$$\tilde{\Delta}_i = \min_{j, j \neq i} \big( \tilde{\xi}^T x_i - \tilde{\xi}^T x_j \big) . \tag{153}$$
First we compute an upper bound on $\tilde{\epsilon}$, for which we need the separation $\tilde{\Delta}_i$ of $\tilde{\xi}$ from the data. Using the Cauchy-Schwarz inequality, we obtain for $1 \leq j \leq N$:
$$| \tilde{\xi}^T x_j - x_i^T x_j | \leq \|\tilde{\xi} - x_i\| \|x_j\| \leq \|\tilde{\xi} - x_i\| M . \tag{155}$$
We have the lower bound on $\tilde{\Delta}_i$:
$$\tilde{\Delta}_i \geq -2 \|\tilde{\xi} - x_i\| M + \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = \Delta_i - 2 \|\tilde{\xi} - x_i\| M \geq \Delta_i - 2 \|\xi - x_i\| M , \tag{156}$$
where we used $\|\tilde{\xi} - x_i\| = \lambda \|\xi - x_i\| \leq \|\xi - x_i\|$. From the definition of $\tilde{\epsilon}$ in Eq. (152) we have
$$\tilde{\epsilon} = (N-1) \exp(-\beta \tilde{\Delta}_i) \leq (N-1) \exp\big( -\beta (\Delta_i - 2 \|\xi - x_i\| M) \big) \leq (N-1) \exp\Big( -\beta \Big( \Delta_i - \frac{2}{\beta N} \Big) \Big) , \tag{157}$$
where we used $\xi \in S_i$, therefore $\|\xi - x_i\| \leq \frac{1}{\beta N M}$. Next we compute a lower bound on $\tilde{\epsilon}$.
We start with an upper bound on $\tilde{\Delta}_i$:
$$\tilde{\Delta}_i \leq \min_{j, j \neq i} \big( x_i^T x_i + \|\tilde{\xi} - x_i\| M - x_i^T x_j + \|\tilde{\xi} - x_i\| M \big) \tag{158}$$
$$= 2 \|\tilde{\xi} - x_i\| M + \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = \Delta_i + 2 \|\tilde{\xi} - x_i\| M \leq \Delta_i + 2 \|\xi - x_i\| M ,$$
where we used $\|\tilde{\xi} - x_i\| = \lambda \|\xi - x_i\| \leq \|\xi - x_i\|$. From the definition of $\tilde{\epsilon}$ in Eq. (152) we have
$$\tilde{\epsilon} = (N-1) \exp(-\beta \tilde{\Delta}_i) \geq (N-1) \exp\big( -\beta (\Delta_i + 2 \|\xi - x_i\| M) \big) \geq (N-1) \exp\Big( -\beta \Big( \Delta_i + \frac{2}{\beta N} \Big) \Big) , \tag{159}$$
where we used $\xi \in S_i$, therefore $\|\xi - x_i\| \leq \frac{1}{\beta N M}$. Now we bound the Jacobian. We can assume $\tilde{\epsilon} \leq 0.5$; otherwise replace $(1 - \tilde{\epsilon})$ by $0.5$ in the following. From the proof of Lemma A24 we know that if $p_{\max}(\tilde{\xi}) \geq 1 - \tilde{\epsilon}$, then $p_i(\tilde{\xi}) \leq \tilde{\epsilon}$ for $p_i(\tilde{\xi}) \neq p_{\max}(\tilde{\xi})$. Therefore, $p_i(\tilde{\xi}) (1 - p_i(\tilde{\xi})) \leq \tilde{\epsilon} (1 - \tilde{\epsilon})$ for all $i$. Next we use the derived upper and lower bounds on $\tilde{\epsilon}$ in Eq. (61) of Lemma A2:
$$\|J(\tilde{\xi})\|_2 \leq 2 \beta N M^2 \tilde{\epsilon} - 2 \tilde{\epsilon}^2 \beta N M^2 \tag{160}$$
$$\leq 2 \beta N M^2 (N-1) \exp\Big( -\beta \Big( \Delta_i - \frac{2}{\beta N} \Big) \Big) - 2 (N-1)^2 \exp\Big( -2 \beta \Big( \Delta_i + \frac{2}{\beta N} \Big) \Big) \beta N M^2 .$$
The bound Eq. (160) holds for the mean $J^m$, too, since it averages over $J(\tilde{\xi})$:
$$\|J^m\|_2 \leq 2 \beta N M^2 (N-1) \exp\Big( -\beta \Big( \Delta_i - \frac{2}{\beta N} \Big) \Big) - 2 (N-1)^2 \exp\Big( -2 \beta \Big( \Delta_i + \frac{2}{\beta N} \Big) \Big) \beta N M^2 . \tag{161}$$
The assumption of the lemma is
$$\Delta_i \geq \frac{2}{\beta N} + \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big) , \tag{162}$$
that is,
$$\Delta_i - \frac{2}{\beta N} \geq \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big) .$$
Therefore, the spectral norm $\|J^m\|_2$ can be bounded by:
$$\|J^m\|_2 \leq 2 \beta (N-1) \exp\Big( -\beta \, \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big) \Big) N M^2 - 2 (N-1)^2 \exp\Big( -2 \beta \Big( \Delta_i + \frac{2}{\beta N} \Big) \Big) \beta N M^2 \tag{164}$$
$$= 2 \beta (N-1) \frac{1}{2 (N-1) N \beta M^2} N M^2 - 2 (N-1)^2 \exp\Big( -2 \beta \Big( \Delta_i + \frac{2}{\beta N} \Big) \Big) \beta N M^2$$
$$= 1 - 2 (N-1)^2 \exp\Big( -2 \beta \Big( \Delta_i + \frac{2}{\beta N} \Big) \Big) \beta N M^2 < 1 .$$
Therefore, $f$ is a contraction mapping in $S_i$.

• Banach fixed point theorem. Now we have all ingredients to apply the Banach fixed point theorem.

Lemma A7. Assume that

A1: $\Delta_i \geq \frac{2}{\beta N} + \frac{1}{\beta} \ln\big( 2 (N-1) N \beta M^2 \big)$, (165)

then $f$ has a fixed point in $S_i$.

Proof. We use the Banach fixed point theorem: Lemma A5 says that $f$ maps from $S_i$ into $S_i$, and Lemma A6 says that $f$ is a contraction mapping in $S_i$.

• Contraction mapping with a fixed point. We have shown that a fixed point exists. We now want to know how fast the iteration converges to the fixed point.
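For well-separated patterns, Lemma A7 guarantees a fixed point inside $S_i$, and Lemma A4 shows its distance to the stored pattern is of order $(N-1)\exp(-\beta c)$. A small sketch of ours with orthogonal patterns, where the separation is easy to read off:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, beta = 8, 4.0
X = 3.0 * np.eye(d)[:, :4]        # 4 orthogonal stored patterns with norm M = 3
# separation: Delta_i = x_i^T x_i - max_{j != i} x_i^T x_j = 9 - 0 = 9 for each i

xi = X[:, 0] + 0.05 * np.ones(d)  # query inside a small sphere around the first pattern
for _ in range(3):
    xi = X @ softmax(beta * X.T @ xi)

err = np.linalg.norm(xi - X[:, 0])
print(err < 1e-6)                 # True: the fixed point is exponentially close to x_1
```

With $\beta \Delta = 36$ the retrieval error is far below machine precision of the distance scale, illustrating the exponentially small retrieval errors claimed for well-separated patterns.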
Let $x_i^*$ be the fixed point of the iteration $f$ in the sphere $S_i$. Using the mean value theorem in Lemma A32, we have with $J^m = \int_0^1 J(\lambda \xi + (1-\lambda) x_i^*) \, d\lambda$:
$$\|f(\xi) - x_i^*\| = \|f(\xi) - f(x_i^*)\| \leq \|J^m\|_2 \|\xi - x_i^*\| . \tag{166}$$
According to Lemma A24, if $p_{\max} = \max_i p_i \geq 1 - \epsilon$ for all $\tilde{x} = \lambda \xi + (1-\lambda) x_i^*$, then the spectral norm of the Jacobian of the softmax is bounded by
$$\|J_s(\tilde{x})\|_2 < 2 \epsilon \beta . \tag{167}$$
The norm of the Jacobian at $\tilde{x}$ is then bounded:
$$\|J(\tilde{x})\|_2 \leq 2 \epsilon \beta \|X\|_2^2 \leq 2 \epsilon \beta N M^2 . \tag{168}$$
We used that the spectral norm $\|\cdot\|_2$ is bounded by the Frobenius norm $\|\cdot\|_F$, which can be expressed by the norms of the column vectors:
$$\|X\|_2 \leq \|X\|_F = \sqrt{\textstyle\sum_i \|x_i\|^2} . \tag{169}$$
Therefore,
$$\|X\|_2^2 \leq N M^2 . \tag{170}$$
The norm of the Jacobian of the fixed point iteration is bounded:
$$\|J^m\|_2 \leq 2 \epsilon \beta \|X\|_2^2 \leq 2 \epsilon \beta N M^2 . \tag{171}$$
The separation of pattern $x_i$ from the data $X = (x_1, \ldots, x_N)$ is
$$\Delta_i = \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = x_i^T x_i - \max_{j, j \neq i} x_i^T x_j . \tag{172}$$
We need the separation $\tilde{\Delta}_i$ of $\tilde{x} = \lambda \xi + (1-\lambda) x_i^*$ from the data:
$$\tilde{\Delta}_i = \min_{j, j \neq i} \big( \tilde{x}^T x_i - \tilde{x}^T x_j \big) . \tag{173}$$
We compute a lower bound on $\tilde{\Delta}_i$. Using the Cauchy-Schwarz inequality, we obtain for $1 \leq j \leq N$:
$$| \tilde{x}^T x_j - x_i^T x_j | \leq \|\tilde{x} - x_i\| \|x_j\| \leq \|\tilde{x} - x_i\| M . \tag{174}$$
We have the lower bound
$$\tilde{\Delta}_i \geq -2 \|\tilde{x} - x_i\| M + \min_{j, j \neq i} \big( x_i^T x_i - x_i^T x_j \big) = \Delta_i - 2 \|\tilde{x} - x_i\| M . \tag{175}$$
Since
$$\|\tilde{x} - x_i\| = \|\lambda \xi + (1-\lambda) x_i^* - x_i\| \leq \lambda \|\xi - x_i\| + (1-\lambda) \|x_i^* - x_i\| \leq \max\{ \|\xi - x_i\|, \|x_i^* - x_i\| \} , \tag{176}$$
we have
$$\tilde{\Delta}_i \geq \Delta_i - 2 \max\{ \|\xi - x_i\|, \|x_i^* - x_i\| \} M . \tag{177}$$
For the softmax component $i$ we have:
$$[\mathrm{softmax}(\beta X^T \tilde{x})]_i = \frac{1}{1 + \sum_{j \neq i} \exp\big( \beta (\tilde{x}^T x_j - \tilde{x}^T x_i) \big)} \tag{178}$$
$$\geq \frac{1}{1 + (N-1) \exp\big( -\beta (\Delta_i - 2 \max\{ \|\xi - x_i\|, \|x_i^* - x_i\| \} M) \big)} \geq 1 - (N-1) \exp\big( -\beta (\Delta_i - 2 \max\{ \|\xi - x_i\|, \|x_i^* - x_i\| \} M) \big) = 1 - \epsilon .$$
Therefore,
$$\epsilon = (N-1) \exp\big( -\beta (\Delta_i - 2 \max\{ \|\xi - x_i\|, \|x_i^* - x_i\| \} M) \big) . \tag{179}$$
We can bound the spectral norm of the Jacobian, which upper bounds the Lipschitz constant:
$$\| J^m \|_2 \le 2 \beta N M^2 (N-1) \exp(-\beta (\Delta_i - 2 \max\{ \| \xi - x_i \| , \| x_i^* - x_i \| \} M)) . \tag{180}$$
For a contraction mapping we require $\| J^m \|_2 < 1$, which can be ensured by
$$2 \beta N M^2 (N-1) \exp(-\beta (\Delta_i - 2 \max\{ \| \xi - x_i \| , \| x_i^* - x_i \| \} M)) < 1 . \tag{181}$$
Solving this inequality for $\Delta_i$ gives
$$\Delta_i > 2 \max\{ \| \xi - x_i \| , \| x_i^* - x_i \| \} \, M + \frac{1}{\beta} \ln\left( 2 (N-1) N \beta M^2 \right) . \tag{183}$$
In an environment around $x_i^*$ in which Eq. (183) holds, $f$ is a contraction mapping, and every point converges under the iteration $f$ to $x_i^*$ as long as the iteration stays in the environment. After every iteration the mapped point $f(\xi)$ is closer to the fixed point $x_i^*$ than the original point $\xi$:
$$\| f(\xi) - x_i^* \| \le \| J^m \|_2 \, \| \xi - x_i^* \| < \| \xi - x_i^* \| .$$
Using
$$\| f(\xi) - x_i^* \| \le \| J^m \|_2 \| \xi - x_i^* \| \le \| J^m \|_2 \| \xi - f(\xi) \| + \| J^m \|_2 \| f(\xi) - x_i^* \| , \tag{185}$$
we obtain
$$\| f(\xi) - x_i^* \| \le \frac{\| J^m \|_2}{1 - \| J^m \|_2} \, \| \xi - f(\xi) \| .$$
For large $\Delta_i$ the iteration is close to the fixed point even after one update. This has been confirmed in several experiments.

A.1.5.4 Metastable States: Fixed Points Near the Mean of Similar Patterns. The proof concept is the same as for a single pattern, but now for the arithmetic mean of similar patterns.

•Bound on the Jacobian. The Jacobian of the fixed point iteration is
$$J = \beta X \left( \mathrm{diag}(p) - p p^T \right) X^T = X J^s X^T .$$
If we consider $p_i$ as the probability of selecting the vector $x_i$, then we can define expectations as $\mathrm{E}_p[f(x)] = \sum_{i=1}^N p_i f(x_i)$. In this setting the matrix $X (\mathrm{diag}(p) - p p^T) X^T$ is the covariance matrix of the data $X$ when its vectors are selected according to the probability $p$:
$$X \left( \mathrm{diag}(p) - p p^T \right) X^T = X \mathrm{diag}(p) X^T - X p p^T X^T \tag{189}$$
$$= \sum_{i=1}^N p_i x_i x_i^T - \left( \sum_{i=1}^N p_i x_i \right) \left( \sum_{i=1}^N p_i x_i \right)^T \tag{190}$$
$$= \mathrm{E}_p[x \, x^T] - \mathrm{E}_p[x] \, \mathrm{E}_p[x]^T = \mathrm{Var}_p[x] ,$$
therefore we have
$$J = \beta \, \mathrm{Var}_p[x] .$$
We now elaborate on this interpretation as a variance. Specifically, the singular values of $J$ (in other words: of the covariance) should be reasonably small.
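The identity $J = \beta \, \mathrm{Var}_p[x]$ can be checked numerically. Below is a small pure-Python sketch (the patterns, probabilities, and $\beta$ are arbitrary illustrative values): both sides are computed independently and agree entry-wise.

```python
import math

# Check  beta * X (diag(p) - p p^T) X^T  ==  beta * (E_p[x x^T] - E_p[x] E_p[x]^T)
beta = 0.5
X = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]   # N = 3 patterns in d = 2
z = [0.2, 1.0, -0.3]                         # arbitrary logits
m = max(z)
e = [math.exp(v - m) for v in z]
p = [v / sum(e) for v in e]                  # softmax probabilities

N, d = len(X), len(X[0])
# left-hand side: beta * X (diag(p) - p p^T) X^T  (a d x d matrix)
lhs = [[beta * sum(X[i][a] * ((p[i] if i == j else 0.0) - p[i] * p[j]) * X[j][b]
        for i in range(N) for j in range(N)) for b in range(d)] for a in range(d)]
# right-hand side: beta * covariance of the patterns under p
mean = [sum(p[i] * X[i][a] for i in range(N)) for a in range(d)]
rhs = [[beta * (sum(p[i] * X[i][a] * X[i][b] for i in range(N)) - mean[a] * mean[b])
        for b in range(d)] for a in range(d)]
```

The identity is exact (pure algebra), so the two matrices differ only by floating-point rounding.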
The singular values are the key to ensuring convergence of the iteration Eq. (57). We make the following observations:
1. The largest eigenvalue of the covariance matrix (equal to its largest singular value) is the variance in the direction of the eigenvector associated with the largest eigenvalue.
2. The variance goes to zero as one $p_i$ goes to one, since then only one pattern is chosen and there is no variance.
3. The variance is reasonably small if all patterns are chosen with equal probability.
4. The variance is small if a few similar patterns are chosen with high probability. If the patterns are sufficiently similar, then the spectral norm of the covariance matrix is smaller than one.
The first three observations have already been addressed. We now focus on the last one in greater detail. We assume that the first $l$ patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define:
$$M := \max_i \| x_i \| , \quad \gamma := \sum_{i=l+1}^N p_i , \tag{193}$$
$$1 - \gamma = \sum_{i=1}^l p_i \ge 1 - \epsilon , \tag{194}$$
$$\tilde{p}_i := \frac{p_i}{1 - \gamma} \le p_i / (1 - \epsilon) , \quad \sum_{i=1}^l \tilde{p}_i = 1 ,$$
$$m_x = \frac{1}{l} \sum_{i=1}^l x_i , \quad m_{\max} = \max_{1 \le i \le l} \| x_i - m_x \| .$$
$M$ is an upper bound on the Euclidean norm of the patterns, which are vectors. $\epsilon$ is an upper bound on the probability $\gamma$ of not choosing one of the first $l$ patterns, while $1 - \epsilon$ is a lower bound on the probability $1 - \gamma$ of choosing one of the first $l$ patterns. $m_x$ is the arithmetic mean (the center) of the first $l$ patterns, and $m_{\max}$ is the maximal distance of these patterns to the center $m_x$. $\tilde{p}$ is the probability $p$ normalized to the first $l$ patterns. The variance of the first $l$ patterns is
$$\mathrm{Var}_{\tilde{p}}[x_{1:l}] = \sum_{i=1}^l \tilde{p}_i x_i x_i^T - \left( \sum_{i=1}^l \tilde{p}_i x_i \right) \left( \sum_{i=1}^l \tilde{p}_i x_i \right)^T \tag{200}$$
$$= \sum_{i=1}^l \tilde{p}_i \left( x_i - \sum_{j=1}^l \tilde{p}_j x_j \right) \left( x_i - \sum_{j=1}^l \tilde{p}_j x_j \right)^T .$$
Lemma A8. With the definitions in Eq. (193) to Eq. (200), the following bounds on the norm $\| J \|_2$ of the Jacobian of the fixed point iteration hold. The $\gamma$-bound for $\| J \|_2$ is
$$\| J \|_2 \le \beta \left( (1 - \gamma) \, m_{\max}^2 + 2 \gamma (2 - \gamma) M^2 \right) \tag{201}$$
and the $\epsilon$-bound for $\| J \|_2$ is:
$$\| J \|_2 \le \beta \left( m_{\max}^2 + 2 \epsilon (2 - \epsilon) M^2 \right) . \tag{202}$$
Proof.
The variance Var p[x 1:l ] can be expressed as: (1 -γ) Var p[x 1:l ] = l i=1 p i x i - 1 1 -γ l i=1 p i x i x i - 1 1 -γ l i=1 p i x i T (203) = l i=1 p i x i x T i - l i=1 p i x i 1 1 -γ l i=1 p i x i T - 1 1 -γ l i=1 p i x i l i=1 p i x i T + l i=1 p i (1 -γ) 2 l i=1 p i x i l i=1 p i x i T = l i=1 p i x i x T i - 1 1 -γ l i=1 p i x i l i=1 p i x i T = l i=1 p i x i x T i - l i=1 p i x i l i=1 p i x i T + 1 - 1 1 -γ l i=1 p i x i l i=1 p i x i T = l i=1 p i x i x T i - l i=1 p i x i l i=1 p i x i T - γ 1 -γ l i=1 p i x i l i=1 p i x i T . Therefore, we have l i=1 p i x i x T i - l i=1 p i x i l i=1 p i x i T (204) = (1 -γ) Var p[x 1:l ] + γ 1 -γ l i=1 p i x i l i=1 p i x i T . We now can reformulate the Jacobian J: J = β l i=1 p i x i x T i + N i=l+1 p i x i x T i (205) - l i=1 p i x i + N i=l+1 p i x i l i=1 p i x i + N i=l+1 p i x i T   = β   l i=1 p i x i x T i - l i=1 p i x i l i=1 p i x i T + N i=l+1 p i x i x T i - N i=l+1 p i x i N i=l+1 p i x i T - l i=1 p i x i N i=l+1 p i x i T - N i=l+1 p i x i l i=1 p i x i T   = β   (1 -γ) Var p[x 1:l ] + γ 1 -γ l i=1 p i x i l i=1 p i x i T + N i=l+1 p i x i x T i - N i=l+1 p i x i N i=l+1 p i x i T - l i=1 p i x i N i=l+1 p i x i T - N i=l+1 p i x i l i=1 p i x i T   . The spectral norm of an outer product of two vectors is the product of the Euclidean norms of the vectors: ab T 2 = λ max (ba T ab T ) = a λ max (bb T ) = a b , since bb T has eigenvector b/ b with eigenvalue b 2 and otherwise zero eigenvalues. We now bound the norms of some matrices and vectors: l i=1 p i x i l i=1 p i x i (1 -γ) M , N i=l+1 p i x i N i=l+1 p i x i γ M , N i=l+1 p i x i x T i 2 N i=l+1 p i x i x T i 2 = N i=l+1 p i x i 2 N i=l+1 p i M 2 = γ M 2 . ( ) In order to bound the variance of the first l patterns, we compute the vector a that minimizes f (a) = l i=1 p i x i -a 2 = l i=1 p i (x i -a) T (x i -a) . ( ) The solution to ∂f (a) ∂a = 2 N i=1 p i (a -x i ) = 0 (211) is a = N i=1 p i x i . 
( ) The Hessian of f is positive definite since ∂ 2 f (a) ∂a 2 = 2 N i=1 p i I = 2 I ( ) and f is a convex function. Hence, the mean x := N i=1 p i x i (214) minimizes N i=1 p i x i -a 2 . Therefore, we have l i=1 p i x i -x 2 l i=1 p i x i -m x 2 (1 -γ) m 2 max . ( ) We now bound the variance on the first l patterns: (1 -γ) Var p[x 1:l ] 2 l i=1 p i (x i -x) (x i -x) T 2 (216) = l i=1 p i x i -x 2 l i=1 p i x i -m x 2 (1 -γ) m 2 max . We obtain for the spectral norm of J: J 2 β (1 -γ) Var p[x 1:l ] 2 (217) + γ 1 -γ l i=1 p i x i l i=1 p i x i T 2 + N i=l+1 p i x i x T i 2 + N i=l+1 p i x i N i=l+1 p i x i T 2 + l i=1 p i x i N i=l+1 p i x i T 2 + N i=l+1 p i x i l i=1 p i x i T 2   β (1 -γ) Var p[x 1:l ] 2 + γ (1 -γ) M 2 + γ M 2 + γ 2 M 2 + γ (1 -γ) M 2 + γ (1 -γ) M 2 = β (1 -γ) Var p[x 1:l ] 2 + γ 2 (2 -γ) M 2 . Combining the previous two estimates immediately leads to Eq. ( 201).
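The $\gamma$-bound of Eq. (201) can be checked numerically on a small configuration. A pure-Python sketch (the cluster of $l = 2$ similar patterns, the outlier, and the probabilities are illustrative assumptions): the Jacobian $J = \beta \, \mathrm{Var}_p[x]$ is a symmetric $2 \times 2$ matrix, so its spectral norm is its largest eigenvalue, computed in closed form.

```python
import math

beta = 1.0
X = [[2.0, 0.1], [2.0, -0.1], [-2.0, 0.0]]   # first l = 2 patterns are similar
p = [0.495, 0.495, 0.01]                      # probability mass on the cluster
l, N, d = 2, 3, 2

gamma = sum(p[l:])                            # prob. of not choosing a cluster pattern
M = max(math.sqrt(x[0] ** 2 + x[1] ** 2) for x in X)
m_x = [sum(X[i][a] for i in range(l)) / l for a in range(d)]
m_max = max(math.sqrt((X[i][0] - m_x[0]) ** 2 + (X[i][1] - m_x[1]) ** 2)
            for i in range(l))

# J = beta * (E_p[x x^T] - E_p[x] E_p[x]^T), a symmetric 2 x 2 matrix
mean = [sum(p[i] * X[i][a] for i in range(N)) for a in range(d)]
J = [[beta * (sum(p[i] * X[i][a] * X[i][b] for i in range(N)) - mean[a] * mean[b])
      for b in range(d)] for a in range(d)]
# spectral norm = largest eigenvalue of the symmetric PSD matrix J
half_tr = (J[0][0] + J[1][1]) / 2.0
spec = half_tr + math.sqrt(((J[0][0] - J[1][1]) / 2.0) ** 2 + J[0][1] ** 2)

gamma_bound = beta * ((1 - gamma) * m_max ** 2 + 2 * gamma * (2 - gamma) * M ** 2)
```

For small $\gamma$ both `spec` and `gamma_bound` scale roughly like $4 \gamma M^2$, so the bound is fairly tight in this regime.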

The function $h(x) = 2x(2-x)$ has the derivative $h'(x) = 4(1-x)$. Therefore, $h(x)$ is monotonically increasing for $x < 1$. For $0 \le \gamma \le \epsilon < 1$, we can immediately deduce that $2\gamma(2-\gamma) \le 2\epsilon(2-\epsilon)$. Since $\epsilon$ is larger than or equal to $\gamma$, we obtain the following $\epsilon$-bound for $\| J \|_2$:
$$\| J \|_2 \le \beta \left( m_{\max}^2 + 2 \epsilon (2 - \epsilon) M^2 \right) .$$
We revisit the bound on $(1-\gamma) \| \mathrm{Var}_{\tilde{p}}[x_{1:l}] \|_2$. The trace $\sum_{k=1}^d e_k$ is the sum of the eigenvalues $e_k$, and the spectral norm is equal to the largest eigenvalue $e_1$, that is, the largest singular value. With $\bar{x} = \sum_{i=1}^l \tilde{p}_i x_i$ we obtain:
$$\| \mathrm{Var}_{\tilde{p}}[x_{1:l}] \|_2 = \mathrm{Tr}\left( \sum_{i=1}^l \tilde{p}_i (x_i - \bar{x}) (x_i - \bar{x})^T \right) - \sum_{k=2}^d e_k \tag{219}$$
$$= \sum_{i=1}^l \tilde{p}_i \, \mathrm{Tr}\left( (x_i - \bar{x}) (x_i - \bar{x})^T \right) - \sum_{k=2}^d e_k = \sum_{i=1}^l \tilde{p}_i \, \| x_i - \bar{x} \|^2 - \sum_{k=2}^d e_k .$$
Therefore, the tightness of the bound depends on the eigenvalues which are not the largest: variation that is not along the direction of strongest variation weakens the bound.

•Proof of a fixed point by the Banach Fixed Point Theorem. Without restricting generality, we assume that the first $l$ patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define:
$$M := \max_i \| x_i \| , \quad \gamma := \sum_{i=l+1}^N p_i , \quad 1 - \gamma = \sum_{i=1}^l p_i \ge 1 - \epsilon , \tag{220}$$
$$\tilde{p}_i := \frac{p_i}{1 - \gamma} \le p_i / (1 - \epsilon) , \quad \sum_{i=1}^l \tilde{p}_i = 1 , \tag{221}$$
$$m_x = \frac{1}{l} \sum_{i=1}^l x_i , \quad m_{\max} = \max_{1 \le i \le l} \| x_i - m_x \| .$$
$M$ is an upper bound on the Euclidean norm of the patterns, which are vectors. $\epsilon$ is an upper bound on the probability $\gamma$ of not choosing one of the first $l$ patterns, while $1 - \epsilon$ is a lower bound on the probability $1 - \gamma$ of choosing one of the first $l$ patterns. $m_x$ is the arithmetic mean (the center) of the first $l$ patterns, and $m_{\max}$ is the maximal distance of these patterns to the center $m_x$. $\tilde{p}$ is the probability $p$ normalized to the first $l$ patterns.

•Mapped vectors stay in a compact environment. We show that if $m_x$ is sufficiently dissimilar to the other $x_j$ with $l < j$, then there is a compact environment of $m_x$ (a sphere) which the fixed point iteration maps into itself. The idea of the proof is to define a sphere around $m_x$ whose points are mapped by $f$ into the sphere.
We first need following lemma which bounds the distance m x -f (ξ) of a ξ which is close to m x . Lemma A9. For a query ξ and data X = (x 1 , . . . , x N ), we define 0 c = min j,l<j ξ T m x -ξ T x j = ξ T m x -max j,l<j ξ T x j . ( ) The following holds: m x -f (ξ) m max + 2 γ M m max + 2 M , where M = max i x i , = (N -l) exp(-β c) . ( ) Proof. Let s = arg max j,j l ξ T x j , therefore ξ T m x = 1 l l i=1 ξ T x i 1 l l i=1 ξ T x s = ξ T x s . For softmax components j with l < j we have [softmax(βX T ξ)] j = exp(β (ξ T x j -ξ T x s )) 1 + k,k =s exp(β (ξ T x k -ξ T x s )) exp(-β c) = N -l , since ξ T x s -ξ T x j ≥ ξ T m x -ξ T x j for each j with l < j, therefore ξ T x s -ξ T x j ≥ c The iteration f can be written as f (ξ) = Xsoftmax(βX T ξ) = N j=1 x j [softmax(βX T ξ)] j . We set p i = [softmax(βX T ξ)] i , therefore l i=1 p i = 1 -γ ≥ 1 -and N i=l+1 p i = γ . Therefore m x - l j=1 p j 1 -γ x j 2 = l j=1 p j 1 -γ (m x -x j ) 2 (233) = l j=1,k=1 p j 1 -γ p k 1 -γ (m x -x j ) T (m x -x k ) = 1 2 l j=1,k=1 p j 1 -γ p k 1 -γ m x -x j 2 + m x -x k 2 -x j -x k 2 = l j=1 p j 1 -γ m x -x j 2 - 1 2 l j=1,k=1 p j 1 -γ p k 1 -γ x j -x k 2 l j=1 p j 1 -γ m x -x j 2 m 2 max . It follows that m x - l j=1 p j 1 -γ x j m max We now can bound m x -f (ξ) : m x -f (ξ) = m x - N j=1 p j x j (235) = m x - l j=1 p j x j - N j=l+1 p j x j = m x - l j=1 p j 1 -γ x j + γ 1 -γ l j=1 p j x j - N j=l+1 p j x j m x - l j=1 p j 1 -γ x j + γ 1 -γ l j=1 p j x j + N j=l+1 p j x j m x - l j=1 p j 1 -γ x j + γ 1 -γ l j=1 p j M + N j=l+1 p j M m x - l j=1 p j 1 -γ x j + 2 γ M m max + 2 γ M m max + 2 M , where we applied Eq. ( 233) in the penultimate inequality. This is the statement of the lemma. The separation of the center (the arithmetic mean) m x of the first l from data X = (x l+1 , . . . , x N ) is ∆ m , defined as ∆ m = min j,l<j m T x m x -m T x x j = m T x m x -max j,l<j m T x x j . ( ) The center is separated from the other data x j with l < j if 0 < ∆ m . 
By the same arguments as in Eq. ( 140), ∆ m can also be expressed as ∆ m = min j,l<j 1 2 m x 2 -x j 2 + m x -x j 2 (237) = 1 2 m x 2 - 1 2 max j,l<j x j 2 -m x -x j 2 . For m x = x j we have ∆ m = 1/2 min j,l<j m x -x j 2 . Next we define the sphere where we want to apply Banach fixed point theorem. Definition 4 (Sphere S m ). The sphere S m is defined as S m := ξ | ξ -m x 1 β m max . ( ) Lemma A10. With ξ given, if the assumptions A1: ξ is inside sphere: ξ ∈ S m , A2: the center m x is well separated from other data x j with l < j: ∆ m ≥ 2 M β m max - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } , A3: the distance m max of similar patterns to the center is sufficient small: β m 2 max 1 hold, then f (ξ) ∈ S m . Therefore, under conditions (A2) and (A3), f is a mapping from S m into S m . Proof. We need the separation ∆m of ξ from the rest of the data, which is the last N -l data points X = (x l+1 , . . . , x N ). ∆m = min j,l<j ξ T m x -ξ T x j . ( ) Using the Cauchy-Schwarz inequality, we obtain for l + 1 j N : ξ T x j -m T x x j ξ -m x x j ξ -m x M . ( ) We have the lower bound ∆m ≥ min j,l<j m T x m x -ξ -m x M -m T x x j + ξ -m x M (243) = -2 ξ -m x M + min j,l<j m T x m x -m T x x j = ∆ m -2 ξ -m x M ≥ ∆ m -2 M β m max , where we used the assumption (A1) of the lemma. From the proof in Lemma A9 we have l i=1 p i ≥ 1 -(N -l) exp(-β ∆m ) = 1 -˜ , ( ) N i=l+1 p i (N -l) exp(-β ∆m ) = ˜ . ( ) Lemma A9 states that m x -f (ξ) m max + 2 ˜ M (246) m max + 2 (N -l) exp(-β ∆m ) M . m max + 2 (N -l) exp(-β (∆ m -2 M β m max )) M . Therefore, we have m x -f (ξ) m max + 2 (N -l) exp -β (∆ m -2 M β m max ) M (247) m max + 2 (N -l) exp -β 2 M β m max - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } -2 M β m max M = m max + 2 (N -l) 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } M m max + 1 -β m 2 max β m max = 1 β m max , where we used assumption (A2) of the lemma. Therefore, f (ξ) is a mapping from the sphere S m into the sphere S m . 
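The sphere-mapping property, and the resulting metastable fixed point near the cluster mean, can be illustrated numerically. A pure-Python sketch with assumed toy values: two similar patterns form a cluster, one pattern is well separated, and a query near the cluster center $m_x$ stays near $m_x$ under the iteration.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def f(X, xi, beta):
    # f(xi) = X softmax(beta * X^T xi)
    p = softmax([beta * sum(a * b for a, b in zip(x, xi)) for x in X])
    return [sum(p[i] * x[k] for i, x in enumerate(X)) for k in range(len(xi))]

# a cluster of two similar patterns and one well-separated pattern
X = [[3.0, 0.2, 0.0], [3.0, -0.2, 0.0], [0.0, 0.0, 3.0]]
m_x = [3.0, 0.0, 0.0]                 # center of the cluster
beta = 1.0
xi = [2.9, 0.05, 0.1]                 # query close to the cluster center
for _ in range(10):
    xi = f(X, xi, beta)
d_center = math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, m_x)))
# the iterate settles near m_x (a metastable state averaging over the cluster),
# far away from the separated pattern x3
```

The fixed point is not one of the stored patterns but (approximately) their cluster mean, which is exactly the metastable state analyzed here.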
m max = max 1 i l x i -m x = max 1 i l x i -1/l l j=1 x j (249) = max 1 i l 1/l l j=1 (x i -x j ) max 1 i,j l x i -x j (251) max 1 i l x i + max 1 j l x i (252) 2M •Contraction mapping. For applying Banach fixed point theorem we need to show that f is contraction in the compact environment S m . Lemma A11. Assume that A1: ∆ m ≥ 2 M β m max - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } , : β m 2 max 1 , ( ) then f is a contraction mapping in S m . Proof. The version of the mean value theorem Lemma A32 states for the symmetric J m = 1 0 J(λξ + (1 -λ)m x ) dλ: f (ξ) = f (m x ) + J m (ξ -m x ) . In complete analogy to Lemma A6, we get: f (ξ) -f (m x ) J m 2 ξ -m x . We define ξ = λξ + (1 -λ)m x for some λ ∈ [0, 1]. We need the separation ∆m of ξ from the rest of the data, which is the last N -l data points X = (x l+1 , . . . , x N ). ∆m = min j,l<j ξT m x -ξT x j . (258) From the proof in Lemma A9 we have ˜ = (N -l) exp(-β ∆m ) , l i=1 p i ( ξ) ≥ 1 -(N -l) exp(-β ∆m ) = 1 -˜ , ( ) N i=l+1 p i ( ξ) (N -l) exp(-β ∆m ) = ˜ . ( ) We first compute an upper bound on ˜ . Using the Cauchy-Schwarz inequality, we obtain for l + 1 j N : ξT x j -m T x x j ξ -m x x j ξ -m x M . ( ) We have the lower bound on ∆m : ∆m ≥ min j,l<j m T x m x -ξ -m x M -m T x x j + ξ -m x M (263) = -2 ξ -m x M + min j,l<j m T x m x -m T x x j = ∆ m -2 ξ -m x M ≥ ∆ m -2 ξ -m x M . where we used ξm x = λ ξm x ξm x . We obtain the upper bound on ˜ : ˜ (N -l) exp (-β (∆ m -2 ξ -m x M )) (264) (N -l) exp -β ∆ m - 2 M β m max . where we used that in the sphere S i holds: ξ -m x 1 β m max , therefore 2 ξ -m x M 2 M β m max . Next we compute a lower bound on ˜ and to this end start with the upper bound on ∆m using the same arguments as in Eq. ( 158) in combination with Eq. ( 266). ∆m ≥ min j,l<j m T x m x + ξ -m x M -m T x x j -ξ -m x M (267) = 2 ξ -m x M + min j,l<j m T x m x -m T x x j = ∆ m + 2 ξ -m x M ≥ ∆ m + 2 ξ -m x M . where we used ξm x = λ ξm x ξm x . 
We obtain the lower bound on ˜ : ˜ ≥ (N -l) exp -β ∆ m + 2 M β m max , where we used that in the sphere S i holds: ξ -m x 1 β m max , ( ) therefore 2 ξ -m x M 2 M β m max . ( ) From Lemma A8 we have J( ξ) 2 β m 2 max + ˜ 2 (2 -˜ ) M 2 (271) = β m 2 max + ˜ 4 M 2 -2 ˜ 2 M 2 β m 2 max + (N -l) exp -β ∆ m - 2 M β m max 4 M 2 - 2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 . The bound Eq. ( 271) holds for the mean J m , too, since it averages over J( ξ): J m 2 β m 2 max + (N -l) exp -β ∆ m - 2 M β m max 4 M 2 - (272) 2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 . The assumption of the lemma is ∆ m ≥ 2 M β m max - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } , Therefore, we have ∆ m - 2 M β m max ≥ - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } . Therefore, the spectral norm J m 2 can be bounded by: J m 2 (275) β m 2 max + (N -l) exp -β - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } 4 M 2 -2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 = β m 2 max + (N -l) exp ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } 4 M 2 -2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 = β m 2 max + (N -l) 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } 4 M 2 - 2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 = βm 2 max + 1 -β m 2 max max{m max , 2 M } 2 M - β 2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 βm 2 max + 1 -β m 2 max -β 2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 = 1 -β 2 (N -l) 2 exp -2 β ∆ m + 2 M β m max M 2 < 1 . For the last but one inequality we used 2M max{m max , 2M }. Therefore, f is a contraction mapping in S m . •Banach Fixed Point Theorem. Now we have all ingredients to apply Banach fixed point theorem. Lemma A12. Assume that A1: ∆ m ≥ 2 M β m max - 1 β ln 1 -β m 2 max 2 β (N -l) M max{m max , 2 M } , : β m 2 max 1 , then f has a fixed point in S m . Proof. We use Banach fixed point theorem: Lemma A10 says that f maps from the compact set S m into the same compact set S m . Lemma A11 says that f is a contraction mapping in S m . •Contraction mapping with a fixed point. 
We assume that the first $l$ patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define:
$$M := \max_i \| x_i \| , \quad \gamma := \sum_{i=l+1}^N p_i , \tag{282}$$
$$1 - \gamma = \sum_{i=1}^l p_i \ge 1 - \epsilon , \quad \tilde{p}_i := \frac{p_i}{1 - \gamma} \le p_i / (1 - \epsilon) , \quad \sum_{i=1}^l \tilde{p}_i = 1 ,$$
$$m_x = \frac{1}{l} \sum_{i=1}^l x_i , \quad m_{\max} = \max_{1 \le i \le l} \| x_i - m_x \| .$$
$M$ is an upper bound on the Euclidean norm of the patterns, which are vectors. $\epsilon$ is an upper bound on the probability $\gamma$ of not choosing one of the first $l$ patterns, while $1 - \epsilon$ is a lower bound on the probability $1 - \gamma$ of choosing one of the first $l$ patterns. $m_x$ is the arithmetic mean (the center) of the first $l$ patterns, and $m_{\max}$ is the maximal distance of these patterns to the center $m_x$. $\tilde{p}$ is the probability $p$ normalized to the first $l$ patterns. The variance of the first $l$ patterns is
$$\mathrm{Var}_{\tilde{p}}[x_{1:l}] = \sum_{i=1}^l \tilde{p}_i x_i x_i^T - \left( \sum_{i=1}^l \tilde{p}_i x_i \right) \left( \sum_{i=1}^l \tilde{p}_i x_i \right)^T \tag{285}$$
$$= \sum_{i=1}^l \tilde{p}_i \left( x_i - \sum_{j=1}^l \tilde{p}_j x_j \right) \left( x_i - \sum_{j=1}^l \tilde{p}_j x_j \right)^T .$$
We have shown that a fixed point exists. We want to know how fast the iteration converges to the fixed point. Let $m_x^*$ be the fixed point of the iteration $f$ in the sphere $S_m$. Using the mean value theorem Lemma A32, we have with $J^m = \int_0^1 J(\lambda \xi + (1-\lambda) m_x^*) \, d\lambda$:
$$\| f(\xi) - m_x^* \| = \| f(\xi) - f(m_x^*) \| \le \| J^m \|_2 \, \| \xi - m_x^* \| . \tag{286}$$
According to Lemma A8, the following bounds on the norm $\| J \|_2$ of the Jacobian of the fixed point iteration hold. The $\gamma$-bound for $\| J \|_2$ is
$$\| J \|_2 \le \beta \left( (1 - \gamma) m_{\max}^2 + 2 \gamma (2 - \gamma) M^2 \right) , \tag{287}$$
while the $\epsilon$-bound for $\| J \|_2$ is:
$$\| J \|_2 \le \beta \left( m_{\max}^2 + 2 \epsilon (2 - \epsilon) M^2 \right) . \tag{288}$$
From the last condition we require for a contraction mapping:
$$\beta m_{\max}^2 < 1 . \tag{289}$$
We want to see how large $\epsilon$ is. The separation of the center $m_x$ from the data $X = (x_{l+1}, \ldots, x_N)$ is
$$\Delta_m = \min_{j, l < j} \left( m_x^T m_x - m_x^T x_j \right) = m_x^T m_x - \max_{j, l < j} m_x^T x_j . \tag{290}$$
We need the separation $\tilde{\Delta}_m$ of $\tilde{x} = \lambda \xi + (1-\lambda) m_x^*$ from the data:
$$\tilde{\Delta}_m = \min_{j, l < j} \left( \tilde{x}^T m_x - \tilde{x}^T x_j \right) .$$
We compute a lower bound on $\tilde{\Delta}_m$. Using the Cauchy-Schwarz inequality, we obtain for $l+1 \le j \le N$:
$$\left| \tilde{x}^T x_j - m_x^T x_j \right| \le \| \tilde{x} - m_x \| \, \| x_j \| \le \| \tilde{x} - m_x \| \, M .$$
We have the lower bound
$$\tilde{\Delta}_m \ge \min_{j, l < j} \left( m_x^T m_x - \| \tilde{x} - m_x \| M - \left( m_x^T x_j + \| \tilde{x} - m_x \| M \right) \right) \tag{293}$$
$$= - 2 \| \tilde{x} - m_x \| M + \min_{j, l < j} \left( m_x^T m_x - m_x^T x_j \right) = \Delta_m - 2 \| \tilde{x} - m_x \| M .$$
Since
$$\| \tilde{x} - m_x \| = \| \lambda \xi + (1-\lambda) m_x^* - m_x \| \le \lambda \| \xi - m_x \| + (1-\lambda) \| m_x^* - m_x \| \le \max\{ \| \xi - m_x \| , \| m_x^* - m_x \| \} , \tag{294}$$
we have
$$\tilde{\Delta}_m \ge \Delta_m - 2 \max\{ \| \xi - m_x \| , \| m_x^* - m_x \| \} \, M . \tag{295}$$
Analogously to the single-pattern case, we obtain
$$\epsilon = (N-l) \exp(-\beta (\Delta_m - 2 \max\{ \| \xi - m_x \| , \| m_x^* - m_x \| \} M)) .$$

A.1.6 PROPERTIES OF FIXED POINTS NEAR STORED PATTERNS

In Subsection A.1.5.3 stable states that are fixed points near the stored patterns were considered. We now consider this case further. In the first subsection we investigate the storage capacity if all patterns are sufficiently separated, so that metastable states do not appear. In the next subsection we look into the number of updates required and the error when retrieving the stored patterns. For metastable states the same analyses apply if each metastable state is treated as one state, like one pattern. We see a trade-off that is known from classical Hopfield networks and also holds for modern Hopfield networks: small separation $\Delta_i$ of the pattern $x_i$ from the other patterns gives high storage capacity; however, the convergence speed is lower and the retrieval error higher. In contrast, large separation $\Delta_i$ of the pattern $x_i$ from the other patterns allows the retrieval of patterns with one update step and exponentially small error.

A.1.6.1 Exponentially Many Patterns Can Be Stored. From Subsection A.1.5.3 we need some definitions. We assume to have $N$ patterns; the separation of pattern $x_i$ from the other patterns $\{x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_N\}$ is $\Delta_i$, defined as
$$\Delta_i = \min_{j, j \ne i} \left( x_i^T x_i - x_i^T x_j \right) = x_i^T x_i - \max_{j, j \ne i} x_i^T x_j . \tag{297}$$
The pattern is separated from the other data if $0 < \Delta_i$. The separation $\Delta_i$ can also be expressed as
$$\Delta_i = \min_{j, j \ne i} \frac{1}{2} \left( \| x_i \|^2 - \| x_j \|^2 + \| x_i - x_j \|^2 \right) \tag{298}$$
$$= \frac{1}{2} \| x_i \|^2 - \frac{1}{2} \max_{j, j \ne i} \left( \| x_j \|^2 - \| x_i - x_j \|^2 \right) .$$
For $\| x_i \| = \| x_j \|$ we have $\Delta_i = \frac{1}{2} \min_{j, j \ne i} \| x_i - x_j \|^2$. The sphere $S_i$ with center $x_i$ is defined as
$$S_i = \left\{ \xi \mid \| \xi - x_i \| \le \frac{1}{\beta N M} \right\} .$$
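The trade-off described above can be illustrated numerically: increasing the pattern norms (and hence the separation $\Delta_i$) shrinks the one-step retrieval error by orders of magnitude. A pure-Python sketch with assumed toy patterns:

```python
import math

def one_step_error(scale, beta=1.0):
    # two orthogonal patterns of norm `scale`; the separation grows with scale^2
    X = [[scale, 0.0], [0.0, scale]]
    xi = [0.8 * scale, 0.2 * scale]          # query in the basin of x1
    dots = [beta * (x[0] * xi[0] + x[1] * xi[1]) for x in X]
    m = max(dots)
    e = [math.exp(v - m) for v in dots]
    p = [v / sum(e) for v in e]
    y = [p[0] * X[0][0] + p[1] * X[1][0], p[0] * X[0][1] + p[1] * X[1][1]]
    return math.sqrt((y[0] - X[0][0]) ** 2 + (y[1] - X[0][1]) ** 2)

err_small_sep = one_step_error(2.0)   # smaller separation
err_large_sep = one_step_error(4.0)   # larger separation
# larger separation -> much smaller retrieval error after a single update
```

Doubling the norm roughly quadruples the separation, and the one-step error drops exponentially with it, matching the claim about one-update retrieval.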
The maximal length of a pattern is $M = \max_i \| x_i \|$. We next define what we mean by storing and retrieving a pattern.

Definition 5 (Pattern Stored and Retrieved). We assume that around every pattern $x_i$ a sphere $S_i$ is given. We say $x_i$ is stored if there is a single fixed point $x_i^* \in S_i$ to which all points $\xi \in S_i$ converge, and $S_i \cap S_j = \emptyset$ for $i \ne j$. We say $x_i$ is retrieved for a given $\epsilon$ if the iteration (update rule) Eq. (92) gives a point $\tilde{x}_i$ that is at least $\epsilon$-close to the single fixed point $x_i^* \in S_i$. The retrieval error is $\| \tilde{x}_i - x_i \|$.

The sphere $S_i$ around pattern $x_i$ can be an arbitrary sphere and need not be the specific sphere defined in Def. 3. For a query $\xi \in S_i$ to converge to a fixed point $x_i^* \in S_i$, the application of the Banach fixed point theorem and the contraction mapping property required the following inequality:
$$\Delta_i \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 (N-1) N \beta M^2 \right) . \tag{301}$$
This is the assumption in Lemma A7 that ensures a fixed point in sphere $S_i$. Since replacing $(N-1) N$ by $N^2$ gives
$$\frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 N^2 \beta M^2 \right) > \frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 (N-1) N \beta M^2 \right) ,$$
the inequality follows from the following master inequality:
$$\Delta_i \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 N^2 \beta M^2 \right) . \tag{302}$$
If we assume that $S_i \cap S_j \ne \emptyset$ with $i \ne j$, then the triangle inequality with a point from the intersection gives
$$\| x_i - x_j \| \le \frac{2}{\beta N M} . \tag{303}$$
Therefore, using the Cauchy-Schwarz inequality, we have:
$$\Delta_i \le x_i^T (x_i - x_j) \le \| x_i \| \, \| x_i - x_j \| \le M \frac{2}{\beta N M} = \frac{2}{\beta N} . \tag{304}$$
The last inequality is a contradiction to Eq. (302) if we assume that
$$1 < 2 (N-1) N \beta M^2 . \tag{305}$$
With this assumption, the spheres $S_i$ and $S_j$ do not intersect. Therefore, each $x_i$ has its separate fixed point in $S_i$. We define
$$\Delta_{\min} = \min_{1 \le i \le N} \Delta_i \tag{306}$$
to obtain the master inequality
$$\Delta_{\min} \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 N^2 \beta M^2 \right) . \tag{307}$$
•Patterns on a sphere. For simplicity and in accordance with the results for the classical Hopfield network, we assume all patterns to lie on a sphere with radius $M$: $\forall i : \| x_i \| = M$. Under assumption Eq. (305) we only have to show that the master inequality Eq. (307) is fulfilled for each $x_i$ in order to have a separate fixed point near each $x_i$. We define $\alpha_{ij}$ as the angle between $x_i$ and $x_j$. The minimal angle $\alpha_{\min}$ between two data points is
$$\alpha_{\min} = \min_{1 \le i < j \le N} \alpha_{ij} . \tag{309}$$
On the sphere with radius $M$ we have
$$\Delta_{\min} = \min_{1 \le i < j \le N} M^2 (1 - \cos(\alpha_{ij})) = M^2 (1 - \cos(\alpha_{\min})) , \tag{310}$$
therefore it is sufficient to show the master inequality on the sphere:
$$M^2 (1 - \cos(\alpha_{\min})) \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 N^2 \beta M^2 \right) . \tag{311}$$
Under assumption Eq. (305) we only have to show that the master inequality Eq. (307) is fulfilled for $\Delta_{\min}$. We consider patterns on the sphere, so the master inequality Eq. (307) becomes Eq. (311). First we show results when the pattern positions on the sphere are constructed such that $\Delta_{\min}$ is ensured. Then we move on to random patterns on the sphere, where $\Delta_{\min}$ becomes a random variable.

•Storage capacity for patterns placed on the sphere. The next theorem states how many patterns we can store (fixed point with attraction basin near the pattern) if we are allowed to place them on the sphere.

Theorem A3 (Storage Capacity (M=2): Placed Patterns). We assume $\beta = 1$ and patterns on the sphere with radius $M$. If $M = 2 \sqrt{d-1}$ and the dimension $d$ of the space is $d \ge 4$, or if $M = 1.7 \sqrt{d-1}$ and the dimension $d$ of the space is $d \ge 50$, then the number of patterns $N$ that can be stored (fixed point with attraction basin near the pattern) is at least
$$N = 2^{2(d-1)} . \tag{312}$$
Proof. We have to show that the master inequality Eq. (311) holds. We place the patterns equidistantly on the sphere such that each pattern is separated from its nearest patterns by the angle $\alpha_{\min}$:
$$\forall i : \min_{j, j \ne i} \alpha_{ij} = \alpha_{\min} .$$
In a $d$-dimensional space we can place
$$N = \left( \frac{2\pi}{\alpha_{\min}} \right)^{d-1}$$
points on the sphere in this way: in a spherical coordinate system a pattern differs from its closest patterns by an angle $\alpha_{\min}$, and there are $d-1$ angles. Solving for $\alpha_{\min}$ gives
$$\alpha_{\min} = \frac{2\pi}{N^{1/(d-1)}} .$$
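The counting argument can be checked numerically: with $\beta = 1$, $M = 2\sqrt{d-1}$, and $N = 2^{2(d-1)}$ placed patterns, the minimal angle is $\alpha_{\min} = \pi/2$, and the master inequality Eq. (311) holds for $d \ge 4$. A pure-Python sketch (the tested range of $d$ is an arbitrary illustrative choice):

```python
import math

def master_inequality_holds(d, beta=1.0):
    # patterns placed on the sphere: M = 2 sqrt(d-1), N = 2^(2(d-1)),
    # so alpha_min = 2 pi / N^(1/(d-1)) = pi / 2
    M2 = 4.0 * (d - 1)
    N = 2 ** (2 * (d - 1))
    lhs = M2 * (1.0 - math.cos(math.pi / 2.0))
    rhs = 2.0 / (beta * N) + math.log(2.0 * N * N * beta * M2) / beta
    return lhs >= rhs

results = [master_inequality_holds(d) for d in range(4, 31)]
# the master inequality holds for every tested dimension d >= 4
```

The left-hand side grows like $4(d-1)$ while the right-hand side grows like $4(d-1)\ln 2$, so the gap widens with the dimension $d$.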
The number of patterns that can be stored is determined by the largest $N$ that fulfills
$$M^2 \left( 1 - \cos\left( \frac{2\pi}{N^{1/(d-1)}} \right) \right) \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln\left( 2 N^2 \beta M^2 \right) . \tag{317}$$
We set $N = 2^{2(d-1)}$ and obtain for Eq. (317):
$$M^2 \left( 1 - \cos\left( \frac{\pi}{2} \right) \right) \ge \frac{2}{\beta \, 2^{2(d-1)}} + \frac{1}{\beta} \ln\left( 2 \beta M^2 \right) + \frac{1}{\beta} \, 4 (d-1) \ln 2 . \tag{318}$$
This inequality is equivalent to
$$\beta M^2 \ge \frac{1}{2^{2(d-1)-1}} + \ln\left( 2 \beta M^2 \right) + 4 (d-1) \ln 2 . \tag{319}$$
The last inequality can be fulfilled with $M = K \sqrt{d-1}$ and a proper $K$. For $\beta = 1$, $d = 4$ and $K = 2$ the inequality is fulfilled. The left-hand side minus the right-hand side is $4(d-1) - 1/2^{2(d-1)-1} - \ln(8(d-1)) - 4(d-1) \ln 2$. Its derivative with respect to $d$ is strictly positive. Therefore, the inequality holds for $d \ge 4$. For $\beta = 1$, $d = 50$ and $K = 1.7$ the inequality is fulfilled. The left-hand side minus the right-hand side is $2.89(d-1) - 1/2^{2(d-1)-1} - \ln(5.78(d-1)) - 4(d-1) \ln 2$. Its derivative with respect to $d$ is strictly positive. Therefore, the inequality holds for $d \ge 50$.

If we want to store considerably more patterns, then we have to increase the length of the vectors or the dimension of the space in which the vectors live. The next theorem shows results for the number of patterns $N$ with $N = 2^{3(d-1)}$.

Theorem A4 (Storage Capacity (M=5): Placed Patterns). We assume $\beta = 1$ and patterns on the sphere with radius $M$. If $M = 5 \sqrt{d-1}$ and the dimension $d$ of the space is $d \ge 3$, or if $M = 4 \sqrt{d-1}$ and the dimension $d$ of the space is $d \ge 13$, then the number of patterns $N$ that can be stored (fixed point with attraction basin near the pattern) is at least
$$N = 2^{3(d-1)} . \tag{320}$$
Proof. We set $N = 2^{3(d-1)}$ and obtain for Eq. (317):
$$M^2 \left( 1 - \cos\left( \frac{\pi}{4} \right) \right) \ge \frac{2}{\beta \, 2^{3(d-1)}} + \frac{1}{\beta} \ln\left( 2 \beta M^2 \right) + \frac{1}{\beta} \, 6 (d-1) \ln 2 . \tag{321}$$
This inequality is equivalent to
$$\beta M^2 \left( 1 - \frac{\sqrt{2}}{2} \right) \ge \frac{1}{2^{3(d-1)-1}} + \ln\left( 2 \beta M^2 \right) + 6 (d-1) \ln 2 . \tag{322}$$
The last inequality can be fulfilled with $M = K \sqrt{d-1}$ and a proper $K$. For $\beta = 1$, $d = 13$ and $K = 4$ the inequality is fulfilled. The left-hand side minus the right-hand side is $4.686292(d-1) - 1/2^{3(d-1)-1} - \ln(32(d-1)) - 6(d-1) \ln 2$.
Its derivative with respect to $d$ is strictly positive. Therefore, the inequality holds for $d \ge 13$. For $\beta = 1$, $d = 3$ and $K = 5$ the inequality is fulfilled. The left-hand side minus the right-hand side is $7.32233(d-1) - 1/2^{3(d-1)-1} - \ln(50(d-1)) - 6(d-1) \ln 2$. Its derivative with respect to $d$ is strictly positive. Therefore, the inequality holds for $d \ge 3$.

•Storage capacity for random patterns on the sphere. Next we investigate random points on the sphere. Under assumption Eq. (305) we have to show that the master inequality Eq. (311) is fulfilled for $\alpha_{\min}$, which is now a random variable. We use results on the distribution of the minimal angle between random patterns on a sphere according to Cai et al. (2013) and Brauchart et al. (2018). Theorem 2 in Cai et al. (2013) gives the distribution of the minimal angle for random patterns on the unit sphere. Proposition 3.5 in Brauchart et al. (2018) gives a lower bound on the probability of the minimal angle being larger than a given constant. We require this proposition to derive the probability of the patterns having a minimal angle $\alpha_{\min}$. Proposition 3.6 in Brauchart et al. (2018) gives the expectation of the minimal angle. We will prove high-probability bounds for the expected storage capacity. We need the following tail bound on $\alpha_{\min}$ (the minimal angle of random patterns on a sphere):

Lemma A13 ((Brauchart et al., 2018)). Let $d$ be the dimension of the pattern space,
$$\kappa_d := \frac{1}{d \sqrt{\pi}} \, \frac{\Gamma((d+1)/2)}{\Gamma(d/2)} , \tag{323}$$
and $\delta > 0$ such that $\frac{\kappa_{d-1}}{2} \, \delta^{d-1} \le 1$. Then
$$\Pr\left( N^{\frac{2}{d-1}} \alpha_{\min} \ge \delta \right) \ge 1 - \frac{\kappa_{d-1}}{2} \, \delta^{d-1} . \tag{324}$$
Proof. The statement of the lemma is Eq. (3-6) from Proposition 3.5 in Brauchart et al. (2018).

Next we derive upper and lower bounds on the constant $\kappa_d$, since we require them later for proving storage capacity bounds.

Lemma A14. For $\kappa_d$ defined in Eq. (323) we have the following bounds for every $d \ge 1$:
$$\frac{1}{\exp(1/6) \sqrt{e \pi d}} \le \kappa_d \le \frac{\exp(1/12)}{\sqrt{2 \pi d}} < 1 . \tag{325}$$
Proof.
We use for x > 0 the following bound related to Stirling's approximation formula for the gamma function, c.f. (Olver et al., 2010, (5.6 .1)): 1 < Γ(x) (2 π) -1 2 x 1 2 -x exp(x) < exp 1 12 x . ( ) Using Stirling's formula Eq. ( 326), we upper bound κ d : κ d = 1 d √ π Γ((d + 1)/2) Γ(d/2) < 1 d √ π exp 1 6(d+1) exp -d+1 2 d+1 2 d 2 exp -d 2 d 2 d 2 -1 2 (327) = 1 d √ π e exp 1 6(d + 1) 1 + 1 d d 2 d 2 exp 1 12 √ 2 π √ d . For the first inequality, we applied Eq. ( 326), while for the second we used (1 + 1 d ) d < e for d ≥ 1. Next, we lower bound κ d by again applying Stirling's formula Eq. ( 326): κ d = 1 d √ π Γ((d + 1)/2) Γ(d/2) > 1 d √ π exp -d+1 2 d+1 2 d 2 exp 1 6 d exp -d 2 d 2 d 2 -1 2 (328) = 1 d √ π e exp 1 6 d 1 + 1 d d 2 d 2 ≥ 1 exp 1 6 √ e π d , where the last inequality holds because of monotonicity of (1 + 1 d ) d and using the fact that for d = 1 it takes on the value 2. We require a bound on cos to bound the master inequality Eq. ( 311). Using Eq. ( 329), we have: 1 -cos(α min ) ≥ 1 5 α 2 min . Therefore, with probability 1 -p the storage capacity is largest N that fulfills Pr M 2 α 2 min 5 ≥ 2 β N + 1 β ln 2 N 2 β M 2 ≥ 1 -p . ( ) This inequality is equivalent to Pr N 2 d-1 α min ≥ √ 5 N 2 d-1 M 2 β N + 1 β ln 2 N 2 β M 2 1 2 ≥ 1 -p . ( ) We use Eq. ( 324) to obtain: Pr N 2 d-1 α min ≥ √ 5 N 2 d-1 M 2 β N + 1 β ln 2 N 2 β M 2 1 2 (340) ≥ 1 - κ d-1 2 5 d-1 2 N 2 M -(d-1) 2 β N + 1 β ln 2 N 2 β M 2 d-1 2 . For Eq. ( 339) to be fulfilled, it is sufficient that κ d-1 2 5 d-1 2 N 2 M -(d-1) 2 β N + 1 β ln 2 N 2 βM 2 d-1 2 -p 0 . ( ) If we insert the assumption Eq. ( 334) of the theorem into Eq. ( 335), then we obtain N ≥ 2. We now apply the upper bound κ d-1 /2 < κ d-1 < 1 from Eq. ( 325) and the upper bound 2 βN 1 β from N ≥ 2 to inequality Eq. ( 341). In the resulting inequality we insert N = √ pc d-1 4 to check whether it is fulfilled with this special value of N and obtain: 5 d-1 2 p c d-1 2 M -(d-1) 1 β + 1 β ln 2 p c d-1 2 βM 2 d-1 2 p . 
( ) Dividing by p, inserting M = K √ d -1, and exponentiation of the left and right side by 2 d-1 gives: 5 c K 2 (d -1) 1 β + 1 β ln 2 β c d-1 2 p K 2 (d -1) -1 0 . ( ) After some algebraic manipulation, this inequality can be written as a c + c ln(c) -b 0 , where we used a := 2 d -1 (1 + ln(2 β K 2 p (d -1))) , b := 2 K 2 β 5 . We determine the value ĉ of c which makes the inequality Eq. ( 344) equal to zero. We solve a ĉ + ĉ ln(ĉ) -b = 0 (345) for ĉ: a ĉ + ĉ ln(ĉ) -b = 0 (346) ⇔ a + ln(ĉ) = b/ĉ ⇔ a + ln(b) + ln(ĉ/b) = b/ĉ ⇔ b/ĉ + ln(b/ĉ) = a + ln(b) ⇔ b/ĉ exp(b/ĉ) = exp(a + ln(b)) ⇔ b/ĉ = W 0 (exp(a + ln(b))) ⇔ ĉ = b W 0 (exp(a + ln(b)) , where W 0 is the upper branch of the Lambert W function (see Def. A6). Hence, the solution is ĉ = b W 0 (exp(a + ln(b)) . ( ) The solution exist, since the Lambert function W 0 (x) (Olver et al., 2010, (4.13 )) is defined for -1/e < x and we have 0 < exp(a + ln(b). Since ĉ fulfills inequality Eq. ( 344) and therefore also Eq. ( 342), we have a lower bound on the storage capacity N : N ≥ √ p ĉ d-1 4 . ( ) Next we aim at a lower bound on c which does not use the Lambert W function (Olver et al., 2010, (4.13) ). Therefore, we upper bound W 0 (exp(a + ln(b)) to obtain a lower bound on c, therefore, also a lower bound on the storage capacity N . The lower bound is given in the next corollary. Corollary A1. We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M = K √ d -1. We define a := 2 d -1 (1 + ln(2 β K 2 p (d -1))) , b := 2 K 2 β 5 . Using the omega constant Ω ≈ 0.56714329 we set c =    b ln Ω exp(a + ln(b)) + 1 Ω (1 + Ω) -1 for a + ln(b) 0 , b (a + ln(b)) -a + ln(b) a + ln(b) + 1 for a + ln(b) > 0 (349) and ensure c ≥ 2 √ p 4 d-1 . ( ) Then with probability 1 -p, the number of random patterns that can be stored is Proof. We lower bound the c defined in Theorem A5. According to (Hoorfar & Hassani, 2008, Theorem 2 .3) we have for any real u and y > 1 e : N ≥ √ p c d-1 4 . 
( W 0 (exp(u)) ln exp(u) + y 1 + ln(y) . ( ) To upper bound W 0 (x) for x ∈ [0, 1], we set y = 1/W 0 (1) = 1/Ω = exp Ω = -1/ ln Ω ≈ 1.76322 , ( ) where the Omega constant Ω is Ω = ∞ -∞ dt (e t -t) 2 + π 2 -1 -1 ≈ 0.56714329 . ( ) See for these equations the special values of the Lambert W function in Lemma A31. We have the upper bound on W 0 : W 0 (exp(u)) ln exp(u) + 1/Ω 1 + ln(1/Ω) = ln Ω exp(u) + 1 Ω(1 + Ω) . ( ) At the right hand side of interval [0, 1], we have u = 0 and exp(u) = 1 and get: ln Ω 1 + 1 Ω(1 + Ω) = ln 1 Ω = -ln (Ω) = Ω = W 0 (1) . (356) Therefore, the bound is tight at the right hand side of of interval [0, 1], that is for exp(u) = 1, i.e. u = 0. We have derived an bound for W 0 (exp(u)) with exp(u) ∈ [0, 1] or, equivalently, u ∈ [-∞, 0]. We obtain from Hoorfar & Hassani (2008, Corollary 2.6 ) the following bound on W 0 (exp(u)) for 1 < exp(u), or, equivalently 0 < u: W 0 (exp(u)) u u 1 + u . A lower bound on ĉ is obtained via the upper bounds Eq. ( 357) and Eq. ( 355) on W 0 as W 0 > 0. We set u = a + ln(b) and obtain W 0 (exp(a + ln(b)))    ln Ω exp(a + ln(b)) + 1 Ω (1 + Ω) -1 for a + ln(b) 0 , (a + ln(b)) -a + ln(b) a + ln(b) + 1 for a + ln(b) > 0 (358) We insert this bound into Eq. ( 347), the solution for ĉ, to obtain the statement of the theorem. •Exponential storage capacity: the dimension d of the space as a function of the parameter β, the radius of the sphere M , and the probability p. We express the number N of stored patterns by an exponential function with base c > 1 and an exponent linear in d. We derive constraints on the dimension d of the space as a function of β, the radius of the sphere M , the probability p that all patterns can be stored, and the base of the exponential storage capacity. The following theorem gives this result. Theorem A6 (Storage Capacity (d computed): Random Patterns). We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M = K √ d -1. 
We define a := ln(c) 2 - K 2 β 5 c , b := 1 + ln 2 p β K 2 , d = 1 + 1 a W (a exp(-b)) for a = 0 , 1 + exp(-b) for a = 0 , ( ) where W is the Lambert W function (Olver et al., 2010, (4.13) ). For 0 < a the function W is the upper branch W 0 and for a < 0 we use the lower branch W -1 . If we ensure that c ≥ 2 √ p 4 d-1 , - 1 e a exp(-b) , ( ) then with probability 1 -p, the number of random patterns that can be stored is N ≥ √ p c d-1 4 . ( ) Proof. We consider the probability that the master inequality Eq. ( 311) is fulfilled: Pr M 2 (1 -cos(α min ))) ≥ 2 β N + 1 β ln 2 N 2 β M 2 ≥ 1 -p . ( ) Using Eq. ( 329), we have: 1 -cos(α min ) ≥ 1 5 α 2 min . Therefore, with probability 1 -p the storage capacity is largest N that fulfills Pr M 2 α 2 min 5 ≥ 2 β N + 1 β ln 2 N 2 β M 2 ≥ 1 -p . ( ) This inequality is equivalent to Pr N 2 d-1 α min ≥ √ 5 N 2 d-1 M 2 β N + 1 β ln 2 N 2 β M 2 1 2 ≥ 1 -p . We use Eq. ( 324) to obtain: Pr N 2 d-1 α min ≥ √ 5 N 2 d-1 M 2 β N + 1 β ln 2 N 2 β M 2 1 2 (366) ≥ 1 - κ d-1 2 5 d-1 2 N 2 M -(d-1) 2 β N + 1 β ln 2 N 2 β M 2 d-1 2 . For Eq. ( 365) to be fulfilled, it is sufficient that κ d-1 2 5 d-1 2 N 2 M -(d-1) 2 β N + 1 β ln 2 N 2 βM 2 d-1 2 -p 0 . ( ) If we insert the assumption Eq. ( 360) of the theorem into Eq. ( 361), then we obtain N ≥ 2. We now apply the upper bound κ d-1 /2 < κ d-1 < 1 from Eq. ( 325) and the upper bound 2 βN 1 β from N ≥ 2 to inequality Eq. ( 367). In the resulting inequality we insert N = √ pc d-1 4 to check whether it is fulfilled with this special value of N and obtain: 5 d-1 2 p c d-1 2 M -(d-1) 1 β + 1 β ln 2 p c d-1 2 βM 2 d-1 2 p . Dividing by p, inserting M = K √ d -1, and exponentiation of the left and right side by 2 d-1 gives: 5 c K 2 (d -1) 1 β + 1 β ln 2 β c d-1 2 p K 2 (d -1) -1 0 . This inequality Eq. ( 369) can be reformulated as: 1 + ln 2 p β c d-1 2 K 2 (d -1) - (d -1) K 2 β 5 c 0 . ( ) Using a := ln(c) 2 - K 2 β 5 c , b := 1 + ln 2 p β K 2 , we write inequality Eq. 
( 370) as ln(d -1) + a (d -1) + b 0 . ( ) We determine the value d of d which makes the inequality Eq. ( 372) equal to zero. We solve ln( d -1) + a ( d -1) + b = 0 . ( ) for d For a = 0 we have ln( d -1) + a ( d -1) + b = 0 (374) ⇔ a ( d -1) + ln( d -1) = -b ⇔ ( d -1) exp(a ( d -1)) = exp(-b) ⇔ a ( d -1) exp(a ( d -1)) = a exp(-b) ⇔ a ( d -1) = W (a exp(-b)) ⇔ d -1 = 1 a W (a exp(-b)) ⇔ d = 1 + 1 a W (a exp(-b)) , where W is the Lambert W function (see Def. A6). For a > 0 we have to use the upper branch W 0 of the Lambert W function and for a < 0 we use the lower branch W -1 of the Lambert W function (Olver et al., 2010, (4.13) ). We have to ensure that -1/e a exp(-b) for a solution to exist. For a = 0 we have d = 1 + exp(-b). Hence, the solution is d = 1 + 1 a W (a exp(-b)) . ( ) Since d fulfills inequality Eq. ( 369) and therefore also Eq. ( 368), we have a lower bound on the storage capacity N : N ≥ √ p ĉ d-1 4 . ( ) Corollary A2. We assume a failure probability 0 < p 1 and randomly chosen patterns on the sphere with radius M = K √ d -1. We define a := ln(c) 2 - K 2 β 5 c , b := 1 + ln 2 p β K 2 , d = 1 + 1 a (-ln(-a) + b) , and ensure c ≥ 2 √ p 4 d-1 , - 1 e a exp(-b) , a < 0 , then with probability 1 -p, the number of random patterns that can be stored is N ≥ √ p c d-1 4 . ( ) Setting β = 1, K = 3, c = 2 and p = 0.001 yields d < 24. Proof. For a < 0 the Eq. ( 359) from Theorem (A6) can be written as d = 1 + W -1 (a exp(-b)) a = 1 + W -1 (-exp (-(-ln(-a) + b -1) -1)) a From Alzahrani & Salem (2018, Theorem 3.1) we get the following bound on W -1 : - e e -1 (u + 1) < W -1 (-exp(-u -1)) < -(u + 1) . ( ) for u > 0. We apply Eq. ( 381) to Eq. ( 380) with u = -ln(-a) + b -1. Since a < 0 we get d > 1 + -ln(-a) + b a . ( ) •Storage capacity for the expected minimal separation instead of the probability that all patterns can be stored. In contrast to the previous paragraph, we want to argue about the storage capacity for the expected minimal separation. 
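Theorem A5 gives the capacity base in closed form, ĉ = b / W₀(exp(a + ln b)) with N ≥ √p ĉ^{(d−1)/4}. A minimal numerical sketch in plain Python; the Newton-iteration Lambert solver and the parameter values (β = 1, K = 3, p = 0.001) are our own illustrative choices, not from the theorem:

```python
import math

def lambert_w0(x, iters=50):
    # Newton iteration for the upper branch W_0: solve w * exp(w) = x for x > 0 (our own helper)
    w = math.log(1.0 + x)  # standard starting guess
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1.0 + w))
    return w

def storage_capacity(d, beta=1.0, K=3.0, p=0.001):
    # a, b and the exact solution c = b / W0(exp(a + ln b)) as in Theorem A5,
    # then the capacity bound N >= sqrt(p) * c**((d-1)/4)
    a = 2.0 / (d - 1) * (1.0 + math.log(2.0 * beta * K**2 * p * (d - 1)))
    b = 2.0 * K**2 * beta / 5.0
    c = b / lambert_w0(math.exp(a + math.log(b)))
    return c, math.sqrt(p) * c ** ((d - 1) / 4.0)

c, N = storage_capacity(d=100)
print(c, N)  # base c > 1, so the capacity bound grows exponentially with d
```

Evaluating the same formula for increasing d illustrates the exponential growth of the bound.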
Therefore, we will use the following bound on the expectation of α min (minimal angle), which gives also a bound on the expected of ∆ min (minimal separation): Lemma A16 (Proposition 3.6 in Brauchart et al. (2018) ). We have the following lower bound on the expectation of α min : E N 2 d-1 α min ≥ Γ( d 2 ) 2(d -1) √ π Γ( d-1 2 ) -1 d-1 Γ(1 + 1 d -1 ) d -1 d-1 Γ(2 + 1 d-1 ) := C d-1 . ( ) The bound is valid for all N ≥ 2 and d ≥ 2. Let us start with some preliminary estimates. First of all we need some asymptotics for the constant C d-1 in Eq. ( 383): Lemma A17. The following estimate holds for d ≥ 2: C d ≥ 1 - ln(d + 1) d . ( ) Proof. The recursion formula for the Gamma function is (Olver et al., 2010, (5.5 .1)): Γ(x + 1) = x Γ(x) . ( ) We use Eq. ( 325) and the fact that d 1 d ≥ 1 for d ≥ 1 to obtain: C d ≥ (2 √ d) 1 d Γ(1 + 1 d ) (d + 1) -1 d Γ(2 + 1 d ) = (2 √ d) 1 d (d + 1) -1 d 1 -1 d > (d + 1) 1 d (386) = exp(- 1 d ln(d + 1)) ≥ 1 - 1 d ln(d + 1) , where in the last step we used the elementary inequality exp(x) ≥ 1 + x, which follows from the mean value theorem. The next theorem states the number of stored patterns for the expected minimal separation. Theorem A7 (Storage Capacity (expected separation): Random Patterns). We assume patterns on the sphere with radius M = K √ d -1 that are randomly chosen. Then for all values c ≥ 1 for which 1 5 (d -1) K 2 c -1 (1 - ln(d -1) (d -1) ) 2 ≥ 2 β c d-1 4 + 1 β ln 2 c d-1 2 β (d -1) K 2 (387) holds, the number of stored patterns for the expected minimal separation is at least N = c d-1 4 . ( ) The inequality Eq. ( 387) is e.g. fulfilled with β = 1, K = 3, c = 2 and d ≥ 17. Proof. Instead of considering the probability that the master inequality Eq. ( 311) is fulfilled we now consider whether this inequality is fulfilled for the expected minimal distance. We consider the expectation of the minimal distance ∆ min : E[∆ min ] = E[M 2 (1 -cos(α min )))] = M 2 (1 -E[cos(α min ))]) . 
For this expectation, the master inequality Eq. ( 311) becomes M² (1 − E[cos(α_min)]) ≥ 2/(β N) + (1/β) ln(2 N² β M²) . We want to find the largest N that fulfills this inequality. We apply Eq. ( 329) and Jensen's inequality to deduce the following lower bound: 1 − E[cos(α_min)] ≥ (1/5) E[α²_min] ≥ (1/5) E[α_min]² . Now we use Eq. ( 383) and Eq. ( 384) to arrive at E[α_min]² ≥ N^{−4/(d−1)} E[N^{2/(d−1)} α_min]² ≥ N^{−4/(d−1)} C²_{d−1} ≥ N^{−4/(d−1)} (1 − ln(d−1)/(d−1))² , for sufficiently large d. Thus, in order to fulfill Eq. ( 390), it is enough to find values that satisfy Eq. (387). A.1.6.2 Retrieval of Patterns with One Update and Small Retrieval Error. Retrieval of a pattern x_i for fixed point x*_i and query ξ is defined via an ε by ‖f(ξ) − x*_i‖ < ε, that is, the update is ε-close to the fixed point. The update rule retrieves a pattern with one update for well separated patterns, that is, when ∆_i is large. Theorem A8 (Pattern Retrieval with One Update). With query ξ, after one update the distance of the new point f(ξ) to the fixed point x*_i is exponentially small in the separation ∆_i. The precise bounds using the Jacobian J = ∂f(ξ)/∂ξ and its value J^m in the mean value theorem are: ‖f(ξ) − x*_i‖ ≤ ‖J^m‖₂ ‖ξ − x*_i‖ , ‖J^m‖₂ ≤ 2 β N M² (N − 1) exp(−β (∆_i − 2 max{‖ξ − x_i‖, ‖x*_i − x_i‖} M)) . For given ε and sufficiently large ∆_i, we have ‖f(ξ) − x*_i‖ < ε, that is, retrieval with one update. Proof. From Eq. ( 180) we have ‖J^m‖₂ ≤ 2 β N M² (N − 1) exp(−β (∆_i − 2 max{‖ξ − x_i‖, ‖x*_i − x_i‖} M)) . (395) After every iteration the mapped point f(ξ) is closer to the fixed point x*_i than the starting point ξ: ‖f(ξ) − x*_i‖ ≤ ‖J^m‖₂ ‖ξ − x*_i‖ . For given ε and sufficiently large ∆_i, we have ‖f(ξ) − x*_i‖ < ε, since ‖J^m‖₂ goes exponentially fast to zero with increasing ∆_i. We want to estimate how large ∆_i is. For x_i we have: ∆_i = min_{j, j≠i} (x_i^T x_i − x_i^T x_j) = x_i^T x_i − max_{j, j≠i} x_i^T x_j .
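Theorem A8's one-update retrieval is easy to demonstrate numerically. A minimal sketch with standard normally distributed pattern components (as in the estimate of ∆_i that follows); the values of d, N, β and the noise level are arbitrary illustrative choices:

```python
import math, random

random.seed(0)
d, N, beta = 64, 10, 1.0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# stored patterns (keys) with standard normally distributed components
X = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(N)]

def update(xi):
    # one step of the update rule: xi_new = X p with p = softmax(beta X^T xi)
    # (patterns stored as rows here)
    p = softmax([beta * dot(x, xi) for x in X])
    return [sum(p[i] * X[i][k] for i in range(N)) for k in range(d)]

target = X[0]
query = [t + 0.1 * random.gauss(0.0, 1.0) for t in target]  # noisy query near x_1
retrieved = update(query)
err_before = math.sqrt(sum((q - t) ** 2 for q, t in zip(query, target)))
err_after = math.sqrt(sum((r - t) ** 2 for r, t in zip(retrieved, target)))
print(err_before, err_after)  # a single update lands very close to the stored pattern
```

Because the separation ∆_1 scales with d for random patterns, the softmax is nearly one-hot and a single update already retrieves x_1 up to an exponentially small error.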
To estimate how large ∆ i is, assume vectors x ∈ R d and y ∈ R d that have as components standard normally distributed values. The expected value of the separation of two points with normally distributed components is E x T x -x T y = d j=1 E x 2 j + d j=1 E [x j ] d j=1 E [y j ] = d . The variance of the separation of two points with normally distributed components is Var x T x -x T y = E x T x -x T y 2 -d 2 (399) = d j=1 E x 4 j + d j=1,k=1,k =j E x 2 j E x 2 k -2 d j=1 E x 3 j E [y j ] - 2 d j=1,k=1,k =j E x 2 j E [x k ] E [y k ] + d j=1 E x 2 j E y 2 j + d j=1,k=1,k =j E [x j ] E [y j ] E [x k ] E [y k ] -d 2 = 3 d + d (d -1) + d -d 2 = 3 d . The expected value for the separation of two random vectors gives: J m 2 2 β N M 2 (N -1) exp(-β (d -2 max{ ξ -x i , x * i -x i } M )) . (400) For the exponential storage we set M = 2 √ d -1. We see the Lipschitz constant J m 2 decreases exponentially with the dimension. Therefore, f (ξ) -x * i is exponentially small after just one update. Therefore, the fixed point is well retrieved after one update. The retrieval error decreases exponentially with the separation ∆ i . Theorem A9 (Exponentially Small Retrieval Error). The retrieval error f (ξ) -x i of pattern x i is bounded by f (ξ) -x i 2 (N -1) exp(-β (∆ i -2 max{ ξ -x i , x * i -x i } M )) M (401) and for x i -x * i 1 2 β M together with x i -ξ 1 2 β M by x i -x * i 2 e (N -1) M exp(-β ∆ i ) . Proof. We compute the retrieval error which is just f (ξ) -x i . From Lemma A4 we have x i -f (ξ) 2 M , From Eq. ( 179) we have = (N -1) exp(-β (∆ i -2 max{ ξ -x i , x * i -x i } M )) . For x i -x * i 1 2 β M and x i -ξ 1 2 β M Eq. (404) gives e (N -1) M exp(-β ∆ i ) . A.1.7 LEARNING ASSOCIATIONS We consider three cases of learning associations, i.e. three cases of how sets are associated. (i) Non of the sets is mapped in an associative space. The raw state pattern r n is the state (query) pattern ξ n , i.e. 
ξ n = r n , and the raw stored pattern y s is the stored pattern (key), i.e. x s = y s . (ii) Either one of the sets is mapped to the space of the other set or an association matrix is learned. (iia) The state patterns are equal to the raw patterns, i.e. ξ n = r n , and raw stored patterns are mapped via W to the space of the state patterns, i.e. x s = W y s . (iib) The stored patterns are equal to the raw patterns, i.e. x s = y s , and raw state patterns are mapped via W to the space of the stored patterns, i.e. ξ n = W T r n . (iic) The matrix W is an association matrix. We will compute the derivative of the new state pattern with respect to W , which is valid for all sub-cases (iib)-(iic). (iii) Both set of patterns are mapped in a common associative space. A raw state pattern r n is mapped by W Q to a state pattern (query) ξ n , that is ξ n = W Q r n . A raw stored pattern y s is mapped via W K to stored pattern (key) x s , that is x s = W K y s . We will compute the derivative of the new state pattern with respect to both W Q and W K . A.1.7.1 Association of Raw Patterns -No Mapping in an Associative Space. The sets are associated via their raw patterns, i.e. the raw state pattern r n is the state (query) pattern ξ n , i.e. ξ n = r n , and raw stored pattern y s is the stored pattern (key), i.e. x s = y s . There is no mapping in an associative space. The update rule is ξ new = X p , where we used p = softmax(β X T ξ) . The derivative with respect to ξ is ∂ξ new ∂ξ = β X diag(p) -pp T X T The derivative with respect to X is ∂a T ξ new ∂X = a p T + β X diag(p) -pp T (ξ T a) . These derivatives allow to apply the chain rule if a Hopfield layer is integrated into a deep neural network. A.1.7.2 Learning an Association Matrix -Only One Set is Mapped in an Associative Space. Only one of the sets R or Y is mapped in the space of the patterns of the other set. 
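Before treating learned mappings, the derivative ∂ξ_new/∂ξ = β X (diag(p) − p pᵀ) Xᵀ stated in A.1.7.1 can be verified by finite differences. A minimal sketch; the entries of X and ξ are arbitrary small numbers chosen for illustration:

```python
import math

beta = 0.5
# stored patterns as columns of X (d = 3 rows, N = 4 patterns)
X = [[0.2, -0.1, 0.4, 0.0],
     [0.3, 0.5, -0.2, 0.1],
     [-0.4, 0.2, 0.1, 0.3]]
xi = [0.1, -0.2, 0.3]
d, N = 3, 4

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def new_state(xi):
    # xi_new = X p with p = softmax(beta X^T xi)
    p = softmax([beta * sum(X[a][i] * xi[a] for a in range(d)) for i in range(N)])
    return [sum(X[a][i] * p[i] for i in range(N)) for a in range(d)]

# analytic Jacobian: beta * X (diag(p) - p p^T) X^T
p = softmax([beta * sum(X[a][i] * xi[a] for a in range(d)) for i in range(N)])
J = [[beta * sum(X[a][i] * ((p[i] if i == j else 0.0) - p[i] * p[j]) * X[b][j]
                 for i in range(N) for j in range(N))
      for b in range(d)] for a in range(d)]

# central finite differences agree with the analytic Jacobian
h = 1e-5
for b in range(d):
    plus = list(xi); plus[b] += h
    minus = list(xi); minus[b] -= h
    fp, fm = new_state(plus), new_state(minus)
    for a in range(d):
        assert abs((fp[a] - fm[a]) / (2 * h) - J[a][b]) < 1e-6
print("Jacobian matches finite differences")
```

This is exactly the chain-rule ingredient needed when a Hopfield layer is embedded in a deep network.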
Case (a): the state patterns are equal to the raw patterns ξ n = r n and raw stored patterns are mapped via W to the space of the state patterns, i.e. x s = W y s . Case (b): the stored patterns are equal to the raw patterns x s = y s and raw state patterns are mapped via W to the space of the stored patterns, i.e. ξ n = W T r n . Case (c): the matrix W associates the sets R and Y . This case also includes that W T = W T K W Q , which is treated in next subsection. The next subsection focuses on a low rank approximation of W by defining the dimension d k of associative space and use the matrices W T K and W Q to define W , or equivalently to map R and Y into the associative space. From a mathematical point of view all these case are equal as they lead to the same update rule. Therefore, we consider in the following Case (a) with x s = W y s and ξ n = r n . Still, the following formula are valid for all three cases (a)-(c). The update rule is ξ new = W Y p , where we used p = softmax(β Y T W T ξ) . We consider the state (query) pattern ξ with result ξ new : ξ new = W Y p = W Y softmax(β Y T W T ξ) For multiple updates this update rule has to be used. However for a single update, or the last update we consider a simplified update rule. Since new state vector ξ new is projected by a weight matrix W V to another vector, we consider the simplified update rule: ξ new = Y p = Y softmax(β Y T W T ξ) The derivative with respect to W is ∂a T ξ new ∂W = ∂ξ new ∂W ∂a T ξ new ∂ξ new = ∂ξ new ∂(W T ξ) ∂(W T ξ) ∂W ∂a T ξ new ∂ξ new . ( ) ∂ξ new ∂(W T ξ) = β Y diag(p) -pp T Y T (415) ∂a T ξ new ∂ξ new = a . We have the product of the 3-dimensional tensor ∂(W T ξ)

/ ∂W

with the vector a which gives a 2dimensional tensor, i.e. a matrix: ∂(W T ξ) ∂W ∂a T ξ new ∂ξ new = ∂(W T ξ) ∂W a = ξ T aI . ( ) ∂a T ξ new ∂W = β Y diag(p) -pp T Y T (ξ T a) = J (ξ T a) , where J is the Jacobian of the update rule defined in Eq. (59). To obtain the derivative of the full update rule Eq. ( 412) we have to add the term a p T Y T and include the factor W to get ∂a T ξ new ∂W = a p T Y T + β W Y diag(p) -pp T Y T (ξ T a) = a p T Y T + W J (ξ T a) . A.1.7.3 Learning Two Association Mappings -Both Sets are Mapped in an Associative Space. Both sets R and Y are mapped in an associative space. Every raw state pattern r n is mapped via W Q to a state pattern (query) ξ n = W Q r n . Every raw stored pattern y s is mapped via W K to a stored pattern (key) x s = W K y s . In the last subsection we considered a single matrix W . For W T = W T K W Q we have the case of the last subsection. However in this subsection we are looking for a low rank approximation of W . Toward this end we define the dimension d k of associative space and use the matrices W T K and W Q to map to the associative space. The update rule is ξ new = X p , where we used p = softmax(β X T ξ) . We consider raw state patterns r n that are mapped to state patterns ξ n = W Q r n with Q T = Ξ = W Q R and raw stored pattern y s that are mapped to stored patterns x s = W K y s with K T = X = W K Y . The update rule is ξ new = W K Y p = W K Y softmax(β Y T W T K W Q r) . Since new state vector ξ new is projected by a weight matrix W V to another vector, we consider the simplified update rule: ξ new = Y p = Y softmax(β Y T W T K W Q r) . For the simplified update rule, the vector ξ new does not live in the associative space but in the space of raw stored pattern y. However W K would map it to the associative space. •Derivative with respect to W Q . The derivative with respect to W Q is ∂a T ξ new ∂W Q = ∂ξ new ∂W Q ∂a T ξ new ∂ξ new = ∂ξ new ∂(W Q r) ∂(W Q r) ∂W Q ∂a T ξ new ∂ξ new . 
( ) ∂ξ new ∂(W Q r) = β Y diag(p) -pp T Y T W T K (426) ∂a T ξ new ∂ξ new = a . We have the product of the 3-dimensional tensor ∂(W Q r) ∂W Q with the vector a which gives a 2dimensional tensor, i.e. a matrix: ∂(W Q r) ∂W Q ∂a T ξ new ∂ξ new = ∂(W Q r) ∂W Q a = r T a I . ( ) ∂a T ξ new ∂W Q = β Y diag(p) -pp T Y T W T K (r T a) = J W T K (r T a) , where J is the Jacobian of the update rule defined in Eq. (59). To obtain the derivative of the full update rule Eq. ( 423) we have to include the factor W K , then get ∂a T ξ new ∂W Q = β W K Y diag(p) -pp T Y T W T K (r T a) = W K J W T K (r T a) . •Derivative with respect to W K . The derivative with respect to W K is ∂a T ξ new ∂W K = ∂ξ new ∂W K ∂a T ξ new ∂ξ new = ∂ξ new ∂(W T K W Q r) ∂(W T K W Q r) ∂W K ∂a T ξ new ∂ξ new . ( ) ∂ξ new ∂(W T K W Q r) = β Y diag(p) -pp T Y T (432) ∂a T ξ new ∂ξ new = a . We have the product of the 3-dimensional tensor ∂(W r) ∂W K with the vector a which gives a 2-dimensional tensor, i.e. a matrix: ∂(W T K W Q r) ∂W K ∂a T ξ new ∂ξ new = ∂(W T K W Q r) ∂W K a = W T Q r T a I . ( ) ∂a T ξ new ∂W K = β Y diag(p) -pp T Y T (W T Q r T a) = J (W T Q r T a) , where J is the Jacobian of the update rule defined in Eq. ( 59). To obtain the derivative of the full update rule Eq. ( 423) we have to add the term a p T Y T and to include the factor W K , then get ∂a T ξ new ∂W K = a p T Y T + β W K Y diag(p) -pp T Y T (W T Q r T a) (437) = a p T Y T + W K J (W T Q r T a) .
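The equivalence between the single association matrix of A.1.7.2 (Wᵀ = W_Kᵀ W_Q) and the low-rank factorization of this subsection can be checked numerically: the scores y_sᵀ Wᵀ r and (W_Q r)·(W_K y_s) coincide. A minimal sketch in plain Python; all matrix entries and the transformer-style scaling β = 1/√d_k are illustrative assumptions:

```python
import math

# raw query r and raw stored patterns y_s (rows of Y); associative space of dimension d_k = 2
r = [0.2, -0.1, 0.4]
Y = [[0.5, -0.2, 0.1], [0.1, 0.3, -0.4], [-0.3, 0.2, 0.2], [0.0, 0.4, 0.1]]
WQ = [[0.1, -0.2, 0.3], [0.0, 0.4, -0.1]]   # maps r to the query q = WQ r
WK = [[0.2, 0.1, -0.3], [-0.1, 0.2, 0.0]]   # maps y_s to the key x_s = WK y_s
beta = 1.0 / math.sqrt(len(WQ))             # transformer scaling 1/sqrt(d_k)

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# attention form: scores are (WQ r) . (WK y_s)
q = matvec(WQ, r)
scores_att = [sum(a * b for a, b in zip(q, matvec(WK, y))) for y in Y]

# single-matrix form with W^T = WK^T WQ: scores are y_s^T W^T r
WT = [[sum(WK[k][i] * WQ[k][j] for k in range(len(WK))) for j in range(len(r))]
      for i in range(len(r))]
Wr = matvec(WT, r)
scores_hop = [sum(a * b for a, b in zip(y, Wr)) for y in Y]

for s1, s2 in zip(scores_att, scores_hop):
    assert abs(s1 - s2) < 1e-12

# simplified update rule: xi_new = sum_s p_s y_s (values = raw stored patterns)
p = softmax([beta * s for s in scores_att])
xi_new = [sum(p[s] * Y[s][a] for s in range(len(Y))) for a in range(len(r))]
print(xi_new)
```

With a further value projection W_V applied to ξ_new, this is precisely the transformer attention head for a single query.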

A.1.8 INFINITELY MANY PATTERNS AND FORGETTING PATTERNS

In the next subsection we show how the new Hopfield networks can be used for auto-regressive tasks via causal masking. In the subsequent subsection, we introduce forgetting to the new Hopfield networks by adding a negative value to the softmax argument which is larger the further in the past the pattern was observed. A.1.8.1 Infinitely Many Patterns. The new Hopfield networks can be used for auto-regressive tasks, that is, time-series prediction and similar tasks. Causal masking masks out the future by a large negative value in the softmax. We assume infinitely many stored patterns (keys) x_1, x_2, . . . that are represented by the infinite matrix X = (x_1, x_2, . . .) . The pattern index is now a time index, that is, we observe x_t at time t. The pattern matrix at time t is X_t = (x_1, x_2, . . . , x_t) . The query at time t is ξ_t. For M_t = max_{1 ≤ i ≤ t} ‖x_i‖, the energy function at time t is E_t = −lse(β, X_t^T ξ_t) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² (440) = −β^{−1} ln Σ_{i=1}^{t} exp(β x_i^T ξ_t) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² . The update rule is ξ_t^{new} = X_t p_t = X_t softmax(β X_t^T ξ_t) , where we used p_t = softmax(β X_t^T ξ_t) . We can use an infinite pattern matrix with an infinite softmax when using causal masking. The pattern matrix at time t is X_t = (x_1, x_2, . . . , x_t, −α ξ_t, −α ξ_t, . . .) , with the query ξ_t and α → ∞. The energy function at time t is E_t = −lse(β, X_t^T ξ_t) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² (445) = −β^{−1} ln( Σ_{i=1}^{t} exp(β x_i^T ξ_t) + Σ_{i=t+1}^{α} exp(−β α ‖ξ_t‖²) ) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² . For α → ∞ and ‖ξ_t‖ > 0 this becomes E_t = −lse(β, X_t^T ξ_t) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² (447) = −β^{−1} ln Σ_{i=1}^{t} exp(β x_i^T ξ_t) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² . A.1.8.2 Forgetting Patterns. We introduce forgetting to the new Hopfield networks by adding a negative value in the softmax argument which increases the further in the past the pattern was observed.
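The causal masking of A.1.8.1 can be sketched with a finite horizon: the −α‖ξ_t‖² entries with α → ∞ act as a very large negative constant in the softmax argument, so the future positions receive zero probability mass. The logits and the constant `NEG_INF` below are illustrative implementation choices:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# logits beta * x_i^T xi_t for six time steps; arbitrary numbers
logits = [0.3, -0.1, 0.4, 0.2, 0.5, -0.2]
t = 3  # only x_1, ..., x_3 have been observed so far

# causal masking: future positions get a very large negative logit,
# playing the role of the -alpha * ||xi_t||^2 terms with alpha -> infinity
NEG_INF = -1e30
masked = [l if i < t else NEG_INF for i, l in enumerate(logits)]
p = softmax(masked)
print(p)  # probability mass only on the first t positions
```

The resulting distribution over the first t positions is identical to the softmax computed on the finite pattern matrix X_t alone.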
We assume infinitely many patterns x_1, x_2, . . . that are represented by the infinite matrix X = (x_1, x_2, . . .) . The pattern index is now a time index, that is, we observe x_t at time t. The pattern matrix at time t is X_t = (x_1, x_2, . . . , x_t) . The query at time t is ξ_t. The energy function with forgetting parameter γ at time t is E_t = −lse(β, X_t^T ξ_t − γ (t−1, t−2, . . . , 0)^T) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² (451) = −β^{−1} ln Σ_{i=1}^{t} exp(β x_i^T ξ_t − γ (t − i)) + ½ ξ_t^T ξ_t + β^{−1} ln t + ½ M_t² . The update rule is ξ_t^{new} = X_t p_t , where we used p_t = softmax(β X_t^T ξ_t − γ (t−1, t−2, . . . , 0)^T) . A.1.9 NUMBER OF SPURIOUS STATES The energy E is defined as E = −lse(β, X^T ξ) + ½ ξ^T ξ + β^{−1} ln N + ½ M² (455) = −β^{−1} ln Σ_{i=1}^{N} exp(β x_i^T ξ) + β^{−1} ln N + ½ ξ^T ξ + ½ M² . Since the negative exponential function is strictly monotonically decreasing, exp(−E) has minima where E has maxima, and maxima where E has minima. exp(−E) = exp(lse(β, X^T ξ)) exp(−½ ξ^T ξ) C (457) = ( Σ_{i=1}^{N} exp(β x_i^T ξ) )^{β^{−1}} exp(−½ ξ^T ξ) C = ( Σ_{i=1}^{N} exp(β x_i^T ξ) exp(−β ½ ξ^T ξ) )^{β^{−1}} C = ( Σ_{i=1}^{N} exp(β (x_i^T ξ − ½ ξ^T ξ)) )^{β^{−1}} C = ( Σ_{i=1}^{N} exp(½ β x_i^T x_i − ½ β (ξ − x_i)^T (ξ − x_i)) )^{β^{−1}} C = ( Σ_{i=1}^{N} λ(x_i, β) G(ξ; x_i, β^{−1} I) )^{β^{−1}} C , where C is a positive constant (absorbing the Gaussian normalization), λ(x_i, β) = exp(½ β x_i^T x_i), and G(ξ; x_i, β^{−1} I) is the Gaussian density with mean x_i and covariance matrix β^{−1} I. Since C is a positive constant and x^{β^{−1}} = exp(β^{−1} ln x) is strictly monotone for positive x, the minima of E are the maxima of Σ_{i=1}^{N} λ(x_i, β) G(ξ; x_i, β^{−1} I) . In Carreira-Perpiñán & Williams (2003) it was shown that this Gaussian mixture, Eq. ( 458), can have more than N modes, that is, more than N maxima. A.2 PROPERTIES OF SOFTMAX, LOG-SUM-EXPONENTIAL, LEGENDRE TRANSFORM, LAMBERT W FUNCTION For β > 0, the softmax is defined as Definition A1 (Softmax). p = softmax(βx) , p_i = [softmax(βx)]_i = exp(β x_i) / Σ_k exp(β x_k) .
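A quick numerical sanity check of Definition A1, together with two facts established below: the softmax is the gradient of the lse (Lemma A18), and the spectral norm of its Jacobian is bounded by 2mβ ≤ β/2 (Lemma A24). The vector x, β, and the power-iteration start vector are arbitrary illustrative choices:

```python
import math

beta = 0.7
x = [0.3, -0.5, 1.2, 0.1]
N = len(x)

def softmax(bx):
    m = max(bx)
    e = [math.exp(v - m) for v in bx]
    s = sum(e)
    return [v / s for v in e]

def lse(beta, x):
    # lse(beta, x) = beta^{-1} ln sum_i exp(beta x_i), computed stably
    m = max(x)
    return m + math.log(sum(math.exp(beta * (v - m)) for v in x)) / beta

p = softmax([beta * v for v in x])

# Lemma A18: softmax(beta x) is the gradient of lse(beta, x)
h = 1e-6
for i in range(N):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    assert abs((lse(beta, xp) - lse(beta, xm)) / (2 * h) - p[i]) < 1e-6

# Lemma A24: the Jacobian J_s = beta (diag(p) - p p^T) has spectral norm <= 2 m beta <= beta / 2
J = [[beta * ((p[i] if i == j else 0.0) - p[i] * p[j]) for j in range(N)] for i in range(N)]
v = [1.0, 0.5, -0.3, 0.2]
for _ in range(200):  # power iteration on the symmetric PSD matrix J
    w = [sum(J[i][j] * v[j] for j in range(N)) for i in range(N)]
    n = math.sqrt(sum(c * c for c in w))
    v = [c / n for c in w]
spec = math.sqrt(sum(sum(J[i][j] * v[j] for j in range(N)) ** 2 for i in range(N)))
m = max(q * (1 - q) for q in p)
print(spec, 2 * m * beta, beta / 2)
```

The printed spectral norm sits below both bounds, matching the Lipschitz constants used for the softmax later on.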
( ) We also need the log-sum-exp function (lse), defined as Definition A2 (Log-Sum-Exp Function). lse(β, x) = β -1 ln N i=1 exp(βx i ) . We can formulate the lse in another base: β a = β ln a , (β, x) = β -1 ln N i=1 exp(β x i ) (463) = (β a ln a) -1 ln N i=1 exp(β a ln a x i ) = (β a ) -1 log a N i=1 a βa xi . In particular, the base a = 2 can be used to speed up computations. Next, we give the relation between the softmax and the lse function. Lemma A18. The softmax is the gradient of the lse: softmax(βx) = ∇ x lse(β, x) . In the next lemma we report some important properties of the lse function. Lemma A19. We define L := z T x -β -1 N i=1 z i ln z i (465) with L ≥ z T x. The lse is the maximum of L on the N -dimensional simplex D with D = {z | i z i = 1, 0 z i }: lse(β, x) = max z∈D z T x -β -1 N i=1 z i ln z i . The softmax p = softmax(βx) is the argument of the maximum of L on the N -dimensional simplex D with D = {z | i z i = 1, 0 z i }: p = softmax(βx) = arg max z∈D z T x -β -1 N i=1 z i ln z i . Proof. Eq. ( 466) is obtained from Equation (8) in Gao & Pavel (2017) and Eq. ( 467) from Equation (11) in Gao & Pavel (2017) . From a physical point of view, the lse function represents the "free energy" in statistical thermodynamics (Gao & Pavel, 2017) . Next we consider the Jacobian of the softmax and its properties. Lemma A20. The Jacobian J s of the softmax p = softmax(βx) is J s = ∂softmax(βx) ∂x = β diag(p) -pp T , which gives the elements [J s ] ij = βp i (1 -p i ) for i = j -βp i p j for i = j . ( ) Next we show that J s has eigenvalue 0. Lemma A21. The Jacobian J s of the softmax function p = softmax(βx) has a zero eigenvalue with eigenvector 1. Proof. [J s 1] i = β   p i (1 -p i ) - j,j =i p i p j   = β p i (1 - j p j ) = 0 . Next we show that 0 is the smallest eigenvalue of J s , therefore J s is positive semi-definite but not (strict) positive definite. Lemma A22. 
The Jacobian J s of the softmax p = softmax(βξ) is symmetric and positive semidefinite. Proof. For an arbitrary z, we have z T diag(p) -pp T z = i p i z 2 i - i p i z i 2 (471) = i p i z 2 i i p i - i p i z i 2 ≥ 0 . The last inequality hold true because the Cauchy-Schwarz inequality says (a T a)(b T b) ≥ (a T b) 2 , which is the last inequality with a i = z i √ p i and b i = √ p i . Consequently diag(p) -pp T is positive semi-definite. Alternatively i p i z 2 i -( i p i z i ) 2 can be viewed as the expected second moment minus the mean squared which gives the variance that is larger equal to zero. The Jacobian is 0 < β times a positive semi-definite matrix, which is a positive semi-definite matrix. Moreover, the softmax is a monotonic map, as described in the next lemma. Lemma A23. The softmax softmax(βx) is monotone for β > 0, that is, (softmax(βx) -softmax(βx )) T (x -x ) ≥ 0 . ( ) Proof. We use the version of mean value theorem Lemma A32 with the symmetric matrix J m s = 1 0 J s (λx + (1 -λ)x ) dλ: softmax(x) -softmax(x ) = J m s (x -x ) . Therefore (softmax(x) -softmax(x )) T (x -x ) = (x -x ) T J m s (x -x ) ≥ 0 , since J m s is positive semi-definite. For all λ the Jacobians J s (λx + (1 -λ)x ) are positive semi-definite according to Lemma A22. Since x T J m s x = 1 0 x T J s (λx + (1 -λ)x ) x dλ ≥ 0 (475) is an integral over positive values for every x, J m s is positive semi-definite, too. Next we give upper bounds on the norm of J s . Lemma A24. For a softmax p = softmax(βx) with m = max i p i (1 -p i ), the spectral norm of the Jacobian J s of the softmax is bounded: J s 2 2 m β , (476) J s 1 2 m β , (477) J s ∞ 2 m β . In particular everywhere holds J s 2 1 2 β . ( ) If p max = max i p i ≥ 1 -≥ 0.5, then for the spectral norm of the Jacobian holds J s 2 2 β -2 2 β < 2 β . ( ) Proof. We consider the maximum absolute column sum norm A 1 = max j i |a ij | and the maximum absolute row sum norm A ∞ = max i j |a ij | . 
( ) We have for A = J s = β diag(p) -pp T j |a ij | = β   p i (1 -p i ) + j,j =i p i p j   = β p i (1 -2p i + j p j ) (483) = 2 β p i (1 -p i ) 2 m β , i |a ij | = β   p j (1 -p j ) + i,i =j p j p i   = β p j (1 -2p j + i p i ) (484) = 2 β p j (1 -p j ) 2 m β . Therefore, we have J s 1 2 m β , (485) J s ∞ 2 m β , J s 2 J s 1 J s ∞ 2 m β . ( ) The last inequality is a direct consequence of Hölder's inequality. For 0 p i 1, we have p i (1 -p i ) 0.25. Therefore, m 0.25 for all values of p i . If p max ≥ 1 -≥ 0.5 ( 0.5), then 1 -p max and for p i = p max p i . The derivative ∂x(1 -x)/∂x = 1 -2x > 0 for x < 0.5, therefore x(1 -x) increases with x for x < 0.5. Using x = 1 -p max and for p i = p max x = p i , we obtain p i (1 -p i ) (1 -) for all i. Consequently, we have m (1 -). Using the bounds on the norm of the Jacobian, we give some Lipschitz properties of the softmax function. Lemma A25. The softmax function p = softmax(βx) is (β/2)-Lipschitz. The softmax function p = softmax(βx) is (2βm)-Lipschitz in a convex environment U for which m = max x∈U max i p i (1p i ). For p max = min x∈U max i p i = 1-, the softmax function p = softmax(βx) is (2β )-Lipschitz. For β < 2m, the softmax p = softmax(βx) is contractive in U on which m is defined. Proof. The version of mean value theorem Lemma A32 states for the symmetric matrix J m s = 1 0 J(λx + (1 -λ)x ) dλ: softmax(x) -softmax(x ) = J m s (x -x ) . According to Lemma A24 for all x = λx + (1 -λ)x ) J s ( x) 2 2 m β , where m = max i pi (1 -pi ). Since x ∈ U and x ∈ U we have x ∈ U , since U is convex. For m = max x∈U max i p i (1 -p i ) we have m m for all m. Therefore, we have J s ( x) 2 2 m β (490) which also holds for the mean: J m s 2 2 m β . Therefore, softmax(x) -softmax(x ) J m s 2 x -x 2 m β x -x . From Lemma A24 we know m 1/4 globally. For p max = min x∈U max i p i = 1we have according to Lemma A24: m . For completeness we present a result about cocoercivity of the softmax: Lemma A26. 
For m = max x∈U max i p i (1 -p i ), softmax function p = softmax(βx) is 1/(2mβ)- cocoercive in U , that is, (softmax(x) -softmax(x )) T (x -x ) ≥ 1 2 m β softmax(x) -softmax(x ) . ( ) In particular the softmax function p = softmax(βx) is (2/β)-cocoercive everywhere. With p max = min x∈U max i p i = 1 -, the softmax function p = softmax(βx) is 1/(2β )-cocoercive in U . Proof. We apply the Baillon-Haddad theorem (e.g. Theorem 1 in Gao & Pavel ( 2017)) together with Lemma A25. Finally, we introduce the Legendre transform and use it to describe further properties of the lse. We start with the definition of the convex conjugate. Definition A3 (Convex Conjugate). The Convex Conjugate (Legendre-Fenchel transform) of a function f from a Hilbert Space X to [-∞, ∞] is f * which is defined as f * (x * ) = sup x∈X (x T x * -f (x)) , x * ∈ X See page 219 Def. 13.1 in Bauschke & Combettes (2017) and page 134 in Garling (2017) . Next we define the Legendre transform, which is a more restrictive version of the convex conjugate. Definition A4 (Legendre Transform). The Legendre transform of a convex function f from a convex set X ⊂ R n to R (f : X → R) is f * , which is defined as f * (x * ) = sup x∈X (x T x * -f (x)) , x * ∈ X * , X * = x * ∈ R n | sup x∈X (x T x * -f (x)) < ∞ . ( ) See page 91 in Boyd & Vandenberghe (2009) . Definition A5 (Epi-Sum). Let f and g be two functions from X to (-∞, ∞], then the infimal convolution (or epi-sum) of f and g is f g : X → [-∞, ∞] , x → inf y∈X (f (y) + g(x -y)) See Def. 12.1 in Bauschke & Combettes (2017) . Lemma A27. Let f and g be functions from X to (-∞, ∞]. Then the following hold: 1. Convex Conjugate of norm squared  1 2 . 2 * = 1 2 . 2 . ( (f (Ax + b)) * = f * A -T x * -b T A -T x * . ( ) 5. Convex Conjugate of epi-sums (f g) * = f * + g * . Proof. 1. Since h(t) := t 2 2 is a non-negative convex function and h(t) = 0 ⇐⇒ t = 0 we have because of Proposition 11.3.3 in Garling (2017) that h ( x ) * = h * ( x * ). 
Additionally, by example (a) on page 137 we get for 1 < p < ∞ and 1 p + 1 q = 1 that |t| p p * = |t * | q q . Putting all together we get the desired result. The same result can also be deduced from page 222 Example 13.6 in Bauschke & Combettes (2017) . 2. Follows immediately from the definition since αf * x * α = α sup x∈X x T x * α -f (x) = sup x∈X (x T x * -αf (x)) = (αf ) * (x * ) 3. (f + β) * := sup x∈X x T x * -f (x) -β =: f * -β 4. (f (Ax + b)) * (x * ) = sup x∈X x T x * -f (Ax + b) = sup x∈X (Ax + b) T A -T x * -f (Ax + b) -b T A -T x * = sup y∈X y T A -T x * -f (y) -b T A -T x * = f * A -T x * -b T A -T x * 5. From Proposition 13.24 (i) in Bauschke & Combettes (2017) and Proposition 11.4.2 in Garling (2017) we get (f g) * (x * ) = sup x∈X x T x * -inf y∈X (f (y) -g(x -y)) = sup x,y∈X x T x * -f (y) -g(x -y) = sup x,y∈X y T x * -f (y) + (x -y) T x * -g(x -y) = f * (x * ) + g * (x * ) Lemma A28. The Legendre transform of the lse is the negative entropy function, restricted to the probability simplex and vice versa. For the log-sum exponential f (x) = ln n i=1 exp(x i ) , Using p = softmax(βX T ξ) , the Hessian of lse(β, X T ξ) ∂ 2 lse(β, X T ξ) ∂ξ 2 = β X diag(p) -pp T X T is positive semi-definite since diag(p) -pp T is positive semi-definite according to Lemma A22. Therefore, lse(β, X T ξ) is convex and continuous. If f is a regular convex function (lower semi-continuous convex function), then f * * = f according to page 135 Exercise 11.2.3 in Garling (2017) . If f is lower semi-continuous and convex, then f * * = f according to Theorem 13.37 (Fenchel-Moreau) in Bauschke & Combettes (2017) . Consequently we have lse(β, X T ξ) * * = lse(β, X T ξ) . ( ) We introduce the Lambert W function and some of its properties, since it is needed to derive bounds on the storage capacity of our new Hopfield networks. Definition A6 (Lambert Function). The Lambert W function (Olver et al., 2010, (4.13 )) is the inverse function of f (y) = ye y . 
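Several of the Lambert W identities collected in Lemma A30 below can be verified numerically. A sketch with a small Newton solver for the upper branch; `lambert_w0` is our own helper (valid here for x > 0), not part of the paper:

```python
import math

def lambert_w0(x, iters=60):
    # Newton iteration for the upper branch W_0: solve w * exp(w) = x for x > 0
    w = math.log(1.0 + x)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (1.0 + w))
    return w

for x in (0.5, 1.0, 2.0, 5.0):
    w = lambert_w0(x)
    assert abs(w * math.exp(w) - x) < 1e-12            # defining equation W(x) e^{W(x)} = x
    assert abs(lambert_w0(x * math.exp(x)) - x) < 1e-9  # W0(x e^x) = x
    assert abs(math.exp(w) - x / w) < 1e-9              # e^{W(x)} = x / W(x)

# W0(x ln x) = ln x for x >= 1/e
for x in (1.5, 2.0, 4.0):
    assert abs(lambert_w0(x * math.log(x)) - math.log(x)) < 1e-9

# the Omega constant: W0(1) = Omega ~ 0.56714329
assert abs(lambert_w0(1.0) - 0.56714329) < 1e-7
print("Lambert W identities verified")
```

The same solver can be reused to evaluate the storage-capacity formulas of Theorems A5 and A6.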
The Lambert W function has an upper branch W_0 for -1 ≤ y and a lower branch W_{-1} for y ≤ -1. We write W if a formula holds for both branches. We have W(x) = y ⇒ y e^y = x. We present some identities for the Lambert W function (Olver et al., 2010, (4.13)):

Lemma A30. Identities for the Lambert W function are

W(x) e^{W(x)} = x , (517)
W(x e^x) = x ,
e^{W(x)} = x / W(x) ,
e^{-W(x)} = W(x) / x ,
e^{n W(x)} = (x / W(x))^n ,
W_0(x ln x) = ln x for x ≥ 1/e ,
W_{-1}(x ln x) = ln x for 0 < x ≤ 1/e ,
W(x) = ln(x / W(x)) for x ≥ -1/e ,
W(n x^n / W(x)^{n-1}) = n W(x) for n, x > 0 ,
W(x) + W(y) = W(x y (1/W(x) + 1/W(y))) for x, y > 0 ,
W_0(-ln(x)/x) = -ln x for 0 < x ≤ e ,
W_{-1}(-ln(x)/x) = -ln x for x > e ,
e^{-W(-ln x)} = W(-ln x) / (-ln x) for x ≠ 1 .

Memory networks (Weston et al., 2014) consist of a memory together with components for input mapping, generalization, output mapping, and response conversion (the last converts the output into the response format). Memory networks are generalized to an end-to-end trained model, where the arg max memory call is replaced by a differentiable softmax (Sukhbaatar et al., 2015a; b). Linear memory networks use a linear autoencoder for sequences as a memory (Carta et al., 2020). Enhancing RNNs with an additional associative memory like Hopfield networks has been proposed (Ba et al., 2016a; b). The associative memory stores hidden states of the RNN, retrieves stored states if they are similar to current ones, and has a forgetting parameter. The forgetting and storing parameters of the RNN associative memory have been generalized to learned matrices (Zhang & Zhou, 2017). LSTMs with associative memory via Holographic Reduced Representations have been proposed (Danihelka et al., 2016). Recently, most approaches to new memories are based on attention. The neural Turing machine (NTM) is equipped with an external memory and an attention process (Graves et al., 2014). End to end memory networks (EMN) make the attention scheme of memory networks (Weston et al., 2014) differentiable by replacing arg max through a softmax (Sukhbaatar et al., 2015a; b).
EMN with dot products became very popular and implement a key-value attention (Daniluk et al., 2017) for self-attention. An enhancement of EMN is the transformer (Vaswani et al., 2017a; b) and its extensions (Dehghani et al., 2018). The transformer has had a great impact on the natural language processing (NLP) community, as new records in NLP benchmarks have been achieved (Vaswani et al., 2017a; b). MEMO uses the transformer attention mechanism for reasoning over longer distances (Banino et al., 2020). The current state of the art for language processing is a transformer architecture called "Bidirectional Encoder Representations from Transformers" (BERT) (Devlin et al., 2018; 2019). A.3.1.2 Modern Hopfield networks: Overview. The storage capacity of classical binary Hopfield networks (Hopfield, 1982) has been shown to be very limited: for retrieval without errors, it is bounded by d/(2 ln d), where d is the pattern dimension (McEliece et al., 1987). The same bound holds for nonlinear learning rules (Mazza, 1997). Using tricks of the trade and allowing small retrieval errors, the storage capacity is about 0.138d (Crisanti et al., 1986; Hertz et al., 1991; Torres et al., 2002). If the learning rule is not related to the Hebb rule, then up to d patterns can be stored (Abu-Mostafa & StJacques, 1985). Using Hopfield networks with non-zero diagonal matrices, the storage can be increased to Cd ln(d) (Folli et al., 2017). In contrast to the storage capacity, the number of energy minima (spurious states, stable states) of Hopfield networks is exponential in d (Tanaka & Edwards, 1980; Bruck & Roychowdhury, 1990; Wainrib & Touboul, 2013). Recent advances in the field of binary Hopfield networks (Hopfield, 1982) led to new properties of Hopfield networks. The stability of spurious states or metastable states was substantially reduced by a Hamiltonian treatment in the new relativistic Hopfield model (Barra et al., 2018). Recently, the storage capacity of Hopfield networks could be increased by new energy functions.
Interaction functions of the form F(x) = x^n lead to a storage capacity of α_n d^{n-1}, where α_n depends on the allowed error probability (Krotov & Hopfield, 2016; 2018; Demircigil et al., 2017) (see Krotov & Hopfield (2018) for the non-binary case). For interaction functions of the form F(x) = x^n, Demircigil et al. (2017) prove a storage capacity of d^{n-1}/(c_n ln d) for c_n > 2(2n - 3)!!. Interaction functions of the form F(x) = exp(x) lead to an exponential storage capacity of 2^{d/2}, where all stored patterns are fixed points but the radius of attraction vanishes (Demircigil et al., 2017). It has been shown that the network converges with high probability after one update (Demircigil et al., 2017).
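As an illustration of the exponential interaction function (a toy sketch with arbitrary sizes, not the experiments of Demircigil et al. (2017)): with F(x) = exp(x), one asynchronous sweep that sets each bit to the value of lower energy E = -Σ_i exp(x_i^T ξ) typically restores a corrupted stored pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 50, 20                                  # toy dimension and pattern count
X = rng.choice([-1.0, 1.0], size=(N, d))       # stored binary patterns (rows)

def one_sweep(xi):
    """One asynchronous sweep with interaction F(x) = exp(x): each bit is set
    to the value that gives the lower energy E = -sum_i exp(x_i^T xi)."""
    xi = xi.copy()
    for l in range(d):
        plus, minus = xi.copy(), xi.copy()
        plus[l], minus[l] = 1.0, -1.0
        # exponents shifted by d for numerical stability (x_i^T xi <= d)
        if np.sum(np.exp(X @ plus - d)) >= np.sum(np.exp(X @ minus - d)):
            xi[l] = 1.0
        else:
            xi[l] = -1.0
    return xi

# flip 5 bits of a stored pattern and retrieve the original again
target = X[0]
noisy = target.copy()
noisy[:5] *= -1.0
retrieved = one_sweep(noisy)
assert np.array_equal(retrieved, target)
```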

A.3.2 ENERGY AND UPDATE RULE FOR BINARY MODERN HOPFIELD NETWORKS

We follow Demircigil et al. (2017), where the goal is to store a set of input data x_1, ..., x_N, represented by the matrix X = (x_1, ..., x_N). Each x_i is a pattern with binary components x_{ij} ∈ {-1, +1} for all i and j, and ξ is the current state of the units of the Hopfield network. Krotov and Hopfield (Krotov & Hopfield, 2016) defined the energy function E via an interaction function F that evaluates the dot products between the patterns x_i and the state ξ.

As the code base we use the transformers repository from Hugging Face, Inc. (Wolf et al., 2019). We aim to reproduce the dataset of Devlin et al. (2019) as closely as possible, which consists of the English Wikipedia dataset and the Toronto BookCorpus dataset (Zhu et al., 2015). Due to recent copyright claims the latter is not publicly available anymore. Therefore, the pre-training experiments use an uncased snapshot of the original BookCorpus dataset. A.5.1.3 Hopfield Operating Classes of Transformer and BERT Models. To better understand how operation modes in attention heads develop, we tracked the distribution of counts k (see main paper) over time in a BERT-small model. At the end of training we visualized the count distribution, grouped into four classes (see Figure A.4). The thresholds for the classes were chosen according to the thresholds of Figure 2 in the main paper. However, they are divided by a factor of 4 to adapt to the shorter sequence length of 128 compared to 512. From this plot it is clear that the attention heads of Class IV commit very early to the operating class of small metastable states. A.5.1.4 Learning Dynamics of Transformer and BERT Models. To observe this behavior in the early phase of training, we created a ridge plot of the distributions of counts k for the first 20,000 steps (see Figure A.5(a)). This plot shows that the attention heads of middle layers often change their operation mode to Class IV around 9,000 to 10,000 steps.
At the same time the second big drop in the loss occurs. The question arises whether this is functionally important or whether it is an artefact that could even be harmful. To check whether the attention mechanism is still able to learn after the change in operation mode, we analyzed the gradient flow through the softmax function. For every token we calculated the Frobenius norm of the Jacobian of the softmax over multiple samples. Then, for every head we plotted the distribution of the norm (see Figure A.5(b)). The gradients with respect to the weights are determined by the Jacobian J defined in Eq. (59), as can be seen in Eq. (418), Eq. (429), and Eq. (435). We can see that the attention heads of Class IV remain almost unchanged during the rest of the training. A.5.1.5 Attention Heads Replaced by Gaussian Averaging Layers. The self-attention mechanism proposed in Vaswani et al. (2017a) utilizes the softmax function to compute the coefficients of a convex combination over the embedded tokens, where the softmax is conditioned on the input. However, our analysis showed that, especially in lower layers, many heads perform averaging over a very large number of patterns. This suggests that at this level neither the dependency on the input nor a fine-grained attention to individual positions is necessary. As an alternative to the original mechanism we propose Gaussian averaging heads, which are computationally more efficient. Here, the softmax function is replaced by a discrete Gaussian kernel, where the location µ and the scale σ are learned. In detail, for a sequence length of N tokens we are given a vector of location parameters µ = (µ_1, ..., µ_N)^T and a vector of corresponding scale parameters σ = (σ_1, ..., σ_N)^T. We subdivide the interval [-1, 1] into N equidistant supporting points {s_j}_{j=1}^N, where

s_j = ((j - 1) - 0.5 (N - 1)) / (0.5 (N - 1)) .
The attention [A]_{i,j} from the i-th token to the j-th position is calculated as

[A]_{i,j} = 1/z_i exp(-1/2 ((s_j - µ_i) / σ_i)^2) ,

where z_i normalizes the i-th row of the attention matrix A to sum up to one:

z_i = Σ_{j=1}^N exp(-1/2 ((s_j - µ_i) / σ_i)^2) .

For initialization we uniformly sample a location vector µ ∈ [-1, 1]^N and a scale vector σ ∈ [0.75, 1.25]^N per head. A simple way to consider the individual position of each token at initialization is to use the supporting points µ_i = s_i (see Figure A.6). In practice no difference to the random initialization was observed. • Number of parameters. Gaussian averaging heads can reduce the number of parameters significantly. For an input size of N tokens, there are 2N parameters per head. In contrast, a standard self-attention head with word embedding dimension d_y and projection dimension d_k has two weight matrices W_Q, W_K ∈ R^{d_k × d_y}. Only for very long sequences (and given that the word embedding dimension stays the same) may the dependence on N become a disadvantage. But of course, due to the independence from the input, the Gaussian averaging head is less expressive than the original attention mechanism. A recently proposed input-independent replacement for self-attention is the so-called Random Synthesizer (Tay et al., 2020). Here the softmax attention is directly parametrized with an N × N matrix, which amounts to 0.5 N times as many parameters as Gaussian averaging. A.5.2.1 Immune Repertoire Classification. An architecture called DeepRC, based on our modern Hopfield networks, is used for immune repertoire classification and compared to other machine learning approaches. For DeepRC, we consider immune repertoires as input objects, which are represented as bags of instances. In a bag, each instance is an immune receptor sequence and each bag can contain a large number of sequences.
At its core, DeepRC consists of a modern Hopfield network that extracts information from each repertoire. The stored patterns (keys) are representations of the immune amino acid sequences (instances) obtained by a 1D convolutional network with position encoding. The state pattern (query) is static and learned via backpropagation. For details see Widrich et al. (2020a; b).


Our new Hopfield network has been integrated into a deep learning architecture for immune repertoire classification, a massive multiple instance learning task (Widrich et al., 2020a; b). Theorem 3 states that modern Hopfield networks possess an exponential storage capacity, which makes it possible to tackle massive multiple instance learning (MIL) problems (Dietterich et al., 1997). Immune repertoire classification (Emerson et al., 2017) typically requires extracting a few patterns from a large set of sequences, the repertoire, that are indicative of the respective immune status. Most MIL methods fail due to the large number of instances. The data comprise experimentally observed immune receptors as well as simulated sequences, into which sequence motifs (Akbar et al., 2019; Weber et al., 2020) are implanted with low yet varying frequency. Four different categories of datasets are constructed: (a) simulated immunosequencing data with implanted motifs, (b) immunosequencing data generated by a long short-term memory (LSTM) network with implanted motifs, (c) real-world immunosequencing data with implanted motifs, and (d) real-world immunosequencing data with known immune status (Emerson et al., 2017). Categories (a), (b), and (d) contain approx. 300,000 instances per immune repertoire. With over 30 billion sequences in total, this represents one of the largest multiple instance learning experiments ever conducted (Carbonneau et al., 2018). Despite the massive number of instances as well as the low frequency of sequences indicative of the respective immune status, deep learning architectures with modern Hopfield networks outperform all competing methods with respect to the average area under the ROC curve in all four categories (a), (b), (c), and (d) (for details see Widrich et al. (2020a)). We evaluate and compare the performance of DeepRC to a set of machine learning methods that serve as baselines, were suggested, or can readily be adapted to immune repertoire classification.
The methods comprise (i) known motif, which counts how often the known implanted motifs occur, (ii) a Support Vector Machine (SVM) approach that uses a fixed mapping from a bag of sequences to the corresponding k-mer counts and employs the MinMax and Jaccard kernels, (iii) k-Nearest Neighbor (KNN) with the k-mer representation, transforming the MinMax and Jaccard kernels into distances, (iv) logistic regression on the k-mer representation, (v) a burden test that first identifies sequences or k-mers and then computes a burden score per individual, and (vi) logistic multiple instance learning (lMIL). On the real-world dataset DeepRC achieved an AUC of 0.832 ± 0.022, followed by the SVM with MinMax kernel (AUC 0.825 ± 0.022) and the burden test with an AUC of 0.699 ± 0.041. Overall, on all datasets, DeepRC outperformed all competing methods with respect to average AUC (see Widrich et al. (2020a; b); the detailed per-dataset AUC table is not reproduced here). For the multiple instance learning datasets we use the layer HopfieldPooling, extracting an average of the instances that are indicative for one of the two classes. The input to the HopfieldPooling layer is a set of embedded instances Y and a trainable but fixed state (query) pattern Q used for averaging of class-indicative instances. This averaging enables a compression of variable-sized bags to a fixed-sized representation to discriminate the bags. We performed a manual hyperparameter search on a validation set. In detail, we used the following architecture to perform the given task on the Elephant, Fox, Tiger and UCSB breast cancer datasets: (I) we apply fully connected linear embedding layers with ReLU activation.
(II) The output of this embedding serves as the input to our HopfieldPooling layer, where the above described pooling operation is performed. (III) Thereafter we use 'ReLU - Linear' blocks as the final output layers that perform the classification. Among other hyperparameters, different hidden layer widths (for the fully connected pre- and post-HopfieldPooling layers), learning rates, and batch sizes were tried. Additionally, our focus was on the hyperparameters of the HopfieldPooling layer, among them the number of heads, the head dimension, and the scaling factor β. All models were trained for 160 epochs using the AdamW optimizer (Loshchilov & Hutter, 2017) with exponential learning rate decay (see Table A.2), and validated by 10-fold nested cross-validation repeated five times with different splits of the data sets. The reported ROC AUC scores are the average of these repetitions. As overfitting posed quite a problem, bag dropout was applied as the regularization technique of choice. A.5.3 EXPERIMENT 3: CLASSIFICATION ON SMALL UCI BENCHMARK DATASETS A.5.3.1 Motivation. Datasets with a small number of samples, like the UCI benchmark datasets, are particularly difficult for neural networks to generalize on. In contrast to their performance on larger datasets, neural networks are consistently outperformed on such datasets by methods like gradient boosting, random forests (RF), and support vector machines (SVMs). Finding samples or even learning prototypes that are highly indicative of the class of a sample (query) suggests the use of Hopfield networks. We applied a modern Hopfield network via the layer Hopfield. The input vector is mapped to R using a self-normalizing net (SNN), and W_K is learned, where the dimension of W_K (the number of stored fixed patterns) is a hyperparameter. The output Z of Hopfield enters the output layer. A.5.3.3 Experimental design and implementation details.
As specified in the main paper, we consider 75 datasets of the UC Irvine Machine Learning Repository that contain less than 1,000 samples per dataset, following the separation into large and small datasets in Klambauer et al. (2017a). On each dataset, we performed a grid search to determine the best hyperparameter setting and model per dataset. The hyperparameter search space of the grid search is listed in Table A.3. All models were trained for 100 epochs with a mini-batch size of 4 samples using the cross-entropy loss and the PyTorch SGD module for stochastic gradient descent without momentum and without weight decay or dropout. After each epoch, the model accuracy was computed on a separate validation set. Using early stopping, the model with the best validation set accuracy averaged over 16 consecutive epochs was selected as the final model. This final model was then evaluated against a separate test set to determine the accuracy, as reported in Table 2 and in uci_detailed_results.csv in the supplemental material. As network architecture, we use {0, 1, 7} fully connected embedding layers with SELU (Klambauer et al., 2017a) activation functions and {32, 128, 1024} hidden units per embedding layer. These embedding layers are followed by the layer Hopfield. The number of hidden units is also used as the number of dimensions of the Hopfield association space, with {1, 32} heads. The layer Hopfield is followed by a mapping to the output vector, whose dimension is the number of classes. Finally, the softmax function is applied to obtain the predicted probability for each class.

Table A.3: Hyperparameter search space for the grid search on small UCI benchmark datasets. All models were trained for 100 epochs using stochastic gradient descent with early stopping based on the validation set accuracy and a mini-batch size of 4 samples. The number of stored patterns depends on the number of target classes of the individual tasks.

A.5.3.4 Results.
We compared the performance of 25 methods based on their method rank. For this we computed the rank of each method on each dataset based on the accuracy on the test set, which was then averaged over all 75 datasets for each method to obtain the method rank. For the baseline methods we used the scores summarized by Klambauer et al. (2017a). In this section, we describe the implementation of Hopfield layers in PyTorch (Paszke et al., 2017; 2019) and additionally provide a brief usage manual. Possible applications of a Hopfield layer in a deep network architecture comprise: • multiple instance learning (MIL) (Dietterich et al., 1997), • processing of and learning with point sets (Qi et al., 2017a; b; Xu et al., 2018), • set-based and permutation invariant learning (Guttenberg et al., 2016; Ravanbakhsh et al., 2016; Zaheer et al., 2017; Korshunova et al., 2018; Ilse et al., 2018; Zhai et al., 2020), • attention-based learning (Vaswani et al., 2017a), • associative learning, • natural language processing, • sequence analysis and time series prediction, and • storing and retrieving reference or experienced data, e.g. to store training data and retrieve it by the model or to store experiences for reinforcement learning. The Hopfield layer in a deep neural network architecture can implement: • a memory (storage) with associative retrieval (Danihelka et al., 2016; Ba et al., 2016a), • conditional pooling and averaging operations (Wang et al., 2018; Ilse et al., 2020), • combining data by associations (Agrawal et al., 1993), • associative credit assignment (e.g. the Rescorla-Wagner model or value estimation) (Sutton & Barto, 2018), and • attention mechanisms (Vaswani et al., 2017a; Bahdanau et al., 2014). In particular, a Hopfield layer can substitute attention layers in architectures of transformer and BERT models.
The Hopfield layer is designed to be used as a plug-in replacement for existing layers like • pooling layers (max-pooling or average pooling), • permutation equivariant layers (Guttenberg et al., 2016; Ravanbakhsh et al., 2016), • GRU & LSTM layers, and • attention layers. In matrix notation, the update rule (Eq. (552)) is Ξ^new = X softmax(β X^T Ξ), where Ξ = (ξ_1, ..., ξ_N) is the matrix of N state (query) patterns, X is the matrix of stored (key) patterns, and Ξ^new is the matrix of new state patterns, which are averages over stored patterns. A new state pattern can also be very similar to a single stored pattern, in which case we say this stored pattern is retrieved. These matrices allow to rewrite Eq. (552) as:

(Q^new)^T = K^T softmax(β K Q^T) . (553)

For β = 1/√d_k and changing the softmax in Eq. (553) to act row-wise, we obtain:

Q^new = softmax(1/√d_k Q K^T) K , (554)

where Q^new is again the matrix of new state patterns. The new state patterns Ξ^new are projected via W_V to the result patterns Z = Ξ^new W_V, where W_V ∈ R^{d_k × d_v}. With the pattern projection V = K W_V, we obtain the update rule Eq. (10) from the main paper:

Z = softmax(1/√d_k Q K^T) V . (555)

• Multiple updates. The update Eq. (553) can be iteratively applied to the initial state ξ of every Hopfield layer head. After the last update, the new states Ξ^new are projected via W_V to the result patterns Z = Ξ^new W_V. Therefore, the Hopfield layer allows multiple update steps in the forward pass without changing the number of parameters. The number of update steps can be set for every Hopfield head individually. Furthermore, it is possible to set a threshold for the number of updates of every Hopfield head based on ‖ξ - ξ^new‖_2. In the general case of multiple initial states Ξ, the maximum over the individual norms is taken. • Variable β. In the main paper, we have identified β as a crucial parameter for the fixed point dynamics of the Hopfield network, which governs the operating mode of the attention heads. In the appendix, e.g.
in Lemma A7 or in Eq. (102) and Eq. (103), we showed that the characteristics of the fixed points of the new modern Hopfield network are determined by β, M (maximal pattern norm), m_max (spread of the similar patterns), and m_x (center of the similar patterns). Low values of β induce global averaging, while higher values of β induce metastable states. In the transformer attention, the β parameter is set to β = 1/√d_k as in Eq. (555). The Hopfield layer, however, allows to freely choose β > 0, since the fixed point dynamics does not only depend on the dimension of the associative space d_k. Additionally, β heavily influences the gradient flow to the matrices W_Q and W_K. Thus, finding the right β for the respective application can be crucial. • Variable dimension of the associative space. Theorem A5 says that the storage capacity of the modern Hopfield network grows exponentially with the dimension of the associative space. However, a higher dimension of the associative space also means less averaging and smaller metastable states. The dimension of the associative space trades off storage capacity against the size of metastable states, i.e. over how many patterns is averaged. In Eq. (550) and in Eq. (549), we assumed N raw state patterns R = (r_1, ..., r_N)^T and S raw stored patterns Y = (y_1, ..., y_S)^T that are mapped to a d_k-dimensional associative space via the matrices W_Q ∈ R^{d_r × d_k} and W_K ∈ R^{d_y × d_k}, respectively. In the associative space R^{d_k}, we obtain the state patterns Q = Ξ^T = R W_Q and the stored patterns K = X^T = Y W_K. The Hopfield view relates the dimension d_k to the number of input patterns N that have to be processed. The storage capacity depends exponentially on the dimension d_k (the dimension of the associative space), and the size of metastable states is governed by this dimension, too.
Consequently, d_k should be chosen with respect to the number N of patterns one wants to store and the desired size of metastable states, i.e. the number of patterns one wants to average over. For example, if the input consists of many low-dimensional input patterns, it makes sense to project the patterns into a higher-dimensional space to allow a proper fixed point dynamics. Intuitively, this coincides with the construction of a richer feature space for the patterns. • Static patterns. In Eq. (550) and Eq. (549), the N raw state patterns R = (r_1, ..., r_N)^T and S raw stored patterns Y = (y_1, ..., y_S)^T are mapped to an associative space via the matrices W_Q ∈ R^{d_r × d_k} and W_K ∈ R^{d_y × d_k}, which gives the state patterns Q = Ξ^T = R W_Q and the stored patterns K = X^T = Y W_K. We allow for static state and static stored patterns. Static means that the pattern does not depend on the network input, i.e. it is determined by the bias weights and remains constant across different network inputs. Static state patterns allow to determine whether particular fixed patterns are among the stored patterns and vice versa. The static pattern functionality is typically needed if particular patterns must be identified in the data, e.g. as described for immune repertoire classification in the main paper, where a fixed d_k-dimensional state vector ξ is used. • Pattern normalization. In the appendix, e.g. in Lemma A7 or in Eq. (102) and Eq. (103), we showed that the characteristics of the fixed points of the new modern Hopfield network are determined by β, M (maximal pattern norm), m_max (spread of the similar patterns), and m_x (center of the similar patterns). We already discussed the parameter β, while the spread of the similar patterns m_max is given by the data. The remaining variables M and m_x, which both control the fixed point dynamics, are adjusted by pattern normalization: M is the maximal pattern norm and m_x the center of the similar patterns.
Theorem A5 says that a larger M allows more patterns to be stored; however, the size of metastable states decreases with increasing M. The vector m_x says how well the (similar) patterns are centered. If the norm ‖m_x‖ is large, this leads to smaller metastable states. The two parameters M and m_x are controlled by pattern normalization and determine the size and convergence properties of metastable states. These two parameters are important for creating large gradients if heads start with global averaging, which has small gradients. They can shift a head towards small metastable states, which have the largest gradients, as shown in Fig. A.5(b). We allow for three different pattern normalizations: • pattern normalization of the input patterns, • pattern normalization after mapping into the associative space, • no pattern normalization. The default setting is pattern normalization of the input patterns.
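A simplified stand-in for the input-pattern normalization (illustrative, not the layer's exact implementation): centering shrinks ‖m_x‖ and rescaling fixes the maximal pattern norm M; the function and mode names are hypothetical:

```python
import numpy as np

def normalize_patterns(Y, mode="input"):
    """Illustrative pattern normalization for a set of patterns Y (rows).
    mode='input' (default) or 'associative' both center and rescale; in the
    layer they differ only in *where* normalization is applied. 'none' is a
    no-op."""
    if mode == "none":
        return Y
    Yc = Y - Y.mean(axis=0, keepdims=True)                 # shrink the center m_x
    return Yc / np.linalg.norm(Yc, axis=1, keepdims=True)  # fix pattern norm M = 1

rng = np.random.default_rng(6)
Y = rng.normal(size=(10, 4)) * 3.0 + 1.0   # patterns with large norms and offset
Yn = normalize_patterns(Y)
assert np.allclose(np.linalg.norm(Yn, axis=1), 1.0)   # M = 1 after normalization
```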

A.6.3 USAGE

As outlined in Sec. A.6.1, there are a variety of possible use cases for the Hopfield layer, e.g. to build memory networks or transformer models. The goal of the implementation is therefore to provide an easy to use Hopfield module that can be used in a wide range of applications, be it as part of a larger architecture or as a standalone module. Consequently, the focus of the Hopfield layer interface is set on its core parameters: the association of two sets, the scaling parameter β, the maximum number of updates, the dimension of the associative space, the possible usage of static patterns, and the pattern normalization. The integration into the PyTorch framework is built such that with all the above functionalities disabled, the "HopfieldEncoderLayer" and the "HopfieldDecoderLayer", both extensions of the Hopfield module, can be used as a one-to-one plug-in replacement for the TransformerEncoderLayer and the TransformerDecoderLayer, respectively, of the PyTorch transformer module. The Hopfield layer can be used to implement or to substitute different layers: • Pooling layers: We consider the Hopfield layer as a pooling layer if only one static state (query) pattern exists. Then, it is de facto a pooling over the sequence, which results from the softmax values applied on the stored patterns. Therefore, our Hopfield layer can act as a pooling layer. • Permutation equivariant layers: Our Hopfield layer can be used as a plug-in replacement for permutation equivariant layers. Since the Hopfield layer is an associative memory it assumes no dependency between the input patterns. • GRU & LSTM layers: Our Hopfield layer can be used as a plug-in replacement for GRU & LSTM layers. Optionally, for substituting GRU & LSTM layers, positional encoding might be considered. • Attention layers: Our Hopfield layer can act as an attention layer, where state (query) and stored (key) patterns are different, and need to be associated. 
• Finally, the extensions of the Hopfield layer are able to operate as a self-attention layer (HopfieldEncoderLayer) and as cross-attention layer (HopfieldDecoderLayer), as described in (Vaswani et al., 2017a) . As such, it can be used as building block of transformer-based or general architectures.
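The core operation behind all of these layer variants is the update rule Z = softmax(β Q K^T) V of Eq. (553)-(555), optionally iterated; a minimal NumPy sketch (projection matrices absorbed into Q, K, V; names illustrative):

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # numerically stable row-wise softmax
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def hopfield_layer(Q, K, V, beta, n_updates=1):
    """Iterate Q <- softmax(beta Q K^T) K, then project with V.
    n_updates=1 recovers transformer attention softmax(beta Q K^T) V."""
    for _ in range(n_updates - 1):
        Q = softmax_rows(beta * Q @ K.T) @ K    # additional state updates
    return softmax_rows(beta * Q @ K.T) @ V     # final update, projected

rng = np.random.default_rng(4)
N, S, d_k, d_v = 3, 7, 16, 5
Q = rng.normal(size=(N, d_k))      # state (query) patterns
K = rng.normal(size=(S, d_k))      # stored (key) patterns
V = rng.normal(size=(S, d_v))      # projected patterns V = K W_V
beta = 1.0 / np.sqrt(d_k)
Z = hopfield_layer(Q, K, V, beta)
assert Z.shape == (N, d_v)
# one update step is exactly the transformer attention
assert np.allclose(Z, softmax_rows(beta * Q @ K.T) @ V)
```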



a := 2/(d - 1) (1 + ln(2 β K^2 p (d - 1))) , b := 2 K^2 β / 5 , and c := b / W_0(exp(a + ln(b)))

Figure 3: The layer Hopfield allows the association of two sets R and Y. It can be integrated into deep networks that propagate sets of vectors. The Hopfield memory is filled with a set from either the input or previous layers. The output is a set of vectors Z.

Figure 5: The layer HopfieldLayer enables multiple queries of the training set, a reference set, prototype set, or a learned set (a learned matrix). The queries for each layer are computed from the results of previous layers. The input is a set of vectors R. The output is also a set of vectors Z, where the number of output vectors equals the number of input vectors. The layer HopfieldLayer can realize SVM models, k-nearest neighbor, and LVQ.

wise convex function, and d an affine function. The CCCP algorithm solves this minimization problem by linearization of the concave part and is defined in Sriperumbudur & Lanckriet (2009) as

Figure A.1: The three cases of fixed points. a) Stored patterns (fixed point is a single pattern): patterns are stored if they are well separated. Each pattern x_i has a single fixed point x_i^* close to it. In the sphere S_i, pattern x_i is the only pattern and x_i^* the only fixed point. b) Metastable state (fixed point is the average of similar patterns): x_i and x_j are similar to each other and not well separated. The fixed point m_x^* is a metastable state that is close to the mean m_x of the similar patterns. c) Global fixed point (fixed point is the average of all patterns): no pattern is well separated from the others. A single global fixed point m_x^* exists that is close to the arithmetic mean m_x of all patterns.

We begin with a bound on the Jacobian of the iteration, thereby heavily relying on the Jacobian of the softmax from Lemma A24. Lemma A2. For N patterns X = (x_1, ..., x_N), p = softmax(βX^T ξ), M = max_i ‖x_i‖, and m = max_i p_i (1 - p_i), the spectral norm of the Jacobian J of the fixed point iteration is bounded:

Examples are c ≥ 3.1444 for β = 1, K = 3, d = 20, and p = 0.001 (a + ln(b) > 1.27), and c ≥ 1.2585 for β = 1, K = 1, d = 75, and p = 0.001 (a + ln(b) < -0.94).
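These constants can be evaluated with `scipy.special.lambertw`; a sketch assuming the definitions a = 2/(d-1) (1 + ln(2βK²p(d-1))), b = 2K²β/5, and c = b / W_0(exp(a + ln b)) stated above:

```python
import numpy as np
from scipy.special import lambertw

def capacity_constants(beta, K, d, p):
    """Constants a, b and c = b / W_0(exp(a + ln b)) of the storage capacity
    bound (sketch; follows the definitions given above)."""
    a = 2.0 / (d - 1) * (1.0 + np.log(2.0 * beta * K**2 * p * (d - 1)))
    b = 2.0 * K**2 * beta / 5.0
    c = b / lambertw(np.exp(a + np.log(b))).real   # upper branch W_0
    return a, b, c

# the two examples from the text
a1, b1, c1 = capacity_constants(beta=1.0, K=3.0, d=20, p=0.001)
assert a1 + np.log(b1) > 1.27 and c1 > 3.14

a2, b2, c2 = capacity_constants(beta=1.0, K=1.0, d=75, p=0.001)
assert a2 + np.log(b2) < -0.94 and c2 > 1.25
```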

2. Convex conjugate of a function multiplied by a scalar 0 < α ∈ R:

(α f)* = α f*(· / α) . (499)

3. Convex conjugate of the sum of a function and a scalar β ∈ R:

(f + β)* = f* - β . (500)

4. Convex conjugate of an affine transformation of the arguments. Let A be a non-singular matrix and b a vector.

Figure A.4: Left: Ridge plots of the distribution of counts k over time for BERT-small. Right: Violin plot of counts k after 1,450,000 steps, divided into the four classes from the main paper. The thresholds were adapted to the shorter sequence length.

Figure A.5: (a): The change of count density during training is depicted for the first 20,000 steps. (b): The corresponding distribution of the Frobenius norm of the Jacobian of the softmax function is depicted. The gradients with respect to the weights are determined by the Jacobian J defined in Eq. (59), as can be seen in Eq. (418), Eq. (429), and Eq. (435).

Figure A.6: Attentions of a Gaussian averaging head at initialization for sequence length N = 128. Every line depicts one Gaussian kernel. Here, the location parameters are initialized with the values of the supporting points, µ_i = s_i.

A.5.2 EXPERIMENT 2: MULTIPLE INSTANCE LEARNING DATASETS

Types of neural networks. We consider two types of feedforward neural networks: (I) Neural networks that propagate an activation vector from the input layer to the output layer. Examples are fully-connected or convolutional neural networks. (II) Neural networks that propagate a set of vectors from the input layer to the output layer, where each layer applies the same operation to each element of the set and the output layer may summarize the set via a vector. An example is the transformer. Recurrent neural networks are networks of type (I), which are iteratively applied to a set or a sequence, where intermediate results are stored in a memory and can be reused. Modern Hopfield networks can be integrated into both types of neural network architectures and make it possible to equip each of their layers with associative memories. See Fig. 2.

position encoding in the keys allows for performing pooling operations. The position encoding can be two-dimensional, in which case standard convolutional filters can be constructed as in convolutional neural networks (CNNs). The HopfieldPooling layer can substitute pooling, averaging, LSTM, and permutation equivariant layers. See Fig. 4. The layer HopfieldPooling is used for experiments with multiple instance learning tasks, e.g. for immune repertoire classification in the experiments.

Figure 4: The layer HopfieldPooling enables pooling or summarization of sets, which are obtained from the input or from previous layers. The input Y can be either a set or a sequence. The query patterns of each layer are static and can be learned. The output is a set of vectors Z, where the number of vectors equals the number of query patterns. The layer HopfieldPooling can realize multiple instance learning.
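The pooling mechanism can be sketched with a static query attending over the set of instances. The example below is a deliberate simplification (the projections W_K, W_Q, W_V are omitted and `hopfield_pooling` is a hypothetical helper name); it also shows that a zero query recovers plain average pooling:

```python
import math, random

def softmax(z, beta):
    m = max(beta * v for v in z)
    e = [math.exp(beta * v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def hopfield_pooling(Y, q, beta=1.0):
    # Y: list of N input vectors (the stored patterns); q: one static query.
    # Returns one pooled vector: softmax(beta * q . Y^T) applied to Y.
    scores = [sum(qi * yi for qi, yi in zip(q, y)) for y in Y]
    p = softmax(scores, beta)
    d = len(Y[0])
    return [sum(p[i] * Y[i][j] for i in range(len(Y))) for j in range(d)]

random.seed(0)
Y = [[random.gauss(0, 1) for _ in range(4)] for _ in range(6)]  # a set of 6 instances
q = [0.0] * 4   # zero query -> uniform weights -> plain average pooling
z = hopfield_pooling(Y, q, beta=1.0)
mean = [sum(col) / len(Y) for col in zip(*Y)]
assert all(abs(a - b) < 1e-12 for a, b in zip(z, mean))
```

With a learned query and larger β, the same operation moves from averaging toward selecting the instances most similar to the query, which is the behavior exploited in multiple instance learning.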

Results for MIL datasets Tiger, Fox, Elephant, and UCSB Breast Cancer in terms of AUC. Results for all methods except the first are taken from either a (Küçükaşcı & Baydogan, 2018) or b

Results on 75 small datasets of the UCI benchmarks, given as difference to average rank.

We applied the Hopfield layer HopfieldLayer, where the training data serves as stored patterns Y, the input vector as state pattern R, and the corresponding training labels as the pattern projection Y W_V of the output of the Hopfield layer. Our architecture with HopfieldLayer achieved state-of-the-art for predicting side effects on SIDER (0.672 ± 0.019) as well as for predicting β-secretase BACE (0.902 ± 0.023). For details, see Table A.5 in the appendix.
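This use of HopfieldLayer can be sketched as attention over the training set with labels as values: the query retrieves a softmax-weighted mixture of training labels. The sketch omits the learned maps W_K, W_Q, W_V, and `hopfield_lookup` is a hypothetical helper name, so this is an illustration of the idea rather than the library's implementation:

```python
import math

def softmax(z, beta):
    m = max(beta * v for v in z)
    e = [math.exp(beta * v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def hopfield_lookup(r, Y, labels, beta):
    # r: query vector; Y: stored training inputs; labels: one-hot training labels.
    # Output: attention-weighted average of the labels. Large beta approaches
    # 1-nearest-neighbor retrieval; small beta approaches the class prior.
    scores = [sum(a * b for a, b in zip(r, y)) for y in Y]
    p = softmax(scores, beta)
    k = len(labels[0])
    return [sum(p[i] * labels[i][j] for i in range(len(Y))) for j in range(k)]

Y = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]   # stored training inputs
labels = [[1, 0], [1, 0], [0, 1], [0, 1]]               # two classes, one-hot
pred = hopfield_lookup([0.95, 0.05], Y, labels, beta=8.0)
assert pred[0] > pred[1]   # a query near the class-0 patterns retrieves class 0
```

This retrieval view is what connects HopfieldLayer to SVM-like, k-nearest-neighbor, and LVQ models mentioned in the caption of Figure 5.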

In a d-dimensional space, the standard Hopfield model can store d uncorrelated patterns without errors, but only Cd/ln(d) random patterns with C < 1/2 for a fixed stable pattern or C < 1/4 if all patterns are stable (McEliece et al., 1987).

which together amount to 2 · d_k · d_y parameters. As a concrete example, the BERT-base model from Devlin et al. (2019) has an embedding dimension d_y = 768, a projection dimension d_k = 64, and a sequence length of N = 512. Compared to the Gaussian head, in this case (2 · 768 · 64)/(2 · 512) = 96 times more parameters are trained for the attention mechanism itself.
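The quoted ratio can be evaluated directly. This assumes the Gaussian head stores one location and one width per sequence position, i.e. 2N parameters (an assumption made explicit here, not stated in the formula above):

```python
# Parameter count comparison (sketch): a standard attention head needs W_Q and
# W_K of shape (d_y, d_k), i.e. 2 * d_y * d_k parameters, while a Gaussian head
# with a location and a width per position needs 2 * N parameters (assumption).
d_y, d_k, N = 768, 64, 512            # BERT-base embedding dim, head dim, length
attention_params = 2 * d_y * d_k      # 98304
gaussian_params = 2 * N               # 1024
ratio = attention_params / gaussian_params
print(ratio)  # 96.0
```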

Table 1 reports the average performance on the simulated immunosequencing datasets (last column) and the performance on the datasets of the remaining three categories. DeepRC outperforms all competing methods with respect to average AUC. Across categories, the runner-up methods are either the SVM for MIL problems with MinMax kernel or the burden test.

Average performance over 5 CV folds for each of the 5 datasets. In each dataset, a signal was implanted with a frequency of 10%, 1%, 0.5%, 0.1%, and 0.05%, respectively. Simulated: here we report the mean over 18 simulated datasets with implanted signals and varying difficulties. The error reported is the standard deviation of the AUC values across the 18 datasets.

Fox and Tiger are MIL datasets for image annotation which comprise color images from the Corel dataset that have been preprocessed and segmented. An image consists of a set of segments (or blobs), each characterized by color, texture, and shape descriptors. The datasets have 100 positive and 100 negative example images; the latter have been randomly drawn from a pool of photos of other animals. Elephant has 1391 instances and 230 features. Fox has 1320 instances and 230 features.

Table A.2: Hyperparameter search space of a manual hyperparameter selection on the respective validation sets of the Elephant, Fox, Tiger, and UCSB breast cancer datasets.

3.2 Methods compared. Modern Hopfield networks via the layer Hopfield are compared to 17 groups of methods (Fernández-Delgado et al., 2014; Klambauer et al., 2017a):

Table 5: Results on drug design benchmark datasets. Predictive performance (ROC-AUC) on the test set as reported by Jiang et al. (2020) for 50 random splits.

ACKNOWLEDGMENTS

The ELLIS Unit Linz, the LIT AI Lab and the Institute for Machine Learning are supported by the Land Oberösterreich, LIT grants DeepToxGen (LIT-2017-3-YOU-003), and AI-SNN (LIT-2018-6-YOU-214), the Medical Cognitive Computing Center (MC3), Janssen Pharmaceutica, UCB Biopharma, Merck Group, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, TGW, Primal, S3AI (FFG-872172), Silicon Austria Labs (SAL), Anyline, FILL, EnliteAI, Google Brain, ZF Friedrichshafen AG, Robert Bosch GmbH, TÜV Austria, DCS, and the NVIDIA Corporation. IARAI is supported by Here Technologies.

A APPENDIX

This appendix consists of six sections (A.1–A.6). Section A.1 introduces the new modern Hopfield network with continuous states and its update rule. Furthermore, Section A.1 provides a thorough theoretical analysis of this new Hopfield network. Section A.2 provides the mathematical background for Section A.1. Section A.3 reviews the binary modern Hopfield networks of Krotov and Hopfield. Section A.4 shows that the Hopfield update rule is the attention mechanism of the transformer. Section A.5 gives details on the experiments. Section A.6 describes the PyTorch implementation of layers based on the new Hopfield networks and how to use them.

Lemma A15. For 0 ≤ x ≤ π the function cos can be upper bounded by:


Proof. We use the infinite product representation of cos, cf. (Olver et al., 2010, (4.22.2)):

cos(x) = ∏_{n=1}^{∞} (1 − 4x² / ((2n − 1)² π²)).

Since the bound holds for |x| ≤ π and n ≥ 2, we get the following upper bound on Eq. (330). The last-but-one inequality uses x ≤ π, which implies x/π ≤ 1. Thus Eq. (329) is proven.

Exponential storage capacity: the base c as a function of the parameter β, the radius M of the sphere, the probability p, and the dimension d of the space. We express the number N of stored patterns by an exponential function with base c > 1 and an exponent linear in d. We derive constraints on the base c as a function of β, the radius M of the sphere, the probability p that all patterns can be stored, and the dimension d of the space. With β > 0, K > 0, and d ≥ 2 (to ensure a sphere), the following theorem gives our main result.

Theorem A5 (Storage Capacity (Main): Random Patterns). We assume a failure probability 0 < p ≤ 1 and randomly chosen patterns on the sphere with radius M := K √(d − 1). We define

a := (2/(d − 1)) (1 + ln(2 β K² p (d − 1))), b := (2 K² β)/5, c := b / W₀(exp(a + ln(b))),

where W₀ is the upper branch of the Lambert W function (Olver et al., 2010, (4.13)), and ensure

c ≥ (2/√p)^{4/(d − 1)}.

Then with probability 1 − p, the number of random patterns that can be stored is

N ≥ √p c^{(d − 1)/4}.

Therefore it is proven for c ≥ 3.1546 with β = 1, K = 3, d = 20, and p = 0.001 (a + ln(b) > 1.27) and proven for c ≥ 1.3718 with β = 1, K = 1, d = 75, and p = 0.001 (a + ln(b) < −0.94).

Proof. We consider the probability that the master inequality Eq. (311) is fulfilled:

For the log-sum-exponential

lse(β, x) = β⁻¹ ln( Σ_{i=1}^{N} exp(β x_i) ),

the Legendre transform is the negative entropy function restricted to the probability simplex. Conversely, for the negative entropy function restricted to the probability simplex, the Legendre transform is the log-sum-exponential.

Proof. See page 93, Example 3.25 in Boyd & Vandenberghe (2009) and (Gao & Pavel, 2017). If f is a regular convex function (a lower semi-continuous convex function), then f** = f according to page 135, Exercise 11.2.3 in Garling (2017).
If f is lower semi-continuous and convex, then f** = f according to Theorem 13.37 (Fenchel–Moreau) in Bauschke & Combettes (2017). The log-sum-exponential is continuous and convex.

Lemma A29. Let X Xᵀ be non-singular and X a Hilbert space. We define

and

The Legendre transform of lse(β, Xᵀ ξ) is

Proof. We use the definition of the Legendre transform:

According to page 93, Example 3.25 in Boyd & Vandenberghe (2009), the equations for the maximum

The domain of lse(β, Xᵀ ξ)* is X*, since on page 93, Example 3.25 in Boyd & Vandenberghe (2009) it was shown that outside

We also present some special values of the Lambert W function (Olver et al., 2010, (4.13)):

Lemma A31.

W(e^{1+e}) = e, (533)

where the Omega constant Ω is Ω = W(1), i.e. Ω exp(Ω) = 1.

We need in some proofs a version of the mean value theorem, as given in the next lemma. Let x ∈ U and h ∈ Rⁿ be vectors such that the line segment x + t h for 0 ≤ t ≤ 1 is in U. Then the following holds:

f(x + h) − f(x) = ( ∫₀¹ J(x + t h) dt ) h,

where J is the Jacobian of f and the integral of the matrix is component-wise.

Proof. Let f₁, . . . , f_m denote the components of f and define g_i : [0, 1] → R by g_i(t) = f_i(x + t h); then, by the fundamental theorem of calculus, g_i(1) − g_i(0) = ∫₀¹ g_i′(t) dt. The statement follows since the Jacobian J has entries ∂f_i/∂x_j.

A.3 MODERN HOPFIELD NETWORKS: BINARY STATES (KROTOV AND HOPFIELD)

A.3.1 MODERN HOPFIELD NETWORKS: INTRODUCTION

A.3.1.1 Additional Memory and Attention for Neural Networks. Modern Hopfield networks may serve as additional memory for neural networks. Different approaches have been suggested to equip neural networks with an additional memory beyond recurrent connections. The neural Turing machine (NTM) is a neural network equipped with an external memory and an attention process (Graves et al., 2014). The NTM can write to the memory and can read from it. A memory network (Weston et al., 2014) consists of a memory together with the components: (1) input feature map (converts the incoming input to the internal feature representation), (2) generalization (updates old memories given the new input), (3) output feature map (produces a new output), and (4) response (converts the output into the response format).

The energy of binary modern Hopfield networks is defined via the stored patterns x_i and the actual state ξ:

E = − Σ_{i=1}^{N} F(x_iᵀ ξ),

with F(a) = aⁿ, where n = 2 gives the energy function of the classical Hopfield network. This allows the storage of α_n d^{n−1} patterns (Krotov & Hopfield, 2016). Krotov and Hopfield (Krotov & Hopfield, 2016) suggested for minimizing this energy an asynchronous update dynamics T = (T_j) for component ξ_j. For any i and any x̃_i taken uniformly at random from the Hamming sphere with radius ρd centered at x_i, S(x_i, ρd), where ρd is assumed to be an integer, it holds that Pr(∃i ∃j : T_j(x̃_i) ≠ x_ij) → 0.

Proof. The proof can be found in Demircigil et al. (2017).

The number of patterns N = exp(αd) + 1 is exponential in the number d of components. The result Pr(∃i ∃j : T_j(x̃_i) ≠ x_ij) → 0 means that one update for each component is sufficient to recover the pattern with high probability. The constraint α < I(1 − 2ρ)/2 on α gives the trade-off between the radius of attraction ρd and the number N = exp(αd) + 1 of patterns that can be stored.
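The exponential-interaction variant F(a) = exp(a) (Demircigil et al., 2017) and its asynchronous update can be sketched in a few lines. This is a toy illustration, not the authors' implementation; `energy` and `update_component` are hypothetical helper names, and the patterns are chosen mutually orthogonal so that one sweep provably recovers a corrupted pattern:

```python
import math

def energy(X, xi):
    # E(xi) = -sum_i exp(x_i^T xi), i.e. F(a) = exp(a) (Demircigil et al., 2017)
    return -sum(math.exp(sum(a * b for a, b in zip(x, xi))) for x in X)

def update_component(X, xi, j):
    # asynchronous update: set component j to the sign with lower energy
    plus, minus = list(xi), list(xi)
    plus[j], minus[j] = 1, -1
    return plus if energy(X, plus) <= energy(X, minus) else minus

# three mutually orthogonal binary patterns in d = 8
X = [[1, 1, 1, 1, 1, 1, 1, 1],
     [1, 1, 1, 1, -1, -1, -1, -1],
     [1, -1, 1, -1, 1, -1, 1, -1]]
xi = list(X[1]); xi[0] = -xi[0]   # corrupt one bit of the second pattern
for j in range(len(xi)):          # one asynchronous sweep over all components
    xi = update_component(X, xi, j)
assert xi == X[1]                 # the stored pattern is recovered
```

Because the exponential term of the closest pattern dominates the energy difference, each component update already points toward that pattern, matching the "one update per component suffices" statement above.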

Theorem A10 in particular implies that

Pr(∃i ∃j : T_j(x_i) ≠ x_ij) → 0, i.e. with a probability converging to 1, all the patterns are fixed points of the dynamics. In this case we can have α → I(1)/2 = ln(2)/2.

Krotov and Hopfield define the update dynamics T_j(ξ) in Eq. (545) via energy differences of the energy in Eq. (544). First we express the energy in Eq. (544) with F(a) = exp(a) (Demircigil et al., 2017) by the lse function. Then we use the mean value theorem to express the update dynamics T_j(ξ) in Eq. (545) by the softmax function. For simplicity, we set β = 1 in the following. There exists a v ∈ [−1, 1] with

where e_j is the Cartesian unit vector with a one at position j and zeros elsewhere, and [·]_j is the projection onto the j-th component.

To see this, we assume N stored (key) patterns y_i and S state (query) patterns r_i that are mapped to the Hopfield space of dimension d_k. We set

and multiply the result of our update rule with W_V. The matrices Y = (y_1, . . . , y_N)ᵀ and R = (r_1, . . . , r_S)ᵀ combine the y_i and r_i as row vectors. We define the matrices

changed to a row vector, we obtain for the update rule Eq. (3) multiplied by W_V:

The left part of Eq. (548) is the transformer attention. Besides the attention mechanism, Hopfield networks allow for other functionalities in deep network architectures, which we introduce via specific layers in the next section. The right part of Eq. (548) serves as the starting point for these specific layers.

Operating class (I) (averaging over a very large number of patterns) is abundant in lower layers. Similar observations have been reported in other studies (Toneva & Wehbe, 2019a; b; Tay et al., 2020). Operating class (III) (medium metastable states) is predominant in the last layers.

A.5.1.2 Experimental Setup. Transformer architectures are known for their high computational demands. To investigate the learning dynamics of such a model while keeping training time manageable, we adopted the BERT-small setting from ELECTRA (Clark et al., 2020).
It has 12 layers, 4 heads, and a reduced hidden size; the sequence length is shortened from 512 to 128 tokens and the batch size is reduced from 256 to 128. Additionally, the hidden dimension is reduced from 768 to 256 and the embedding dimension from 768 to 128 (Clark et al., 2020). Training such a BERT-small model for 1.45 million update steps takes roughly four days on a single NVIDIA V100 GPU.

For each head in each layer, the distribution of the minimal number k of patterns required for the softmax values to sum to 0.90 is displayed as a violin plot in a panel; k indicates the size of a metastable state. The bold number in the center of each panel gives the median k of the distribution. The heads in each layer are sorted according to k, and attention heads belong to the class they mainly operate in. Class (IV) in blue: small metastable state or fixed point close to a single pattern, which is abundant in the middle layers (6, 7, and 8). Class (II) in orange: large metastable state, which is prominent in middle layers (3, 4, and 5). Class (I) in red: very large metastable state or global fixed point, which is predominant in the first layer. These heads can potentially be replaced by averaging operations. Class (III) in green: medium metastable state, which is frequently observed in higher layers. We hypothesize that these heads collect the information required to perform the respective task; they should be the main target for improving transformer and BERT models.

We evaluate on four drug design datasets (Wu et al., 2017), which are challenging for deep learning methods. The first dataset is HIV, which was introduced by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen. The second dataset is BACE, which provides IC50 measurements for binding affinities of inhibitors (molecules) to the human β-secretase 1 (BACE-1).
The third dataset is BBBP (blood-brain barrier permeability), which stems from modeling and predicting the blood-brain barrier permeability (Martins et al., 2012). The fourth dataset is SIDER (Side Effect Resource) (Kuhn et al., 2016) and contains 1427 approved drugs. These datasets represent four areas of modeling tasks in drug discovery, namely developing accurate models for predicting (a) new anti-virals (HIV), (b) new protein inhibitors (BACE), (c) metabolic effects (BBBP), and (d) side effects of a chemical compound (SIDER).

We implemented a Hopfield layer HopfieldLayer, in which we used the training input as stored pattern Y (key), the training label as pattern projection Y W_V (value), and the input as state pattern R (query). As described in Section A.6, by concatenation of input z_i and target t_i the matrices W_K and W_V can be designed such that the input z_i is used inside the softmax and the target t_i outside the softmax. All hyperparameters were selected on separate validation sets, and we selected the model with the highest validation AUC on five different random splits.

A.5.4.2 Results. We compared the Hopfield layer HopfieldLayer to Support Vector Machines (SVMs) (Cortes & Vapnik, 1995; Schölkopf & Smola, 2002), Extreme Gradient Boosting (XGBoost) (Chen & Guestrin, 2016), Random Forests (RFs) (Breiman, 2001), Deep Neural Networks (DNNs) (LeCun et al., 2015; Schmidhuber, 2015), and to graph neural networks (GNNs) like Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), Graph Attention Networks (GATs) (Veličković et al., 2018), Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017), and Attentive FP (Xiong et al., 2020). Our architecture with HopfieldLayer achieved state-of-the-art for predicting side effects on SIDER (0.672 ± 0.019) as well as for predicting β-secretase BACE (0.902 ± 0.023). See Table A.5 for all results, where the results of the other methods are taken from Jiang et al.
(2020).

In contrast to classical Hopfield networks, the Hopfield layer is based on modern Hopfield networks with continuous states, which have increased storage capacity, as discussed in the main paper. Like classical Hopfield networks, the dynamics of the single heads of a Hopfield layer follow an energy minimization dynamics. This energy minimization endows our Hopfield layer with several advantages over other architectural designs like memory cells, associative memories, or attention mechanisms. For example, the Hopfield layer has more functionality than a transformer self-attention layer (Vaswani et al., 2017a), as described in Sec. A.6.2. Possible use cases are given in Sec. A.6.3. Source code is provided on GitHub.

A.6.2 FUNCTIONALITY

Non-standard functionalities that are added by a Hopfield layer are:
• association of two sets,
• multiple updates for precise fixed points,
• variable β that determines the kind of fixed points, and
• dimension of the associative space for controlling the storage capacity.

Association of two sets. The Hopfield layer makes it possible to associate two sets of vectors. This general functionality allows
• for transformer-like self-attention,
• for decoder-encoder attention,
• for time series prediction (possibly with positional encoding),
• for sequence analysis,
• for multiple instance learning,
• for learning with point sets,
• for combining data sources by associations,
• for constructing a memory,
• for averaging and pooling operations, and
• for many more.

The first set of vectors consists of S raw state patterns R = (r_1, . . . , r_S)ᵀ with r_s ∈ R^{d_r}, and the second set of vectors consists of N raw stored patterns Y = (y_1, . . . , y_N)ᵀ with y_i ∈ R^{d_y}. Both the S raw state patterns and the N raw stored patterns are mapped to an associative space R^{d_k} via the matrices W_Q ∈ R^{d_r × d_k} and W_K ∈ R^{d_y × d_k}, respectively.
We define a matrix Q (= Ξᵀ) of state patterns ξ_n = W_Qᵀ r_n in the associative space R^{d_k} and a matrix K (= Xᵀ) of stored patterns x_i = W_Kᵀ y_i in the associative space R^{d_k}.

In the main paper, Eq. (3) defines the novel update rule:

ξ^new = X softmax(β Xᵀ ξ).

For multiple patterns, Eq. (3) becomes:

Ξ^new = X softmax(β Xᵀ Ξ),

where the softmax is applied column-wise. The raw stored patterns Y can in principle also be two different input tensors. Optionally, multiple updates take place in the projected space of Q and K. This update rule is obtained, e.g., from the full update Eq. (423) or the simplified update Eq. (424) in the appendix.
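The multi-pattern update and its correspondence to transformer attention can be sketched as follows. With β = 1/√d_k and the value projection W_V omitted (values equal the keys), the update is exactly softmax(Q Kᵀ / √d_k) applied to the stored patterns; `hopfield_update` is a hypothetical helper name and the matrix ops are plain Python for self-containment:

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax_rows(A, beta):
    out = []
    for row in A:
        m = max(beta * v for v in row)
        e = [math.exp(beta * v - m) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

def hopfield_update(R, Y, W_Q, W_K, beta):
    # Eq. (3) for multiple state patterns: Xi_new = softmax(beta * Q K^T) K
    Q = matmul(R, W_Q)   # state (query) patterns in the associative space
    K = matmul(Y, W_K)   # stored (key) patterns in the associative space
    scores = [[sum(a * b for a, b in zip(q, k)) for k in K] for q in Q]  # Q K^T
    return matmul(softmax_rows(scores, beta), K)

# toy example with d_r = d_y = d_k = 2 and identity projections; beta = 1/sqrt(d_k)
# makes this the transformer attention softmax(Q K^T / sqrt(d_k)) V with V = K
R = [[1.0, 0.0]]
Y = [[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0]]
W_Q = W_K = [[1.0, 0.0], [0.0, 1.0]]
Z = hopfield_update(R, Y, W_Q, W_K, beta=1.0 / math.sqrt(2))
# the query retrieves a vector pulled toward the matching stored pattern [2, 0]
assert Z[0][0] > 1.0 and abs(Z[0][1]) < 0.5
```

Raising β sharpens the softmax, moving the retrieved vector from a metastable average of patterns toward a single stored pattern, which mirrors the three kinds of fixed points described in Figure A.1.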

