SELF-ORGANIZING INTELLIGENT MATTER: A BLUEPRINT FOR AN AI GENERATING ALGORITHM

Abstract

We propose an artificial life framework aimed at facilitating the emergence of intelligent organisms. In this framework there is no explicit notion of an agent: instead there is an environment made of atomic elements. These elements contain neural operations and interact through exchanges of information and through physics-like rules contained in the environment. We discuss how an evolutionary process can lead to the emergence of different organisms made of many such atomic elements, which can coexist and thrive in the environment. We discuss how this forms the basis of a general AI generating algorithm. We provide a simplified implementation of such a system and discuss what advances need to be made to scale it up further.

1. INTRODUCTION

An AI generating algorithm (Clune, 2019) is a computational system that runs by itself, without outside intervention, and after a certain amount of time generates intelligence (though the general idea is much older than this reference). Evolution on Earth is the only such system known to have succeeded. In this paper we propose a computational framework and argue why it might constitute such a general algorithm while being computationally tractable on current or near-future hardware. Building such a system successfully will take many iterations and require a number of advances. What we hope to provide, however, is a general procedure in which better and better systems arise through improving the elements of the system and through experimentation, rather than through a fundamentally new algorithm. As an example, we have had such a procedure for supervised learning since the 1980s: neural networks trained by back-propagation and stochastic gradient descent. Reaching the current impressive performance required a number of clever improvements, such as rectified non-linearities, convolutions, batch normalization, attention, residual connections and better optimizers, but the overall algorithm hasn't changed.

1.1. EVOLUTION

Evolution is the primary process by which our algorithm operates, so we describe it here. In machine learning, the word evolution is typically used to describe variations of the following process (Back, 1996). We have a number of individuals and an objective to optimize. We evaluate the individuals, select the ones with good values of the objective, and mutate them to produce the next generation. Over time, individuals that are better at optimizing the objective appear. The use of the word evolution in this context is perhaps unfortunate, as this process is quite unlike the evolution observed in nature (Stanley et al., 2017). The clearest difference we can see is in the outcome. The former results in a small variation of final individuals that are the best at the objective. The latter results in the coexistence of a huge variety of individuals with different behaviours: it is open-ended. Let us therefore discuss the basic operation of natural evolution. We have an environment built out of elements (atoms) that are organized into bigger units, such as individual bacteria or animals or groups of these. Those classes of units that propagate (Joyce, 1994) into the future (e.g. replicate; we will discuss this in a moment) keep existing, while those that don't propagate cease to exist. There is no objective based on which units are selected for propagation. Different collections of units find different means of propagating. An important mechanism for the coexistence of a large number of different solutions is niche construction (NicheConstruction, 2020). Different collections of individuals modify or form the environment for one another. For example, a bacterium consumes food and excretes waste products, which modifies the local environment, being either food or a toxic substance for other bacteria. Another example is a prey forming a food source for a predator.
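For concreteness, the conventional objective-driven evolutionary loop described at the start of this section (evaluate, select, mutate) can be sketched as follows. The genomes, fitness function and selection scheme here are placeholder choices for illustration, not components of our framework:

```python
import random

random.seed(0)

def objective(genome):
    # Toy fitness: every gene should approach 0.5 (a placeholder objective).
    return -sum((x - 0.5) ** 2 for x in genome)

def mutate(genome, sigma=0.1):
    # Gaussian perturbation of each gene.
    return [x + random.gauss(0, sigma) for x in genome]

# Initialize a population of random genomes.
population = [[random.random() for _ in range(8)] for _ in range(50)]

for generation in range(100):
    population.sort(key=objective, reverse=True)   # evaluate
    parents = population[:10]                      # select (truncation selection)
    population = [mutate(random.choice(parents))   # mutate to form next generation
                  for _ in range(50)]

best = max(population, key=objective)
```

As the text argues, such a loop converges on small variations of a single best solution, quite unlike the open-ended diversity produced by natural evolution.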
Such predator-prey systems are self-balancing: for example, too many predators means they can't find food and die off, and vice versa. This means that the coexistence of multiple solutions is maintained (and keeps evolving). Collections of individuals in such systems have different means of propagating, rather than being selected by a global objective. The lack of an objective, we believe, is critical and fundamental to the open-ended creation and coexistence of diversity, as argued in (Stanley & Lehman, 2015), and runs counter to most developments in machine learning. Conversely, the presence of an objective in a system likely leads to a collapse of diversity. To see that attempting to create an objective is problematic, let us try to suggest one and see what the problems would be. One of the clearest objectives one might propose is to reward an individual for reproduction. However, a predator might then simply kill its offspring, which would increase its chance of making another one. We could try to tweak this or find other objectives, but this might lead to unwanted and unforeseen behaviours similar to the one described above. There are other issues. We don't actually make copies of ourselves, but instead mix genes from individuals (sexual reproduction), which we need to select. What do we actually want to reward? In addition, most of the time, the propagation of a species depends on individuals working together in a group, often sacrificing some members for the good of the group. What, then, is the real reproducing unit? This is the reason why the word "propagate" (Joyce, 1994) is more appropriate than "reproduce". What really happens is that those classes of groups of elements that are set "a certain way" propagate, and those that are not, don't. Note that this does not contradict the evolution of an intrinsic reward, which is an evolved means of finding a good policy during the lifetime of a given individual. Another example of an evolutionary process is our society.
People don't have children in proportion to some objective the society has set, nor is there just one job or hobby that we all converge on as the "best". People engage in a large number of jobs and hobbies. At the same time, memes, values, work practices, company structures and many other emergent concepts propagate. All these things coexist, both cooperating and competing.
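The self-balancing predator-prey dynamic discussed above can be illustrated with a discrete Lotka-Volterra-style update. The rate parameters are arbitrary illustrative values, not taken from any model in this paper:

```python
# Discrete predator-prey dynamics: prey reproduce, predators eat prey,
# predators starve without prey. Neither an objective nor a selection step
# appears anywhere; the balance is an emergent property of the interaction.
prey, predators = 100.0, 20.0
history = []
for step in range(500):
    births      = 0.05 * prey                   # prey reproduction
    predation   = 0.001 * prey * predators      # prey eaten by predators
    pred_growth = 0.0005 * prey * predators     # predators fed by prey
    pred_death  = 0.04 * predators              # predator starvation
    prey      += births - predation
    predators += pred_growth - pred_death
    history.append((prey, predators))
```

Too many predators depress the prey population, which in turn starves the predators, so the two populations oscillate around an equilibrium instead of one eliminating the other.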

1.2. PRINCIPLES

The field of artificial life aims at producing life and an evolutionary process inside a computer (Langton, 1997; Ray, 1991; Lenski et al., 2003; Sims, 1994; Yaeger et al., 2011; Gras et al., 2009; Soros & Stanley, 2014); see (Aguilar et al., 2014) for a review. In the seminal work on Tierra (Ray, 1991), Thomas Ray created an artificial life system in the substrate of assembly instructions in computer memory. The set of instructions is executed by a number of heads, and one organism corresponds to one head. The system was initialized with a handcrafted sequence of instructions that, when executed, copies itself to another part of the memory. The executions undergo mutations. Some of these result in organisms that are unable to replicate, while others get better at it. Some organisms find ways to use other organisms' copying mechanisms to copy themselves, forming parasites. Then resistance to parasites evolves, followed by hyper-parasites, and phenomena of sociality and cheating are observed. This process eventually peters out, and the quest ever since has been to create a system that is truly open-ended, where complexity keeps increasing without bounds (Standish, 2003). A number of principles that characterize an open-ended process have been proposed (Soros & Stanley, 2014; Taylor et al., 2016). Here we select two that we find the most important (points 2 and 3) and introduce two new ones (points 1 and 4).

• There should be no built-in notion of an individual and no built-in operation for reproduction of an individual. Instead, these should be emergent properties of collections of units, composing new collections of units or themselves.

• The evolution of new (here emergent) individuals should create novel opportunities for the survival of others (Soros & Stanley, 2014).

• The potential size and complexity of the individuals' phenotypes should be (in principle) unbounded (Soros & Stanley, 2014).
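The Tierra-style dynamic of self-replication under mutation and finite memory can be sketched in a toy model. This is only an illustration of the mechanism, not Ray's actual instruction set: genomes are short integer lists, replication works only while a designated "copy" gene is intact, and a fixed memory cap creates competition for space:

```python
import random

random.seed(0)
COPY = 1  # the gene that encodes the ability to self-replicate (toy convention)

def replicate(genome, mut_rate=0.05):
    """Return a mutated copy of the genome, or None if replication is broken."""
    if genome[0] != COPY:
        return None  # a mutation has destroyed this organism's copy mechanism
    return [g if random.random() > mut_rate else random.randint(0, 3)
            for g in genome]

# A single hand-crafted ancestor seeds the system, as in Tierra.
population = [[COPY, 2, 3, 2]]
for generation in range(20):
    offspring = [c for g in population if (c := replicate(g)) is not None]
    # Finite memory: dead (non-replicating) genomes still occupy space,
    # so replicators compete for the remaining room.
    population = (population + offspring)[:200]
```

Even this crude cap-and-copy loop shows the basic Tierra ingredients: variation accumulates, some lineages lose the ability to replicate, and the memory limit forces competition. What it cannot show is the open-ended part (parasites, hyper-parasites, sociality), which required organisms able to exploit each other's code.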

