Hi! My name is Lana, and in this talk I will present some of the links between my scientific field, Artificial Life (also called ALife), and games, whether it’s video games or other types of games.
First I will introduce you to Artificial Life: what we want, what we do, and how we do it. Then I will talk about one area with a big overlap between ALife and games: Open Endedness. Open Endedness in games, and Open Endedness research as a game. Finally, I will present two ALife competitions that you might be interested in joining if you work in video games.
First, a brief self-introduction. My name is Lana Sinapayen, and I’m a researcher in ALife and AI. I used to do research on drones, rat neurons, and cellular automata, but now I focus more on the topics of prediction and failure. I’m also very involved in open science. Before joining the field of ALife, my research was very conventional and serious. Then I joined ALife and started doing weird things, like trying to teach rat neurons how to drive a robot. Here are some of my latest projects: I evolve visual illusions; I try to reproduce the tree of life in simulation from scratch; I build indestructible automata, or, on the contrary, I try to destroy automata; and in collaboration with NASA I look for alien life in the universe. These might all seem quite unrelated, but I hope that by the end of my talk you will understand the links between all of these projects. So what is ALife? I’m very curious to know what you think of when you hear “artificial life.” Maybe you think of something like this, the Game of Life created by John Conway? Or maybe you think more of something like this, a mix between robots and humans? Or even uploading your consciousness to the internet. Well, ALife is none of these things… or maybe it’s all of those things.
ALife focuses on building Life as it could be. To give you an analogy: cognitive scientists study the brain to understand intelligence, and artificial intelligence scientists try to build a brain to understand intelligence. In the same way, in biology scientists study living organisms to understand life, and in ALife we try to build a living organism, also to understand Life. On the bottom left of this slide you have a creature called the Strandbeest, created by Theo Jansen: it’s a kind of machine that has no electrical parts and no brain, but it is far more efficient at walking than, say, Boston Dynamics robots. It can walk quite well on uneven surfaces, and it uses very little energy. In that sense you could say that it’s closer to Life than a robot is. So in ALife we try to build life in artificial systems because we think that life could be like software that you can run on different types of machines. It could be substrate-agnostic. What does “substrate-agnostic” mean? Here are some examples. A biological virus infects organisms, manipulating them and exploiting their resources to replicate itself. A computer virus tries to infect computers, stealing their resources to replicate itself and propagate. These two things are implemented in very different substrates, but they both have the same kind of function. Or take DNA. The function of DNA is to transmit information between cells, or between parent and offspring; but you can do the same thing with bits, which is why you can have genetic simulations in computers. If the function is just to transmit information, then there are several ways to do that. So exactly which of these functions of living organisms should we focus on reproducing in artificial systems, so that we recognize them as living? Is self-replication enough? Is it self-replication and evolution? Is it self-replication, evolution, and learning?
And once we find the right set of functions, we don’t have to limit ourselves to what already exists. We could build forms of life that are completely different from what we know, because life as we know it evolved through a series of coincidences and we don’t have to reproduce the exact same events.
Now let me give you a very brief summary of the history of ALife. The idea of a human creating life from scratch is of course quite old; in many cultures you have legends about a human who simply creates Life. Think for example of the Jewish golem. A lot of these stories share the idea of a magic word that you could put into a substrate to make it alive. So again there is this concept that the substrate doesn’t really matter, but what you put into it, the rules, they matter. The first attempts at actually building this kind of thing seem to date from antiquity to the Renaissance, with machines powered by water, steam, or even human energy that tried to imitate the motion of life, the appearance of life. The first recorded attempt at actually reproducing the functions of life comes from the French engineer Jacques de Vaucanson, who built a robot duck that could digest food… or at least he said the robot could digest things. In reality he was cheating, but at least he tried. Then in the 50s you have the rise of computation, with genetic algorithms and self-replication. Modern attempts at ALife continued with John Conway’s Game of Life in 1970, that thing I showed at the beginning with white cells and black cells and complex patterns that move around. The rules are very simple, but it turns out that this game is Turing complete, which means you can perform any kind of computation with it. Then at the end of the 80s came the birth of Artificial Life as a scientific field: Christopher Langton named the field “Artificial Life,” and in 1987 he organized the first ALife workshop. And now every year we have the ALife conference.
ALife has three main subfields: Hard ALife, Wet ALife and Soft ALife. Hard ALife focuses on hardware: everything that is robots. Here you have a termite robot, which alone cannot do much but when you have a swarm of them, they can build complex structures. Then Wet ALife focuses on wetware, everything that is chemistry and biology. Here the video shows oil droplets that move a bit like you would expect a living being to move. You can even apply evolution to these droplets to optimize for different behaviors. And then soft ALife focuses on software: everything that is simulation, genetic algorithms, AI, that kind of stuff. The “+1” field in ALife is Art. ALife and art have always been very linked because of the philosophical and aesthetic questions that ALife asks, like “what is Life”, “What does it look like” etc. The audio that you can hear in the background was generated using an ALife simulation. Agents from different species had to communicate with each other while also avoiding overlap with other species, and it really sounds quite a bit like what you would hear if you go outside at night and there are frogs, and crickets, and cicadas and all kinds of animals all talking to each other.
Now I would like to introduce the concept of Open Endedness. It’s at the same time different from and similar to what we mean when we say “an open ended game”. An open ended game is a game that is not bound by a plot: you can do whatever you want, in whatever order you want. You are free, and you might even want to play forever. In Artificial Life, open endedness means something that is always changing in an interesting way, and the way to keep it interesting is to always change the rules. A related concept is Open Ended Evolution (OEE). It’s basically one of the biggest goals of Artificial Life: the idea that you could start from nothing and get everything just by applying the right kind of evolution to your simulation. There are many different definitions of Open Ended Evolution, and they all include some version of exponential complexity. You might be familiar with the concept of the “singularity”: the pace of technological evolution getting faster and faster, until it’s basically infinite. Something similar happens in biological evolution, where you start with something that is not even alive, and then Life appears and evolves into all of these very complex and very different species. And having a lot of different things is not enough: you also need an increase in complexity. For example, one smartphone might be different from another smartphone, but what is interesting is that a smartphone is much more complex than a basic phone. In the same way, a bird is more complex than an ancient bacterium, which doesn’t have, for example, eyes. Now, we’re not even really sure that Open Ended Evolution is possible, because no one has yet managed to build a simulation that convinces other people that it really shows Open Ended Evolution, and there is a prize for whoever manages to build such a simulation.
It’s called the “Evolution Prize for Demonstration of Open Ended Evolutionary Innovation in a Closed System.” As I said, Open Ended Evolution could be impossible, the first indication being that so far no one has succeeded. But we can even ask whether Earth itself is open ended. Just because we see an explosion in evolution now doesn’t mean it will last forever. Another issue is that Earth is not a closed system. Life could have originated elsewhere and then contaminated Earth; and even if Life started on Earth, we are constantly influenced by all kinds of cosmic events, and it’s not certain that without them Earth would have Open Ended Evolution. For example, if an asteroid had not destroyed most of the dinosaurs, humans would not exist. Would we then have another type of species with complex culture, and technology, and smartphones? Nobody knows!
In the next part of my talk I will discuss open-endedness (as defined by ALife) in games. There are quite a few games where you try to imitate evolution. I like one called Cell Lab, where you have a petri dish and you get to choose the genes of the first cell that you put in, and from this one cell you have to reach various complex goals, like different configurations or different behaviors. But you can only choose the initial condition; after that, everything evolves on its own. You can get quite a bit of complexity in this game, but to do that you need a more and more complex starting point, so that you basically get complexity from complexity. And that would be okay if you could reach Open Ended Evolution, but in reality the game always plateaus at some point. Another approach is to start with entire ecosystems. A game called Orb Farm got quite popular at the start of the pandemic. You have a small fish tank to which you can add different species, and you have cycles between day and night, the main goal being to reach balance between all of the species and the nutrients. If you don’t reach balance, basically everyone dies. Which is kind of the opposite of OEE, because imbalance is a source of evolution: if everything is perfect, there is no reason to evolve or to change. So if we set evolution aside, what about “just” open-endedness? One obvious approach is to have a very complex game with lots of content, and a good way to reach this goal is procedural generation. Unfortunately, it does not always go well… This is a screenshot from No Man’s Sky. People were so excited at the idea of having an infinity of planets with all kinds of different animals and ecosystems… but when the game came out, people said it was boring and repetitive, because the gameplay was boring and repetitive. Even if each planet was different, what you could do was not that different.
Another issue is that combinatorial explosion is not the same as novelty: just because you have tons of different animals doesn’t mean they’re all interesting. It doesn’t really matter how many different animals you have if the patterns with which you create them are all the same. For example, I could say that I can create an infinity of colors between black and white; all of the colors are gray, but they’re slightly different grays. This is really not the same as saying I have an infinity of novel colors because, after black and white, I introduce red, for example. So there is a component of novelty that is really hard to define and to obtain in simulation. Generating new content is not the same as generating new rules. And yet, you often find open-endedness where you don’t expect it. Take a simple old game like Rayman. It’s a 2D game where you go from left to right and defeat enemies. It’s linear, and it’s not intrinsically open-ended, but you can decide to play it in an open-ended way. You can always make up new goals for yourself: finish the game without any power-up, or without jumping, or with all the power-ups, or without saving… humans are very good at making up new rules, even in very simple environments. In fact, as long as you have humans, it doesn’t have to be a game to be open ended. On the right you have a figure comparing different picture-sharing social media; researchers found that when you analyze the kinds of tags that people use, you find an open-ended increase in novelty. People don’t just use the same words or combinations of words; they also make up new words and new ways to use them, new definitions for new concepts that are born on these social media.
Actually, humans are so good at open endedness that you could say that as long as you have a human in your simulation, it’s cheating. You could give somebody two sticks and they would find an infinite number of ways to play with them… in fact, humans are so good at this that it can often be a problem. If you have ever built software, you know that whatever you do, people will find ways to use it in ways you didn’t expect. No matter how many new rules and new limitations you put in place, people find a way. In my opinion, even the concept of play is itself open ended. What happens when you make a new game? You do something random because it’s fun, and then you add new conditions and new rules until the game is just difficult enough that you sometimes fail and sometimes win. And if you get too good at these new rules, you just change the rules again. So open endedness research has tried to exploit this phenomenon. For example, there is a platform called Picbreeder, where the game is that you choose two pictures, these pictures mate and make offspring, and then you repeat the process again and again. At the beginning the pictures are quite boring, just circles or lines; but if you let people play long enough, you get this kind of output. Some look like animals or faces, buildings or objects. What is interesting about Picbreeder is that it doesn’t really have a goal. You just do what you want to do, and in this way it’s kind of similar to evolution, because evolution doesn’t really have a goal either. There is often a fixation on saying that evolution has a fitness function, and that it’s the survival of the fittest… but in reality, evolution doesn’t want anything: if you’re fit enough to survive, then that’s enough. Optimization is even the opposite of open endedness, because you’re defining one precise goal and expecting all of your species to converge to this goal, instead of having them diverge. Typically, fitness and optimization kill open-endedness.
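The breeding loop itself is easy to sketch. The real platform evolves neural networks (CPPNs) that draw the pictures, so this is only the shape of the loop, with toy number-list genomes and function names I made up:

```python
import random

def crossover(parent_a, parent_b):
    """Mix two genomes gene by gene, taking each gene from either parent."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genome, rate=0.1, scale=0.5):
    """Perturb each gene with a small probability."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def breed(parent_a, parent_b, n_children=9):
    """One Picbreeder-style step: the player's two favourites make a new generation."""
    return [mutate(crossover(parent_a, parent_b)) for _ in range(n_children)]

# Start from boring genomes (flat patterns) and let selection do the rest.
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(9)]
# On the real platform a human picks the parents; here we pick two at random.
mother, father = random.sample(population, 2)
population = breed(mother, father)
```

There is no fitness function anywhere in this loop: whatever the player happens to like becomes the next generation's ancestor, which is exactly the goal-less dynamic described above.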
I talked several times about the Game of Life, which is a cellular automaton. Cellular automata are sometimes described as zero-player games, because once you have chosen the initial configuration you don’t interfere anymore. You just look at what happens. The rules of the Game of Life are very simple: a live cell survives if it has two or three live neighbors, a dead cell comes alive if it has exactly three, and every other cell dies or stays dead. That is enough to make it able to compute anything, and you can find a lot of interesting videos on YouTube. Even more interesting is the continuous version of the Game of Life, created by Bert Chan. His simulation is called Lenia, and even with a continuous generalization of the same kind of rules you can get these kinds of very biological-looking patterns. It’s not only self-replication: you can even see things like membranes, which make these organisms look like cells; or things that look like tiny predators, or that interact with each other.
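Because the rules are so short, the whole Game of Life fits in a few lines. Here is a minimal sketch (my own implementation, storing live cells as a set of coordinates), demonstrated on the famous glider pattern:

```python
from collections import Counter

def step(live):
    """One Game of Life update. `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Survival with 2 or 3 neighbours, birth with exactly 3; everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider, with y increasing downwards:
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four steps the glider has moved diagonally by (+1, +1).
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

The zero-player aspect is visible in the code: once `glider` is chosen, the loop just applies `step` with no further input.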
Initially there wasn’t going to be any talk of my own research in this video, only me presenting other people’s interesting work. But I finished recording and realized that I was 10 minutes short, so there is time for my research after all. One characteristic of the Game of Life is that it’s very fragile. If you change even just one pixel, the whole thing collapses, and real living organisms are not this fragile. They can usually withstand a little bit of damage and then just repair themselves. It is this idea of self-repair that got me interested in Gacs’ automaton when I was a PhD student. So how do you make an automaton that cannot be destroyed? Gacs’ automaton refers to a paper that is 162 pages long, initially written to solve a physics problem called the “positive rates conjecture.” The paper was extremely long and complex, and even if you understood it, it wasn’t possible at the time to actually implement the automaton, because computers were not powerful enough. So people cited the paper, but nobody ever actually implemented it. When I tried to implement it, I realized that it was even more complex than it seemed, because you had to implement the automaton in binary. Still, I gave it a shot through something called Gray’s automaton, which is a simplification of Gacs’. This automaton has little cells that carry lots of information, including something called their address; and it is that address that they have to try to preserve, so that even if many cells are destroyed they can restore the address. The trick in this automaton is that every group of 15 cells is simulating a higher-order cell: the group of 15 cells represents one bigger cell. These bigger cells can send information to each other, although it’s very slow, and they can also send information downwards, to tell the cells that are simulating them that there is something to repair.
And you can take 15 of these bigger cells and make them simulate one even bigger cell. So you can simulate simulations forever, and that’s where the automaton gets its robustness from. In practice, though, you don’t make infinite simulations: you adjust the number of levels of simulation based on how much noise you assume the automaton will be subject to. In the following video I show a sort of proof of concept where I only destroy one cell, and we can see the waves of self-repair that go through the automaton. First everything is normal, then I destroy one cell, and the little green things that you can see are flags telling the automaton that it has something to repair. Then the green goes back to blue, because the automaton realizes that it has repaired the broken cell. Here is another video where I destroyed 30% of the cells of the automaton. What’s interesting in this video is that I only destroyed some cells, but at some point, like now, we can see that all of the addresses are completely random and destroyed; but slowly the automaton starts repairing itself… and suddenly everything is fine again.
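Gacs’ actual construction is far too intricate to sketch here, but the underlying intuition of redundancy-based repair can be shown with a toy automaton of my own (this is NOT Gacs’ or Gray’s scheme, just the simplest form of the idea): each group of 15 cells encodes one logical bit, and every cell resets itself to its group’s majority value.

```python
def repair_step(cells, block=15):
    """Toy self-repair through redundancy: each group of `block` cells encodes
    one logical bit, and every cell resets itself to its block's majority
    value. A toy illustration only, not Gacs' hierarchical construction."""
    repaired = []
    for i in range(0, len(cells), block):
        group = cells[i:i + block]
        majority = 1 if sum(group) > len(group) // 2 else 0
        repaired.extend([majority] * len(group))
    return repaired

# Encode the logical word 1, 0, 1 with 15-fold redundancy.
state = [1] * 15 + [0] * 15 + [1] * 15
# Damage a few cells in each block (fewer than half per block).
noisy = list(state)
for i in (0, 3, 7, 16, 20, 21, 33, 40, 44):
    noisy[i] ^= 1
# One repair step restores the original word.
assert repair_step(noisy) == state
```

The toy version fails as soon as one block loses its majority; the point of the hierarchical levels described above is precisely to keep repairing even then, by having bigger cells correct whole groups of smaller ones.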
Okay, but no organism in nature is actually invincible. They are not completely fragile, and they are not completely robust either. It is said that this in-betweenness is actually necessary for open endedness, notably to allow things like parasitism. Simulations that allow parasites to exist tend to achieve higher complexity than other simulations, because then you can have arms races and even symbiosis. Think for example of a virus that gets into your cell and hacks its machinery so that the cell will produce more viruses, or even things like mind control. I didn’t put a real picture here because I know some people would be grossed out, but there are several parasites that can actually change the behavior of other organisms. Some parasites can even force organisms to build special structures for them: for example, some insects can force trees to make round structures with a little chamber where the insect is protected. And recently a new type of automaton was published that I thought was perfect for this kind of parasitism. They are called “neural cellular automata” because each pixel has a small neural network inside that allows it to use information from its neighbors to calculate what it should do. And what made this article even more interesting is that the authors trained the neural networks to regenerate the shape when it was damaged. If you perturb it the right way you can get interesting results. I call this one mitosis: you get two happy little cyclops from one smiley… but if you wait too long, they kind of try to regenerate the lost eye and then they go a little bit crazy. And so the next paper, by some of the same authors, looked at techniques to manipulate the automaton. You insert a few parasitic cells and you train these parasitic cells to manipulate the host, for example preventing the tail from re-growing, or changing the color of the lizard.
They try to find how many virus cells you need to get the result that you want. But I was more interested in a concentrated form of control: not putting parasites at many different points, but restricting them to just one location and seeing how far you can go in changing what the automaton was trained to do. For example, can you destroy the lizard entirely with just a little square of parasitic cells? Or, in terms of mind control, in this case I tried to force the lizard to move left. The red line represents the parasites. Next I tried constructing unexpected structures. Here I tried to grow a third eye on the lizard, and it worked surprisingly well. You can also go bigger and build an entire inverted head on the lizard. And you can even almost convince it that it should actually be a Christmas tree. Honestly, neural cellular automata are super fun to play with, precisely because they are robust but still fragile.
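The papers use networks trained by gradient descent, which is too much to reproduce here, but the basic shape of a neural-CA update can be sketched with random weights and toy sizes (everything below, including the single linear layer standing in for the "small neural network", is my own simplification):

```python
import random

STATE, GRID = 4, 8          # channels per cell, grid side (toy sizes)
random.seed(0)
# One tiny shared "network": a single linear layer from the perception
# vector (own state + mean neighbour state) to a state update.
W = [[random.uniform(-0.1, 0.1) for _ in range(2 * STATE)] for _ in range(STATE)]

def nca_step(grid):
    """One neural-CA update: every cell reads its neighbourhood and applies
    the same small network to compute its next state."""
    new = [[None] * GRID for _ in range(GRID)]
    for y in range(GRID):
        for x in range(GRID):
            # Mean state of the 8 neighbours (toroidal wrap-around).
            mean = [0.0] * STATE
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dx, dy) == (0, 0):
                        continue
                    n = grid[(y + dy) % GRID][(x + dx) % GRID]
                    for c in range(STATE):
                        mean[c] += n[c] / 8
            percept = grid[y][x] + mean          # perception vector, length 2*STATE
            update = [sum(w * p for w, p in zip(row, percept)) for row in W]
            # Residual update: new state = old state + computed change.
            new[y][x] = [s + u for s, u in zip(grid[y][x], update)]
    return new

grid = [[[random.uniform(0, 1) for _ in range(STATE)] for _ in range(GRID)]
        for _ in range(GRID)]
grid = nca_step(grid)
```

Parasitism fits naturally into this structure: you keep the host weights `W` frozen and give a small patch of cells a different, adversarially trained network, which is essentially what the manipulation paper does.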
Okay, but that was all about just one organism. What if you are really trying to recreate the tree of life: start from zero and get a lot of different species? That was one of my side projects during my master’s, so I never really published it, and the graphics are really bad because I did it for fun, but I do think the results were interesting. I came across a paper called the Interface Theory of Perception, by Donald Hoffman, who argues that in nature, having perfectly accurate perception is not always a good thing, because it’s costly in terms of brain power. You have to process all this information, probably integrate it with other information, and then make a decision… so instead of this perfect perception, what you really need is just a quick way to make a decision. He thinks that the world we perceive is basically made of interfaces, where your senses help you divide the world into something akin to icons on your computer. The icon has just the necessary information for you to know what it is and what you should do with it; but as Hoffman says, the color, shape, and position of an icon don’t reconstruct the true color, shape, and position of the file in the computer. It’s just an abstraction. So I thought: I’m going to make a simulation where the agents have very limited perception, and I won’t even define species. Excuse the bad graphics, but what you can see here is that this square is where all agents can get free energy. At each step some light agents are generated, a bit like sunlight on Earth; those are the random black dots. These agents can basically move around for a few steps and maybe have one kid before they die, so obviously most of them never really get to take off. But if the kid has some interesting mutations, like the ability to detect other agents and eat them, then it might be able to survive.
This is what you see in the figure on the right: once in a while an interesting lineage of agents appears that is probably able to detect something and eat it, but because of bad luck or bad mutations they disappear pretty fast. Then you finally get one lineage of very successful agents, and the question is: what happens after that? Because you don’t want a tree with just one branch. The next issue is how you even see whether you got different species or not, since you haven’t defined what a species is… So what I did for visualization is take the characteristics of each agent, for example its speed, how many kids it could have, how much energy it can store before having kids… and map them to RGB values, so that two agents with very different colors have very different characteristics. In this video, with, again, pretty bad graphics, what you will see is that at first most agents that appear die very quickly, and then at some point, right there at the bottom, you have an explosion from a successful lineage. As the descendants accumulate mutations, they become very different colors from each other; and the reason they stop at the middle of the screen is that I gave these squares different physical properties. But then another lineage finally appears in the top part, and somehow they are able to infiltrate the bottom part, and they start interacting with this completely different lineage of life. In the next video I tried to make things even more interesting by giving the agents the ability to change the physics of each cell. They had the choice to use their energy to make a cell easier or harder to move through, or to give themselves longer or shorter visibility of other agents, instead of me just fixing those properties myself.
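The trait-to-color mapping can be sketched like this (the trait names and ranges below are hypothetical examples, not the exact ones from my simulation): normalize each trait to its observed range and use the first three as the red, green, and blue channels.

```python
def traits_to_rgb(traits, bounds):
    """Map an agent's first three traits to an RGB colour, so that agents with
    similar characteristics get similar colours. `bounds` gives the
    (min, max) range of each trait, used for normalisation."""
    rgb = []
    for value, (lo, hi) in zip(traits, bounds):
        t = (value - lo) / (hi - lo)              # normalise to [0, 1]
        rgb.append(round(255 * max(0.0, min(1.0, t))))
    return tuple(rgb)

# Hypothetical traits: speed, max offspring, energy storage.
bounds = [(0.0, 5.0), (0, 10), (0.0, 100.0)]
fast_agent = traits_to_rgb([5.0, 1, 10.0], bounds)   # -> (255, 26, 26)
slow_agent = traits_to_rgb([0.5, 9, 90.0], bounds)
```

With this mapping, two agents far apart in trait space end up far apart in color space, which is all you need to spot distinct lineages by eye without ever defining "species" explicitly.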
You can see different colors, so different species of agents, and with this I really felt like I had reached my goal of having many different species interacting on a very small surface. A super interesting thing that happened in this run is that you will see a group of agents move out of the predefined surface, right here, so they don’t get free energy from the game anymore, but they still manage to survive for quite a long time. I think it could have been because of cannibalism. And if you look at the bottom, there were also some agents that slowly escaped and survived long enough to almost come back… but finally they go down again and disappear. In the end, by tracing the ancestry and the different characteristics of all the agents, I could obtain this kind of visualization. So I more or less got what I wanted: everyone is descended from very few ancestors, but they do diverge and become several different species, and you get a simulation with no initial definition of species where species emerge naturally, even giving you a tree with many branches. It’s not actually open-endedness, because I didn’t use any measure of complexity, but I think it’s fair to at least say that there are more and more different species that interact with each other in rich ways, creating their own little niches.
For the final part of my talk I want to present two game-like ALife competitions. The first is the Virtual Creatures Competition, where you submit videos of a simulation and the simulation is judged on how alive it looks. There are many super interesting submissions, and I encourage you to look it up and maybe participate if you want. The second competition is the Minecraft Open-Endedness Challenge, where the goal is not to make something that looks alive but to make something that demonstrates open-endedness, that is, an exponential increase in diversity or in complexity. This year’s winner evolved tiny electric circuits that then became modules in other electric circuits, and so on and so forth, until you get something really complex.
This is the end of my talk, and this is my conclusion: in ALife, the fun is in changing the rules more than in playing the game. For example, looking for the right type of evolution means looking for the right set of rules that will give you an interesting game, even if you’re just watching the game and not actually playing. If you’re interested in joining the ALife community, which is a very broad community covering different fields, I would recommend looking it up on Twitter, or joining the conferences, or the art competitions. There are also some online resources, and if you just look up my name on Google together with “ALife” you will find some of the things I wrote and some of the projects I’m working on. Thank you for listening.