Lenia

Lenia is a family of cellular automata created by Bert Wang-Chak Chan. It is intended to be a continuous generalization of Conway's Game of Life, with continuous states, space and time. As a consequence of its continuous, high-resolution domain, the complex autonomous patterns ("lifeforms" or "spaceships") generated in Lenia are described as differing from those appearing in other cellular automata, being "geometric, metameric, fuzzy, resilient, adaptive, and rule-generic".

Lenia won the 2018 Virtual Creatures Contest at the Genetic and Evolutionary Computation Conference in Kyoto, an honorable mention for the ALIFE Art Award at ALIFE 2018 in Tokyo, and Outstanding Publication of 2019 by the International Society for Artificial Life (ISAL).

Iterative updates
Let $$\mathcal{L}$$ be the lattice or grid, and $$S^\mathcal{L}$$ the set of all state configurations over it. Like many cellular automata, Lenia is updated iteratively: each output state is a pure function of the previous state, such that

$$\Phi(A^0) = A^{\Delta t}, \Phi(A^{\Delta t}) = A^{2\Delta t}, \ldots, \Phi(A^t) = A^{t + \Delta t},\ldots$$

where $$A^0$$ is the initial state and $$\Phi : S^\mathcal{L} \rightarrow S^\mathcal{L}$$ is the global rule, representing the application of the local rule over every site $$\mathbf{x}\in\cal{L}$$. Thus $$\Phi^N(A^t) = A^{t + N\Delta t}$$.

If the simulation is advanced by $$\Delta t$$ at each timestep, then the time resolution $$T = \frac{1}{\Delta t}$$.
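Concretely, advancing the simulation is just repeated application of $$\Phi$$. A minimal sketch (the rule `decay` below is a toy placeholder for illustration, not an actual Lenia rule):

```python
import numpy as np

def evolve(phi, A0, steps):
    """Repeatedly apply the global rule: Phi^N(A^0) = A^{N * dt}."""
    A = A0
    for _ in range(steps):
        A = phi(A)   # one timestep: A^{t + dt} = Phi(A^t)
    return A

# Toy placeholder rule for illustration only: decay every site by 10%.
decay = lambda A: 0.9 * A
A3 = evolve(decay, np.ones((4, 4)), steps=3)   # every site is now 0.9**3
```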

State sets
Let $$S = \{0, 1, \ldots, P-1, P\}$$ with maximum $$P \in \Z$$. This is the state set of the automaton and characterizes the possible states that may be found at each site. Larger $$P$$ correspond to higher state resolutions in the simulation. Many cellular automata use the lowest possible state resolution, i.e. $$P = 1$$; Lenia allows for much higher resolutions. Note that states are not stored as integers in $$\{0, \ldots, P\}$$ but in normalized form, as integer multiples of $$\Delta p = \frac{1}{P}$$; therefore $$\mathbf{A}^t(\mathbf{x}) \in [0, 1]$$ for all $$\mathbf{x} \in \mathcal{L}$$. For example, given $$P = 4$$, $$\mathbf{A}^t(\mathbf{x}) \in \{0, 0.25, 0.5, 0.75, 1\}$$.
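For illustration, a state array can be snapped onto this set by rounding to the nearest multiple of $$\Delta p$$ (a sketch, not part of Chan's definition):

```python
import numpy as np

P = 4            # state resolution
dp = 1.0 / P     # state step Delta p = 1/P

def quantize(A, P):
    """Snap values in [0, 1] to the nearest multiple of 1/P, i.e. onto the
    normalized state set {0, 1/P, ..., 1} (here {0, 0.25, 0.5, 0.75, 1})."""
    return np.round(A * P) / P

Aq = quantize(np.array([0.1, 0.3, 0.62, 0.99]), P)   # -> [0.0, 0.25, 0.5, 1.0]
```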

Neighborhoods
Mathematically, neighborhoods like those in Game of Life may be represented using a set of position vectors in $$\R^2$$. For the classic Moore neighborhood used by Game of Life, for instance, $$\mathcal{N} = \{-1, 0, 1\}^2$$; i.e. a square of size 3 centered on every site.

In Lenia's case, the neighborhood is instead a ball of radius $$R$$ centered on a site, $$\mathcal{N} = \{\mathbf{x} \in \mathcal{L} : \lVert \mathbf{x} \rVert_2 \leq R\}$$, which may include the original site itself.

Note that the neighborhood vectors are not the absolute position of the elements, but rather a set of relative positions (deltas) with respect to any given site.
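These relative offsets can be enumerated directly (a sketch on an integer lattice with $$\Delta x = 1$$):

```python
import numpy as np

def ball_neighborhood(R):
    """All integer offset vectors n with ||n||_2 <= R, including (0, 0).
    These are deltas relative to a site, not absolute positions."""
    r = np.arange(-int(R), int(R) + 1)
    dy, dx = np.meshgrid(r, r, indexing="ij")
    keep = dy**2 + dx**2 <= R**2
    return np.stack([dy[keep], dx[keep]], axis=-1)

von_neumann = ball_neighborhood(1)     # 5 offsets: center + 4 orthogonal
moore = ball_neighborhood(1.5)         # 9 offsets: the full 3x3 square
```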

Local rule
There are discrete and continuous variants of Lenia. Let $$\mathbf{x}$$ be a vector in $$\R^2$$ within $$\mathcal{L}$$ representing the position of a given site, and $$\mathcal{N}$$ be the set of sites neighboring $$\mathbf{x}$$. Both variations comprise two stages:

 * 1) Using a convolution kernel $$\mathbf{K} : \mathcal{N} \rightarrow S$$ to compute the potential distribution $$\mathbf{U}^t(\mathbf{x}) = (\mathbf{K} * \mathbf{A}^t)(\mathbf{x})$$.
 * 2) Using a growth mapping $$G : [0, 1] \rightarrow [-1, 1]$$ to compute the growth distribution $$\mathbf{G}^t(\mathbf{x})=G(\mathbf{U}^t(\mathbf{x}))$$.

Once $$\mathbf{G}^t$$ is computed, it is scaled by the chosen time resolution $$\Delta t$$ and added to the original state value:$$\mathbf{A}^{t+\Delta t}(\mathbf{x}) = \text{clip}(\mathbf{A}^{t}(\mathbf{x}) + \Delta t \;\mathbf{G}^t(\mathbf{x}),\; 0,\; 1)$$Here, the clip function is defined by $$\operatorname{clip}(u,a,b):=\min(\max(u,a),b)$$.

The local rules are defined as follows for discrete and continuous Lenia:

$$\begin{align} \mathbf{U}^t(\mathbf{x}) &= \begin{cases} \sum_{\mathbf{n} \in \mathcal{N}} \mathbf{K}(\mathbf{n})\,\mathbf{A}^t(\mathbf{x}+\mathbf{n})\,\Delta x^2, & \text{discrete Lenia} \\ \int_{\mathbf{n} \in \mathcal{N}} \mathbf{K}(\mathbf{n})\,\mathbf{A}^t(\mathbf{x}+\mathbf{n})\,dx^2, & \text{continuous Lenia} \end{cases} \\ \mathbf{G}^t(\mathbf{x}) &= G(\mathbf{U}^t(\mathbf{x})) \\ \mathbf{A}^{t+\Delta t}(\mathbf{x}) &= \text{clip}(\mathbf{A}^t(\mathbf{x}) + \Delta t\;\mathbf{G}^t(\mathbf{x}),\; 0,\; 1) \end{align}$$
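Assuming a periodic (toroidal) grid, $$\Delta x = 1$$, and a kernel already normalized to unit sum and stored as a full-grid array centered at index $$(0, 0)$$, one discrete update step can be sketched with an FFT-based circular convolution:

```python
import numpy as np

def step(A, K, G, dt):
    """One discrete Lenia step on a torus.

    A  : state array with values in [0, 1]
    K  : normalized kernel as a full-grid array centered at index (0, 0)
    G  : growth mapping, potentials in [0, 1] -> growth in [-1, 1]
    dt : timestep Delta t = 1/T
    """
    # Potential field U^t = K * A^t (circular convolution via the FFT).
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(K)))
    # Growth field G^t = G(U^t), then a clipped Euler step.
    return np.clip(A + dt * G(U), 0.0, 1.0)

# Sanity check: a Dirac kernel and zero growth leave the state unchanged.
A = np.random.default_rng(0).random((8, 8))
K = np.zeros((8, 8)); K[0, 0] = 1.0
A_next = step(A, K, lambda u: 0.0 * u, dt=0.1)
```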

Kernel generation
There are many ways to generate the convolution kernel $$\mathbf{K}$$. The final kernel is the composition of a kernel shell $$K_C$$ and a kernel skeleton $$K_S$$.

For the kernel shell $$K_C$$, Chan gives several functions that are defined radially. Kernel shell functions are unimodal and subject to the constraint $$K_C(0) = K_C(1) = 0 $$ (and typically $$K_C\left(\frac{1}{2}\right) = 1$$ as well). Example kernel functions include:

$$K_C(r) = \begin{cases} \exp\left(\alpha - \frac{\alpha}{4r(1-r)}\right), & \text{exponential}, \alpha=4 \\ (4r(1-r))^\alpha, & \text{polynomial}, \alpha=4 \\ \mathbf{1}_{\left[\frac{1}{4},\frac{3}{4}\right]}(r), & \text{rectangular} \\ \ldots, & \text{etc.} \end{cases}$$

Here, $$\mathbf{1}_A(r)$$ is the indicator function.
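The three shell functions above translate directly to code (a sketch; the division by zero at the endpoints of the exponential shell is guarded explicitly):

```python
import numpy as np

def shell_exponential(r, alpha=4.0):
    """exp(alpha - alpha / (4 r (1 - r))), with K_C(0) = K_C(1) = 0."""
    r = np.asarray(r, dtype=float)
    core = 4.0 * r * (1.0 - r)
    # Guard the endpoints, where core = 0 and the true limit is 0.
    return np.where(core > 0.0,
                    np.exp(alpha - alpha / np.where(core > 0.0, core, 1.0)),
                    0.0)

def shell_polynomial(r, alpha=4.0):
    """(4 r (1 - r))^alpha."""
    return (4.0 * np.asarray(r, dtype=float) * (1.0 - r)) ** alpha

def shell_rectangular(r):
    """Indicator of the interval [1/4, 3/4]."""
    r = np.asarray(r, dtype=float)
    return np.where((r >= 0.25) & (r <= 0.75), 1.0, 0.0)
```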

Once the kernel shell has been defined, the kernel skeleton $$K_S$$ is used to expand it and compute the actual values of the kernel by transforming the shell into a series of concentric rings. The height of each ring is controlled by a kernel peak vector $$\beta = (\beta_1, \beta_2, \ldots, \beta_B) \in [0,1]^B$$, where $$B$$ is the number of rings (the length of the peak vector). Then the kernel skeleton $$K_S$$ is defined as

$$K_S(r;\beta)=\beta_{\lfloor Br \rfloor} K_C(Br \text{ mod } 1)$$

The final kernel $$\mathbf{K}(\mathbf{n})$$ is therefore

$$\mathbf{K}(\mathbf{n}) = \frac{K_S(\lVert \mathbf{n} \rVert_2 / R;\; \beta)}{|K_S|}$$

such that $$\mathbf{K}$$ is normalized to have an element sum of $$1$$ and $$\mathbf{K} * \mathbf{A} \in [0, 1]$$ (for conservation of mass), where $$|K_S| = \sum_{\mathcal{N}} K_S \, \Delta x^2$$ in the discrete case and $$|K_S| = \int_{\mathcal{N}} K_S \, dx^2$$ in the continuous case.
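Putting shell, skeleton, and normalization together (a sketch assuming $$\Delta x = 1$$, a zero-based peak vector, and the radius divided by $$R$$ so the shell's $$[0, 1]$$ domain is respected; `make_kernel` and its FFT-friendly layout are implementation choices, not part of the definition):

```python
import numpy as np

def make_kernel(grid_size, R, betas, shell):
    """Normalized kernel K on a square toroidal grid, centered at index (0, 0)
    so it can be used directly in an FFT-based convolution."""
    B = len(betas)
    idx = np.arange(grid_size)
    off = np.minimum(idx, grid_size - idx)          # toroidal distance to 0
    dy, dx = np.meshgrid(off, off, indexing="ij")
    r = np.sqrt(dy**2 + dx**2) / R                  # radius normalized to [0, 1]
    ring = np.minimum((B * r).astype(int), B - 1)   # ring index floor(B r)
    # Skeleton: peak height per ring times the shell evaluated at (B r) mod 1.
    K = np.where(r <= 1.0, np.asarray(betas)[ring] * shell((B * r) % 1.0), 0.0)
    return K / K.sum()                              # element sum 1

# Example: a two-ring kernel using the polynomial shell (4 r (1 - r))**4.
K = make_kernel(64, R=12, betas=[1.0, 0.5], shell=lambda r: (4 * r * (1 - r)) ** 4)
```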

Growth mappings
The growth mapping $$G : [0, 1] \rightarrow [-1,1]$$, which is analogous to an activation function, may be any function that is unimodal, nonmonotonic, and accepts parameters $$\mu,\sigma \in \R$$. Examples include

$$G(u;\mu,\sigma) = \begin{cases} 2\exp\left(-\frac{(u-\mu)^2}{2\sigma^2}\right)-1, & \text{exponential} \\ 2\cdot\mathbf{1}_{[\mu\pm3\sigma]}(u)\left(1-\frac{(u-\mu)^2}{9\sigma^2}\right)^\alpha-1, & \text{polynomial}, \alpha=4 \\ 2\cdot\mathbf{1}_{[\mu\pm\sigma]}(u)-1, & \text{rectangular} \\ \ldots, & \text{etc.} \end{cases}$$

where $$u$$ is a potential value drawn from $$\mathbf{U}^t$$.
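These growth mappings can be sketched as follows (the polynomial variant clamps its base at zero so the power never sees a negative argument outside the support):

```python
import numpy as np

def growth_exponential(u, mu, sigma):
    """2 exp(-(u - mu)^2 / (2 sigma^2)) - 1, peaking at +1 when u = mu."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2.0 * sigma**2)) - 1.0

def growth_polynomial(u, mu, sigma, alpha=4.0):
    """2 * 1_[mu +/- 3 sigma](u) * (1 - (u - mu)^2 / (9 sigma^2))^alpha - 1."""
    d2 = (u - mu) ** 2 / (9.0 * sigma**2)
    return 2.0 * np.where(d2 < 1.0, (1.0 - np.minimum(d2, 1.0)) ** alpha, 0.0) - 1.0

def growth_rectangular(u, mu, sigma):
    """2 * 1_[mu +/- sigma](u) - 1: +1 inside the band, -1 outside."""
    return 2.0 * (np.abs(np.asarray(u, dtype=float) - mu) <= sigma) - 1.0
```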

Game of Life
The Game of Life may be regarded as a special case of discrete Lenia with $$R = T = P = 1$$. In this case, the kernel would be rectangular, with the function$$K_C(r) = \mathbf{1}_{\left[\frac{1}{4},\frac{3}{4}\right]}(r) + \frac{1}{2}\mathbf{1}_{\left[0,\frac{1}{4}\right)}(r)$$and the growth rule also rectangular, with $$\mu = 0.35, \sigma = 0.07$$.
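This reduction can be checked directly: after normalizing by $$|K_S| = 8.5$$ (eight neighbors at weight $$1$$ plus the center at weight $$\frac{1}{2}$$), the rectangular growth band $$[\mu - \sigma, \mu + \sigma] = [0.28, 0.42]$$ reproduces the birth-on-3, survive-on-2-or-3 rule. A sketch on a toroidal grid:

```python
import numpy as np

def life_step(A):
    """Game of Life expressed as a discrete Lenia step with Delta t = 1."""
    # Potential: neighbors weighted 1, the center weighted 1/2,
    # normalized by the kernel sum |K_S| = 8 * 1 + 0.5 = 8.5.
    S = sum(np.roll(np.roll(A, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    U = (S + 0.5 * A) / 8.5
    # Rectangular growth with mu = 0.35, sigma = 0.07: +1 in band, else -1.
    G = np.where(np.abs(U - 0.35) <= 0.07, 1.0, -1.0)
    return np.clip(A + G, 0.0, 1.0)

# A blinker oscillates with period 2: vertical -> horizontal -> vertical.
A = np.zeros((5, 5))
A[1:4, 2] = 1.0
B = life_step(A)
```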

Patterns
By varying the convolutional kernel, the growth mapping and the initial condition, over 400 "species" of "life" have been discovered in Lenia, displaying "self-organization, self-repair, bilateral and radial symmetries, locomotive dynamics, and sometimes chaotic nature". Chan has created a taxonomy for these patterns.

Related work
Other works have noted the strong similarity between cellular automaton update rules and convolutions, and have focused on reproducing cellular automata using simplified convolutional neural networks. Mordvintsev et al. investigated the emergence of self-repairing pattern generation. Gilpin found that any cellular automaton can be represented as a convolutional neural network, and trained neural networks to reproduce existing cellular automata.

In this light, cellular automata may be seen as a special case of recurrent convolutional neural networks. Lenia's update rule may likewise be seen as a single convolutional layer (the kernel $$\mathbf{K}$$, which produces the "potential field" $$\mathbf{U}$$) followed by an activation function (the "growth mapping" $$G$$). However, Lenia uses far larger, fixed kernels and is not trained via gradient descent.