Turing patterns and morphogenesis in swarm robots


Daniel Carrillo Zapata holding a Kilobot, taken at the Bristol Robotics Laboratory

Morphogenesis is a process usually associated with the development of internal biological structures within an organism. However, in a recent paper titled “Morphogenesis in robot swarms”, the authors demonstrated that morphogenesis could be enacted in a swarm of hundreds of tiny robots to form self-organised patterns and shapes, seen below. I caught up with one of the authors, FARSCOPE PhD student Daniel Carrillo Zapata, at the Bristol Robotics Laboratory to find out more about swarm robotics and taking inspiration from locally organising biological systems.


Spots and shapes formed by a robotic swarm, taken from Morphogenesis in robot swarms.


“In nature you can see examples of both top-down and local organisation,” Daniel tells me. “You can see top-down positional information in the development of Drosophila, the fruit fly. The genes encode how the pattern should emerge, leading to the development of the left and right sides of the fly. So it’s not self-organised, because it’s encoded genetically. The symmetry-breaking Turing patterns we use in this paper are completely self-organised – they emerge from the interaction of molecules. The final pattern is not encoded, only how the molecules interact.”

Turing patterns arise from a reaction–diffusion theory proposed by Alan Turing in the 1950s to describe how animals like leopards and zebras develop their complex skin patterns. This type of self-organisation is an interesting principle in the context of swarm robotics, which aims to control large groups of robots with no means of centralised control. Dani explains how the collaboration came about between the Hauert Lab (Swarm Engineering at the Bristol Robotics Laboratory) and the Centre for Genomic Regulation, with whom Dr. Sabine Hauert and Dani worked on the project.

“At first, it was meant to be a side project! We were contacted by the Centre for Genomic Regulation in Barcelona, led by Prof. James Sharpe. They had developed a preliminary morphogenesis algorithm but had only a hundred robots, while we had a thousand. Two weeks of work turned into six months, and I’d come to love the project, so I decided to continue with it.”

A hundred robots may sound like a lot, but the living systems from which the team takes inspiration work in population ranges many orders of magnitude greater. The robots used in the team’s experiments, Kilobots, are designed specifically to work co-operatively in a large group. In fact, Daniel goes on to say, their development was somewhat revolutionary in terms of scalable robotic systems.

“Kilobots are super cool because they are the first robotic platform that has been designed for really large numbers. They were developed a few years ago in 2012, and until then swarm robotics consisted only of up to maybe 30 robots. But when Mike Rubenstein created them at Harvard they did experiments with 1024 robots. That was really amazing.”


Kilobot shape formation by Mike Rubenstein using a binary image (top) programmed into the robots as an internal map

“They are really simple: they have 2 vibrating motors, 1 LED to signal their state, and 1 ambient light sensor so they can detect light to follow a gradient or avoid it. Most importantly, they’ve got infrared communication, so they can send short messages to their neighbours up to about 10 cm away. Because they are open source, you can make them for about £15 each, and that’s how you can really scale up – a swarm of 1,000 robots costs about the same as 1 robotic arm.”

So given this efficient demonstration of complex shape formation by the Harvard lab that designed the robots back in 2014, what is it that makes the results of this paper novel? The difference, Daniel says, is the manner in which the organisation occurs.

“The main difference between our work and Rubenstein’s was that they could replicate the same shape every time, because they used an internal map – they were able to recreate a spanner, a star and so on. Basically, the robots had a binary image of the final shape, so when they were programmed, Rubenstein and his team effectively said: okay, you need to create these shapes. The robots did that by building a local coordinate system and checking whether they were inside the shape or not.”
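The internal-map idea Daniel describes can be sketched in a few lines. This is a hypothetical illustration, not the Rubenstein team’s code: every robot stores the same binary image of the target shape and, once it has localised itself in the shared coordinate frame, simply checks membership.

```python
# Hypothetical sketch of the internal-map approach: each robot carries a
# binary image of the target shape and tests whether its own coordinates
# fall inside it. The shape and grid here are illustrative only.
SHAPE = [
    "0110",
    "1111",
    "0110",
]

def inside_shape(x, y):
    """True if grid cell (x, y) lies inside the programmed shape."""
    if 0 <= y < len(SHAPE) and 0 <= x < len(SHAPE[0]):
        return SHAPE[y][x] == "1"
    return False

print(inside_shape(1, 0))  # True: this cell is part of the shape
print(inside_shape(0, 0))  # False: this cell is outside it
```

A robot inside the shape stays put; one outside keeps moving – which is why the final shapes were so reproducible, and why the approach needs the map in the first place.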

Inspired by natural systems, the team wanted to remove the need for this internal map in their morphogenesis and focus on emergent, self-organised behaviour. One system that provided a key source of inspiration was that of finger formation in a developing mouse embryo, a process which was shown to obey Turing-pattern reaction–diffusion rules by James Sharpe’s team in 2014.

“We didn’t aim to replicate exactly the behaviour seen in these systems, but take inspiration from it”, Dani explains. “In the paper we have a linear approximation of a Turing system, composed of two molecules. Actually, because it’s an approximation, it’s far from the continuous process of morphogenesis in nature. But we took the main principles of reaction-diffusion systems and encoded them into the robots to replicate the overall macroscopic behaviour of patterning.”

The time-lapse photos of the morphogenesis experiment show a distinct two-stage process: first, patches of similarly coloured robots form; then the swarm reorganises to form protrusions. Daniel describes the first part as the patterning phase, followed by the morphogenesis itself.


Kilobots form shapes via a two-phase process: (a) spot formation by virtual reaction–diffusion; (b) morphogenesis shaped by spot locations. Taken from Morphogenesis in robot swarms.

“For the patterning, everything is inspired by Turing’s reaction and diffusion – the linear model I talked about. Each robot has an internal representation of two molecules: one of them is able to create both itself and the other molecule, whereas the other destroys both itself and the first one, similar to a predator–prey system. In the reaction–diffusion model, that was the reaction. Diffusion was simulated by message passing – the robots were sending part of their molecule concentrations to their neighbours, losing concentration themselves but gaining from the rest. At the beginning they are initialised with random concentrations of molecules, and after about 10 minutes you get a pattern.”
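The patterning phase Daniel describes can be sketched as a linear activator–inhibitor system where diffusion is just neighbour message passing. This is a minimal toy on a ring of simulated robots, not the paper’s model: all the reaction constants, the diffusion rates, and the saturation clamp are hypothetical choices (the clamp stands in for the nonlinear saturation that a purely linear model lacks).

```python
import random

random.seed(1)

# Toy linear activator-inhibitor system on a ring of simulated robots.
# All constants are hypothetical; the paper's model and values differ.
N = 60                     # robots on the ring
DT = 0.1                   # integration step
F_AA, F_AB = 0.6, -0.8     # activator promotes itself; inhibitor suppresses it
F_BA, F_BB = 1.0, -0.9     # activator promotes the inhibitor, which decays
D_A, D_B = 0.05, 0.4       # inhibitor must diffuse faster for patterns to form

def clip(v):
    """Ad-hoc saturation: a linear model grows without bound otherwise."""
    return max(-1.0, min(1.0, v))

def step(a, b):
    """One synchronous round: each robot reacts locally, then exchanges a
    share of its concentrations with its two ring neighbours (the
    message-passing 'diffusion' Daniel describes)."""
    lap = lambda x, i: x[(i - 1) % N] + x[(i + 1) % N] - 2 * x[i]
    na = [clip(a[i] + DT * (F_AA * a[i] + F_AB * b[i] + D_A * lap(a, i)))
          for i in range(N)]
    nb = [clip(b[i] + DT * (F_BA * a[i] + F_BB * b[i] + D_B * lap(b, i)))
          for i in range(N)]
    return na, nb

# Random initial concentrations, as in the experiments
a = [random.uniform(-0.1, 0.1) for _ in range(N)]
b = [random.uniform(-0.1, 0.1) for _ in range(N)]
for _ in range(2000):
    a, b = step(a, b)
# After many rounds, tiny random differences have been amplified into
# distinct high- and low-activator regions along the ring.
```

The key ingredient is the mismatch in diffusion rates: with `D_B` well above `D_A`, small random fluctuations get amplified into stable regions of high and low activator, which is exactly the symmetry breaking Turing identified.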

“After those 10 minutes, morphogenesis starts. The rules are quite simple: robots with a low concentration of the activator molecule that are on the edge of the swarm will start moving, orbiting in one direction around the swarm until they find these spots. That’s what you see in the pattern – a spot of green robots surrounded by blue and pink ones. The colours indicate concentration, going from no colour to pink, blue and then green. When the robots with low concentrations move and find an area of high concentration – a spot – they stop. And because the Turing pattern is continuously running, the spots move to the edge. That’s how we get morphogenesis for free, because the pattern sticks to the edge. When new robots arrive, they see their own concentration increase; they may eventually turn green themselves, so further robots will stop by them.”
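The movement rule itself is simple enough to sketch. Here the swarm edge is flattened into a list of activator readings, and a low-concentration robot walks around it until it meets a high-activator “spot” – a toy illustration of the orbiting behaviour, not the paper’s controller, with a hypothetical threshold.

```python
# Toy sketch of the morphogenesis rule: a low-activator robot on the
# swarm edge orbits in one direction and stops at the first neighbour
# whose activator reading marks a spot. THRESHOLD is hypothetical.
THRESHOLD = 0.5

def orbit_until_spot(edge_activator, start, max_steps=1000):
    """Walk clockwise around the edge from `start`; return the index
    where the robot stops (first reading above THRESHOLD)."""
    n = len(edge_activator)
    pos = start
    for _ in range(max_steps):
        pos = (pos + 1) % n
        if edge_activator[pos] > THRESHOLD:
            return pos          # joined a spot; the protrusion grows here
    return pos                  # no spot found within the step budget

# Example: one green "spot" at indices 7-9 on a 20-robot edge
edge = [0.0] * 20
for i in (7, 8, 9):
    edge[i] = 0.9
print(orbit_until_spot(edge, start=15))   # robot starting at 15 stops at 7
```

Because stopped robots raise the local concentration and can turn green themselves, each spot accumulates newcomers – which is how the protrusions grow without any robot knowing the target shape.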

Unlike internal-map-based shape formation, the truly emergent nature of this system leads to less predictable morphogenetic patterns rather than largely reproducible spanners or stars. However, in sacrificing shape complexity and controllability, the team gained something valuable in return: robustness, the ability to reform in the face of perturbation or damage.

“Sometimes the pattern becomes unstable half-way through the experiments. However, these instabilities are a self-adaptive mechanism that the system has,” Dani says. “We could exploit this in the future. For example, you could imagine the swarm exploring an environment, and if it gets stuck on obstacles and the limb can’t grow any more, you could generate an instability to get new spots somewhere else to get around the obstacle.” He adds, “We did three experiments on chopping off limbs or even splitting the swarm in two, and we could see that through self-organisation they were able to regrow the limbs or re-join the swarm.”

Asked about the types of real-world environments he envisions being explored by such a swarm, and what his next steps are to get there, he tells me: “I want to be a responsible scientist. The purpose of my PhD is to take this morphogenesis algorithm and turn it into a more functional algorithm that can be useful for firefighters, so I’m working on controllability and functionality. With this work we showed that it’s emergent and robust and we get shapes, but they don’t have any purpose, so that’s what I’m focusing on. Imagine this scenario: there is a building on fire, you are a firefighter, and you have a bag full of swarm robots as a tool. You release them and they develop a shape to explore the environment and find victims.”

Firefighting swarm robots may sound like a distant technological future, but Daniel is keen to work together with firefighters to figure out how robotic self-organisation could be used to make their job safer, and help to save lives.

“I’m planning to hold a focus group with firefighters next month to say: this is what it looks like, and these are the possibilities in the future. What do you think? How can we improve it? The idea is to come up with ideas together with the potential users of the technology. I believe everyone should do that when planning to develop something that will be used in the future, because if not, and it turns out not to be useful, you may have wasted a lot of time and money.”

To find out more check out this video, or the full paper.

Daniel Carrillo Zapata is a FARSCOPE PhD researcher at the University of Bristol swarm engineering lab led by Dr. Sabine Hauert.