The term originally referred to the spontaneous appearance of patterns in physical systems, such as the rippling of sand dunes or the hypnotic spirals that form when certain chemical reactants are combined. Later it was adopted by biologists to explain the intricate structure of wasp nests, the synchronized flashing of some species of fireflies, and the way that swarms of bees, flocks of birds, and schools of fish instinctively coordinate their actions.
What these phenomena all have in common is that none of them is imposed from the top by a master plan. The patterns, shapes, and behaviors we see in such systems don’t come from preexisting blueprints or designs, but emerge on their own, from the bottom up, as a result of interactions among their many parts. We call an ant colony self-organizing because nobody’s in charge, nobody knows what needs to be done, and nobody tells anybody else what to do. Each ant goes through its day responding to whatever happens to it, to the other ants it bumps into, and to changes in the environment—what scientists call “local” knowledge. When an ant does something, it affects other ants, and what they do affects still others, and that impact ripples through the colony. “No ant understands its own decisions,” Gordon says. “But each ant’s decision is linked to another ant’s decision and the whole colony changes.”
Although the ultimate origins of self-organization remain something of a mystery, researchers have identified three basic mechanisms by which it works: decentralized control, distributed problem-solving, and multiple interactions. Taken together, these mechanisms explain how the members of a group, without being told to, can transform simple rules of thumb into meaningful patterns of collective behavior.
To get a feel for how these mechanisms work, consider a day at the beach with your family or friends. When you first arrive, you don’t stand around waiting for someone to give you instructions. Apart from certain restrictions imposed by the community (no nudity, no pets, no alcohol, for example) you’re on your own. Nobody tells you where to sit, what to do, whether to go into the water or not (unless the lifeguard gets bossy). Everybody can do pretty much what they want to, which is one way of describing decentralized control.
If it’s a beautiful day and the beach is crowded, of course, it might take some time to find the perfect place to sit down. You don’t want to choose a spot too close to the water, or your beach chairs and blanket could get soaked by a big wave. Nor do you want to sit far away from the water, where you can’t feel the ocean breeze. If you plan to go swimming, it might be convenient to choose a location near the lifeguard, as every family with little children has already figured out (which is why all those umbrellas are clustered around the guard’s stand). In the end, you choose a space with just enough room to spread your blanket yet maintain the proper distance in all directions from your neighbors’ blankets, which is the unspoken rule of thumb at the beach. If you could look down from a helicopter, you’d see a mosaic of blankets evenly spaced from one another, reflecting the success of the crowd’s distributed problem-solving.
Then something curious happens. Just as you’re settling into your beach chair with Stephen King’s latest novel, you notice that a few people have stood up to look at the water. Then a few more do the same thing. And a few more. Suddenly it seems like everybody’s standing and looking at the water, so you do too. You don’t have any idea why, but you’re suddenly alert, full of questions. What’s going on? Is somebody drowning? Is there a shark? What’s everybody looking at? What began, perhaps, as a simple act of curiosity by a few individuals—staring at the water—spreads from person to person down the beach, snowballing into a collective state of alarm. That’s how infectious multiple interactions can be. And the impressive thing is, if there had been a shark, everybody would have found out about it almost as quickly as if someone had shouted “Jaws” with a bullhorn.
“If we each respond to little pieces of information, and we follow certain rules, the whole crowd will organize in a certain way,” Mike Greene says, “just like when we’re looking down on an ant colony, we can actually see its behavior change, even though none of the ants is aware of it.”
Day in and day out, that is, self-organization provides an ant colony like 550 with a reliable way to manage an unpredictable environment. Wouldn’t it be useful if we could do the same thing?
The Traveling Salesman Problem
One afternoon in the summer of 1990, an Italian graduate student named Marco Dorigo was attending a workshop at the German National Research Center for Computer Science near Bonn. At the time, Dorigo was working on a doctoral thesis in Milan about ways to solve difficult computational problems. The talk he’d come to hear was by Jean-Louis Deneubourg, a professor from the Free University of Brussels, about his research with ants. “I was already interested in ways that natural systems could be used as inspiration for information science,” Dorigo says. “But this was the first time anybody had made a connection between ant behavior and computer science.”
In his presentation, Deneubourg described a series of experiments that he and his colleagues had done with Argentine ants (Linepithema humile, classified at the time as Iridomyrmex humilis). Like many ants, this species leaves a trail of chemical secretions when foraging. Such chemicals, called pheromones, come from glands near the tip of the ant’s abdomen, and they act as powerful signals, telling other ants to follow their trails. Foragers normally lay down such trails after they have found a promising source of food. As they return to the nest, they mark their paths so that other ants can retrace them to the food. But Argentine ants are different. They lay down pheromone trails during the search phase as well. That appealed to Deneubourg, who was curious about how foragers decided where to explore.
In one experiment in his lab, Deneubourg and his colleagues placed a bridge between a large tub containing a colony of Argentine ants and another tub containing food. The bridge had a special design. About a fourth of the way across, it split into two branches, both of which led to the food, but one of which was twice as long as the other. How would the little explorers deal with this?
As you might expect, the ants quickly determined which branch was best (this is the same species, after all, that demonstrates such a knack for locating maple syrup spilled on your kitchen floor). In most trials of the experiment, after an initial period of wandering, all of the ants chose the shorter branch.
The pheromone trail was the key. As more and more ants picked the shorter branch, it accumulated more and more of their pheromone, increasing the likelihood that other ants would choose it. Here’s how it works: Let’s say two ants set out across the bridge at the same time. The first ant takes the shorter branch, and the second the longer one. By the time the first ant reaches the food, the second is only halfway across the bridge. By the time the first ant returns all the way to the colony, the second ant has just arrived at the food. To a third ant standing at the split in the bridge at this point, the pheromone trail left by the first ant would be twice as strong as that left by the second (since the first ant went out and returned), making it more likely to take the shorter branch. The more this happens, the stronger the pheromone trail grows, and the more ants follow it.
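The timing argument above can be sketched with a few lines of arithmetic. The sketch below uses hypothetical units (the short branch takes one time step each way, the long branch two, and each completed traversal of a branch deposits one unit of pheromone); it is an illustration of the reasoning, not the experimenters' actual measurements:

```python
SHORT, LONG = 1, 2  # one-way travel time on each branch, in arbitrary steps

def pheromone_at_fork(t):
    """Pheromone on (short, long) branch at time t, for one ant per branch.

    The short-branch ant finishes its outbound trip at t=1 and its
    return trip at t=2, leaving two marks. The long-branch ant has
    finished only its outbound trip by t=2 (its return ends at t=4).
    """
    short = (1 if t >= SHORT else 0) + (1 if t >= 2 * SHORT else 0)
    long_ = (1 if t >= LONG else 0) + (1 if t >= 2 * LONG else 0)
    return short, long_

# At t=2 the first ant is back at the nest: the short branch carries
# twice the pheromone of the long one, biasing the next ant at the fork.
print(pheromone_at_fork(2))  # -> (2, 1)
```

This is exactly the "twice as strong" asymmetry described above: the faster round trip has marked its branch twice before the slower ant has marked its branch once.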
Ant colonies, in other words, have evolved an ingenious way to determine the shortest path between two points. Not that any of the ants are doing so on their own. None of them attempts to compare the length of the two branches independently. Instead, the colony builds the best solution as a group, one individual after another, using pheromones to “amplify” early successes in an impressive display of self-organization.
Taking this idea one step further, Deneubourg and his colleagues proposed a relatively simple mathematical model to describe this behavior. If you know how many ants have taken the shorter branch at any particular time, Deneubourg said, you can reliably calculate the probability of the next ant choosing that branch. To demonstrate this, he plugged his team’s equations into a computer simulation of the double-bridge experiment and ran it for a thousand ants. The results mirrored those of real ants. When the branches were the same length, the odds of an ant picking either one were fifty-fifty. But when one branch was twice as long as the other, the odds of picking the shorter one shot up dramatically.
The key to the colony’s system, in short, lay in the simple rules that each ant applied to