Take a good look at what you're seeing above — these bizarre, bent paperclip-looking objects don't look like much, and chances are you don't know what you're staring at. But the middle one, named ST5-33-142-7, is actually a high-tech communications antenna that holds the title of the world's first artificially evolved hardware to fly in space.
Almost two decades ago, NASA engineer Jason Lohn stood at the Evolvable Systems group at Ames Research Center's Computational Sciences Division with his fingertips wrapped around the unusual design, announcing a breakthrough. "This was designed by a computer — that's the cool thing about it, and it actually works," he said. "No human would build an antenna as crazy as this."
He was most probably right. NASA engineers certainly knew their physics; however, if you've ever fiddled with a TV antenna, you'll know that almost anything in an antenna's environment can affect its performance, which makes optimal antenna design a traditionally difficult problem for electrical engineers.
Lohn and his research group tasked dozens of AI-equipped computers with evolving the best radiation pattern, and they ultimately arrived at this complicated, bizarre design. The antenna met a challenging set of mission requirements, was chosen over many designs supplied by professional (and human) engineers, and launched on the ST5 mission on March 22, 2006, where it served until the end of the mission.
This is one of the many examples that show the untapped potential of evolutionary algorithms. Hearing about evolution outside the context of biology might come as a surprise, but in reality, its impact reaches far beyond classrooms and the biological sciences.
Capturing evolution inside a computer
In biology, the theory of evolution first came to public attention in 1859 through British naturalist Charles Darwin's still influential and then controversial book On the Origin of Species. Evolution is generally defined in biology as "the change in heritable traits of biological populations over successive generations," and relies on natural selection as a mechanism. The premise of an evolutionary algorithm becomes crystal clear when you think of it as capturing the essence of evolution inside a computer.
An evolutionary algorithm, which is a subset of evolutionary computation, can be defined as a "population-based metaheuristic optimization algorithm." These nature-inspired algorithms evolve populations of candidate solutions through numerous generations by using the basic principles of evolutionary biology such as reproduction, mutation, recombination, and selection.
What's so fascinating about them is that algorithms which are based on the laws of nature have the capacity to find out-of-the-box solutions which can at times be overly complex or even beyond an engineer’s imagination — or capability.
"This is one of the most interesting things we've seen while studying evolutionary algorithms and evolutionary engineering: Whenever we say 'I'm an engineer, I can sit and calculate rigorously until I get the correct answer,' evolution slaps us on the wrist since evolutionary principles far outreach what the human mind can achieve — just like it has always done in nature. This makes the field quite attractive, but it's still a rather silent one," says Dr. Çağrı Mert Bakırcı-Taylor, founder and CEO of Evrim Ağacı (Tree of Evolution), one of the largest popular science organizations in Turkey. As a mechanical engineer turned popular-science communicator, he worked on a unified theory of algorithmic evolution and studied the effects of fitness functions in evolutionary algorithms during his Ph.D. at Texas Tech University, where he also received a Ph.D. minor in Biology.
And when he says that the field is rather silent, he does have a point. Evolutionary principles were already being presented as a possibly viable approach to computational methods as early as the 1950s; however, evolutionary algorithms are yet to have their big moment.
The theoretical foundations of evolutionary algorithms were laid after the 1970s, but they were way ahead of their time. They only began to be used in the 1990s, when computers became fast enough to handle them, and with humanity on the cusp of an artificial intelligence revolution, that big moment might be just around the corner.
Evolutionary algorithms for dummies
But how does one capture evolution in a computer, really? NASA has previously likened building an evolutionary algorithm to an art form, and with its details and intricacies, it truly is a complex task. Each of its many steps roughly corresponds to a particular aspect of natural selection.
Evolutionary algorithms consist of "individuals," which are candidate solutions to the problem at hand. These individuals are defined by artificial chromosomes, often generated randomly, that determine their characteristics. Each chromosome consists of multiple "genes," which are the parameters of the problem.
These parameters are part of the model being evolved, but there are also meta-parameters, such as crossover rate and mutation rate, that are not part of the model and need to be decided by the programmers before running the algorithm.
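The distinction between chromosomes and meta-parameters can be sketched in a few lines. This is purely illustrative — the names, gene ranges, and meta-parameter values below are assumptions, not from any particular study:

```python
import random

# Illustrative sketch: a "chromosome" is a list of genes (the problem's
# parameters), while meta-parameters govern the algorithm itself and are
# fixed by the programmer before the run begins.
META = {"population_size": 50, "crossover_rate": 0.7, "mutation_rate": 0.01}

def random_chromosome(n_genes, low=-1.0, high=1.0, rng=random):
    # Each gene is one parameter of the problem being optimized,
    # generated randomly for the initial population.
    return [rng.uniform(low, high) for _ in range(n_genes)]

population = [random_chromosome(4) for _ in range(META["population_size"])]
```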
Once all is set, the genes, which are the main components of the algorithms, evolve over time, and the individuals compete to be the most successful in the "survival of the fittest," much like in biological evolution. This competition can take place in the real world by implementing actual robots or other physical devices communicating with computers, but it usually happens inside a computer simulation, where these individuals are played out to see how their gene combinations interact with the problem to be solved.
The individuals' success, called fitness, is measured according to how well they perform at the given task. To make this measurement, the success criteria that reflect the characteristics of the desired solution are formulated using a mathematical equation called the fitness function, which is usually the first step in evolutionary algorithms.
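As a toy example of what a fitness function can look like — this is an assumption for illustration, not any real mission criterion — one could score a chromosome by how closely its genes match a set of target values:

```python
# A fitness function turns the desired solution's characteristics into a
# single number to maximize. TARGET and the squared-error scoring are
# illustrative choices, not from the article's case studies.
TARGET = [0.3, -0.7, 0.5]

def fitness(chromosome):
    # Negative sum of squared errors: 0 means a perfect match, and
    # larger (less negative) values mean higher fitness.
    return -sum((g - t) ** 2 for g, t in zip(chromosome, TARGET))
```

In practice the fitness function is often far more involved — for the line-following robots discussed later, it reflects how well the robot tracks the line.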
All of the individuals' fitnesses form an imaginary topography usually called a design space or fitness landscape. This space, which typically has far more than three dimensions, has many peaks and troughs that correspond to the relative fitness of different parameter combinations.
The design space is already determined by the constraints of the system and the laws of physics; however, it is essentially unknown to the researcher. The evolutionary algorithm scans the space heuristically to discover the values the fitness function takes under different design parameters. Depending on the choice of meta-parameters, this exploration can take you to an optimum — or leave you wandering "mindlessly."
As more and more populations are evolved to solve a given problem, survival of the fittest comes into play: the solutions with higher fitness values, which are closer to the peaks, are selected in each generation and are allowed to "breed," while the low-ranking individuals are "killed." This process lets the algorithm climb nearer to the peaks with each generation.
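One common way to implement this step is truncation selection — shown here as a sketch of one possible scheme, not the only one (tournament and roulette-wheel selection are other popular choices):

```python
# Truncation selection sketch: rank the population by fitness and let
# only the top fraction survive to "breed" into the next generation.
def select(population, fitness, survival_fraction=0.5):
    ranked = sorted(population, key=fitness, reverse=True)
    keep = max(1, int(len(ranked) * survival_fraction))
    return ranked[:keep]

# Toy usage: with sum() as the fitness, the two fittest bit strings survive.
survivors = select([[0, 0], [1, 0], [1, 1], [0, 1]], fitness=sum)
```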
In the next generations, the eliminated individuals are replaced through a method called "reproduction" to keep the population size consistent — this is not always strictly necessary, but it is usually implemented in one way or another to prevent extinctions. Different methods might also be used to spice up the reproduction process: "sexual selection," random exchange of genetic material without sexual selection, or random fillings, where vacancies are simply filled with newly randomized individuals. For example, EAs with sexual selection may use similarity matrices to pick similar or dissimilar individuals and breed them.
Except for the random fillings, reproduction relies on a mechanism called crossover — which in evolutionary biology involves the swapping of genetic material in the germline — to randomly shuffle the parents' genes between chromosomes, so that each new offspring is a mixture of its parents' genes, much like what happens during sexual reproduction in animals such as humans.
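A single-point crossover, one simple variant of this mechanism, can be sketched as follows (the function name and string-based toy chromosomes are illustrative):

```python
import random

# Single-point crossover sketch: splice two parents' genes at a random
# cut point so each offspring is a mixture of both — loosely analogous
# to germline recombination in sexual reproduction.
def crossover(parent_a, parent_b, rng=random):
    cut = rng.randrange(1, len(parent_a))  # cut strictly inside the chromosome
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

child_1, child_2 = crossover(list("AAAA"), list("BBBB"))
```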
Once a new population is formed, the individuals' genes are randomly — and usually only slightly — changed through mutation to add variation, according to the design decisions the researcher makes, allowing novel yet similar-to-parent solutions to arise.
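For real-valued genes, mutation might look like the sketch below; the rate and step size here are illustrative stand-ins for the researcher's design decisions:

```python
import random

# Mutation sketch: with some probability per gene, nudge its value by a
# small random step, yielding offspring that are novel yet similar to
# their parents. Both `rate` and `step` are researcher-chosen values.
def mutate(chromosome, rate=0.1, step=0.05, rng=random):
    return [
        g + rng.uniform(-step, step) if rng.random() < rate else g
        for g in chromosome
    ]

child = mutate([0.0, 1.0, -1.0], rate=1.0, step=0.05)  # mutate every gene
```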
These individuals are then tested for their fitness again. Given enough time and computing power, the algorithm can create countless individuals; it stops either when it arrives at the chromosome design it considers the best — which can happen in as few as 100 generations or take many thousands — or when a programmer stops it manually.
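Taken together, the steps above can be sketched as one end-to-end loop on a toy problem ("OneMax": maximize the number of 1-bits in a chromosome). All names and parameter values here are illustrative assumptions, not from any particular study:

```python
import random

# Minimal generational loop: selection, single-point crossover, and
# mutation, with an early stop once a perfect individual appears.
def evolve(pop_size=20, genes=10, max_generations=200, mutation_rate=0.05, seed=1):
    rng = random.Random(seed)
    fitness = sum  # OneMax fitness: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for generation in range(max_generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == genes:          # stopping criterion: perfect fit
            break
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genes)     # single-point crossover
            child = a[:cut] + b[cut:]
            children.append(                  # per-gene bit-flip mutation
                [g ^ 1 if rng.random() < mutation_rate else g for g in child]
            )
        pop = survivors + children            # replacement keeps size fixed
    return max(pop, key=fitness), generation

best, generations_used = evolve()
```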
Thanks to this process, an evolutionary algorithm can generate remarkable solutions that are comparable to — or, in many cases, better than — the best human engineering achievements. For instance, evolutionary algorithms can produce intricate circuit board designs that far exceed the imagination of an electrical engineer, accomplishing electronic tasks that had remained unsolved. They can also generate a boom tower design that goes far beyond the simple truss systems used by civil engineers, reducing tip vibration frequencies to a degree that had been unimaginable, since no engineering theory existed to foresee such solutions.
What's holding engineers back, and how the hurdles can be overcome
But as we've stated before, evolutionary algorithms haven't quite had their moment. There are several reasons for that, the main one being that they can be really costly.
"Let's say we have a population made out of 1,000 individuals, and we want to test them for 1,000 generations. Unlike in machine learning where you update your nodes when a new training item is introduced, every single one of these individuals has to actually be tested, be played out in every single generation to obtain a score on the fitness function — this creates an incredible calculation and simulation burden which is why many engineers and computer scientists are not very keen on evolutionary algorithms. This is still the case," says Bakırcı-Taylor. However, he also points out there are some critical possibilities that may change that.
The first one might be the emergence of quantum computers. When supercomputers become advanced enough to fit into our pockets, an evolutionary algorithm that now takes three months to simulate could be finished in three hours or three days, potentially increasing the algorithms' popularity and usefulness among scientific communities, he explains.
The second one could be the discovery of something that strengthens the algorithmic foundation of evolutionary algorithms. "We've seen this happen before. We first realized that machines could learn, but it was only when we discovered and advanced artificial neurons and sigmoid functions that the field gained practicality. Maybe there is a tool or function out there waiting to be discovered that can make evolutionary algorithms a breeze to use — we simply don't know."
Bakırcı-Taylor's dissertation, ‘An Analysis of Evolutionary Algorithm Parameters to Estimate the Behavior of the Algorithmic Evolution: A Case Study Using Line Following Robots and Simulations’, focuses on the third possibility: That already-established knowledge in evolutionary biology may not have reached engineering yet. Deciding on an appropriate set of parameter values, such as the population size, the natural selection rate, and mutation rate, is one of the biggest difficulties of applying an evolutionary algorithm to a given problem.
There aren't many guidelines explaining how to select each parameter, and with nearly endless engineering problems to solve and parameter combinations to choose from, this spells trouble for researchers: a lot of trial and error should be expected. Imagine waiting for weeks or months only to find that your evolutionary algorithm is failing to converge on a solution because your meta-parameter selection was wrong, forcing you to start over with new combinations, without knowing whether those will work either.
"Parameter selection has been discussed for many years and has turned into a highly complex topic with deep mathematics. However, we still don't have practical instructions that can be used by scientists," Bakırcı-Taylor says. "My main thought was whether we can approach this subject experimentally instead of fully analytically. I wondered if we could come up with a way of saying, for example, 'if your goal is this, if your system has this many degrees of freedom to be optimized, natural selection pressure should be this high, or mutation should be dialed back, etc.' This is a really big question."
Evolutionary algorithm parameters interact in highly non-linear ways, so taking on this challenge was no easy feat. Since the research was going to be experiment-focused, Bakırcı-Taylor needed a tool that would let him run experiments frequently and rapidly. It also needed to be simple enough not to cause extra difficulties during the tests, so Line Follower Robots (LFRs), which he had previously used to teach control methods to high school students during his time at university, seemed the obvious choice.
Most of the analysis and data collection was done through computer simulations; however, a custom-built, real-world LFR enabled the team to illustrate selected results. A custom track with a black line over a white background was designed, and the LFR's fitness on the track was measured.
Through so-called ceteris paribus experiments, they reused the same platforms repeatedly, modifying only a single parameter while keeping all the other experimental parameters the same. This let them examine the effect of each parameter on the final output. Fitness was measured according to how well the robot followed the line, and the meta-parameters affected the chances of finding well-performing individuals.
By using eight computers and parallel processing features, the researchers created a cluster and tested multiple populations by varying these parameters: the robot size, sensor-base distance, initial conditions, natural and sexual selection, mutation size and rate, crossover rate, population regeneration technique, and presence and strength of elitism.
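The shape of such a one-at-a-time sweep can be sketched as follows. Everything here is a hypothetical stand-in — the baseline values, parameter names, and the `run_ea` function are assumptions for illustration, not the researchers' actual code:

```python
# Ceteris paribus sweep sketch: vary one meta-parameter at a time while
# holding the rest at baseline, then compare outcomes. `run_ea` stands
# in for a full evolutionary-algorithm run that reports, say, the best
# fitness achieved under a given configuration.
BASELINE = {"mutation_rate": 0.01, "selection_rate": 0.5, "crossover_rate": 0.7}

def sweep(parameter, values, run_ea):
    results = {}
    for value in values:
        config = dict(BASELINE, **{parameter: value})  # change only one knob
        results[value] = run_ea(config)
    return results

# Toy stand-in that pretends fitness peaks at a moderate selection rate.
toy_run = lambda config: 1.0 - abs(config["selection_rate"] - 0.5)
scores = sweep("selection_rate", [0.1, 0.5, 0.9], toy_run)
```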
Testing each parameter change took 3 to 7 days since they weren’t using a supercomputer. Bakırcı-Taylor also added that this process might have taken him longer than usual since he isn't a computer scientist and could not optimize the code to run even faster.
Their results showed that certain rates and parameters followed specific trajectories on which they could base some suggestions.
"One of the most interesting results we got was related to the mutation-selection balance which is a key concept widely known in biology," Bakırcı-Taylor explains. "Mutations increase diversity within the population; however, when they happen too fast, the population can be harmed in the long run in relation to the selection pressure. This requires a balance between the two."
The researchers saw that the populations started behaving worse and worse when they exaggerated the mutations. "If the selection pressure is not strong enough, it won't be able to weed out the mutation fast enough. Deleterious mutations will start to accumulate, and fitness, in general, will begin to decline,” he said. This creates a balance between the two important parameters of the algorithm.
“But the key is, a background mutation rate constantly creates new variations that are not advantageous or disadvantageous — but when a new problem is faced by the generation, say down the line on the track they needed to complete, those old mutations can turn into advantageous selection elements and evolution can quickly, and seemingly ‘cleverly’, overcome the problem.”
Another result showed that the natural selection rate should neither fall below nor exceed certain bounds: too low a rate could eliminate potential solutions that might improve over time, while too high a rate would let through a large number of solutions, including very poor ones. In their case, the workable range was between 15 percent and 85 percent — which is interesting, according to Bakırcı-Taylor, because the success rates did not change in a statistically significant way within that range. This signals that the exact selection rate may not be a crucial factor, as long as there is some selection; if the finding can be generalized, it would make these decisions much easier for others.
"While this was expected, we saw there was a 'sweet spot' for natural selection," says Bakırcı-Taylor. This was the plateau in the middle of the best-fit plot obtained over many, many experiments. Provided the mutation rate is high enough to generate variation, a mediocre natural selection rate — between 25% and 75% in this case — should work just fine. "And the cool thing is that you can quickly see whether this is the case for your algorithms by examining the results of the first few generations."
These findings are not enough for the development of software that can magically tell you all the needed parameters; however, they certainly add a lot. The researchers believe that the conducted parameter sweeps are promising experimental methods that could be further used to find sweet spots for various parameters in evolutionary algorithms.
Moreover, future research where a supercomputer sweeps all parameters could theoretically reveal various connections, paving the way to a generalized theory of algorithmic evolution. “It is really a task of optimizing the optimizer,” Bakırcı-Taylor says.
What does the future hold?
When researchers do take on the challenge, evolutionary algorithms can be quite fruitful, as we've mentioned before. Difficulties aside, many fields have been influenced by advances in evolutionary algorithms, with control engineering being one of the first. EAs have been used to optimize electromagnetic suspension systems, such as the ones in maglev trains, which work on the principle of magnetic repulsion between the cars and the track.
Aside from their famous antenna, NASA's Evolvable Systems Group has also built evolutionary algorithms that design self-repairing chips, circuits, and schedules for satellite fleets, as well as coevolutionary algorithms.
Moreover, in one of the most recent examples, NASA's next-gen spacesuit, the Extravehicular Mobility Unit, will be designed with a helping hand from AI. Designing a spacesuit poses many engineering challenges, and meeting the aggressive 2024 timeline for the Artemis missions means the engineers can't spend weeks debating every aspect of the ideal design. So they are piloting new software that can rapidly produce new designs through technology that combines generative adversarial networks and evolutionary algorithms.
Such software could see wider use in the future as engineers start to rely on AI to come up with the best designs. But what would a future designed using evolutionary algorithms look like? Well, it would probably have a lot of holes.
These alien-looking designs were developed by a team of designers, engineers, and scientists to celebrate the 20th anniversary of the Volkswagen Innovation and Engineering Center in California. In a bid to mix the past and future of Volkswagen while illustrating its core ideals, the team took a classic 1962 VW Bus and retrofitted it with the latest technology using generative design.
The result certainly didn’t look like anything built by human engineers. There is a good reason for that.
"This intricate design is much more elastic and capable of dispersing load distribution. It is nearly impossible for an engineer to come up with such a design. We cannot compete with evolutionary algorithms when designing because we just cannot calculate as efficiently — we simply do not have the theory! Of course, we can calculate it after we know the design, but predicting which one will be the best is the most critical point. We are good at generating solutions for our own needs, but we are limited in creating the first successful product," explains Bakırcı-Taylor.
When you consider that this design cut 18 percent of the weight compared to the standard-looking alloy wheels it started from, the future starts looking very much like Swiss cheese. "We may start seeing such designs more frequently because there is no limit to how much they increase productivity. I think evolutionary algorithms will be in a more central position in the future, especially in design and R&D."
So what does the future hold for evolutionary algorithms and how far can they take us? In his book Evrenin Karanlığında Evrimin Işığı (The Light of Evolution in the Darkness of the Universe), Bakırcı-Taylor claims that machines utilizing evolutionary algorithms will be the beginning of the technological singularity. This will not happen with some engineer somewhere coming up with a single equation, which may or may not be complex, and an Artificial General Intelligence emerging just like that. This is because our machine learning methods rely on extremely simplistic models of neurons, and learning based on this simplicity cannot achieve the complexity of human brains, just like overly simplified engineering designs are bound to perform worse than their evolutionarily designed counterparts, he argues.
"I think that we need to provide AI technologies with tools to evolve their own learning methods. When we give them their neural networks with predefined structures, we are limited by our unimpressive engineering skills — when compared to nature, at least. But as humans, we did not gain intelligence by someone handing us our own neural network. This network evolved from the simplest building blocks, through natural selection, over generations. And even our brain is not the final product, every species and every part of their bodies keep evolving, even right now! If we want to build a truly general AI, we must equip it with tools to evolve itself. Learning is one thing, evolving how to learn is another. What today's AI is lacking is this base layer of evolution," he says.
"If I'm right and if we can overcome some of the hurdles, in a couple of decades evolutionary algorithms should be the 'next big thing.' If I'm wrong, we might find a brand new approach, since obviously the current deep learning methods are great at narrow artificial intelligence but are not enough to yield a general one. What that approach could be, of course, I don't know. I just look at the history of engineering and realize that the main pattern is following the lead of nature to come up with better techniques. And if our goal is to create the most complicated, most beautiful, most powerful technologies of all time, we cannot just go with learning, we have to substantiate it with evolution. Because that is how it has worked in nature for the past 4 billion years. We'll see."
With AI developing at its current dizzying rate, evolutionary algorithms and evolutionary computation might be what will drive our technological abilities beyond deep learning and enable AI to become fully 'creative.'
It is undeniable that engineers generally favor more traditional design processes, where they can stay fully in control with mathematics and certified methods of calculation. An approach where they let an algorithm figure out what works might seem like uncharted territory; however, when achieving a level of creativity that rivals nature becomes a necessity, the right move might be to take your hands off the wheel and let evolution do what it does best.