A survey of the theory, techniques, and applications of coevolutionary algorithms. Although interest in coevolutionary algorithms has been growing in recent years, there has been very little in the way of review, benchmarking, or comparative analysis in the field. We present a history of coevolutionary research, a summary of the current state of the field, and suggestions for future work.
Evolutionary computation (including genetic algorithms, evolution strategies, and genetic programming, among other techniques) applies the principles of biological evolution to computational problems. In traditional evolutionary computation a population of solutions is evaluated by an objective fitness function; that is, each individual's fitness is evaluated independently of the others. Using selection, reproduction, and variation operators, the fitness of the population is improved until a sufficient solution is found. Genetic algorithms have been used in many different application areas, including function optimization, creating intelligent agents, evolving images, and simulations of artificial life [10,7,13].
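The traditional loop described above can be sketched as follows. This is a minimal illustrative GA; the function names, operators, and parameters are our own choices, not from the literature surveyed here:

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=50, mut_rate=0.1):
    """Minimal generational GA: each individual is scored independently
    by an objective fitness function, then selection, crossover, and
    mutation produce the next generation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - 1:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)      # one-point crossover
            child = [g ^ (random.random() < mut_rate)  # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = scored[:1] + children                    # elitism: keep the best
    return max(pop, key=fitness)

random.seed(1)        # reproducible demo
best = evolve(sum)    # OneMax: objective fitness is the number of 1 bits
```

Note that the fitness call `fitness(individual)` looks at one individual at a time; this independence is exactly what coevolutionary algorithms give up.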
One of the limitations of this traditional approach is the need to specify an objective fitness function. For many problems, such as multiplayer board games and the development of highly intelligent agents, specifying an objective fitness function is often impractical.
Coevolutionary algorithms are a subset of evolutionary algorithms where the fitness of an individual is determined, at least in part, by the other individuals in the population. That is to say, coevolutionary algorithms have subjective fitness functions.
Take, for example, the game of chess. How do you objectively quantify a ``good'' chess player? If we were to play 100 games against a Grand Master we would most likely lose all 100. If, on the other hand, we were to play 100 games against a complete beginner we would probably win most of them. Amongst a population of novices we would be considered an expert, while amongst experts we would be considered a novice. To objectively evaluate a single player's strategy, the player would need to be compared against every other possible chess strategy, an impractical task.
Coevolution has three primary goals: to provide better solutions to optimization problems by maintaining a more diverse set of solutions, to provide solutions to problems with subjective fitness functions, and to drive towards open-ended evolution [9,15,13].
The first major work to make use of subjective fitness functions, and of coevolution, was Robert Axelrod's 1984 work on the Iterated Prisoner's Dilemma (IPD). Axelrod evolved automata to play the game, using tournaments among the automata themselves to determine the fitness of each individual.
In 1991 Thomas Ray [13] used indirect subjective fitness functions to evolve digital organisms. Ray's work was one of the first to use co-evolution in an attempt to create open-ended evolution. Although the organisms created using co-evolution exhibited complex host-parasite relations, the system was ultimately self-limiting. It did demonstrate, however, the ability of a co-evolutionary system to develop a simple monoculture of digital organisms into a complicated and diverse ecology.
In 1992 Gerald Tesauro [17] developed neural networks for backgammon using Temporal Difference Learning and self-play: networks were trained by repeatedly playing against themselves. Although not true coevolution, since there is no population of solutions, this technique leverages the randomness inherent in backgammon to provide a variety of opponents even in self-play.
W. D. Hillis's 1990 paper [9] on co-evolving parasites to improve the simulated evolution of sorting networks is often considered the first ``official'' work on co-evolutionary algorithms. Hillis evolved sorting networks, represented as binary genes, using both evolutionary and coevolutionary algorithms. Hillis demonstrated that by co-evolving sorting networks and test cases he was able to evolve better sorting networks than by simply evolving sorting networks by themselves.
Although Hillis's work was not the first to make use of subjective fitness functions, it was seminal for several reasons. It was the first to compare a co-evolutionary algorithm with an evolutionary algorithm on the same problem: developing sorting networks. It was the first to use a host-parasite (predator-prey) model, that is, a model where the solutions being evolved were solving two different problems (sorting, and testing sorting networks). Finally, it was the first co-evolutionary algorithm to use a spatially based competitive architecture: individuals in the host (sorting network) population were stored in a grid and evaluated against parasites (test cases) stored at the same location on an identical parasite grid. Parasites were evaluated in the reverse manner.
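The spatial pairing just described can be sketched as follows. The toy scoring rule and names below are illustrative assumptions, not Hillis's actual sorting-network encoding:

```python
def spatial_evaluate(hosts, parasites, score):
    """Hillis-style spatial pairing (simplified to one dimension): the
    host at grid cell i is evaluated only against the parasite stored
    at the same cell i. `score(host, parasite)` returns a pair
    (host_fitness, parasite_fitness); the relationship is antagonistic,
    so one side's gain is the other's loss."""
    host_fit, para_fit = [], []
    for h, p in zip(hosts, parasites):
        hf, pf = score(h, p)
        host_fit.append(hf)
        para_fit.append(pf)
    return host_fit, para_fit

# Toy stand-in for sorting networks vs. test cases: a "host" passes a
# test if its value exceeds the parasite's difficulty.
def toy_score(host, parasite):
    solved = host > parasite
    return (1, 0) if solved else (0, 1)

hf, pf = spatial_evaluate([3, 5, 1], [2, 6, 0], toy_score)
# hf == [1, 0, 1], pf == [0, 1, 0]
```

Because each host only ever meets its local parasite, evaluation costs one match per individual per generation, far cheaper than all-pairs evaluation.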
The work of Hillis, Axelrod, Ray, and Tesauro inspired a wave of papers on coevolutionary algorithms, including a mention in Koza's 1992 text on Genetic Programming [11], Angeline and Pollack's 1993 paper on co-evolving Tic-Tac-Toe players [1], and many more in 1994, including Sims's work evolving the morphology of 3D creatures [15], Smith's work on Othello [16], and Potter and De Jong's work on Cooperative Coevolutionary Genetic Algorithms (CCA or CCGA) [12].
In a relatively brief chapter of his text on Genetic Programming [11], Koza demonstrated the coevolution of individuals using a Genetic Programming representation to solve a simple discrete game. The text introduced a new competitive architecture for two-population co-evolution, in which each individual was scored according to its average performance against every individual in the opposing population, and successfully evolved the optimal (minimax) solution.
Angeline and Pollack co-evolved Tic-Tac-Toe players, represented with Genetic Programming (GP), using a single-tournament competitive architecture. While still taking a competitive approach to coevolution, Angeline and Pollack introduced several new concepts: an explicit treatment of the role of the competitive architecture, the tournament competitive architecture itself, and the Tic-Tac-Toe domain.
Smith and Gray co-evolved the parameters of Alpha-Beta searches for Othello players. They introduced a new competitive architecture, random pairing, as well as a strategy for analyzing co-evolutionary population structures.
Sims co-evolved 3D creatures. Although Sims suggested several new competitive architectures, he eventually used an all-versus-best architecture, where the best was the best individual in the other population from the preceding generation. The 3D creatures were represented as a combination of directed graphs, one for the brain and one for the body. The resulting creatures evolved both brains and bodies and learned to race one another to an objective.
In 1994 coevolution really took off, and over the next several years a myriad of coevolutionary techniques were tried against a vast array of applications: everything from tag [14] to image classifiers [Hugues], and from neural networks [Michalewicz] to robotics [Bullock95].
As the research into coevolutionary techniques deepened and diversified, problems and challenges emerged. The most common problems encountered by coevolutionary researchers were cycling, mediocre stable-states, collusion, forgetting, disengagement, and focusing. Watson and Pollack's 2001 paper on coevolution on a minimal substrate [20] provides an excellent system for exploring and understanding these problems. Many of them arise from the subjective nature of fitness in coevolutionary algorithms.
In 1997 Rosin and Belew proposed three ``remedies'' for the collective maladies of coevolution: a hall of fame, shared sampling, and competitive fitness sharing. These techniques were designed to increase the probability of objective fitness growth in a co-evolutionary algorithm by tuning the credit assignment process and building a repository of diverse solutions.
The theory of co-evolutionary algorithms has largely concentrated on divining the conditions under which the objective fitness of a co-evolutionary population can be made to monotonically increase.
In the past decade the applications, techniques, and theory of co-evolutionary algorithms have evolved steadily. While there have been no dramatic breakthroughs in the theory or techniques of co-evolution, there has been good progress in all areas.
This survey is broken down into four main sections: concepts, theory, applications, and summary. The concepts section discusses the basic concepts of coevolutionary algorithms, including competitive and cooperative coevolution, the challenges inherent in coevolution, and the proposed remedies. The theory section provides an overview of the theoretical works on coevolution. The applications section provides an overview of the problems to which coevolution has been applied, ranging from classifiers to artificially intelligent agents. The summary section provides a general overview of the field of coevolution and proposes future directions.
There are two basic types of coevolutionary algorithms: competitive and cooperative. In competitive coevolution an individual's fitness is based on how well the individual performs in competition with other individuals. The first coevolutionary algorithms were competitive [9,15]. In cooperative coevolution an individual's fitness is based on how well it works with other individuals to solve a problem [12]. Competitive and cooperative coevolutionary algorithms differ primarily in the way fitness is evaluated and in the number of populations present in the algorithm.
In a competitive co-evolutionary algorithm one or more populations of individuals are competitively evolved to produce superior individuals. The fitness of an individual depends on the number of other individuals it competes against successfully.
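A minimal sketch of this kind of single-population, win-counting evaluation; the round-robin pairing, function names, and toy game are our assumptions:

```python
from itertools import combinations

def round_robin_fitness(pop, beats):
    """Single-population competitive fitness: each individual plays
    every other individual once, and its fitness is the number of
    opponents it defeats. `beats(a, b)` returns True when a defeats b;
    if neither side wins, the match is a draw for both."""
    wins = {i: 0 for i in range(len(pop))}
    for i, j in combinations(range(len(pop)), 2):
        if beats(pop[i], pop[j]):
            wins[i] += 1
        elif beats(pop[j], pop[i]):
            wins[j] += 1
    return [wins[i] for i in range(len(pop))]

# Toy transitive game: the larger number wins.
fitness = round_robin_fitness([3, 1, 2], lambda a, b: a > b)
# fitness == [2, 0, 1]
```

The key contrast with the objective case is that `fitness` here is only meaningful relative to the current population: the same individual can score high among novices and low among experts.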
Competitive coevolution is often used to evolve agents in game or simulation domains such as Checkers [3] and Predator-Prey simulations [4,5,18].
The type of competitive co-evolution used often depends on the nature of the problem being co-evolved. In co-evolutionary situations where all of the individual solutions are solving the same problem, Tic-Tac-Toe or Checkers for example, only a single species of individual is needed. In single-species co-evolution a single population is often sufficient (CITE), although a larger number of populations can be used to help maintain diversity (CITE).
In a co-evolutionary situation where there are two problems being solved simultaneously and where each requires a separate representation (two species co-evolution) a minimum of two populations is the common choice. As in single-species co-evolution more populations can be used to maintain greater diversity.
One of the challenges coevolution shares with objective evolution is picking the right fitness function. A competitive fitness function has two main parts: deciding which individuals compete against one another (the competitive architecture [14]), and deciding how to score matches. A large variety of competitive architectures have been used. Some popular examples include:
The type of comparison is usually determined by the nature of the problem.
Types of Comparisons:
Types of Scoring:
One of the first examples of multiple population coevolution was presented by Hillis in 1990. He co-evolved sorting networks and test cases based on spatial pairing. The multiple populations can consist of individuals with the same representation, or individuals with distinct representations. These populations can contain individuals competing to solve the same problem, or competing against one another.
Complete Bipartite Each individual in one population is compared against every individual in another.
Random Pairings Each individual in one population is randomly paired against one other individual in a different population.
| Comp. Fit. Func. | Matches/Gen. | Opponents | Reference |
| Complete Bipartite | | | [6,8] |
| Spatial Pairing | | | [9] |
| All vs Best | | | Same species [15], diff. species [5] |
| Grouped Double Tournament | | | [19] |
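To make the cost trade-off among architectures concrete, two of the two-population architectures above can be sketched as follows; the helper names and the win-counting score are our assumptions:

```python
import random

def complete_bipartite(pop_a, pop_b, score):
    """Every individual in A meets every individual in B:
    len(pop_a) * len(pop_b) matches per generation, giving
    low-variance fitness estimates at high cost."""
    return [sum(score(a, b) for b in pop_b) for a in pop_a]

def random_pairing(pop_a, pop_b, score):
    """Each individual in A meets one randomly chosen individual in B:
    only len(pop_a) matches per generation, at the cost of much
    noisier fitness estimates."""
    return [score(a, random.choice(pop_b)) for a in pop_a]

# With score = "a beats b", complete bipartite counts total wins.
wins = complete_bipartite([5, 1], [0, 2, 4], lambda a, b: int(a > b))
# wins == [3, 1]
```

When individual comparisons are expensive (a full game of Checkers, say), the cheaper architectures become attractive despite their noise.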
Determining the appropriate competitive fitness function for a particular problem can be difficult. Many factors play a role in the choice; in many instances the time required to perform a comparison is significant.
Although several researchers (Sims [15], etc.) have acknowledged the variety of competitive architectures available, very few have actually compared architectures. Often a particular architecture is presented with little or no justification [19]. In other instances ([2]) the specification of the competitive architecture is often incomplete.
Without a comparative analysis it is difficult to know how significant a role the choice of competitive architecture plays in co-evolution.
Cooperative coevolution was first proposed by Potter and De Jong in 1994 [12] and studied in the function optimization domain.
The framework for cooperative coevolution is as follows:
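In outline, the problem is decomposed into subcomponents, each subcomponent is evolved in its own subpopulation, and an individual is evaluated by assembling it with representatives (typically the best) from every other subpopulation. A minimal sketch, assuming a separable real-valued minimization problem; the operators and parameters are our simplifications of Potter and De Jong's scheme:

```python
import random

def ccga(objective, dims=3, pop_size=10, generations=40):
    """Sketch of cooperative coevolution (CCGA): one subpopulation per
    problem variable. An individual is scored by combining it with the
    current best representative from each other subpopulation and
    applying the shared objective (here, minimized)."""
    pops = [[random.uniform(-5, 5) for _ in range(pop_size)]
            for _ in range(dims)]
    best = [p[0] for p in pops]  # initial representatives
    for _ in range(generations):
        for d in range(dims):
            def fit(x):
                trial = best[:d] + [x] + best[d + 1:]  # collaborate
                return objective(trial)
            pops[d].sort(key=fit)
            best[d] = pops[d][0]
            # replace the worst half with mutated copies of the best half
            half = pop_size // 2
            pops[d][half:] = [x + random.gauss(0, 0.3)
                              for x in pops[d][:half]]
    return best

random.seed(0)  # reproducible demo
# Sphere function: global minimum at the origin.
solution = ccga(lambda v: sum(x * x for x in v))
```

Note that even though the overall objective is itself objective, each subpopulation's fitness is subjective: it depends on what the other subpopulations currently contain.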
The promise of open-ended evolution has not yet been achieved.
The concept of fitness sharing has been around for several decades. Phenotypic fitness sharing was first proposed by (CITE) and then used by (CITE) in co-evolutionary algorithms. A type of fitness sharing specific to co-evolutionary algorithms is competitive fitness sharing. Competitive fitness sharing, introduced by Rosin and Belew (CITE), weights the results of competitive contests according to how rare or unique an individual's strategy is: defeating a rare individual that no one else can beat confers a significant advantage, while beating an individual that everyone in the population can beat counts for only a small fitness score.
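The weighting can be sketched as follows; the boolean results-matrix encoding and function name are our assumptions:

```python
def shared_fitness(results):
    """Competitive fitness sharing in the style of Rosin and Belew.
    `results[i][j]` is True when individual i defeats opponent j.
    Beating opponent j is worth 1/N_j, where N_j is the number of
    individuals that beat j, so victories over rarely beaten opponents
    are worth the most."""
    n_ind = len(results)
    n_opp = len(results[0])
    beaten_by = [sum(results[i][j] for i in range(n_ind))
                 for j in range(n_opp)]
    return [sum(1.0 / beaten_by[j]
                for j in range(n_opp) if results[i][j])
            for i in range(n_ind)]

# Opponent 0 is beaten by everyone (worth 1/3 each); opponent 1 is
# beaten only by individual 2 (worth a full point to it).
fits = shared_fitness([[True, False], [True, False], [True, True]])
# fits == [1/3, 1/3, 4/3]
```

Plain win-counting would score individual 2 only twice as high as the others; sharing scores it four times as high, rewarding its unique victory.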
A Coevolutionary Memory is a technique for storing successful individuals from generation to generation; individuals in the memory are typically exempt from normal selection pressures.
A Hall of Fame is a collection of individuals from the population who represent a sampling of successful strategies from different points in the evolutionary process.
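A sketch of how such an archive might be maintained and consulted during evaluation; the bounded size, random eviction, and sampling policy here are our assumptions rather than Rosin and Belew's exact scheme:

```python
import random

def update_hall_of_fame(hall, champion, max_size=20):
    """Archive each generation's champion; once full, evict a random
    old entry so the archive still samples the whole run."""
    if len(hall) < max_size:
        hall.append(champion)
    else:
        hall[random.randrange(max_size)] = champion
    return hall

def evaluate_with_hall(individual, current_opponents, hall, beats, sample=5):
    """Score an individual against the current opponents plus a sample
    of past champions, penalizing strategies that forget how to beat
    older opponents."""
    opponents = list(current_opponents) + \
        random.sample(hall, min(sample, len(hall)))
    return sum(beats(individual, o) for o in opponents)
```

Mixing archived champions into evaluation directly targets the forgetting and cycling problems described earlier.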
LAPCA - Layered Pareto Coevolutionary Archive: an archive that maintains the first several Pareto layers of candidate solutions, together with the tests that distinguish them.
A solution concept is a formalism with which to describe and understand the incentive structures of agents that interact strategically. All coevolutionary algorithms implement some solution concept, whether by design or by accident, and optimize according to it. Failures to obtain the desiderata of "complexity" or "optimality" often indicate a dissonance between the implemented solution concept and that required by our envisaged goal.
We make the following contributions: 1) We show that solution concepts are the critical link between our expectations of coevolution and the outcomes actually delivered by algorithm operation, and are therefore crucial to explicating the divergence between the two, 2) We provide analytic results that show how solution concepts bring our expectations in line with algorithmic reality, and 3) We show how solution concepts empower us to construct algorithms that operate more in line with our goals.
[GOES HERE]
There are three basic application areas: using coevolution to improve the performance of traditional evolutionary algorithms (PERFORMANCE ENHANCING), using coevolution to solve problems without an objective fitness function (SUBJECTIVE FITNESS), and using coevolution to achieve open-ended evolution (OPEN-ENDED).
[Juille96] -
In a previous SAB paper [10], we presented the scientific rationale for simulating the coevolution of pursuit and evasion strategies. Here we present.
[Ray91] - An approach to the synthesis of life
[Waters99] -
Subjective fitness - AKA relative fitness (Koza) - AKA coupled fitness
[CCGA] - Cooperative Coevolutionary Genetic Algorithms [12]
[ING] - Intransitive Numbers Game (Watson and Pollack)
[cite] - Citation not in my Bibliography
[IWC] - Incorrect Word Choice
The translation was initiated by Sushil Louis on 2008-05-28