

Abstract:

A survey of the theory, techniques, and applications of coevolutionary algorithms. Although interest in coevolutionary algorithms has been growing in recent years there has been very little in the way of review, benchmarking, or comparative analysis done in the field. We present a history of coevolutionary research, a summary of the current state of the field, and suggestions for future work.

Introduction

Evolutionary computation (including genetic algorithms, evolution strategies, and genetic programming, among other techniques) is based upon using the principles of biological evolution to solve computational problems. In traditional evolutionary computation a population of solutions is evaluated by an objective fitness function; that is, each individual's fitness is evaluated independently of the others. Using selection, reproduction, and variation operators, the fitness of the population is improved until a sufficient solution is found. Genetic algorithms have been used in many different application areas, including function optimization, creating intelligent agents, evolving images, and simulations of artificial life [10,7,13].
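The traditional loop can be sketched as follows. This is a minimal illustration, not any particular system cited in this survey: the OneMax task (maximize the number of 1 bits), the operator choices, and the parameter values are all illustrative assumptions.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100):
    """Minimal generational GA: objective fitness, tournament selection,
    one-point crossover, and per-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Objective evaluation: each individual is scored independently.
        scores = [fitness(ind) for ind in pop]

        def select():  # size-2 tournament selection
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, genome_len)       # one-point crossover
            child = [g ^ (random.random() < 0.01)       # bit-flip mutation
                     for g in p1[:cut] + p2[cut:]]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve(sum)  # OneMax: the objective fitness is the number of 1 bits
```

Note that the fitness function takes only a single individual as its argument; this is precisely what distinguishes objective from subjective evaluation below.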

One of the limitations of this traditional approach is the need to specify an objective fitness function. For many problems, such as multiplayer board games and the development of highly intelligent agents, specifying an objective fitness function is often impractical.

Coevolutionary algorithms are a subset of evolutionary algorithms where the fitness of an individual is determined, at least in part, by the other individuals in the population. That is to say, coevolutionary algorithms have subjective fitness functions.

Take, for example, the game of Chess. How do you objectively quantify a ``good'' chess player? If we were to play 100 chess games against a Grand Master we would most likely lose all 100. If, on the other hand, we were to play 100 games against a complete beginner we would probably win most of them. Amongst a population of novices we would be considered an expert, while amongst experts we would be considered a novice. To objectively evaluate a single player's strategy, the player would need to be compared against every other possible Chess strategy, an impractical task.
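This distinction can be made concrete with a round-robin evaluation, in which each individual's score depends entirely on who else is in the population. The numeric ``skill'' values and the `beats` predicate below are hypothetical stand-ins for game strategies.

```python
def subjective_fitness(population, beats):
    """Round-robin relative fitness: each individual's score is its number
    of wins against the rest of the current population."""
    return [sum(beats(ind, opp) for j, opp in enumerate(population) if j != i)
            for i, ind in enumerate(population)]

stronger = lambda a, b: a > b          # toy "game": higher skill wins
print(subjective_fitness([5, 1, 1, 2], stronger))  # → [3, 0, 0, 2]: an expert
print(subjective_fitness([5, 9, 9, 8], stronger))  # → [0, 2, 2, 1]: a novice
```

The same individual (skill 5) receives the highest fitness in one population and the lowest in the other; no objective quantity has changed, only the opponents.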

Coevolution has three primary goals: to provide better solutions to optimization problems by maintaining a more diverse set of solutions, to provide solutions to problems with subjective fitness functions, and to drive towards open-ended evolution [9,15,13].

A Brief History

Early use of Subjective Fitness Functions

The first major work to make use of subjective fitness functions, and of coevolution, was Robert Axelrod's 1984 work coevolving automata to play the Iterated Prisoner's Dilemma (IPD). Axelrod evolved automata to play the game and used tournaments among them to determine each automaton's fitness.

In 1991 Thomas Ray [13] used indirect subjective fitness functions to evolve digital organisms. Ray's work was one of the first to use co-evolution to attempt to create open-ended evolution. Although the organisms created using co-evolution exhibited complex host-parasite relations, the system was ultimately self-limiting. It did demonstrate, however, the ability of a co-evolutionary system to develop a simple monoculture of digital organisms into a complicated and diverse ecology.

In 1992 Gerald Tesauro [17] developed neural networks for backgammon using temporal difference learning and self-play. Networks were trained by repeatedly playing against themselves. Although not true coevolution, since it lacks a population of solutions, this technique leverages the randomness inherent in backgammon to provide a variety of opponents even in self-play.

The First Coevolutionary Algorithms

W. D. Hillis's 1990 paper [9] on co-evolving parasites to improve the simulated evolution of sorting networks is often considered the first ``official'' work on co-evolutionary algorithms. Hillis evolved sorting networks, represented as binary genes, using both evolutionary and coevolutionary algorithms. Hillis demonstrated that by co-evolving sorting networks and test cases he was able to evolve better sorting networks than by simply evolving sorting networks by themselves.

Although Hillis's work was not the first to make use of subjective fitness functions, it was seminal for several reasons. It was the first to compare a co-evolutionary algorithm with an evolutionary algorithm on the same problem: developing sorting networks. It was the first to use a host-parasite (predator-prey) model, that is, a model where the solutions being evolved were solving two different problems (sorting and testing sorting networks). Finally, it was the first co-evolutionary algorithm to use a spatially based competitive architecture: individuals in the host (sorting network) population were stored in a grid and evaluated against parasites (test cases) stored at the same location on an identical parasite grid. Parasites were evaluated in the reverse manner.
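The spatial evaluation scheme can be sketched as follows. This is a toy illustration of the grid pairing only, not Hillis's actual sorting-network setup: the 4x4 grid, the threshold/test-value domain, and the `host_score` payoff are illustrative assumptions.

```python
import random

def spatial_step(hosts, parasites, host_score):
    """One evaluation sweep of a Hillis-style spatial model: each host is
    scored only against the parasite at the matching grid cell, and each
    parasite is scored in the reverse manner."""
    rows, cols = len(hosts), len(hosts[0])
    host_fit = [[0.0] * cols for _ in range(rows)]
    para_fit = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            s = host_score(hosts[r][c], parasites[r][c])
            host_fit[r][c] = s          # host rewarded for passing the test
            para_fit[r][c] = 1.0 - s    # parasite rewarded for defeating it
    return host_fit, para_fit

# Toy domain: hosts are thresholds, parasites are test values in [0, 1];
# a host "passes" a test if its threshold exceeds the test value.
hosts = [[random.random() for _ in range(4)] for _ in range(4)]
tests = [[random.random() for _ in range(4)] for _ in range(4)]
host_fit, para_fit = spatial_step(hosts, tests, lambda h, t: float(h > t))
```

Because each contest is strictly local, selection pressure varies across the grid, which is part of what keeps the two populations from converging prematurely.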

The work of Hillis, Axelrod, Ray, and Tesauro inspired a wave of papers on coevolutionary algorithms, including a mention in Koza's 1992 text on Genetic Programming [11], Angeline and Pollack's 1993 paper on co-evolving Tic-Tac-Toe players [1], and many more in 1994, including Sims's work evolving the morphology of 3D creatures, Smith's work on Othello [16], and Potter and De Jong's work on Cooperative Coevolutionary Genetic Algorithms (CCA or CCGA) [12].

In a relatively brief chapter of his text on Genetic Programming [11], Koza demonstrated the coevolution of individuals using a Genetic Programming representation to solve a simple discrete game. The text introduced a new competitive architecture for two-population co-evolution, in which each individual was scored by its average performance against every individual in the opposing population, and successfully evolved the optimal (minimax) solution.

Angeline and Pollack co-evolved Tic-Tac-Toe players, represented with Genetic Programming (GP), using a single-population tournament competitive architecture. While still using a competitive approach to coevolution, they introduced several new ideas: an explicit treatment of the role of the competitive architecture, the tournament architecture itself, and the Tic-Tac-Toe domain.

Smith and Gray co-evolved the parameters of Alpha-Beta searches for Othello players. They introduced a new competitive architecture, random pairing, as well as a strategy for analyzing co-evolutionary population structures.

Sims co-evolved 3D creatures. Although Sims suggested several new competitive architectures, he eventually used an all-versus-best architecture, where the best was the best individual in the other population from the preceding generation. The 3D creatures were represented as a pair of directed graphs, one for the brain and one for the body. The resulting creatures evolved brains and bodies and learned to race one another to an objective.

The Diaspora

In 1994 coevolution really took off, and over the next several years a myriad of coevolutionary techniques were tried on a vast array of applications, from tag [14] to image classifiers [Hugues], and from neural networks [Michalewicz] to robotics [Bullock95].

Problems and Solutions

As research into coevolutionary techniques deepened and diversified, problems and challenges were noticed. The most common problems encountered by coevolutionary researchers are cycling, mediocre stable states, collusion, forgetting, disengagement, and focusing. Watson and Pollack's 2001 paper on coevolution in a minimal substrate [20] provides an excellent system for exploring and understanding these problems. Many of them arise from the subjective nature of fitness in coevolutionary algorithms.
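Cycling can be seen concretely in the intransitive numbers game used in minimal-substrate studies such as [20]. The sketch below assumes the common formulation in which a contest is decided only on the dimension in which the two players differ least, with ties broken by the first such dimension; the particular player values are illustrative.

```python
def beats(x, y):
    """Intransitive numbers game: the winner is decided only on the
    dimension in which the two players differ the least."""
    d = min(range(len(x)), key=lambda i: abs(x[i] - y[i]))
    return x[d] > y[d]

# A relative-fitness cycle: each player defeats the next, so selection can
# chase its own tail (cycling) with no objective progress being made.
a, b, c = (3, 1), (2, 3), (1, 2)
print(beats(a, b), beats(b, c), beats(c, a))  # → True True True
```

Because the ``best'' individual depends on who it is paired with, a population can endlessly rediscover strategies it previously abandoned, which is exactly the cycling pathology described above.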

In 1997 Rosin and Belew proposed three ``remedies'' for the collective maladies of coevolution: a hall of fame, shared sampling, and competitive fitness sharing. These techniques were designed to increase the probability of objective fitness growth in a co-evolutionary algorithm by tuning the credit assignment process and building a repository of diverse solutions.

Building a Theory

The theory of co-evolutionary algorithms has largely concentrated on divining the conditions under which the objective fitness of a co-evolutionary population can be made to monotonically increase.

Applications, Applications, and More Applications

In the past decade the applications, techniques, and theory of co-evolutionary algorithms have evolved steadily. While there have been no revolutionary advances in the theory or techniques of co-evolution, there has been good progress in all areas.

Organization of this survey

This survey is broken down into four main sections: concepts, theory, applications, and summary. The concepts section discusses the basic concepts of coevolutionary algorithms, including competitive and cooperative coevolution, the challenges inherent in coevolution, and the proposed remedies. The theory section provides an overview of the theoretical works on coevolution. The applications section provides an overview of the problems to which coevolution has been applied, ranging from classifiers to artificially intelligent agents. The summary section provides a general overview of the field of coevolution and proposes future directions for the field.

Concepts

There are two basic types of coevolutionary algorithms: competitive and cooperative. In competitive coevolution an individual's fitness is based on how well the individual performs in competition with other individuals. The first coevolutionary algorithms were competitive ([9,15]). In cooperative coevolution an individual's fitness is based on how well it works with other individuals to solve a problem [12]. Competitive and cooperative coevolutionary algorithms differ primarily in the way fitness is evaluated and the number of populations present in the algorithm.

Competitive Coevolution

In a competitive co-evolutionary algorithm, one or more populations of individuals are competitively evolved to produce superior individuals. The fitness of an individual depends on how many other individuals it competes against successfully.

Competitive coevolution is often used to evolve agents in game or simulation domains such as Checkers [3] and Predator-Prey simulations [4,5,18].

The type of competitive co-evolution used is often dependent on the nature of the problem being co-evolved. In co-evolutionary situations where all of the individual solutions are solving the same problem, Tic-Tac-Toe or Checkers for example, only a single species of individual is needed. In single-species co-evolution a single population is often sufficient (CITE), although a larger number of populations can be used to help maintain diversity (CITE).

In a co-evolutionary situation where two problems are being solved simultaneously and each requires a separate representation (two-species co-evolution), a minimum of two populations is the common choice. As in single-species co-evolution, more populations can be used to maintain greater diversity.

Competitive Architectures

One of the challenges coevolution shares with objective evolution is picking the right fitness function. A competitive fitness function has two main parts: deciding which individuals compete against one another (the competitive architecture [14]) and deciding how to score matches. A large variety of competitive architectures have been used; several popular examples are described below.

The type of comparison is usually determined by the nature of the problem.

Types of Comparisons:

Types of Scoring:

$n$-Population Fitness Functions

One of the first examples of multiple population coevolution was presented by Hillis in 1990. He co-evolved sorting networks and test cases based on spatial pairing. The multiple populations can consist of individuals with the same representation, or individuals with distinct representations. These populations can contain individuals competing to solve the same problem, or competing against one another.

Complete Bipartite - Each individual in one population is compared against every individual in the other population.

Random Pairing - Each individual in one population is randomly paired against one individual in the other population.


Comp. Fit. Func.            Matches/Gen.   Opponents      Reference
Complete                    $n^2$          $n$            [6,8]
Spatial Pairing             $n-1$          $\log_2 n$     [9]
All vs Best                 $n$            $1$            Same Species [15], Diff. Species [5]
Grouped Double Tournament   --             --             [19]

Analysis of Competitive Fitness Functions

Determining the appropriate competitive fitness function for a particular problem can be difficult. Many factors play a role in picking the appropriate function; in many instances the time required to perform a comparison is significant.

Although several researchers (Sims [15], etc.) have acknowledged the variety of competitive architectures available, very few have actually compared architectures. Often a particular architecture is presented with little or no justification [19]. In other instances ([2]) the specification of the competitive architecture is often incomplete.

Without a comparative analysis it is difficult to know how significant the choice of competitive architecture is in co-evolution.

Cooperative Coevolution

Cooperative coevolution was first proposed by Potter and De Jong in 1994 [12] and studied in the function optimization domain.

The framework for cooperative coevolution is as follows:

  1. A problem is separated into components.
  2. A subpopulation (subspecies) is maintained for each component.
  3. A solution is created by assembling components from each subpopulation.
  4. The fitness of an individual in a subpopulation is based on the fitness of the solutions it is a part of.
  5. Each subpopulation is evolved independently.
  6. The number of subpopulations can itself evolve.
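The framework above can be sketched for function minimization. This is an illustrative sketch in the spirit of Potter and De Jong's approach, not their exact setup: the truncation selection, Gaussian mutation, fixed subpopulation count, and sphere test function are all assumptions made for brevity.

```python
import random

def ccga_minimize(f, dims=3, pop_size=20, generations=60):
    """Sketch of cooperative coevolution for function minimization: one
    subpopulation per variable; an individual is evaluated by assembling
    it with the current best member of every other subpopulation."""
    pops = [[random.uniform(-5, 5) for _ in range(pop_size)]
            for _ in range(dims)]
    best = [p[0] for p in pops]                  # current collaborators
    for _ in range(generations):
        for d in range(dims):                    # evolve each species in turn
            def score(x):                        # assemble a full solution
                return f(best[:d] + [x] + best[d + 1:])
            pops[d].sort(key=score)
            survivors = pops[d][:pop_size // 2]  # truncation selection
            pops[d] = survivors + [x + random.gauss(0, 0.3) for x in survivors]
            best[d] = pops[d][0]                 # best survivor collaborates
    return best, f(best)

solution, value = ccga_minimize(lambda v: sum(x * x for x in v))  # sphere
```

Each subpopulation only ever sees one variable of the problem; credit flows to it through the fitness of the assembled solution, which is the defining feature of the cooperative framework.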

Cooperative Fitness Functions

Challenges

The promise of open-ended evolution has not yet been achieved.

Remedies

Fitness Sharing

The concept of fitness sharing has been around for several decades. Phenotypic fitness sharing was first proposed by (CITE) and then used by (CITE) in co-evolutionary algorithms. A type of fitness sharing specific to co-evolutionary algorithms is competitive fitness sharing. Competitive fitness sharing, introduced by Rosin and Belew (CITE), weights the results of competitive contests by how rare or unique an individual's strategy is. Defeating a rare opponent that no one else can beat confers a significant advantage, while beating an opponent that everyone in the population can beat contributes only a small amount of fitness.
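One way to sketch this weighting, assuming a simple win/loss contest: a win over opponent j is worth 1/N_j, where N_j is the number of players that defeat j. The helper name `shared_fitness`, the numeric players, and the `beats` predicate are hypothetical.

```python
def shared_fitness(players, opponents, beats):
    """Competitive fitness sharing (a sketch): a win over opponent j is
    worth 1/N_j, where N_j is the number of players that defeat j, so
    rare wins are worth more than common ones."""
    fitness = [0.0] * len(players)
    for opp in opponents:
        winners = [i for i, p in enumerate(players) if beats(p, opp)]
        for i in winners:
            fitness[i] += 1.0 / len(winners)
    return fitness

# Only the first player can beat the strong opponent (8), so that win is
# worth a full point; the win everyone shares (over 2) is worth 1/3 each.
f = shared_fitness([9, 3, 3], [2, 8], lambda a, b: a > b)
```

The effect is to reward individuals for covering parts of the opponent space that the rest of the population cannot, which helps preserve diverse strategies.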

Shared Sampling

Coevolutionary Memory

A Coevolutionary Memory is a technique for preserving successful individuals from generation to generation, typically exempting them from normal selection pressure.

A Hall of Fame is a collection of individuals from the population who represent a sampling of successful strategies from different points in the evolutionary process.
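A hall of fame can be sketched as a simple archive. The class name, the sample size, and the policy of sampling past champions uniformly are illustrative assumptions; an actual implementation might sample more selectively.

```python
import random

class HallOfFame:
    """Sketch of a hall-of-fame memory: generation champions are archived
    (exempt from selection), and later individuals are also evaluated
    against a sample of past champions to discourage forgetting."""
    def __init__(self, sample_size=5):
        self.archive = []
        self.sample_size = sample_size

    def record(self, champion):
        self.archive.append(champion)

    def opponents(self):
        k = min(self.sample_size, len(self.archive))
        return random.sample(self.archive, k)

hof = HallOfFame(sample_size=5)
for champ in range(10):         # stand-ins for ten generation champions
    hof.record(champ)
past = hof.opponents()          # extra opponents for the next evaluation
```

Each generation, individuals would be evaluated against the current population plus `hof.opponents()`, so a strategy cannot regain fitness simply because its old counter-strategies have died out.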

LAPCA - Layered Pareto Coevolutionary Archive

Theory

Solution Concepts

[Ficici2004] - A solution concept is a formalism with which to describe and understand the incentive structures of agents that interact strategically. Ficici argues that all coevolutionary algorithms implement some solution concept, whether by design or by accident, and optimize according to it. Failures to obtain the desiderata of ``complexity'' or ``optimality'' often indicate a dissonance between the implemented solution concept and that required by the envisaged goal.

Ficici makes three contributions: 1) solution concepts are the critical link between our expectations of coevolution and the outcomes actually delivered by algorithm operation, and are therefore crucial to explicating the divergence between the two; 2) analytic results show how solution concepts bring our expectations in line with algorithmic reality; and 3) solution concepts empower us to construct algorithms that operate more in line with our goals.

Non Local Adaptation

[GOES HERE]

Applications

There are three basic application areas: using coevolution to improve the performance of traditional evolutionary algorithms (PERFORMANCE ENHANCING), using coevolution to solve problems without an objective fitness function (SUBJECTIVE FITNESS), and using coevolution to achieve open-ended evolution (OPEN-ENDED).

Performance Enhancing Coevolution

[Juille96] -

Summary Analysis


Table 1: A summary of research papers in the area of performance enhancing coevolution
Application Papers
Optimization  
Classifiers Juille96


Subjective Fitness Coevolution

General Competitive Coevolution

Board Games

Pursuit and Evasion

Cliff and Miller [4] presented the scientific rationale for simulating the coevolution of pursuit and evasion strategies, and later presented simulation methods and results [5].

Summary Analysis


Table 2: A summary of research papers in the area of subjective fitness coevolution
Application Papers
Games  
Board Games  
Checkers Chellapilla and Fogel [3]
Chess  
Go  
Othello  
Tic-Tac-Toe  
Continuous Games  
   
   
Predator Prey  


Open-Ended Coevolution

[Ray91] - An approach to the synthesis of life

[Waters99] -

Summary Analysis


Table 3: A summary of research papers in the area of open-ended coevolution.
Application Papers
Tierra Ray [13]
Avida Adami


Summary

Conclusion and Discussion

Future Directions

Glossary

Subjective Fitness - also known as relative fitness (Koza [11]) or coupled fitness

[CCGA] - Cooperative Coevolutionary Genetic Algorithm [12]
[ING] - Intransitive Numbers Game (Watson and Pollack [20])
[cite] - Citation not in my Bibliography
[IWC] - Incorrect Word Choice

Acknowledgments

Bibliography

1
P. J. Angeline and J. B. Pollack.
Competitive environments evolve better solutions for complex tasks.
In Proceedings of the Fifth International Conference on Genetic Algorithms, pages 264-270, 1993.

2
F. Azuaje.
A computational evolutionary approach to evolving game strategy and cooperation.
Transactions on Systems, Man and Cybernetics, 32(5), 2002.

3
K. Chellapilla and D. B. Fogel.
Evolving an expert checkers playing program without using human expertise.
IEEE Transactions on Evolutionary Computation, 5(4):422-428, 2001.

4
D. Cliff and G. F. Miller.
Protean behavior in dynamic games: Arguments for the co-evolution of pursuit-evasion tactics.
In Proceedings of the Third International Conference in Simulation of Adaptive Behavior, 1994.

5
D. Cliff and G. F. Miller.
Co-evolution of pursuit and evasion ii: Simulation methods and results.
In Proceedings of the Fourth International Conference in Simulation of Adaptive Behavior, 1996.

6
E. D. de Jong and A. Bucci.
Deca: Dimension extracting coevolutionary algorithm.
In Proceedings of the 2006 Genetic and Evolutionary Computation Conference, pages 313-320, 2006.

7
K. A. De Jong.
An Analysis of the Behavior of a Class of Genetic Adaptive Systems.
University of Michigan, 1975.

8
S. G. Ficici.
A game-theoretic investigation of selection methods in two-population coevolution.
In Proceedings of the 2006 Genetic and Evolutionary Computation Conference, pages 321-328, 2006.

9
W. D. Hillis.
Co-evolving parasites improves simulated evolution as an optimization procedure.
Physica D: Nonlinear Phenomena, 42:228-234, 1990.

10
J. H. Holland.
Outline for a logical theory of adaptive systems.
Journal of the Association for Computing Machinery, 3:297-314, 1962.

11
J. R. Koza.
Genetic Programming: On the Programming of Computers by Means of Natural Selection.
The MIT Press, Cambridge, Massachusetts, 1992.

12
M. A. Potter and K. De Jong.
A cooperative coevolutionary approach to function optimization.
In Y. Davidor, H.-P. Schwefel, and R. Männer, editors, Parallel Problem Solving from Nature - PPSN III, pages 249-257, Berlin, 1994. Springer.

13
T. S. Ray.
An approach to the synthesis of life.
In Artificial Life II, pages 371-408, 1991.

14
C. Reynolds.
Competition, coevolution, and the game of tag.
In Artificial Life IV, pages 56-69, 1994.

15
K. Sims.
Evolving 3d morphology and behavior by competition.
In Artificial Life IV, pages 28-39, 1994.

16
R. E. Smith and B. Gray.
Co-adaptive genetic algorithms: An example in Othello strategy.
In Proceedings of the Florida Artificial Intelligence Research Symposium, 1994.

17
G. Tesauro.
Practical issues in temporal difference learning.
Machine Learning, 8:257-277, 1992.

18
M. Wahde and M. G. Nordahl.
Coevolving pursuit-evasion strategies in open and confined regions.
In Artificial Life VI, pages 472-476, 1998.

19
M. Waters and J. Sheppard.
Genetic programming and co-evolution with exogenous fitness in an artificial life environment.
In Proceedings of the 1999 Congress on Evolutionary Computation, pages 1640-1648, 1999.

20
R. A. Watson and J. B. Pollack.
Coevolutionary dynamics in a minimal substrate.
In L. Spector, E. D. Goodman, A. Wu, W. B. Langdon, H.-M. Voigt, M. Gen, S. Sen, M. Dorigo, S. Pezeshk, M. H. Garzon, and E. Burke, editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), pages 702-709, San Francisco, California, USA, 7-11 2001. Morgan Kaufmann.


A Comprehensive Survey of Coevolutionary Algorithms Research


The translation was initiated by Sushil Louis on 2008-05-28


Sushil Louis 2008-05-28