Results and Analysis

Table 3 compares the average performance across all our TSP problems. Column one specifies how we modified the original problem, while columns two and three show the percentage distance of the first- and last-generation tour lengths from the optimal tour length. The table shows that injecting individuals and running GAs with previous information always produces better results on our problems than running GAs with random initialization.
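The percentage distance reported in these columns can be read as the relative excess over the optimal tour length. A minimal formulation, assuming L denotes the best tour length found in a given generation and L_opt the known optimal length (these symbols are not defined in the original text):

    \[ \mathrm{distance\ from\ optimal}\ (\%) \;=\; 100 \times \frac{L - L_{\mathrm{opt}}}{L_{\mathrm{opt}}} \]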

Table 3: Average distance from optimal for all problems

We provide details on each modified problem and use performance graphs on the 76 cities TSP to illustrate our results. Performance figures for the rest of the TSP problems are available in the Appendix.

  1. Same problem (same).
    We do not change the problem: the modified problem is the original problem itself, and we solve the original problem twice. The first time we run the RGA 10 times with 10 random seeds; the second time we run the NRGA 10 times with injection from the RGA solutions (a sketch of this injection step appears after this list). Figure 7 shows the optimal solution, the average performance from running the RGA 10 times, and the average performance from running the NRGA 10 times on the 76 cities TSP. The performances on the other problems can be found in the Appendix. As expected, injecting solutions from the same problem helps the NRGA get better performance, especially in early generations.

    Figure 7: Performance on 76 cities problem with injection from solutions to the same problem

  2. Change one city (diff1).
    We slightly change the original problem by randomly selecting one city and perturbing its x and y coordinates by a bounded random amount. Note that the problem size remains the same. Figure 8 compares the average performances on the 76 cities TSP. Since the modified problem and the original problem now differ, the performance during early generations is not as good as when injecting solutions from the same problem. However, it is still better than with random initialization.

    Figure 8: Performance on 76 cities problem with injection from solutions to the changing one city problem

  3. Change two cities (diff2).
    We randomly select two cities and perturb their x and y coordinates by a bounded random amount. The average performance on the 76 cities TSP is shown in Figure 9. The performance remains better than that of an RGA. Changing two cities instead of one makes the two problems less similar, which explains the early difference in performance between diff1 and diff2: during earlier generations, the performance of diff1 is always better than that of diff2. During later generations, however, this is not always true, and diff2 actually gives better performance on some of our benchmarks.

    Figure 9: Performance on 76 cities problem with injection from solutions to the changing two cities problem

  4. Add one city (add1).
    To modify the original problem we change not only the locations of cities but also the size of the problem, by adding or deleting one city. When adding one city, we generate a new city whose coordinates lie between the minimal and maximal coordinates of the existing cities. We run the RGA to solve the modified problem and save the best individuals. From each of these individuals we delete the added city, so that the changed individuals are legal tours for the original problem (see the repair sketch after this list). We then inject these changed individuals when solving the original problem. Comparing the average performance on the 76 cities problem (Figure 10) shows that adding one city helps the GA achieve performance similar to that obtained when injecting solutions from the same problem.

    Figure 10: Performance on 76 cities problem with injection from solutions to the adding one city problem

  5. Delete one city (del1).
    We also randomly delete one city, run the GA to solve the modified problem, and save the best individuals. Into each of these individuals we insert the deleted city at a randomly selected position, so that the changed individuals become legal tours for the original problem, and we then inject these new individuals when solving the original problem. Figure 11 compares the average performances on the 76 cities problem. Since we insert the deleted city at a random point on the tour, the performance during the first few generations is not as good as when injecting solutions from the other similar problems, but it is still better than that of an RGA.

    Figure 11: Performance on 76 cities problem with injection from solutions to the deleting one city problem
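
As a concrete illustration of the injection step described in item 1 above, here is a minimal Python sketch. The function names, the tour representation (lists of city indices), and the injected fraction are illustrative assumptions, not the paper's implementation.

    import random

    def random_tour(n_cities):
        """A tour is a permutation of the city indices 0..n_cities-1."""
        tour = list(range(n_cities))
        random.shuffle(tour)
        return tour

    def inject_population(saved_cases, pop_size, n_cities, injected_fraction=0.1):
        """Seed part of the NRGA's initial population with individuals saved
        from a previous, similar problem; fill the rest at random, as an RGA would."""
        n_injected = min(len(saved_cases), int(injected_fraction * pop_size))
        population = [list(case) for case in saved_cases[:n_injected]]
        while len(population) < pop_size:
            population.append(random_tour(n_cities))
        return population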
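
The problem modifications and repairs described in items 2 through 5 can be sketched in the same spirit; again the names and the perturbation bound are assumptions made for illustration only.

    import random

    def perturb_city(coords, city, max_offset):
        """diff1/diff2: move one city by a bounded random offset in x and y."""
        x, y = coords[city]
        coords[city] = (x + random.uniform(-max_offset, max_offset),
                        y + random.uniform(-max_offset, max_offset))

    def repair_after_add(tour, added_city):
        """add1: remove the added city so the tour is legal for the original problem."""
        return [c for c in tour if c != added_city]

    def repair_after_delete(tour, deleted_city):
        """del1: re-insert the deleted city at a randomly chosen position."""
        repaired = list(tour)
        repaired.insert(random.randrange(len(repaired) + 1), deleted_city)
        return repaired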

Figures 12 and 13 show the optimal solution  [Reinelt 96] and our best solution obtained by running the NRGA on the 76 cities TSP. The edges are not all the same, but the tour lengths are very close.

Figure 12: Optimal solution of 76 cities TSP. Tour length = 538

Figure 13: Our best solution of 76 cities TSP with previous information. Tour length = 549

Analysis

As Table 3 shows, after injecting solutions of previous similar problems, the performance is always better than running the GA with random initialization.

When using the GA to solve TSPs, the absolute position of a city in a string is less important than its position relative to the other cities in the tour. For example, for a three-city tour, (1 2 0) and (0 1 2) represent the same cyclic tour. What matters are pairs of adjacent cities, that is, edges. We believe the reason we always get good performance during early generations after injecting individuals and running NRGAs with previous information is that the injected individuals contain good edges from similar problems. The combined system (GAs + CBR) is able to learn and remember good partial tours and thus takes less time to solve a new problem.
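
This observation is easy to make concrete: two strings describe the same cyclic tour exactly when their undirected edge sets coincide. A small Python sketch (the helper names are illustrative):

    def edge_set(tour):
        """Return the set of undirected edges in a cyclic tour."""
        n = len(tour)
        return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

    # (1 2 0) and (0 1 2) are different strings but the same cyclic tour:
    assert edge_set([1, 2, 0]) == edge_set([0, 1, 2])

    def shared_edges(tour_a, tour_b):
        """Count edges two tours have in common, e.g. an injected tour and the optimum."""
        return len(edge_set(tour_a) & edge_set(tour_b))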

Short, highly fit substrings (building blocks) play an important role in the action of genetic algorithms because they combine to form better strings  [Goldberg 89]. Edges are important in creating building blocks for the traveling salesman problem. After we inject individuals containing good edges, learned from solving similar problems, the NRGA can combine these good edges and quickly approach the optimum. This may explain why changing the problem size by adding a city still yields better performance than deleting a city. Our results show that injecting individuals from the same problem and from the add1 problem helps the NRGA more than injecting individuals from the other problems. In both of these cases, we use information from all cities and the length of each edge remains the same. This implies that a different-sized problem that preserves information about all cities and all edges helps the NRGA more than a same-sized problem with some different edges.


