Implementing a Greedy Crossover Operator for Cooperative Coevolution of Neural Networks
Work was done towards a new crossover operator for the “Hierarchical-Enforced SubPopulations” (H-ESP) neuroevolution algorithm. The new operator takes advantage of the per-neuron fitness evaluations available in H-ESP Layer 1 to systematically replace the least-fit neurons in champion (Layer 2) networks. Implementing this operator required two other changes. First, rather than evaluating each neuron only once per generation, each neuron is used in two different networks and tested with a half-fidelity simulation; this gives a finer-grained indication of each neuron’s individual contribution to network fitness. Second, to maintain current evaluations of any Layer 2 neurons not dual-listed in Layer 1, all such nodes are reinjected back into Layer 1. By itself, the fine-grained neuron fitness measure provides a significant performance boost: the algorithm optimizes faster and completes successfully more often. Layer 1 reinjection provides a smaller but still noticeable improvement beyond this. Surprisingly, the node-replacement operator itself provides no noticeable net benefit on top of these two (supposedly corollary) changes: the penalty incurred by evaluating additional networks appears to cancel any benefit granted by intelligent crossover. Although the improvements did not come from the direction expected, this work has successfully extended the H-ESP algorithm. Moreover, the extensions are not limited to neuroevolution: any cooperative coevolution paradigm could benefit from the lessons learned here.
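The two mechanisms described above can be sketched in a few lines. The following is a minimal, illustrative toy, not the H-ESP implementation: neurons are stood in for by plain values, `evaluate` scores a whole network, and all function names and parameters are hypothetical. It shows the finer-grained per-neuron fitness measure (averaging a neuron's score over two networks it participates in) and the greedy crossover that swaps the least-fit neuron of a champion for the fittest Layer 1 candidate.

```python
import random


def avg_neuron_fitness(neuron, layer1_pool, evaluate, trials=2):
    """Finer-grained per-neuron fitness: average the score of `trials`
    networks that each contain `neuron` (the paper uses two networks
    under a half-fidelity simulation; here `evaluate` is any function
    scoring a network, represented as a list of neurons)."""
    total = 0.0
    for _ in range(trials):
        teammates = random.sample(layer1_pool, 3)  # toy network size
        total += evaluate([neuron] + teammates)
    return total / trials


def greedy_crossover(champion, layer1_pool, evaluate):
    """Greedy node replacement: find the least-fit neuron in the
    champion (Layer 2) network and replace it with the fittest
    candidate from the Layer 1 pool."""
    scored = [(avg_neuron_fitness(n, layer1_pool, evaluate), i)
              for i, n in enumerate(champion)]
    _, worst_idx = min(scored)
    best = max(layer1_pool,
               key=lambda n: avg_neuron_fitness(n, layer1_pool, evaluate))
    offspring = list(champion)
    offspring[worst_idx] = best
    return offspring
```

Note that each call to `avg_neuron_fitness` costs extra network evaluations, which mirrors the abstract's finding that the evaluation overhead of the operator can cancel the benefit of the smarter crossover.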