Abstract
This contribution revisits a previously reported observation that the average performance of a population of neural networks evolved to solve one task is improved by lifetime learning on a different task. Two extant, and very different, explanations of this phenomenon are examined: dynamic correlation and relearning. Experimental results are presented which suggest that neither of these hypotheses can fully explain the phenomenon. A new explanation of the effect is proposed and empirically justified. This explanation is based on the fact that in these, and many other related, studies, real-valued neural network outputs are thresholded to provide discrete actions. Such thresholding produces a particular type of fitness landscape in which lifetime learning can reduce the deleterious effects of mutation, and therefore increase mean population fitness.
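The thresholding mechanism referred to above can be illustrated with a minimal sketch (the function name and threshold value here are illustrative assumptions, not taken from the paper): because a real-valued output is mapped to a discrete action, small weight mutations that shift the output without crossing the threshold leave behaviour, and hence fitness, unchanged, producing flat plateaus in the fitness landscape.

```python
def discrete_action(output: float, threshold: float = 0.5) -> int:
    """Threshold a real-valued network output into a discrete action.

    Hypothetical example: any output on the same side of the threshold
    yields the same action, so mutations that do not cross the threshold
    are behaviourally neutral.
    """
    return 1 if output >= threshold else 0

# Two slightly different (e.g. mutated) outputs on the same side of the
# threshold produce the same action:
assert discrete_action(0.61) == discrete_action(0.58) == 1
# Only a mutation that pushes the output across the threshold changes
# the action, and thus the fitness:
assert discrete_action(0.49) == 0
```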
