Some time ago, while visiting the ABC group at the Max Planck Institute for Human Development in Berlin, I got interested in the evolutionary basis of forgetting. This was partly inspired by a great paper by Lael Schooler, who was a senior scientist there at the time. In 1991, together with John Anderson, he wrote a paper on a rational account of forgetting, in which they convincingly argue that memory reflects the statistics of the environment. In other words, the brain tries to predict the probability that it will need a certain memory in the future. Pantelis Analytis, who was doing a PhD there at the time, and I had a slightly different idea: that the non-stationarity of the environment could explain forgetting. We tried to think of an interesting way of tackling this question, but we didn’t manage to produce anything concrete. A couple of papers that do justice to this idea have appeared lately, see here and here.

I kept thinking about the evolutionary basis of cognition, however, and I was happy that two ambitious undergrad students at UPF who were interested in evolutionary learning found me - Nil Adell Mill and David Pere Tomas. Together, we tried to identify the characteristics of environments that favour the evolution of a learning capacity. We didn’t manage to get any serious results, which was largely my fault - the topic was not close enough to my area of expertise for me to provide them with quality guidance (supervision lesson learned!). Anyhow, it was a small surprise to find that there is relatively little literature on this topic. Of the papers we did find, one stuck with me in particular:

G. E. Hinton and S. J. Nowlan (1987) “How learning can guide evolution”, Complex Systems, 1 (3), 495-502. [pdf]

The article presents a simple, yet very insightful computational example of the interaction between individual and evolutionary learning, illustrating a clear benefit of learning at the individual level for evolutionary search. The authors use an extreme scenario where the optimization landscape is completely flat except for a single peak. In such an environment, the curvature of the optimization surface does not provide any guidance and evolutionary learning alone cannot find the maximum. Through a simulation the authors show that such a hostile environment can be tackled by a combination of evolutionary and individual learning. If organisms can learn during their lifetime, those organisms whose genomes are closer to the targeted peak will be able to find the peak through individual learning. Such organisms will have higher fitness and transmit their genes to the next generation. In effect, the capacity for individual learning “creates a hill” leading to the peak that evolutionary learning can climb, as illustrated in Figure 1 in the article (note that the captions of the figures are switched: the caption of Figure 1 is incorrectly placed under Figure 2, and vice versa). Such indirect influence of individual learning on evolutionary learning is often called the Baldwin effect.
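The core of the scheme fits in a few lines of code. Below is a minimal Python sketch of one generation (my actual replication is in R; the constants - 1000 agents, 20 alleles, 1000 learning trials - and the 1 + 19n/1000 fitness form follow my reading of the article, while the single-point crossover and helper names are my own assumptions):

```python
import random

N_AGENTS, N_GENES, N_TRIALS = 1000, 20, 1000

# Target pattern associated with the reward (a random bit string here).
TARGET = [random.randint(0, 1) for _ in range(N_GENES)]

def make_agent():
    # Alleles are 0, 1 or '?' with probabilities 0.25, 0.25 and 0.5.
    return [random.choices([0, 1, "?"], weights=[1, 1, 2])[0]
            for _ in range(N_GENES)]

def fitness(genome):
    # If any fixed allele mismatches the target, the peak is unreachable
    # during the agent's lifetime and it keeps the baseline fitness of 1.
    if any(a != "?" and a != t for a, t in zip(genome, TARGET)):
        return 1.0
    unknown = [i for i, a in enumerate(genome) if a == "?"]
    # Individual learning: each trial sets the '?' switches at random and
    # stops as soon as the whole target is matched. Finding the peak with
    # n trials remaining yields fitness 1 + 19n/1000.
    for trial in range(N_TRIALS):
        if all(random.randint(0, 1) == TARGET[i] for i in unknown):
            return 1.0 + 19.0 * (N_TRIALS - trial - 1) / N_TRIALS
    return 1.0

def next_generation(population):
    scores = [fitness(g) for g in population]
    children = []
    for _ in range(N_AGENTS):
        # Parents are drawn with probability proportional to fitness,
        # then combined by single-point crossover.
        mum, dad = random.choices(population, weights=scores, k=2)
        cut = random.randrange(1, N_GENES)
        children.append(mum[:cut] + dad[cut:])
    return children

population = [make_agent() for _ in range(N_AGENTS)]
population = next_generation(population)  # one generation of evolution
```

Iterating `next_generation` and tracking the proportions of correct, incorrect and undecided alleles reproduces the qualitative pattern discussed below.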

The result is very neat, so for fun I coded up the simulations. If you already have R installed, you can find the code on Github and fire up the simulations yourself. I have followed the description laid out in the article, mostly in the caption of Figure 1. Each of 1000 agents has 20 alleles in its genome, randomly initialized to 0, 1 or ? (with probabilities 0.25, 0.25 and 0.5). The 0s and 1s are set by evolutionary learning - through a crossover of two parent genomes that are selected with probability proportional to their success in finding the targeted combination of alleles (a simple sequence of randomly initialized 0s and 1s) associated with a reward. Alleles with ? are switches that an individual agent can set to 0 or 1 in an attempt to find the targeted genome. And here are the results; the figure below corresponds to Figure 2 from the article.

The evolution of the relative frequencies of the three possible types of allele. Correct alleles are those set by evolutionary learning that correspond to the targeted pattern, Incorrect alleles are those set incorrectly, and Undecided alleles are those left for individual learning. Proportions are means over 100 simulation runs, and the barely visible grey ribbons are standard errors.

The figure shows an obvious increase in the proportion of alleles correctly set by evolutionary learning (close to 75% by generation 50), while the proportion of incorrect alleles decreases to zero by generation 50. This is clear evidence of individual learning guiding evolutionary search, even though there is no direct transmission of the knowledge acquired during individual learning from one generation to the next. Note that with evolutionary learning alone the proportion of correct alleles would be unlikely to change at all.

Qualitatively, the results are very similar to those from the article. There is one important difference, however. In the original simulations, the proportion of Undecided alleles that are left to individual learning stays relatively constant, close to the initial 50%. The authors point this out as an interesting result, suggesting that there is little selective pressure to specify the last few connections by evolutionary learning, because those can be quickly set through individual learning. In contrast, my results above show that the proportion of Undecided alleles decreases continuously. To make sure that this is really a long-run trend, I let the populations evolve for 1000 generations. The figure below shows that by generation 1000 the proportion of Undecided alleles drops to 9%. Even though the decrease gets smaller with each generation, it does not seem to stop and flatten out by the end of the simulation.

The evolution of the relative frequencies of the three possible types of allele, over a larger number of generations. Proportions are means over 100 simulation runs, and the barely visible grey ribbons are standard errors.

What is the source of this particular difference in results? My explanation is that the original implementation is flawed, as there actually is a selection pressure to eliminate individual learning after some time. The fitness function described in the article is such that the fewer steps an agent takes during its lifetime to reach the peak, the higher its probability of mating and transmitting its genes to the next generation. The number of steps is a direct function of the number of undecided alleles in the genome. Hence, there is a selection pressure to decrease the number of Undecided alleles.

Does this mean that individual learning is doomed to be eradicated over time? Not quite, at least not in more realistic scenarios. The decrease in the usefulness of individual learning occurs in static environments, where the targeted genome stays fixed, as in these simulations. In more realistic scenarios, where environments are non-stationary and the targeted genome changes over time, individual learning would have a more stable role.

I have written up this small exercise as a proper computational replication and submitted it to ReScience the other day. ReScience is an exciting project - a Github-based journal that promotes “…new and open-source implementations in order to ensure that the original research is reproducible.” Implementing models from just reading a paper is often not that easy - equations do not equal implementation, and a lot of details might be missing. This creates an unnecessary barrier to learning, as I quickly realized when I started out with computational modelling during my PhD. Hence, I’m really happy that ReScience has been started and that I can contribute to the transmission of the computational modelling “voodoo”.

The repository of the whole replication can be found here, and the accompanying article is here.

Update, Sep 5, 2017: I ran into a very good discussion of the limitations of the simulations from the Hinton & Nowlan (1987) paper [link]. A very informative read, with pointers to a host of useful references.

Update, Sep 14, 2017: The replication has just been published at ReScience, and you can even see the whole review process! [link]:

Stojic, Hrvoje. “[Re] How learning can guide evolution”. ReScience, 3 (1), 2017. DOI: 10.5281/zenodo.890884