Evolutionary Algorithm – The Surprising and Incredibly Useful Alternative to Neural Networks

Pranav Dar 07 May, 2019 • 3 min read

Overview

  • The evolutionary algorithm technique could significantly change the way we build deep learning models
  • It has been around for a number of years; the latest research comes from the University of Toulouse
  • Their algorithm outperformed deep learning systems in Atari games, and did so in far less time

 

Introduction

Neural networks have become the be-all and end-all of machine learning models. No matter which research lab's blog you read (DeepMind, Google AI, Facebook's FAIR, etc.), most of the latest research has a neural network at the core of the system.

From facial recognition and object detection to beating humans at board and video games, neural networks have developed an aura and power of their own. The concept has been around for decades, but has gained massive popularity in recent years thanks to advances in technology and hardware. These neural nets are, loosely speaking, based on how our brain works.

But a different kind of algorithm, the evolutionary algorithm, could significantly change the way we build and design deep learning models. Instead of trying to map the neurons of a human brain, this approach is based on evolution, the very process that shaped the human brain itself. Such an evolutionary algorithm has now been used to beat deep learning powered machines at various Atari games.

 

How does it work?

The evolutionary algorithm approach begins by generating completely random code (tons of versions of it, in fact). These code pieces are then tested to check whether they achieve the intended goal. As you can imagine, most of them are scrappy and make no sense because of their random nature.
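To make this concrete, here is a minimal Python sketch of that first step. It is a toy stand-in, not the representation used in the actual paper: a "program" here is just a random string scored against a made-up target, which keeps the generate-and-test idea visible without any of the real machinery. The target string, gene pool, and population size are all illustrative choices.

```python
import random
import string

# Toy stand-in for "the intended goal" in this sketch
TARGET = "print('hello')"
GENES = string.ascii_lowercase + "()' "   # characters candidates are built from

def random_candidate(length=len(TARGET)):
    """Generate a completely random piece of 'code'."""
    return "".join(random.choice(GENES) for _ in range(length))

def fitness(candidate):
    """Test a candidate: count how many characters match the goal."""
    return sum(c == t for c, t in zip(candidate, TARGET))

# Tons of versions of code, all generated at random
population = [random_candidate() for _ in range(200)]
```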

But eventually some pieces of code turn out to be better than the rest. These pieces are then used to reproduce a new generation of code, mutated rather than copied verbatim, because identical copies would defeat the purpose. Each new generation is tested in turn, and the process repeats until a piece of code emerges that is better than anything before it at solving the problem. Can you now see how this relates to the evolution of the human brain?
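Continuing the toy sketch above, the selection-and-reproduction loop looks like this. The number of parents kept and the mutation rate are again arbitrary illustrative values, not settings from the paper.

```python
def mutate(candidate, rate=0.05):
    """Copy a parent with small random changes; identical copies would defeat the purpose."""
    return "".join(random.choice(GENES) if random.random() < rate else c
                   for c in candidate)

generation = 0
while max(fitness(c) for c in population) < len(TARGET):
    # Keep the pieces of code that score better than the rest...
    parents = sorted(population, key=fitness, reverse=True)[:20]
    # ...and use them to reproduce the next generation
    population = [mutate(random.choice(parents)) for _ in range(200)]
    generation += 1

print(f"Solved after {generation} generations:", max(population, key=fitness))
```

On a typical run this converges within a few hundred generations. The real systems differ mainly in evolving genuinely executable programs and in far more expensive fitness tests, such as actually playing an Atari game.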

The algorithm outperformed deep learning systems by a comfortable margin. The best part? It did so far quicker than any deep learning system out there!

Read more about this algorithm in MIT Technology Review's article, and make sure to read the highly detailed research paper, published by Dennis Wilson and his colleagues at the University of Toulouse.

 

Our take on this

This evolutionary approach has been around for a while, but advances in deep learning pushed it into the back seat. This research has brought some attention back to it. Apart from taking less training time, the evolved code is fairly easy to interpret, because evolution tends to produce small blocks of code. And interpretability is a MAJOR issue these days.

Are data scientists working on deep learning missing out on this technique? This research certainly puts the evolutionary algorithm right in the middle of the debate. It’s definitely worth checking out.

 


 

Pranav Dar 07 May 2019

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.


Responses From Readers


John W 25 Jul, 2018

Awesome post! I believe a data scientist should always learn some new tricks or... alternatives to algorithms. The Evolutionary Algorithm seems awesome, both in how it's developed and in its potential to change the way we build and design deep learning models.

Sean O'Connor 29 Jul, 2018

Here are some ideas:

"Quantization is the enemy of evolution. It is fortunate that biological systems are heavily quantized, especially in bacteria and viruses. An atom is there or not, discrete point mutations are there or not, a plasmid is there or not. If the cost landscape were not so heavily quantized we simply wouldn't exist.

The crossover mechanism higher animals use is a weak optimizer, but it does make the cost landscape less rough than what asexual microbes have to contend with. Hence we can adapt to pathogens despite having a far longer time between generations and a far lower population count. It is also true (I think) that having a larger genome reduces the roughness of the cost landscape by giving more degrees of freedom.

In a non-quantized artificial system a perturbation in any of the basis directions gives a smoothly changing alteration in cost. A mutation in all dimensions gives a new cost that is a summary measure of multiple clues. Following mutations downhill in cost means following multiple clues about which way to go. If there were quantization in many basis directions, a small movement in those directions would give you no information about whether such a movement was good or bad. You would get no clues in those directions, and fewer clues overall, which is obviously detrimental.

A point here being that artificial evolution on digital computers can be far more efficient than biological evolution. If you accept that back propagation is in some sense a form of evolution (at a slight stretch), then you can see that a GPU cluster can build in a few weeks the capacity to do vision that took biological evolution many millions of years to create. I have some kind of code here: https://github.com/S6Regen/Thunderbird"

"Here's a link for crossover being a weak optimizer: https://youtu.be/WoamKUfisVM It is actually there to allow non-lethal mixability of traits. And that ends up implementing the multiplicative weights update algorithm, or so they say. You might ask then, why are fungi not more lethal pathogens, given what I said and that they reproduce by crossover. I don't really know, but I presume it has to do with crossover being a weak optimizer, and maybe they have a smaller number of genes than a large animal.

A neural network can have squashing activation functions or non-squashing ones. What I noticed from my experiments with associative memory is that squashing type activation functions result in attractor states / error correction / (soft) quantization. That seems to be difficult for evolution to deal with, especially if you use hard binary threshold activation functions (the ultimate squashing function). On the other hand, nets with non-squashing activations are very easy to evolve, and result in reasoning in sparse patterns. Of course, just because evolution favors non-squashing activation functions does not mean they are the best possible ones to use. It could be that squashing ones are, if you had a suitable algorithm."
