
THE HISTORY OF NEURAL NETWORKS!

This article was published as a part of the Data Science Blogathon.
Photo by Andrew Neel on Unsplash

As the craze for deep learning grows among students and industries alike, thanks to the results it can achieve and its huge impact on society and business, let us explore its roots: how it all started, and why it was even needed in the first place. Understanding the history helps us appreciate where we are today. We will cover only some of the most important events, not every one, because firstly this is not a history lecture xD, and secondly most of us would lose track. So I'll try to keep it precise and interesting.

 

The seed is planted!

Let’s start from the very beginning, when the idea first came into being. You might think that since Deep Learning has flourished only recently, it began some 20-30 years ago, but let me tell you that it all started about 78 years ago. Yes, you read that right: the history of Deep Learning is often traced back to 1943, when Walter Pitts and Warren McCulloch created a computer model based on the neural networks of the human brain. They used a mixture of algorithms and mathematics they called “threshold logic” to mimic the thought process.

Since then, Deep Learning has evolved steadily, with only two significant breaks in its development. Both were tied to the infamous Artificial Intelligence Winters.

 

Seed Sprouting is visible!

During the Cold War, American scientists were trying to translate Russian into English, and a great deal of research on intelligent machines was carried out by some of the greatest mathematicians, such as Alan Turing (often known as the Father of Modern Computing), who created the Turing Test for evaluating the intelligence of a machine. In 1958, the mathematician Frank Rosenblatt came up with the very first neural network-based model, the Perceptron. It is similar to the machine learning model Logistic Regression, with a slightly different loss function.

 

Inspiration: Biological Neuron

History shows that we are always inspired by nature, and this case is no different: the perceptron is highly inspired by the biology of our brain. At that time, researchers had only a very basic understanding of how neurons in our brains work. So let me first introduce you to the biological neuron.

At the surface level, a biological neuron consists of three main parts: the nucleus (in the cell body), dendrites, and an axon. Electrical signals/impulses are received by the dendrites connected to the cell body, where some processing is done, and finally a message is sent out in the form of an electrical signal to the rest of the connected neurons through the axon. This is the simplest explanation of how a biological neuron works; those who study biology will know how massively complex the real structure is and exactly how it operates.

So those mathematicians and scientists came up with a way to represent this biological neuron mathematically: a body receives n inputs, each with its own weight, since not all inputs are equally important for producing the output. The output is nothing but the result of applying a function to the sum of the products of the inputs and their respective weights. Since this idea of the perceptron is far from the complex reality of a biological neuron, we can say it is only loosely inspired by biology.
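To make the weighted-sum idea concrete, here is a minimal sketch of a perceptron in Python. This is an illustration of the concept, not Rosenblatt's original implementation; the learning rule, learning rate, and AND-gate data are choices made for this example:

```python
# A minimal perceptron: output = step(w·x + b),
# i.e. a threshold applied to the weighted sum of the inputs.

def perceptron(x, w, b):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def train(samples, epochs=10, lr=1):
    """Perceptron learning rule: nudge the weights toward misclassified points."""
    n = len(samples[0][0])
    w, b = [0] * n, 0
    for _ in range(epochs):
        for x, target in samples:
            error = target - perceptron(x, w, b)  # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learn the (linearly separable) AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([perceptron(x, w, b) for x, _ in data])  # [0, 0, 0, 1]
```

Note that a single perceptron can only separate classes with a straight line (a hyperplane), which is why it can learn AND but never XOR; that limitation is part of what motivated multi-layer networks.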

 

It’s a Sapling now!

Then came the era when people asked: why can't we create a network of connected neurons, again inspired by the biological brains of living creatures like human beings, monkeys, and ants, i.e., a structure of interconnected neurons? Many attempts were made from the 1960s onward, but success came with a seminal 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams (Hinton has since made phenomenal contributions to the field of machine learning and AI).

So they came up with the Backpropagation algorithm. In a nutshell, we can remember this algorithm as the chain rule of differentiation. It not only made the training of Artificial Neural Networks possible but also created an AI hype: people talked about it all day and thought that within the next 10 years a machine would be able to think like a human.
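To see why backpropagation really is "just" the chain rule, here is a toy sketch: a network with one hidden unit and two weights (my own miniature example, not the formulation from the 1986 paper), where the gradient of the loss is built by chaining local derivatives and then checked against a finite-difference approximation:

```python
# Backpropagation in miniature: the chain rule applied to a two-layer net.
# Network: y = w2 * sigmoid(w1 * x); loss L = (y - t)^2.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)          # hidden activation
    return h, w2 * h             # (hidden, output)

def grads(x, t, w1, w2):
    """Gradients of L = (y - t)^2 w.r.t. w1 and w2, via the chain rule."""
    h, y = forward(x, w1, w2)
    dL_dy = 2.0 * (y - t)        # dL/dy
    dL_dw2 = dL_dy * h           # dL/dw2 = dL/dy * dy/dw2
    dL_dh = dL_dy * w2           # propagate the error back through w2
    dh_dz = h * (1.0 - h)        # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    dL_dw1 = dL_dh * dh_dz * x   # chain rule all the way down to w1
    return dL_dw1, dL_dw2

# Sanity check against a central finite-difference approximation.
x, t, w1, w2 = 0.5, 1.0, 0.3, -0.7
g1, g2 = grads(x, t, w1, w2)
eps = 1e-6
num_g1 = ((forward(x, w1 + eps, w2)[1] - t) ** 2
          - (forward(x, w1 - eps, w2)[1] - t) ** 2) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)  # True: analytic and numeric gradients agree
```

In a real network the same backward pass runs layer by layer over vectors and matrices, but the principle is exactly this chaining of local derivatives.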

Even though it created such hype, it all washed away in the 1990s, and this period came to be known as the AI Winter: people had hyped the technology so much, but its actual impact at the time was marginal. What do you think could be the reason? Before I disclose it, I would like you to give it a shot.

Think…

Think…

Okay, here you go.

Photo by Lorenzo Herrera on Unsplash

 

Powdery Mildew on the plant!

Even though the mathematicians had come up with the beautiful Backpropagation algorithm, computational power and data were both lacking in the 1990s, and the hype eventually died after the US Department of Defense stopped funding AI, seeing only marginal impact over the years despite all the excitement. Machine learning algorithms like SVM, Random Forest, and GBDT then evolved and became extremely popular from roughly 1995 to 2009.

 

Mature Tree with Flowers!

While everybody moved on to algorithms like SVM, Geoffrey Hinton still believed that true intelligence would be achieved only through Neural Networks. So for almost 20 years, from 1986 to 2006, he kept working on neural networks, and in 2006 he came up with a phenomenal paper on training deep neural networks. This was the beginning of the era known as Deep Learning, though the paper did not receive much attention until 2012.

You might wonder what made deep neural networks so popular in 2012. That year, a competition was held on ImageNet, a dataset created by researchers at Stanford; it was one of the hardest problems of the time, consisting of millions of images, with the task of identifying the objects in each image. Recall that by 2012 people had an enormous amount of data, and computation was very powerful compared to what was available in the 1980s. A deep neural network outperformed every machine learning algorithm in this competition.

This was the moment when big tech giants like Google, Microsoft, Facebook, and others started seeing the potential of Deep Learning and began investing heavily in the technology.

Photo by Bram Van Oost on Unsplash

 

Mature Tree with Fruits!

Today, some of the most popular use cases of Deep Learning include voice assistants like Google Assistant, Siri, and Alexa, which are all powered by deep learning. Tesla's self-driving cars are also possible because of advances in deep learning, and it has further applications in the healthcare sector. I strongly believe there is still a lot of potential in Deep Learning that we will experience in the coming years.

Here are some of my social profiles you may want to visit:

LinkedIn: https://bit.ly/3ltBarT

Github: https://bit.ly/3rQAYoH

Medium: https://bit.ly/3a66Jn1

 

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
