Deep Learning: How It Started, What Its Different Variants Are & How It Disrupted Existing Technology

Mrinal Singh Last Updated : 15 Jun, 2021
5 min read

This article was published as a part of the Data Science Blogathon


Evolution Of Deep Learning

DeepMind was acquired by Google in 2014 and became a subsidiary of Alphabet Inc. when Alphabet was formed in 2015. In 2016, DeepMind's AlphaGo, a program built on deep learning that teaches itself to play the board game Go, defeated the professional player and world champion Lee Sedol. Since then, the popularity of deep learning has surged manyfold, and the adoption of deep learning in high-end technology was expected to grow even further through 2020 and 2021.

How Is Deep Learning Different From Other Conventional Machine Learning Algorithms? What Is It, Actually?

Deep learning mimics the human brain and sits at the core of AI. Its execution is supported by high-end GPU/TPU processing applied to massive data, which in turn helps find the patterns that drive decision making. It supports both supervised and unsupervised learning, and the technique is especially well suited to unstructured data, particularly imagery.
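As an illustration of the imagery point above, here is a minimal sketch of a small convolutional network in Keras (the framework named later in this article); the input shape and class count are illustrative assumptions, not details from the article.

```python
# A minimal sketch of a small CNN for image data.
# The 64x64 RGB input and 10 classes are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed 64x64 RGB images
    layers.Conv2D(32, 3, activation="relu"),  # learn local visual features
    layers.MaxPooling2D(),                    # downsample feature maps
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # assumed 10 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```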

So What’s The Catch Here With Deep Learning?

It learns from huge amounts of image data (satellite imagery, brain tumour images, cancer images, etc.), which is typically unstructured. The remarkable part is that a human might take years to learn what these models pick up.

Significant Applications Of Deep Learning

Deep learning has gained much attention in recent times. It is an advanced machine learning technique that combines a class of learning algorithms with many layers of nonlinear processing units.

  • It can extract features from data and, with the help of supervised and unsupervised learning techniques, represent varying levels of abstraction.
  • Its applications primarily relate to vision, speech, natural language processing, driverless cars, and gene expression.
  • Deep learning has been applied widely to image and vision analysis. Identifying arbitrary objects, such as faces, humans, and animals, within a larger image is a tough task, and deep learning is now used for these kinds of recognition problems as well.
  • Further, it is also applied to semantic image segmentation, deep visual residual abstraction, and brain-computer interfaces.
  • Ongoing research explores hierarchical deep learning networks as game-playing agents using regret matching.
  • It is significant to incorporate fast GPUs, which help the learning algorithms gain experience quickly; otherwise, learning the patterns in the data takes far too long (a quick availability check follows this list).
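Picking up the last point, here is a quick, illustrative snippet using the standard TensorFlow API to confirm that the framework actually sees a GPU before training:

```python
# Check whether TensorFlow can see any GPU devices.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs available: {len(gpus)}")  # 0 means training falls back to CPU
```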

The popularity of mainstream AI is mainly due to the extensive use of deep learning.

Why Is Deep Learning Ahead Of Conventional Machine Learning?

Conventional machine learning and other technologies struggle with unstructured data, and the reality is that the majority of the world's data is unstructured. The second important point is feature extraction. On both counts, deep learning has delivered. Deep learning, an advanced form of machine learning, has captured academia and industry in recent times and is far ahead in feature extraction, especially in extracting information from unstructured data such as images. It detects nonlinear features by nature and can model relationships among variables by approximating arbitrary functions. In feature learning, deep learning is far ahead of the alternatives.
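To make the function-approximation point concrete, here is a minimal sketch of a small dense network fitting a nonlinear target; all sizes and hyperparameters are illustrative choices of mine, not values from the article.

```python
# A small MLP learns to approximate a nonlinear function (sin).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

x = np.linspace(-3.0, 3.0, 512).reshape(-1, 1).astype("float32")
y = np.sin(x)  # nonlinear target the network must discover

model = models.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(32, activation="tanh"),  # nonlinear hidden units
    layers.Dense(32, activation="tanh"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=200, verbose=0)
print("final MSE:", model.evaluate(x, y, verbose=0))
```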

Variants of deep learning architectures such as deep neural networks (DNNs), deep belief networks (DBNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs) have been exploited in computer vision, speech recognition, NLP, sound classification, and audio recognition over the past five years. Consequently, the technology has had a vast industry impact that experts have watched very closely: from high-performance workstations, security products, computer vision, driverless cars, high-level image classification and segmentation, and robotics to many other image-related problems, and especially data imbalance problems, deep learning's capability has consistently delivered positive results.
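As a small illustration of one variant named above, here is a hedged sketch of an RNN (an LSTM) for a toy NLP-style classification task; the vocabulary size, sequence length, and binary task are assumptions for illustration only.

```python
# A minimal LSTM classifier over sequences of token ids.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100,), dtype="int32"),        # assumed 100-token sequences
    layers.Embedding(input_dim=5000, output_dim=64),  # assumed 5000-word vocabulary
    layers.LSTM(64),                                  # recurrent layer reads the sequence
    layers.Dense(1, activation="sigmoid"),            # binary label, e.g. sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```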

The term disruption was coined by the American scholar Clayton M. Christensen and his collaborators in 1995. Disruptive technology is believed to have a far-reaching impact and to dominate the 21st century.

What Is A Disruptive Technology?

From a business perspective, a disruptive innovation is a technology that builds a fresh market. It adds value to society, and in the process people give up the technology they were using. In simple language, disruptive technology replaces the current one, changes the existing product's fate, and eventually removes that product from the market, at a heavy price to the incumbent companies. Everything tied to the old technology vanishes: employees, alliances, product design protocols, customers, and revenues.

Why Can Deep Learning Disrupt The Existing Technologies?

Below, I have jotted down a few points on why deep learning qualifies as a disruptive technology.

  1. Deep learning provides a solid feature engineering platform even with massive amounts of image or video data; in other words, it can deliver classification or critical segmentation solutions for image-related problems.
  2. It can handle unstructured data with a fair amount of efficiency.
  3. Data labelling becomes immaterial to deep learning, since it can apply unsupervised algorithms as well (see the autoencoder sketch after this list).
  4. Feature engineering has become more accessible, thanks to deep learning.
  5. With high-end processors such as GPUs/TPUs, alongside the easy availability of cloud computing, fast training is no big deal nowadays, and deep learning has produced excellent results.
  6. There is far less fear of the data imbalance phenomenon: data augmentation and other preprocessing techniques can be incorporated cleanly into a deep learning pipeline (see the augmentation sketch after this list).
  7. Python, with the Keras and TensorFlow frameworks, provides a quality foundation for developing efficient models for high-level feature engineering problems.
  8. Earlier, the available packages for deep learning in Python or R were complex. With the newer PyTorch, TensorFlow, and Keras frameworks, implementing algorithms such as DNNs, CNNs, RNNs, and GANs has become much more manageable.
  9. The difficulty with processing power has also largely vanished, especially with the free Google Colab.
  10. Google Colab, a Jupyter-like notebook, has considerably reduced processing time and simplified the whole programming environment. Things are much handier and more attractive now.
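To illustrate point 3, here is a minimal sketch of unsupervised training with an autoencoder, where the inputs double as the targets; all dimensions and the dummy data are illustrative assumptions.

```python
# An autoencoder trains without labels: it reconstructs its own input.
import tensorflow as tf
from tensorflow.keras import layers, models

autoencoder = models.Sequential([
    layers.Input(shape=(784,)),               # e.g. flattened 28x28 images
    layers.Dense(32, activation="relu"),      # compressed representation
    layers.Dense(784, activation="sigmoid"),  # reconstruction of the input
])
autoencoder.compile(optimizer="adam", loss="mse")

x = tf.random.uniform((256, 784))            # dummy unlabeled data for illustration
autoencoder.fit(x, x, epochs=5, verbose=0)   # note: inputs are also the targets
```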
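And for point 6, a minimal sketch of on-the-fly image augmentation with Keras preprocessing layers (available in TensorFlow 2.6+); the specific transforms and parameters are illustrative, not prescriptive.

```python
# Random augmentation layers applied to a batch of images.
import tensorflow as tf
from tensorflow.keras import layers, models

augment = models.Sequential([
    layers.RandomFlip("horizontal"),  # mirror images left-right
    layers.RandomRotation(0.1),       # rotate by up to ±10% of a full turn
    layers.RandomZoom(0.1),           # zoom in/out by up to 10%
])

images = tf.random.uniform((8, 64, 64, 3))   # dummy batch for illustration
augmented = augment(images, training=True)   # training=True enables the randomness
print(augmented.shape)                       # (8, 64, 64, 3)
```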

Thanks for reading my article. Kindly comment, and don't forget to share this blog, as it will motivate me to deliver more quality blogs on ML and DL-related topics. Thank you so much for your help, cooperation, and support!

About Author

Mrinal Walia is a professional Python developer with a Bachelor's degree in Computer Science, specializing in Machine Learning, Artificial Intelligence, and Computer Vision. In addition, Mrinal is an interactive blogger, author, and geek with over four years of experience in his work. With a background spanning most areas of computer science, Mrinal currently works as a Testing and Automation Engineer at Versa Networks, India. He aims to reach his creative goals one step at a time and believes in doing everything with a smile.

Medium | LinkedIn | ModularML | DevCommunity | Github


The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

