Artificial Intelligence Demystified

guest_blog 24 Dec, 2016 • 11 min read

Introduction

Artificial Intelligence has become a very popular term today. Hardly a day goes by without a newspaper article on revolutionary advancements in the field. But there seems to be some confusion about what AI really is.

Is it Robotics? Will the Terminator movie actually come true? Or is it something that has crept into our daily lives without us even realizing it?

This article will give you a broad understanding of the buzzwords associated with AI, its applications, the careers and opportunities it offers, and its future.

What is Artificial Intelligence?

Artificial Intelligence is simply the ability of a computer to exhibit “intelligence”. This intelligence can either mimic human intelligence or observe real-world problems and intelligently find solutions for them.

 

[Infographic: 10 Major Milestones in the History of AI]

Did you know? Chef Watson, a part of IBM’s Watson program, can now cook for you! This AI cooking app uses algorithms to choose a quirky set of ingredients (the user can pick them too) and comes up with the perfect recipe. So, Bon Appétit!

 

Buzzwords associated with AI

1. Machine Learning

Machine Learning is a field in Data Science where machines can “learn” by themselves, without being explicitly programmed by humans. By analyzing past data, called “training data”, the Machine Learning model finds patterns and uses these patterns to make future predictions. The precision of predictions made using ML models keeps improving.

[Infographic: The 5 Steps in the Machine Learning Process]

Machine Learning Techniques

Machine Learning is used in practically every field these days, even though some of its uses are not always obvious. The main techniques of Machine Learning are listed below (a short code sketch follows the list):

  • Classification: Based on training data with observations of known categories, classification predicts the category to which a new observation belongs. E.g.: predicting which class a house’s price falls into: very costly, costly, affordable, cheap or very cheap.
  • Regression: Predicting a value from a continuous range. E.g.: predicting the price of a house based on factors such as location, size, time of purchase, etc.
  • Clustering: Assigning a set of observations into subsets (i.e. clusters) so that observations in the same cluster are similar in some sense. E.g.: Netflix (an online streaming company) grouping viewers into clusters, where people with similar viewing habits fall in the same cluster.
  • Recommendation Systems: Use ML algorithms to help users discover new products/services based on data about the user or the product/service. E.g.: Netflix recommending a movie based on the watching patterns of people in your cluster, or Amazon suggesting products based on popularity.
  • Anomaly Detection: Identifying observations that do not conform to an expected pattern or to other items in a dataset. E.g.: an outlier (i.e. an anomaly) in credit card transactions could be a potential banking fraud.
  • Dimensionality Reduction: Reducing the number of random variables under consideration to obtain a smaller set of significant variables.
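
Here is a minimal sketch of three of these techniques (clustering, dimensionality reduction and anomaly detection) using scikit-learn on synthetic data. The library choice, the made-up data and all parameters are illustrative assumptions, not something prescribed by the techniques themselves.

```python
# A minimal, illustrative sketch of clustering, dimensionality reduction
# and anomaly detection with scikit-learn on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
# Two artificial "viewer groups" with different behaviour, 5 features each
X = np.vstack([rng.normal(0, 1, (100, 5)),
               rng.normal(5, 1, (100, 5))])

# Clustering: group observations that are similar in some sense
clusters = KMeans(n_clusters=2, random_state=42).fit_predict(X)

# Dimensionality reduction: keep the 2 most significant directions
X_2d = PCA(n_components=2).fit_transform(X)

# Anomaly detection: flag observations that do not fit the pattern
# (think of a suspicious credit card transaction); -1 marks an outlier
outliers = IsolationForest(random_state=42).fit_predict(X)

print(clusters[:5], X_2d.shape, (outliers == -1).sum())
```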

[Infographic: The Three Types of Machine Learning]

Popular Machine Learning Algorithms

Today, Machine Learning is probably the most important field in AI. Hence, several Machine Learning algorithms have been devised, each solving a particular type of problem. Each algorithm falls into one of the three types of learning (supervised, unsupervised or reinforcement learning). The most popular Machine Learning algorithms are listed below, with a quick sketch after the list showing a few of them in action:

  1. Linear Regression
  2. Logistic Regression
  3. Support Vector Machines
  4. Decision Trees
  5. Random Forest
  6. Artificial Neural Networks
  7. K-Means Clustering
  8. K-Nearest Neighbour
  9. Naive Bayes Classifier
  10. Ensemble Learning
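
As promised, here is a quick, hedged sketch fitting several of these algorithms on the classic Iris dataset with scikit-learn. It is meant only to show how interchangeable the algorithms are behind a common interface; the dataset choice and default parameters are illustrative.

```python
# Fitting several of the algorithms listed above on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "K-Nearest Neighbour": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_train, y_train)                # learn from training data
    print(f"{name}: {model.score(X_test, y_test):.2f}")  # accuracy on unseen data
```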

Did you know? A Scottish cartoonist has taken Machine Learning to a whole new level by creating an intelligent program that can write scripts for “Friends”! By gathering Big Data (dialogues from all 10 seasons) and using recurrent neural networks, he generated scripts for all-new episodes of this popular sitcom!

 

2. Deep Learning

What is Deep Learning?

Deep Learning is a branch of Artificial Intelligence that is producing life-changing results. Deep Learning refers to neural networks with a large number of hidden layers. It is an attempt to replicate the functioning of the human brain. And just as the exact functioning of the human brain is unknown, not much is known about the exact inner workings of Deep Learning either. It is like a black box: the input and output can be seen and are known, but the internal working is a mystery! Interestingly, Data Scientists believe that if we crack the working of deep nets, we will be closer to understanding how the human brain works!
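
To make “a large number of hidden layers” concrete, here is a minimal sketch of a small multi-layer network on scikit-learn’s digits dataset. A real deep-learning system would use far more layers and a GPU framework; this toy version, with illustrative layer sizes, only shows the idea.

```python
# A toy "deep" network: a multi-layer perceptron with three hidden layers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers of 64 units each; their learned internal features
# are the "black box" part described above.
net = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=500,
                    random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```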

 

Where & How is Deep Learning used?

Today, Deep Learning has applications in Natural Language Processing, Image Recognition (explained in a later section), Spam Filtering, Fraud Detection, etc. This is just a fraction of what Deep Learning can do! Google’s search engine, Facebook’s photo tagging feature, Baidu’s speech recognition: all involve Deep Learning behind the scenes. As these companies invest more and more in this area, the advancements in the field are mind-boggling!

1. Google: Apart from optimizing search results, Google uses Deep Learning in a variety of vital but lesser-known areas. Google Brain and Google DeepMind, its two research arms, are working intensely to push AI to greater heights. Google has been actively researching virtually all aspects of machine learning, from deep learning to more classical algorithms.

AlphaGo, a project of Google’s DeepMind, is perhaps one of the most popular breakthroughs in Deep Learning. Go is a board game in which players place stones to surround territory. It is a game of intense complexity, estimated to be around 10^100 times more complex than chess! The algorithm in AlphaGo combines Monte Carlo Tree Search with deep neural networks and uses a Reinforcement Learning approach to improve its results.

How AlphaGo works: AlphaGo is built using two different neural-network “brains” that cooperate to choose its moves. These brains are multi-layer neural networks which are almost identical in structure to the ones used for classifying pictures for image search engines like Google Image Search. They start with several hierarchical layers of 2D filters that process a Go board position just like the way an image-classifying network processes an image. Roughly speaking, these filters identify patterns and shapes. After this filtering, 13 fully-connected neural network layers produce judgments about the position they see. Broadly, these layers perform classification or logical reasoning.

The networks are trained by repeatedly checking their results and feeding back corrections that adjust the numbers to make the network perform better. This process has a large element of randomness, so it’s impossible to know exactly how the network does its “thinking”, only that it tends to improve after more training.
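
The “feeding back corrections” described above is, in practice, gradient descent. The toy NumPy sketch below (a single weight fitted to a made-up target, nothing AlphaGo-specific) shows the check-and-correct loop in its simplest form.

```python
# Toy gradient descent: adjust one weight w so that w * x matches a target.
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x                      # the "right answer" the model should learn

w = 0.0                          # initial guess
for step in range(50):
    pred = w * x
    error = pred - y             # check the result
    grad = 2 * np.mean(error * x)
    w -= 0.1 * grad              # feed back a correction
print("learned weight:", round(w, 2))   # approaches 3.0
```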

Did you know? In March 2016, AlphaGo beat the legendary Go player Lee Sedol with a score of 4-1, a feat previously believed to be at least a decade away.

 

2. Facebook: Facebook AI Research (FAIR) focuses on using Deep Learning to improve the social networking experience. FB is trying to build more than 1.5 billion AI agents, one for every Facebook user. The social media giant’s Applied Machine Learning team built an internal platform called FBLearner Flow. It combines several machine learning models to process billions of data points drawn from the activities of its 1.5 billion users, making predictions about user behaviour that keep them glued to Facebook for hours!

For example, the algorithms created from FBLearner Flow’s models help define your news feed, the advertisements you see, the people you may know, and much more!

Therefore, in the AI war between Facebook and Google, there isn’t a winner, as the research concentrations and applications are quite different in nature.

 

3. Natural Language Processing

Natural Language Processing is the set of techniques by which computers interpret and process human language. Siri, Cortana and Alexa are all examples of NLP that we use every day. So how does Artificial Intelligence fit into NLP? Consider how you would learn a new language. You start by learning new words and their usage, but you will not really understand what works and what doesn’t until you are exposed to the language and learn from real usage. Deep Learning is applied to NLP in much the same way. The computer “learns” using a technique called “embeddings”, in which words and phrases are mapped to vectors of real numbers; this mapping is learned by neural networks.
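
Here is a minimal sketch of such embeddings, using gensim’s Word2Vec on a toy corpus. The library choice, the gensim 4.x parameter names and the tiny corpus are all illustrative assumptions.

```python
# Learning word embeddings with Word2Vec on a tiny toy corpus.
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]
model = Word2Vec(corpus, vector_size=20, window=3, min_count=1,
                 workers=1, seed=0)

vec = model.wv["cat"]                     # a word mapped to a vector of reals
print(vec.shape)                          # (20,)
print(model.wv.similarity("cat", "dog"))  # closeness of the two vectors
```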

 

How does Siri use Natural Language Processing?

NLP forms the heart and soul of Siri. When a user asks Siri something, the sequence of actions is as follows. Through voice recognition, Siri first uses a discretization algorithm to turn your voice into digital data. Next, your question is routed through Apple’s servers, where a decision flowchart is run on it to find a possible answer. This step is easy enough for simple sentences like “What is the weather like today?”. But it becomes difficult for sentences like “Will Larry be attending the meeting today?”, because such a thought process is hard for a machine to follow. This is where NLP comes into play: it breaks the command down into tokens and uses syntactic analyzers to parse and understand the sentence. In addition, Machine Learning algorithms are used to optimize the results and learn from past results. Finally, the results are returned to the user.
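
The tokenize-and-analyze step can be sketched with NLTK. This is only a schematic stand-in, not Apple’s actual pipeline, and the resource names assume a standard NLTK installation.

```python
# A schematic sketch of tokenization and shallow syntactic analysis.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

command = "Will Larry be attending the meeting today?"
tokens = nltk.word_tokenize(command)   # break the command into tokens
tagged = nltk.pos_tag(tokens)          # label each token's part of speech
print(tagged)
```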

Did you know? Robots can now socialize! Kismet, an emotionally intelligent robot built as an affective-computing experiment at MIT’s AI Lab, can interact by recognizing human body language and voice tone.

 

4. Pattern Recognition

As the name suggests, Pattern Recognition is a part of Artificial Intelligence that deals with recognizing patterns in data. It is used for quality and process control, and its applications include self-driving cars, neuroscience, cancer treatment and high-energy physics.

 

How do self-driving cars use pattern recognition?

The much-talked-about self-driving car collects and analyzes Big Data from sensors and maps to identify pedestrians, vehicles and other objects by their shape, size and movement pattern. After predicting what the objects around it might do next, it is designed to drive safely around them. The technologies used include radar, lidar, GPS, odometry, and computer vision.

 

How does high-energy physics use pattern recognition?

Pattern recognition is used to associate energy depositions with particles in multi-component, non-magnetic high-energy particle detectors. The detection of the Higgs boson is a great example of pattern recognition in particle physics.

Did you know? Self-driving cars come in many flavours. Google has removed steering wheels and pedals altogether and is working through the different levels of autonomy that can be achieved, whereas Tesla and Baidu are advancing the technology by gradually adding autonomous features that enable efficient driving in different environments. Tesla ships a conventional car with Autopilot (i.e. self-driving) capabilities at a claimed safety level greater than that of a human driver.

 

5. Image Analysis

Image Analysis involves extracting meaningful information from images. The idea is to imitate the human visual cortex using Machine Learning algorithms such as neural networks. Handwriting recognition, automatic image recognition and geomorphologic terrain classification (identifying surface features of the Earth or another celestial body) are some popular applications.

The ImageNet challenge is a competition, started in 2010, in which research teams submit programs that classify and detect objects and scenes. Since then, there has been remarkable progress in image recognition: in 2010 a good visual recognition program had around a 40% classification error rate, while in 2015 a deep convolutional neural network achieved about a 3.5% error rate!
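
Here is a minimal convolutional network in the spirit of those ImageNet entries, sketched with Keras on MNIST. It is far smaller than real ImageNet models; the architecture and hyperparameters are illustrative only.

```python
# A small convolutional network for digit images with Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),                  # hierarchical 2D filters
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),    # fully-connected "judgment" layers
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```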

 

How does Facebook use Image Analysis?

Image Analysis forms a big part of Facebook’s auto-tagging feature. Facial recognition software matches newly uploaded pictures against photos in which the user’s friends have already been tagged. This software uses Machine Learning algorithms such as neural nets: the algorithm is fed large amounts of training data, learns to classify and recognize people in uploaded images, and then suggests friends who might be in the photo with you. Facebook is investing heavily in AI; it recently acquired FacioMetrics, a facial image analysis start-up, to delve deeper into AI research.

These are just some of the main applications of AI. The field is huge, and there is a lot more to it!

 

Careers and Opportunities in AI

Until a few years ago, Artificial Intelligence was mainly used by the military and the government, with the help of a select few professionals in the field. But now more and more people are educating themselves and becoming proficient in the field, and businesses are realizing the improvements AI can bring. Today, AI is used in practically every industry. Some of the possible career opportunities in AI include:

  • Artificial General Intelligence: In the hierarchy, this profession sits right at the top. Companies like DeepMind are working in this field. Generally, PhD candidates with a stellar research background are chosen.
  • Data Scientist: Data Science is probably the most sought-after profession in Artificial Intelligence. The plus point is that its learning curve is not too steep. Machine Learning forms the heart of Data Science. People wanting to join the field must learn Statistics and some programming, and acquire domain knowledge.
  • Data Mining and Analysis: After Data Scientist, Data Analyst is probably the most popular job. An analyst’s work is similar to a Data Scientist’s, but with less emphasis on Statistics. Hence, people from diverse backgrounds with a strong desire and ability to learn can apply for these jobs.
  • Machine Learning Researcher: A field that few outside a Computer Science or Electrical Engineering background could handle; in fact, without a PhD, even a computer scientist would be at a disadvantage! It involves exploring new areas of Machine Learning to tackle uncharted, complex problems, and the job primarily involves research.
  • Machine Learning Applications: This involves applying Machine Learning effectively in areas where it is already in use. A graduate or master’s student could apply for these jobs.

While there are many other AI jobs, these are the most talked about ones under a broad umbrella. This is a wonderful time for anyone to start working in AI. The field is just getting started. Even if you are a beginner, learning new things every day and scaling up is the key.

 

The Future of AI

Artificial Intelligence is undoubtedly changing the world and making lives easier. But as the efficiency of AI increases, so does the concern that it is changing the world too much, along with the fear that machine intelligence will soon surpass human intelligence. The fear that The Terminator and The Matrix (movies about AI) will become reality is growing too. So, to what extent are these fears warranted? Is there any truth to them at all?

Ominously, the answer is yes. Don’t get us wrong: we don’t mean that there will be a machine uprising in the near future, making humans an obsolete, inferior species. As of now, Artificial General Intelligence is a myth. It does not exist. AI still lacks human cognitive abilities and may continue to lack them in the near future. But we cannot entirely write off the possibility. It is certainly possible, even if very unlikely. Maybe in a few decades, by the end of this century, or many centuries later, Artificial General Intelligence and Superintelligence could become a reality.

Superintelligence is the ability of a machine to seamlessly perform every task that a human can perform, and better! Thanks to perfect recall (computers have an eidetic memory, unlike humans) and the ability to multitask, such machines would fare far better than humans at practically everything. The book “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom discusses exactly this: superintelligence as a possible concept.

 

Does AI have the power to automate you?

Recently, there has been a lot of talk about AI automating away human work and disrupting millions of jobs. As of now, machines are good at tasks that involve Big Data and a great amount of iteration. Machines don’t have intuition and can’t match humans’ ability to make decisions in tricky situations.

For example, machines can analyse huge amounts of data far more accurately and quickly than a human can, but the final decision a Data Scientist makes is always a mix of data and intuition, which comes with experience.

 

End Notes

AI has been surrounded by quite a lot of controversy. On one hand, companies (not limited to tech giants) are investing millions in AI research and development. On the other hand, Stephen Hawking has voiced his concern that AI could be the end of mankind, and Elon Musk and Bill Gates have echoed this concern.

However, in the debate over whether AI is a boon or a bane, we believe boon will always win. This isn’t because we are ignorant of the catastrophic situations that could unfold if superintelligence were achieved; it is because steps are already being taken to prevent the potential hazards AI may bring with it. AI’s progress will continue only if it stays aligned with general human interest. So don’t fear it! Go ahead and enjoy the revolution.

So which camp do you belong to, the pro-AI camp or the anti-AI one? Do you think AI will disrupt more jobs than it creates? We would love to hear your opinion.

 

About the Author

Rahul is a Data Scientist at UpX Academy. An alum of SP Jain School of Management, Rahul loves everything AI, ML and predictive analytics. An in-house data analytics expert at UpX Academy, Rahul’s number one goal is to make UpX Academy’s students fall in love with data science, irrespective of the background they come from. He can be reached at [email protected]

 

 



