The Essential NLP Guide for data scientists (with code for the top 10 common NLP tasks)

NSS 18 May, 2020 • 9 min read

Introduction

Organizations today deal with huge amounts and a wide variety of data – calls from customers, their emails, tweets, data from mobile applications and more. It takes a lot of effort and time to make this data useful. One of the core skills for extracting information from text data is Natural Language Processing (NLP).

Natural Language Processing (NLP) is the art and science of extracting information from text and using it in our computations and algorithms. Given the increase in content on the internet and social media, it is one of the must-have skills for all data scientists out there.

Whether you already know NLP or not, this guide should serve as a ready reference for you. Through this guide, I have provided you with resources and code to run the most common tasks in NLP.

Once you have gone through this guide, feel free to have a look at our video course on Natural Language Processing (NLP).

 

Why did I create this Guide?

After working on NLP problems for some time now, I have encountered various situations where I needed to consult hundreds of different sources to study the latest developments in the form of research papers, blogs and competitions for some of the common NLP tasks.

So, I decided to bring all these resources together in one place and make it a one-stop solution for the latest and most important resources for these common NLP tasks. Below is the list of tasks covered in this article along with their relevant resources. Let’s get started.

 

Table of Contents

  1. Stemming
  2. Lemmatisation
  3. Word Embeddings
  4. Part-of-Speech Tagging
  5. Named Entity Disambiguation
  6. Named Entity Recognition
  7. Sentiment Analysis
  8. Semantic Text Similarity
  9. Language Identification
  10. Text Summarisation

 

1. Stemming

What is Stemming?: Stemming is the process of reducing words (generally modified or derived) to their word stem or root form. The objective of stemming is to reduce related words to the same stem even if the stem is not a dictionary word. For example, in the English language-

  1. beautiful and beautifully are stemmed to beauti 
  2. good, better and best are stemmed to good, better and best respectively, since stemming cannot relate irregular forms

Paper: The original paper by Martin Porter on the Porter Algorithm for stemming.

Algorithm: Here is a Python implementation of the Porter2 stemming algorithm.

Implementation: Here is how you can stem a word using the Porter2 algorithm from the stemming library.
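
Below is a minimal sketch, assuming the stemming package from PyPI (which provides the porter2 module linked above):

#!pip install stemming
from stemming.porter2 import stem

# beautiful and beautifully both reduce to "beauti";
# good, better and best are left untouched
for word in ["beautiful", "beautifully", "good", "better", "best"]:
    print(word, stem(word))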

2. Lemmatisation

What is Lemmatisation?: Lemmatisation is the process of reducing a group of words into their lemma or dictionary form. It takes into account things like POS (Part of Speech), the meaning of the word in the sentence, the meaning of the word in nearby sentences, etc., before reducing the word to its lemma. For example, in the English language-

  1. beautiful and beautifully are lemmatised to beautiful and beautifully respectively.
  2. good, better and best are lemmatised to good, good and good respectively.

Paper 1: This paper discusses different methods for performing lemmatisation in great detail. A must-read if you want to know how traditional lemmatisers work.

Paper 2: This is an excellent paper which addresses the problem of lemmatisation for variation-rich languages using Deep Learning.

Dataset: This is the link for Treebank-3 dataset which you can use if you wish to create your own Lemmatiser.

Implementation: Below is an implementation of an English Lemmatiser using spacy.

#!pip install spacy
#!python -m spacy download en
import spacy

# load the English model
nlp = spacy.load("en")
doc = "good better best"

# print each token alongside its lemma
for token in nlp(doc):
    print(token, token.lemma_)

 

3. Word Embeddings

What are Word Embeddings?: Word Embedding is the collective name for techniques which represent Natural Language as vectors of real numbers. They are useful because computers cannot process Natural Language directly. Word Embeddings capture the essence of, and relationships between, words in a Natural Language using real numbers. In Word Embeddings, a word or a phrase is represented as a fixed-dimension vector of length, say, 100.

So for example-

A word “man” might be represented in a 5-dimensional vector as, say,

[0.4, -0.11, 0.55, 0.3, 0.1]

where each of these numbers is the magnitude of the word in a particular direction.

 

Blog: Here is an article which explains Word Embeddings in great detail.

Paper: A very good paper which explains Word Vectors in detail. A must-read for an in-depth understanding of Word Vectors.

Tool: A browser based tool for visualising Word Vectors.

Pre-trained Word Vectors: Here is an exhaustive list of pre-trained Word Vectors in 294 languages by Facebook.

Implementation: Here is how you can obtain pre-trained Word Vector of a word using the gensim package.

Download the Google News pre-trained Word Vectors from here.

#!pip install gensim
from gensim.models.keyedvectors import KeyedVectors

# load the downloaded Google News vectors (300-dimensional, binary format)
word_vectors = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

# look up the vector for a word
word_vectors['human']

Implementation: Here is how you can train your own word vectors using gensim.

import gensim

# each sentence is a list of tokens
sentences = [['first', 'sentence'], ['second', 'sentence']]
# min_count=1 keeps even rare words; size sets the vector dimension
model = gensim.models.Word2Vec(sentences, min_count=1, size=300, workers=4)

 

4. Part-Of-Speech Tagging

What is Part-Of-Speech Tagging?: In simple terms, Part-Of-Speech Tagging is the process of marking up words in a sentence as nouns, verbs, adjectives, adverbs, etc. For example, in the sentence-

“Ashok killed the snake with a stick.”

The Parts-Of-Speech are identified as –

Ashok PROPN
killed VERB
the DET
snake NOUN
with ADP
a DET
stick NOUN
. PUNCT

Paper 1: This paper by Choi, aptly titled The Last Gist to the State-of-the-Art, presents a novel method called Dynamic Feature Induction which achieves state-of-the-art results on the POS Tagging task.

Paper 2: This paper presents unsupervised POS Tagging using Anchor Hidden Markov Models.

Implementation: Here is how we can perform POS Tagging using spacy.

#!pip install spacy
#!python -m spacy download en
import spacy

# load the English model and print each token with its part-of-speech tag
nlp = spacy.load('en')
sentence = "Ashok killed the snake with a stick."
for token in nlp(sentence):
    print(token, token.pos_)

 

5. Named Entity Disambiguation

What is Named Entity Disambiguation?: Named Entity Disambiguation is the process of linking mentions of entities in a sentence to the real-world entities they refer to. For example, in the sentence-

“Apple earned a revenue of 200 Billion USD in 2016”

It is the task of Named Entity Disambiguation to infer that Apple in the sentence is the company Apple and not a fruit.

Named Entity Disambiguation, in general, requires a knowledge base of entities which it can use to link mentions in the sentence to entities in the knowledge base.

Paper 1: This paper by Huang makes use of Deep Semantic Relatedness models based on Deep Neural Networks along with a knowledge base to achieve a state-of-the-art result on Named Entity Disambiguation.

Paper 2: This paper by Ganea and Hofmann makes use of Local Neural Attention along with Word Embeddings and no manually crafted features.
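
To make the linking idea concrete, here is a toy sketch (entirely my own illustration, far simpler than the papers above) which picks the candidate entity whose knowledge-base description words best overlap with the context words of the sentence:

# a toy, hand-built knowledge base: mention -> candidate entities,
# each described by a few keywords (purely illustrative)
knowledge_base = {
    "Apple": {
        "Apple Inc.": {"company", "revenue", "iphone", "technology", "usd"},
        "Apple (fruit)": {"fruit", "tree", "eat", "juice", "sweet"},
    }
}

def disambiguate(mention, sentence):
    # score each candidate by its keyword overlap with the sentence context
    context = set(sentence.lower().split())
    candidates = knowledge_base[mention]
    return max(candidates, key=lambda entity: len(candidates[entity] & context))

print(disambiguate("Apple", "Apple earned a revenue of 200 Billion USD in 2016"))
# Apple Inc.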

 

6. Named Entity Recognition

What is Named Entity Recognition?: Named Entity Recognition is the task of identifying entities in a sentence and classifying them into categories like person, organisation, date, location, time, etc. For example, an NER would take in a sentence like –

“Ram of Apple Inc. travelled to Sydney on 5th October 2017”

and return something like

Ram
of
Apple ORG
Inc. ORG
travelled
to
Sydney GPE
on
5th DATE
October DATE
2017 DATE

Here, ORG stands for Organisation and GPE stands for Geo-Political Entity, i.e. a location.

The problem with current NERs is that even state-of-the-art NER systems tend to perform poorly when used on a domain of data different from the data they were trained on.

Paper: This excellent paper makes use of bi-directional LSTMs and combines Supervised and Unsupervised learning methods to achieve a state-of-the-art result in Named Entity Recognition in 4 languages.

Implementation: Here is how you can perform Named Entity Recognition using spacy.

import spacy

# load the English model and print each token with its entity type
nlp = spacy.load('en')
sentence = "Ram of Apple Inc. travelled to Sydney on 5th October 2017"
for token in nlp(sentence):
    print(token, token.ent_type_)

 

7. Sentiment Analysis

What is Sentiment Analysis?: Sentiment Analysis is a broad range of subjective analysis which uses Natural Language Processing techniques to perform tasks such as identifying the sentiment of a customer review, detecting positive or negative feeling in a sentence, and judging mood via voice or written text analysis. For example-

“I did not like the chocolate ice-cream” – is a negative experience of ice-cream.

“I did not hate the chocolate ice-cream” – may be considered as a neutral experience.

There is a wide range of methods used to perform sentiment analysis, from simply counting negative and positive words in a sentence to using LSTMs with Word Embeddings.

Blog 1: This article focuses on performing sentiment analysis on movie tweets.

Blog 2: This article focuses on performing sentiment analysis of tweets during the Chennai flood.

Paper 1: This paper takes the Supervised Learning method approach with Naive Bayes method to classify IMDB reviews.

Paper 2: This paper makes use of an Unsupervised Learning method with LDA to identify aspects and sentiments of user-generated reviews. This paper is outstanding in the sense that it addresses the shortage of annotated reviews.

Repository: This is an awesome repository of the research papers and implementation of sentiment analysis in various languages.

Dataset 1: Multi-Domain sentiment dataset version 2.0

Dataset 2: Twitter Sentiment analysis Dataset

Competition: A very good competition where you can check the performance of your models on the sentiment analysis task on movie reviews from Rotten Tomatoes.

Perform Twitter Sentiment Analysis yourself.
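
The simplest of these methods, counting positive and negative words, is easy to sketch and also shows why the more sophisticated models above are needed. The lexicons below are hand-picked by me purely for illustration:

# tiny hand-picked lexicons, purely illustrative
positive_words = {"like", "love", "great", "good", "enjoy"}
negative_words = {"hate", "bad", "awful", "terrible", "dislike"}

def sentiment_score(sentence):
    # positive count minus negative count
    tokens = sentence.lower().split()
    return sum(t in positive_words for t in tokens) - sum(t in negative_words for t in tokens)

# scores 1 (positive) even though the review is negative:
# plain counting misses the negation, which is where LSTM-based methods shine
print(sentiment_score("I did not like the chocolate ice-cream"))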

8. Semantic Text Similarity

What is Semantic Text Similarity?: Semantic Text Similarity is the process of analysing the similarity between two pieces of text with respect to their meaning and essence, rather than their syntax. Note that similarity is different from relatedness.

For example –

Car and bus are similar, but car and fuel are related.

Paper 1: This paper presents the different approaches to measuring text similarity in detail. A must-read paper to learn about the existing approaches in a single place.

Paper 2: This paper introduces CNNs to rank pairs of short texts.

Paper 3: This paper makes use of Tree-LSTMs which achieve a state-of-the-art result on Semantic Relatedness of texts and Semantic Classification.
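
Implementation: As a rough sketch, the pre-trained Google News vectors from the Word Embeddings section can score word pairs via cosine similarity using gensim (note that such distributional vectors tend to capture relatedness as well as similarity):

from gensim.models.keyedvectors import KeyedVectors

# reuse the pre-trained Google News vectors downloaded earlier
word_vectors = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

# cosine similarity between word vectors
print(word_vectors.similarity('car', 'bus'))
print(word_vectors.similarity('car', 'fuel'))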

 

9. Language Identification

What is Language Identification?: Language Identification is the task of identifying the language in which a piece of content is written. It makes use of statistical as well as syntactical properties of the language to perform this task. It may also be considered a special case of text classification.

Blog: In this blog post, fastText introduces a new tool which can identify 170 languages while using under 1MB of memory.

Paper 1: This paper discusses 7 methods of language identification across 285 languages.

Paper 2: This paper describes how Deep Neural Networks can be used to achieve state-of-the-art results on Automatic Language Identification.
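
Implementation: As a quick illustration (the langdetect package is my addition, not one of the resources above), identifying a language can be a one-liner:

#!pip install langdetect
from langdetect import detect

# detect returns an ISO 639-1 language code
print(detect("This is an English sentence"))    # en
print(detect("Ceci est une phrase française"))  # fr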

 

10. Text Summarisation

What is Text Summarisation?: Text Summarisation is the process of shortening a text by identifying its important points and creating a summary from them. The goal of Text Summarisation is to retain maximum information while shortening the text as much as possible, without altering its meaning.

Paper 1: This paper describes a Neural Attention Model based approach for Abstractive Sentence Summarization.

Paper 2: This paper describes how sequence-to-sequence RNNs can be used to achieve state-of-the-art results on Text Summarisation.

Repository: This repository by the Google Brain team has the code for a sequence-to-sequence model customised for Text Summarisation. The model is trained on the Gigaword dataset.

Application: Reddit’s autotldr bot uses Text Summarisation to summarise articles in the comments of a post. This feature turned out to be very popular amongst Reddit users.

Implementation: Here is how you can quickly summarise your text using the gensim package.

from gensim.summarization import summarize

sentence="Automatic summarization is the process of shortening a text document with software, in order to create a summary with the major points of the original document. Technologies that can make a coherent summary take into account variables such as length, writing style and syntax.Automatic data summarization is part of machine learning and data mining. The main idea of summarization is to find a subset of data which contains the information of the entire set. Such techniques are widely used in industry today. Search engines are an example; others include summarization of documents, image collections and videos. Document summarization tries to create a representative summary or abstract of the entire document, by finding the most informative sentences, while in image summarization the system finds the most representative and important (i.e. salient) images. For surveillance videos, one might want to extract the important events from the uneventful context.There are two general approaches to automatic summarization: extraction and abstraction. Extractive methods work by selecting a subset of existing words, phrases, or sentences in the original text to form the summary. In contrast, abstractive methods build an internal semantic representation and then use natural language generation techniques to create a summary that is closer to what a human might express. Such a summary might include verbal innovations. Research to date has focused primarily on extractive methods, which are appropriate for image collection summarization and video summarization."

# produce an extractive summary of the passage above
summarize(sentence)
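
If needed, summarize also accepts ratio and word_count arguments to control the length of the summary (by default it keeps roughly 20% of the text).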

 

End Notes

So this was all about the most common NLP tasks along with their relevant resources in the form of blogs, research papers, repositories, applications, etc. If you feel there is any great resource on any of these 10 tasks that I have missed, or you want to suggest adding another task, please feel free to comment with your suggestions and feedback.

We have also got a great course, NLP using Python, for you if you want to become an NLP practitioner.

Happy Learning!


NSS 18 May 2020

I am a perpetual, quick learner and keen to explore the realm of Data analytics and science. I am deeply excited about the times we live in and the rate at which data is being generated and being transformed as an asset. I am well versed with a few tools for dealing with data and also in the process of learning some other tools and knowledge required to exploit data.


Responses From Readers


Manasi 26 Oct, 2017

Thanks for a great article. All the aspects of NLP have been nicely summarized !!

Nitesh 26 Oct, 2017

I think couple of mentioned links are missing the article, specifically under Sentiment analysis section.Please check & correct. Thanks!

Anjaneya Nidubrolu 26 Oct, 2017

Wonderful...thanks for sharing.

Sankar 26 Oct, 2017

Hi NSS, Thanks for the essential article in NLP, it has good amount of resources. In my humble opinion, there are few corrections in this article need to be updated. These are .... In section 1. Stemming : Your example "2. good, better and best are stemmed to good, better and best respectively " is not appropriate. In section 2. Lemmatisation: Again given examples are not sufficient to explain the concept. "1. beautiful and beautifully are lemmatised to beautiful and beautifully respectively." && "2. good, better and best are lemmatised to good, good and good respectively." In section 7. Sentiment Analysis: URLs or links are missing for the Blog1: and Blog2 texts. Just i am trying to improve the accuracy of article. Excuse me if i am wrong.

Adelson 26 Oct, 2017

I would suggest to add a 11th about topic modeling techniques, as LDA, LSI and NMF. Great article!

Vikram Murthy 27 Oct, 2017

hi ..this is a nice summary of all things NLP :) .. having dirtied my hand in text for a while i would request you to add a disclaimer to the overall blog stating that all the stuff mentioned above is for narrow purposes. Let me explain ..take summarization for e.g. - just pick a random para from wiki and run the same code above and you ll see what i am talking about. Same thing for NER etc. So if practitioners really want decent accuracy they have to first generate./ gather data set from the domain , train each of these algos/variants and then only bother running them. Telling them that these libraries are plug and play is a little misleading. I hope you don't get offended. This is purely from the perspective of keeping things real :)

Chinmoy Kathar 27 Oct, 2017

Hi there... Really great article. Can you please add something inder the topic "Sentence categorization". Although it is not a small task, but it'll be really helpful. Thanks a lot.

Amber 27 Oct, 2017

This is great compilation. Would love to see an article on text predictive analytics,

Sonia 27 Oct, 2017

Thanks for sharing such wonderful article... on NLP

Flávio Marchi 30 Oct, 2017

Great article, helped expanding my view of the subject for my Grad Work. :)

Shehzad 30 Oct, 2017

Hi! you've explained very good. I've found here a lot of best knowledge to understand for me.

Anirban Dutta 01 Nov, 2017

Great work again. Not a day goes by, when I don't recommend AnalyticsVidhya to someone. Keep up the great work, Saurav and team.

Mahesh Medam 09 Nov, 2017

I liked the article. I work in Python. I have doubt regarding which module should I use for NLP. If I ask you which module in Python would cover most of the NLP tasks. Which one would it be ?

suresh 05 Dec, 2017

Excellent article. Thanks for putting it together.

sirivella madhu 11 May, 2018

greate introduction to nlp and spacy thank you..

Akash 19 May, 2018

Excellent article, although please include Topic Modelling and LDA
