Pre-Processing of Text Data in NLP

Nishtha Arora 15 Jun, 2021

This article was published as a part of the Data Science Blogathon

Introduction

In today's world, a large amount of raw data is available in every sector in the form of text, audio, video, etc. This data can be analyzed for a wide range of factors, which can then be used to make decisions or predictions. But for this, the raw data has to be organized or summarized to get better outcomes.

Here comes the role of NLP, a branch of data science that helps to analyze, organize, or summarize text data so that useful information can be extracted from it and used to make business decisions.

NLP provides a systematic process to organize massive data and helps to solve numerous automated tasks in various fields like machine translation, speech recognition, automatic summarization, etc.

So, in this article, we are going to study the major applications of NLP along with the pre-processing of text data.

Table of Contents –

  • Applications of NLP
  • Tokenization
  • Normalization
       • Stemming
       • Lemmatization
  • Removing Stop-words
  • Part of Speech (POS) Tagging
  • Conclusion

Applications of NLP

NLP offers a wide range of applications in which it processes text data and extracts useful information from it. Some of its applications are –

  1. Text Summarization

News articles, blogs, books, reports, etc. are generally very long documents, and it is quite difficult for a human to read a lengthy document just to extract the basic information it contains. Text summarization reduces this problem: with the help of NLP, a short, concise, and meaningful summary can be generated from such documents, helping the reader understand a document in a short time.
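
For instance, a simple extractive summarizer can be built by scoring sentences on word frequency. The sketch below is a minimal illustration of this idea (the function name summarize and the frequency-scoring scheme are illustrative choices, not a standard API):

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
# nltk.download('punkt'); nltk.download('stopwords')  # required once

def summarize(text, n_sentences=2):
    stop_words = set(stopwords.words('english'))
    # count how often each content word occurs (ignoring stop-words)
    freq = {}
    for word in word_tokenize(text.lower()):
        if word.isalpha() and word not in stop_words:
            freq[word] = freq.get(word, 0) + 1
    # score each sentence as the sum of its word frequencies
    sentences = sent_tokenize(text)
    scores = {s: sum(freq.get(w, 0) for w in word_tokenize(s.lower())) for s in sentences}
    # keep the highest-scoring sentences, in their original order
    top = sorted(sentences, key=scores.get, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)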

  2. Sentiment Analysis

Customer reviews form a major part of our data. A review can be about any product, website, article, or movie. Sentiment analysis is used to analyze such reviews: with its help, we can classify a customer review as positive or negative.
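
As a quick illustration, NLTK ships with the VADER sentiment analyzer, which scores a piece of text from negative to positive. The sketch below shows one simple way to classify a review (the 0.05 cutoff is a commonly used convention, not a fixed rule):

# classifying a review with NLTK's built-in VADER sentiment analyzer
from nltk.sentiment import SentimentIntensityAnalyzer
# nltk.download('vader_lexicon')  # required once

sia = SentimentIntensityAnalyzer()
review = "The product quality is great and the delivery was quick."
scores = sia.polarity_scores(review)
print(scores)
# 'compound' ranges from -1 (most negative) to +1 (most positive)
label = "positive" if scores['compound'] >= 0.05 else "negative"
print("Review is", label)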

  3. Chatbots

Nowadays we see chatbots on almost every website, giving automatic responses to customer queries. The major advantage of chatbots is that they respond within seconds and help customers get basic information.
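
At its simplest, the idea can be illustrated with a toy keyword-based responder like the sketch below (real chatbots rely on intent classification and NLP models; the keywords and replies here are made up purely for illustration):

# a toy keyword-matching responder (illustrative sketch only)
responses = {
    "price": "Our plans start at Rs. 499 per month.",
    "hours": "Our support team is available 24/7.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(query):
    query = query.lower()
    # return the answer for the first matching keyword
    for keyword, answer in responses.items():
        if keyword in query:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("What are your support hours?"))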

Pre-processing of data

The first step in building an NLP model is the pre-processing of data. The text data we have is in raw form and can contain many errors along with much undesirable text, because of which the model will not give accurate results. So, to get better outcomes, it is necessary to pre-process our data and make it easier to understand and analyze.

The various steps involved in the pre-processing of data are –

1) Tokenization –

In this step, we decompose our text data into its smallest units, called tokens. Generally, our dataset consists of long paragraphs, which are made up of many lines, and lines are made up of words. It is quite difficult to analyze long paragraphs directly, so first we decompose the paragraphs into separate lines, and then the lines are decomposed into words.

import nltk
from nltk import tokenize
# nltk.download('punkt')  # required once for the sentence/word tokenizers
text = "NLP is a systematic process that helps to do various tasks. It is used to analyze, organize and summarize the data"
#decomposing the paragraph into lines
lines = tokenize.sent_tokenize(text)
print(lines)
print("Total lines in the given paragraph:", len(lines))
['NLP is a systematic process that helps to do various tasks.', 'It is used to analyze, organize and summarize the data']
Total lines in the given paragraph: 2
#decomposing the text into words
words = tokenize.word_tokenize(text)
print(words)
print("Total number of words in the paragraph:", len(words))
['NLP', 'is', 'a', 'systematic', 'process', 'that', 'helps', 'to', 'do', 'various', 'tasks', '.', 'It', 'is', 'used', 'to', 'analyze', ',', 'organize', 'and', 'summarize', 'the', 'data']
Total number of words in the paragraph: 23

2) Normalization –

Most datasets contain many words that are generated from a single word by adding a suffix or prefix. This causes redundancy in our dataset and worsens the output. So it is an important task to convert such words into their root form, which also decreases the count of unique words in our dataset and improves our outcomes.

In NLP, two methods are used to perform normalization of the dataset:

a) Stemming –

Stemming removes suffixes from a word and returns the word in its base form, the root word. However, the root word generated is sometimes not a meaningful word and does not belong to the English dictionary.

E.g., the words "playful", "played", "playing" will be converted to "play" after stemming.

#importing module for stemming
from nltk.stem.porter import PorterStemmer
ps = PorterStemmer()
words=["playing","playful","played"]
print('Original words-', words)
stem_words=[]
for word in words:
    root_word= ps.stem(word)
    stem_words.append(root_word)
print("After stemming -", stem_words)
Original words- ['playing', 'playful', 'played']
After stemming - ['play', 'play', 'play']

b) Lemmatization –

Lemmatization is similar to stemming but works with much better accuracy. In lemmatization, the word generated after chopping off the suffix is always a meaningful word that belongs to the dictionary, i.e., it does not produce incorrect words. The word generated after lemmatization is also called a lemma.

#importing module for lemmatization
from nltk.stem import WordNetLemmatizer
# nltk.download('wordnet')  # required once for the WordNet lemmatizer
wml = WordNetLemmatizer()
words_orig = ["cries", "crys", "cried"]
print('Original words-', words_orig)
lemma_words = []
for word in words_orig:
    lemma = wml.lemmatize(word)
    lemma_words.append(lemma)
print("After lemmatization", lemma_words)
Original words- ['cries', 'crys', 'cried']
After lemmatization ['cry', 'cry', 'cried']
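
Note that 'cried' was left unchanged above because WordNetLemmatizer treats every word as a noun by default (pos='n'). Passing the correct part of speech gives the expected lemma; a quick sketch:

from nltk.stem import WordNetLemmatizer
wml = WordNetLemmatizer()
# with pos='v' the lemmatizer treats the word as a verb
print(wml.lemmatize("cried", pos='v'))      # cry
print(wml.lemmatize("increases", pos='v'))  # increase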

Difference between lemmatization and stemming-

Lemmatization is a better way to obtain the base form of a given word than stemming, because lemmatization returns an actual word that has some meaning in the dictionary.

E.g., "increases" will be converted to "increase" by lemmatization, while stemming produces "increas", which is not a dictionary word.
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
ps = PorterStemmer()
wml = WordNetLemmatizer()
stem_words = []
lemma_words = []
words = ["increases"]
print('Original words-', words)
for word in words:
    stem_words.append(ps.stem(word))         # stemming
    lemma_words.append(wml.lemmatize(word))  # lemmatization
print("After stemming -", stem_words)
print("After lemmatization", lemma_words)
Original words- ['increases']
After stemming - ['increas']
After lemmatization ['increase']

3) Removing Stop-words

Stop words are those words in a language that help to combine sentences and make them meaningful. For example, in the English language, words like "I", "am", "are", "is", "to", etc. are known as stop-words. But these stop-words are not very useful for our model, so we remove them from our dataset to focus only on the important words rather than these supporting words.

# stop-words in the English language
from nltk.corpus import stopwords
# nltk.download('stopwords')  # required once for the stop-word lists
stop_words = stopwords.words('english')
print(stop_words)
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]

#decomposing the paragraph into words
from nltk.tokenize import word_tokenize
text = "NLP consists of a systematic process to organize the massive data and help to solve the numerous automated tasks in various fields like – machine translation, speech recognition, automatic summarization etc. "
words = word_tokenize(text)
print("Total words in the paragraph-", len(words))
print(words)
#importing the stop-words module
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
filter_words = []
#removing the stop-words
for w in words:
    if w not in stop_words:
        filter_words.append(w)
print("Total words after removing stop-words-", len(filter_words))
print(filter_words)
Total words in the paragraph- 34
['NLP', 'consists', 'of', 'a', 'systematic', 'process', 'to', 'organize', 'the', 'massive', 'data', 'and', 'help', 'to', 'solve', 'the', 'numerous', 'automated', 'tasks', 'in', 'various', 'fields', 'like', '–', 'machine', 'translation', ',', 'speech', 'recognition', ',', 'automatic', 'summarization', 'etc', '.']
Total words after removing stop-words- 26
['NLP', 'consists', 'systematic', 'process', 'organize', 'massive', 'data', 'help', 'solve', 'numerous', 'automated', 'tasks', 'various', 'fields', 'like', '–', 'machine', 'translation', ',', 'speech', 'recognition', ',', 'automatic', 'summarization', 'etc', '.']
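
Notice that punctuation tokens like '–', ',' and '.' survive stop-word removal. A common extension, sketched below, is to also drop non-alphabetic tokens (reusing filter_words from the cell above):

# keeping only alphabetic tokens
clean_words = [w for w in filter_words if w.isalpha()]
print(clean_words)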

4) Part of Speech (POS) Tagging

In this process, each token is tagged according to its part of speech, i.e., whether it is a noun, adjective, verb, etc.
Some basic tags used for part of speech are:

Label (tag)        Part of Speech

NN                 Noun, Singular
NNP                Proper Noun, Singular
JJ                 Adjective
VBD                Verb, Past Tense
IN                 Preposition
DT                 Determiner

filter_words = ['NLP', 'consists', 'systematic', 'process', 'organize', 'massive', 'data', 'help', 'solve', 'numerous', 'automated', 'tasks', 'various', 'fields', 'like', '–', 'machine', 'translation', ',', 'speech', 'recognition', ',', 'automatic', 'summarization', 'etc', '.']
from nltk import pos_tag
# nltk.download('averaged_perceptron_tagger')  # required once for the POS tagger
pos = pos_tag(filter_words)

print(pos)
[('NLP', 'NNP'), ('consists', 'VBZ'), ('systematic', 'JJ'), ('process', 'NN'), ('organize', 'VBP'), ('massive', 'JJ'), ('data', 'NNS'), ('help', 'NN'), ('solve', 'VBP'), ('numerous', 'JJ'), ('automated', 'VBN'), ('tasks', 'NNS'), ('various', 'JJ'), ('fields', 'NNS'), ('like', 'IN'), ('–', 'NNP'), ('machine', 'NN'), ('translation', 'NN'), (',', ','), ('speech', 'NN'), ('recognition', 'NN'), (',', ','), ('automatic', 'JJ'), ('summarization', 'NN'), ('etc', 'NN'), ('.', '.')]

Conclusion

The pre-processing of text data is the first and most important task before building an NLP model.
It not only reduces the dataset size but also helps us to focus only on useful and relevant data, so that the future model achieves better accuracy.
With the help of pre-processing techniques like tokenization, stemming, lemmatization, stop-word removal, and part of speech tagging, we can remove all the irrelevant text from our dataset and make it ready for further processing or model building.
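
To tie the steps together, here is a minimal end-to-end sketch of such a pre-processing pipeline (the function name preprocess is an illustrative choice; adapt the steps to your own dataset):

import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
# nltk.download('punkt'); nltk.download('stopwords'); nltk.download('wordnet')  # required once

def preprocess(text):
    stop_words = set(stopwords.words('english'))
    wml = WordNetLemmatizer()
    tokens = word_tokenize(text.lower())                  # tokenization
    tokens = [w for w in tokens if w.isalpha()]           # drop punctuation
    tokens = [w for w in tokens if w not in stop_words]   # stop-word removal
    return [wml.lemmatize(w) for w in tokens]             # normalization

print(preprocess("NLP helps to analyze, organize and summarize the text data."))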

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
