In Natural Language Processing (NLP), Lemmatization and Stemming play crucial roles as Text Normalization techniques. These fundamental methods are employed to prepare words, text, and documents for subsequent processing. When comparing stemming vs lemmatization, it’s essential to recognize their distinct approaches in simplifying and standardizing language, enhancing the efficiency of various NLP applications.
Languages such as English and Hindi consist of many words that are derived from one another. A language that contains such derived words is called an inflected language. For instance, the word “historical” is derived from the word “history” and is therefore a derived word.
All inflected words share a common root form, and the degree of inflection varies from language to language.
To sum up, the root form of derived or inflected words is obtained using Stemming and Lemmatization.
The nltk.stem package provides several classes for performing stemming; we import PorterStemmer from nltk.stem for this task.
For instance, ran, runs, and running are all derived from the single word run, so the lemma of all three is run. Lemmatization is used to obtain valid words, as the actual dictionary form is returned.
WordNetLemmatizer is a class imported from nltk.stem that looks up the lemma of a word in the WordNet database.
Note: Before using the WordNet Lemmatizer, the WordNet corpus has to be downloaded via the NLTK downloader.
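As a minimal sketch of the example above (assuming NLTK is installed and the WordNet data has been downloaded), the pos="v" argument tells the lemmatizer to treat each word as a verb:

import nltk
nltk.download('wordnet')                 # WordNet corpus used by the lemmatizer
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
for word in ["ran", "runs", "running"]:
    # with pos="v" all three verb forms map to the lemma "run"
    print(word, "->", lemmatizer.lemmatize(word, pos="v"))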
In this article, you will explore what stemming and lemmatization are, understand the differences between stemming vs lemmatization, and learn how these techniques are applied in NLP for effective text processing.
Stemming is the process of reducing inflected words to their stem. For instance, stemming will replace the words “history” and “historical” with a stem such as “histori”; similarly, “finally” and “final” are reduced to “final”.
In other words, stemming removes the last few characters of a given word to obtain a shorter form, even if that form has no meaning on its own.
In NLP use cases such as sentiment analysis, spam classification, and restaurant review analysis, getting the base word is important, for example to judge whether a word is positive or negative. Stemming is one way to get that base word.
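A minimal sketch of this with NLTK’s PorterStemmer (assuming NLTK is installed):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["history", "historical", "finally", "final"]:
    # e.g. "history" -> "histori", "finally" -> "final"
    print(word, "->", stemmer.stem(word))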
This section walks through stemming a paragraph using NLTK, which can then be applied in use cases such as sentiment analysis.
So let’s get started:
Note: It is highly recommended to use Google Colab to run this code.
Import libraries that will be required for stemming.
import nltk
nltk.download('stopwords')   # stopword list used to filter out common words
nltk.download('punkt')       # Punkt models used by the sentence/word tokenizers
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
The paragraph will be taken as input and used for stemming.
paragraph = """
I have three visions for India. In 3000 years of our history,
people from all over the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
"""
Before stemming, tokenization is done to break the text into chunks; in this case, the paragraph is split into sentences for easier processing.
As can be seen from the output, the paragraph is divided into sentences based on the “.” delimiter.
In the code given below, one sentence is taken at a time and word tokenization is applied, i.e., the sentence is converted into words. After that, stopwords (such as “the”, “and”, etc.) are ignored and stemming is applied to all other words. Finally, the stemmed words are joined back into a sentence.
Note: Stopwords are words that add little value to the meaning of a sentence.
Python Code:
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
paragraph = """I have three visions for India. In 3000 years of our history, people from all over the world have come and invaded us, captured our lands, conquered our minds. From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British, the French, the Dutch, all of them came and looted us, took over what was ours. Yet we have not done this to any other nation. We have not conquered anyone. We have not grabbed their land, their culture, their history and tried to enforce our way of life on them. """
sentences = nltk.sent_tokenize(paragraph)
print(sentences)
print("\n\n Result after Stemming \n\n")
stemmer = nltk.PorterStemmer()
# Stemming
for i in range(len(sentences)):
    words = nltk.word_tokenize(sentences[i])
    words = [stemmer.stem(word) for word in words if word not in set(stopwords.words('english'))]
    sentences[i] = ' '.join(words)
print(sentences)
From the above output, we can see that stopwords such as “have” and “for” have been removed from the first sentence. The word “visions” has been converted to “vision” and “history” to “histori” by stemming.
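A quick, self-contained check of the two stems mentioned above (a minimal sketch, assuming NLTK is installed):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print(stemmer.stem("visions"))   # vision
print(stemmer.stem("history"))   # histori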
The purpose of lemmatization is the same as that of stemming, but it overcomes the drawbacks of stemming. For some words, stemming may not give a meaningful representation, such as “histori”. This is where lemmatization comes into the picture, as it returns a meaningful word.
Lemmatization takes more time than stemming because it finds a meaningful word (representation); stemming only needs to produce a base form and therefore takes less time.
Stemming finds application in tasks such as sentiment analysis, while lemmatization finds application in chatbots and question answering.
Along the same lines as stemming, we will import the libraries and take the input for lemmatization.
import nltk
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
"""
sentences = nltk.sent_tokenize(paragraph)
print(sentences)
The difference between stemming and lemmatization shows up in this step, where WordNetLemmatizer() is used instead of PorterStemmer(). The rest of the steps are the same.
lemmatizer = WordNetLemmatizer()
# Lemmatization
for i in range(len(sentences)):
    words = nltk.word_tokenize(sentences[i])
    words = [lemmatizer.lemmatize(word) for word in words if word not in set(stopwords.words('english'))]
    sentences[i] = ' '.join(words)
print(sentences)
In the above output, it can be noticed that although the word “visions” has been converted to “vision”, the word “history” remains “history”, unlike with stemming, and thus retains its meaning.
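A quick check of the lemmas mentioned above (a minimal sketch; the WordNet data must be downloaded first, as above):

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("visions"))   # vision  (a valid word)
print(lemmatizer.lemmatize("history"))   # history (unchanged, unlike the stem "histori")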
Stemming | Lemmatization
Stemming is a process that removes the last few characters from a word, often leading to incorrect meanings and spellings. | Lemmatization considers the context and converts the word to its meaningful base form, which is called the lemma.
For instance, stemming the word ‘Caring‘ would return ‘Car‘. | For instance, lemmatizing the word ‘Caring‘ would return ‘Care‘.
Stemming is preferred for large datasets where performance is a concern. | Lemmatization is computationally expensive since it involves look-up tables and linguistic analysis.
Stemming is a linguistic normalization process in natural language processing and information retrieval. Its primary goal is to reduce words to their base or root form, known as the stem. Stemming helps group words with similar meanings or roots together, even if they have different inflections, prefixes, or suffixes.
The process involves removing common affixes (prefixes, suffixes) from words, resulting in a simplified form that represents the word’s core meaning. Stemming is a heuristic process and may not always produce a valid word. Still, it is effective for tasks like information retrieval, where the focus is on matching the essential meaning of words rather than their grammatical correctness.
For example, “finally” and “final” both reduce to the stem “final”, while “history” becomes “histori”.
Stemming algorithms use various rules and heuristics to identify and remove affixes, making them widely applicable in text-processing tasks to enhance information retrieval and analysis.
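As a small illustration of these differing rule sets, the sketch below runs two of NLTK’s stemmers side by side (a minimal example; the exact stems can differ between algorithms):

from nltk.stem import PorterStemmer, SnowballStemmer

porter = PorterStemmer()
snowball = SnowballStemmer("english")
for word in ["history", "finally", "fairly"]:
    # each algorithm applies its own suffix-stripping rules, so the stems may differ
    print(word, "| Porter:", porter.stem(word), "| Snowball:", snowball.stem(word))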
Lemmatization is a linguistic process that involves reducing words to their base or root form, known as the lemma. The goal is to normalize different inflected forms of a word so that they can be analyzed or compared more easily. This is particularly useful in natural language processing (NLP) and text analysis.
Here’s how lemmatization generally works: the word is first associated with its part of speech, then looked up in a lexical database such as WordNet, and finally its dictionary form (the lemma) is returned.
Lemmatization is distinct from stemming, another text normalization technique. While stemming involves chopping off prefixes or suffixes from words to obtain a common root, lemmatization aims for a valid base form through linguistic analysis. Lemmatization tends to be more accurate but can be computationally more expensive than stemming.
As a rule of thumb when deciding between the two for a text preprocessing task: choose stemming when speed matters and the output does not need to be a valid word (for example, indexing a large dataset), and choose lemmatization when you need valid, meaningful words (for example, chatbots or question answering).
One thing to note is that a lot of knowledge and understanding about the structure of language is required for lemmatization. Hence, in any new language, creating a stemmer is easier than a lemmatization algorithm. When considering stemming vs lemmatization, it becomes evident that stemming focuses on removing prefixes and suffixes to achieve word stems, making it a more straightforward process, while lemmatization involves understanding the root form of words, demanding a deeper linguistic comprehension.
Lemmatization and stemming both operate on derived (inflected) words, and the only difference between a lemma and a stem is that the lemma is an actual language word, whereas the stem may not be.
Lemmatization uses a corpus (such as WordNet) to obtain the lemma, making it slower than stemming. Further, to get the proper lemma you may have to specify a part of speech. In stemming, by contrast, a step-wise rule-based algorithm is followed, making it faster.
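A minimal sketch of the part-of-speech point (assuming the WordNet data is available); without a POS hint, the lemmatizer treats words as nouns by default:

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("running"))            # running (treated as a noun by default)
print(lemmatizer.lemmatize("running", pos="v"))   # run (treated as a verb)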
Hope you like the article! Lemmatization in NLP helps simplify words to their basic forms. Knowing the difference between stemming and lemmatization is important for better understanding and working with language in text analysis.
Q. Which is better, stemming or lemmatization?
A. The choice depends on the specific use case. Lemmatization produces a linguistically valid word, while stemming is faster but may generate non-words.
Q. Does ChatGPT use stemming or lemmatization?
A. As an AI language model, I can perform both stemming and lemmatization based on the task’s requirements or context.
Q. What is the difference between stemming and lemmatization?
A. Stemming chops off word endings without considering linguistic context, making it computationally faster. Lemmatization analyzes word forms to determine the base or dictionary form, which takes more processing time.
Q. What are stemming and lemmatization used for?
A. Stemming and lemmatization are used in natural language processing tasks such as information retrieval, text mining, sentiment analysis, and search engines to reduce words to their base or root forms for better analysis and understanding.