What is Chunking in Natural Language Processing?

Nithyashree V 20 Oct, 2021 • 5 min read
This article was published as a part of the Data Science Blogathon

Dear readers,

In this blog, I will be discussing chunking both theoretically and practically in Python.

So, let’s begin…

NOTE: For the implementation, it’s better to use Python IDLE, as the output is a drawing of a tree that pops up in a separate window.

Agenda

  • What is chunking?
  • Where is chunking used?
  • Types of chunking
  • Implementation of chunking in Python
  • Results

What is chunking?

Chunking is a natural language processing technique used to identify parts of speech and the short phrases present in a given sentence.

Recalling our good old English grammar classes back in school, there are eight parts of speech: noun, verb, adjective, adverb, preposition, conjunction, pronoun, and interjection. In the above definition of chunking, the short phrases are those formed by combining any of these parts of speech.

For example, chunking can be done to identify and thus group noun phrases or nouns alone, adjectives or adjective phrases, and so on. Consider the sentence below:

“I had burgers and pastries for breakfast.”

In this case, if we wish to group or chunk noun phrases, we will get “burgers”, “pastries”, and “breakfast”, which are the nouns or noun groups of the sentence.
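
If you would like to see this in action right away, here is a minimal sketch of such noun chunking with NLTK. The grammar NP: {<DT>?<JJ>*<NN.*>} is only an illustrative pattern (an optional determiner, any adjectives, then a noun), not the grammar we will build later in this article.

import nltk

# One-time downloads, in case these resources are not installed yet:
# nltk.download('punkt')
# nltk.download('averaged_perceptron_tagger')

sentence = "I had burgers and pastries for breakfast."

words = nltk.word_tokenize(sentence)   # split the sentence into words
tagged = nltk.pos_tag(words)           # tag each word with its part of speech

# Illustrative noun-phrase grammar: optional determiner, any number of
# adjectives, followed by any kind of noun (NN, NNS, NNP, NNPS).
grammar = r"NP: {<DT>?<JJ>*<NN.*>}"
parser = nltk.RegexpParser(grammar)
print(parser.parse(tagged))            # "burgers", "pastries" and "breakfast" come out as NP chunks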

Where is chunking used?

Why would we want to learn something without knowing where it is widely used?! Looking at the applications discussed in this section of the blog will help you stay curious till the end!

Chunking is used to get the required phrases out of a given sentence, whereas POS tagging can only spot the part of speech that each word of the sentence belongs to.

When we have loads of descriptions or modifiers around a particular word or phrase of interest, we use chunking to grab the required phrase alone, ignoring the rest around it. Hence, chunking paves the way to group the required phrases and exclude the modifiers around them that are not necessary for our analysis. Summing up, chunking helps us extract just the important words from lengthy descriptions. Thus, chunking is a step in information extraction.

Interestingly, this process of chunking in NLP is extended to various other applications; for instance, to group fruits of a specific category, say, fruits rich in proteins as a group, fruits rich in vitamins as another group, and so on. Besides, chunking can also be used to group similar cars, say, cars supporting auto-gear into one group and the others which support manual gear into another chunk and so on.

Types of Chunking

There are, broadly, two types of chunking:

  • Chunking up
  • Chunking down

Chunking up:

Here, we don’t dive deep; instead, we are happy with just an overview of the information. It just helps us get a brief idea of the given data.

Chunking down:

Unlike the previous type of chunking, chunking down helps us get detailed information.

So, if you just want an insight, go for “chunking up”; otherwise, prefer “chunking down”.

Implementation of chunking in Python

Imagine a situation in which you want to extract all the verbs from a given text for your analysis. In this case, we must chunk verb phrases, since our objective is to extract all the verb phrases from the given piece of text. Chunking is done with the help of regular expressions.

Don’t worry if this is the first time you are coming across the term “regular expressions”. The table below is here to the rescue:

Symbol | Meaning | Example
* | The preceding character can occur zero or more times, i.e., it may or may not be present. | ab* matches an “a” followed by zero or more b’s: a, ab, abb, abbb, and so on.
+ | The preceding character must occur at least once. | a+ matches a, aa, aaa, and so on.
? | The preceding character may occur once or not at all, i.e., it is optional. | ab? matches a and ab, but not abb.

The above table includes the most commonly used regular expression symbols. Regular expressions are also very useful on the command line, especially while deleting, locating, renaming, or moving files.
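
If you want to try these symbols quickly, Python’s built-in re module behaves exactly as described in the table; the strings below are just made-up examples.

import re

# * : the preceding character occurs zero or more times
print(re.fullmatch(r"ab*", "a"))        # matches (zero b's)
print(re.fullmatch(r"ab*", "abbb"))     # matches (three b's)

# + : the preceding character occurs at least once
print(re.fullmatch(r"a+", "aaa"))       # matches
print(re.fullmatch(r"a+", "b"))         # None (no 'a' at all)

# ? : the preceding character is optional (zero or one occurrence)
print(re.fullmatch(r"ab?", "ab"))       # matches
print(re.fullmatch(r"ab?", "abb"))      # None (more than one 'b')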

Anyway, for this implementation, we will mainly be using * and ?. Feel free to look at the above table to familiarize yourself with these symbols!

We will be performing chunking using nltk, the most popular NLP library. So, let us first import it.

import nltk

Let’s consider the below sample text which I created on my own. Feel free to replace the below with any sample text you like to implement chunking!

sample_text="""
Rama killed Ravana to save Sita from Lanka.The legend of the Ramayan is the most popular Indian epic.A lot of movies and serials have already
been shot in several languages here in India based on the Ramayana.
"""

Clearly, the data has to be sentence tokenized and then word tokenized before we proceed. Tokenization is nothing but the process of breaking down the given piece of text into smaller units: sentences, in the case of sentence tokenization, and words, in the case of word tokenization.
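
For instance, a minimal sketch of these two tokenization steps on our sample text could look like this (the punkt resource is what both NLTK tokenizers rely on):

import nltk
# nltk.download('punkt')   # required once for sent_tokenize and word_tokenize

sentences = nltk.sent_tokenize(sample_text)   # sentence tokenization
print(sentences)                              # the three sentences of the sample text
print(nltk.word_tokenize(sentences[0]))       # word tokenization of the first sentence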

After tokenization, POS (part-of-speech) tagging is done for each word, identifying the part of speech of every word. Now, we are interested only in the verb parts of speech and wish to extract them.
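
As a quick illustration, tagging the first sentence could produce something along these lines (the exact tags depend on the tagger version):

# nltk.download('averaged_perceptron_tagger')   # required once for pos_tag
words = nltk.word_tokenize("Rama killed Ravana to save Sita from Lanka.")
print(nltk.pos_tag(words))
# e.g. [('Rama', 'NNP'), ('killed', 'VBD'), ('Ravana', 'NNP'), ('to', 'TO'),
#       ('save', 'VB'), ('Sita', 'NNP'), ('from', 'IN'), ('Lanka', 'NNP'), ('.', '.')]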

Hence, specify the part-of-speech of our interest using the required regular expression as follows:

VB: {<VB.?>*}

tokenized=nltk.sent_tokenize(sample_text)   # split the text into sentences
for i in tokenized:
  words=nltk.word_tokenize(i)               # split each sentence into words
  # print(words)
  tagged_words=nltk.pos_tag(words)          # POS-tag every word
  # print(tagged_words)
  chunkGram=r"""VB: {<VB.?>*}"""            # chunk grammar for verb phrases
  chunkParser=nltk.RegexpParser(chunkGram)
  chunked=chunkParser.parse(tagged_words)
  chunked.draw()                            # draw the chunk tree in a new window
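
Note that chunked.draw() opens the tree in a separate window (hence the earlier recommendation to use Python IDLE). If you are working in a notebook or on a machine without a display, you can print the chunk tree instead:

print(chunked)            # text form of the chunk tree
# chunked.pretty_print()  # ASCII-art rendering of the same tree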

The regular expression (RE) is enclosed within angle brackets (< and >), which in turn are enclosed within curly brackets ({ and }).

NOTE: Specify the RE according to the required POS

VB stands for the verb POS and is the label we give to the chunk. Inside the angle brackets, the dot following VB matches any single character after VB, and the question mark after the dot makes that character optional: it may occur once or not at all. The trailing * lets the pattern group a run of consecutive verb tags into one chunk. We have framed the regular expression in this manner because, in NLTK, verb phrases include the following POS tags:

POS tag | Meaning
VB | verb in its base form
VBD | verb in its past tense
VBG | verb in its gerund or present participle form
VBN | verb in its past participle form
VBP | verb in its present tense, not in the third person singular
VBZ | verb in its present tense, in the third person singular

Thus, verbs can carry any of the above POS tags. That’s why the regular expression is framed as <VB.?>, which covers all of the above categories. The RegexpParser class is used to check whether the POS tags satisfy the pattern we have specified with the RE above.
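
To see how RegexpParser applies this grammar, here is a tiny self-contained sketch with hand-written tags (so the result does not depend on the tagger at all):

import nltk

grammar = r"VB: {<VB.?>*}"
parser = nltk.RegexpParser(grammar)

# Hand-tagged tokens from the first sentence of our sample text
tagged = [("Rama", "NNP"), ("killed", "VBD"), ("Ravana", "NNP"),
          ("to", "TO"), ("save", "VB"), ("Sita", "NNP")]

print(parser.parse(tagged))
# (S Rama/NNP (VB killed/VBD) Ravana/NNP to/TO (VB save/VB) Sita/NNP)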

The entire code can be seen as follows:

import nltk
nltk.download('punkt')                        # needed for sent_tokenize and word_tokenize
nltk.download('averaged_perceptron_tagger')   # needed for pos_tag
sample_text="""
Rama killed Ravana to save Sita from Lanka. The legend of the Ramayan is the most popular Indian epic. A lot of movies and serials have already
been shot in several languages here in India based on the Ramayana.
"""
tokenized=nltk.sent_tokenize(sample_text)
for i in tokenized:
  words=nltk.word_tokenize(i)
  # print(words)
  tagged_words=nltk.pos_tag(words)
  # print(tagged_words)
  chunkGram=r"""VB: {<VB.?>*}"""
  chunkParser=nltk.RegexpParser(chunkGram)
  chunked=chunkParser.parse(tagged_words)
  chunked.draw()

Results

[Figure: chunk trees drawn by chunked.draw() for the sentences of the sample text, with the VB chunks grouped under separate nodes]

Finally, we obtain a tree of the words and their POS tags, in which the words whose POS matches the given RE are grouped into chunks. A snapshot of the output obtained for our sample text can be seen in the figure above.

Observe that only the words satisfying our RE for verb phrases are grouped into highlighted chunks in the output. Hence, chunking of verb phrases has been performed successfully.

Hope you found my article useful.

Thank You!


About Me

I am Nithyashree V, a final year BTech Computer Science and Engineering student. I love learning such cool technologies and putting them into practice, especially observing how they help us solve society’s challenging problems. My areas of interest include Artificial Intelligence, Data Science, and Natural Language Processing.

Here is my LinkedIn profile: My LinkedIn

You can read my other articles on Analytics Vidhya from here.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
