spaCy is my go-to library for Natural Language Processing (NLP) tasks. I’d venture to say that’s the case for the majority of NLP experts out there!
Among the plethora of NLP libraries these days, spaCy really does stand out on its own. If you’ve used spaCy for NLP, you’ll know exactly what I’m talking about. And if you’re new to the power of spaCy, you’re about to be enthralled by how multi-functional and flexible this library is.
The factors that work in favor of spaCy are its rich feature set, its ease of use, and the fact that the library is always kept up to date.
This tutorial is a crisp and effective introduction to spaCy and the various NLP features it offers. I’d advise you to go through the below resources if you want to learn about the various aspects of NLP:
- Certified Natural Language Processing (NLP) Course
- Ines Montani and Matthew Honnibal – The Brains behind spaCy
- Introduction to Natural Language Processing (Free Course!)
Getting Started with spaCy
If you are new to spaCy, there are a couple of things you should be aware of:
- spaCy’s Statistical Models
- spaCy’s Processing Pipeline
Let’s discuss each one in detail.
spaCy’s Statistical Models
These models are the power engines of spaCy. They enable spaCy to perform several NLP-related tasks, such as part-of-speech tagging, named entity recognition, and dependency parsing.
I’ve listed below the different statistical models in spaCy along with their specifications:
- en_core_web_sm: English multi-task CNN trained on OntoNotes. Size – 11 MB
- en_core_web_md: English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. Size – 91 MB
- en_core_web_lg: English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. Size – 789 MB
Importing these models is super easy. We can load a model by just executing spacy.load('model_name') as shown below:

```python
import spacy

nlp = spacy.load('en_core_web_sm')
```
spaCy’s Processing Pipeline
When working with spaCy, the first step for a text string is to pass it to an NLP object. This object is essentially a pipeline of several text pre-processing operations through which the input text string has to go.
As you can see in the figure above, the NLP pipeline has multiple components, such as tokenizer, tagger, parser, ner, etc. So, the input text string has to go through all these components before we can work on it.
Let me show you how we can create an nlp object:
You can use the below code to figure out the active pipeline components:
Output: ['tagger', 'parser', 'ner']
In case you wish to keep only the tokenizer up and running, you can use the code below to disable the remaining pipeline components:
Let’s again check the active pipeline component:
If you only need to tokenize the text, you can disable the entire pipeline; tokenization then becomes really fast. For example, you can disable multiple components of a pipeline using a single line of code:
spaCy in Action
1. Part-of-Speech (POS) Tagging using spaCy
In English grammar, the parts of speech tell us what the function of a word is and how it is used in a sentence. Some of the common parts of speech in English are noun, pronoun, adjective, verb, and adverb.
POS tagging is the task of automatically assigning POS tags to all the words of a sentence. It is helpful in various downstream tasks in NLP, such as feature engineering, language understanding, and information extraction.
Performing POS tagging in spaCy is a cakewalk:
He --> PRON
went --> VERB
to --> PART
play --> VERB
basketball --> NOUN
So, the model has correctly identified the POS tags for all the words in the sentence. In case you are not sure about any of these tags, you can simply use spacy.explain() to figure it out:
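For instance, to look up the PART tag printed above:

```python
import spacy

# spacy.explain() maps a tag or label to a short human-readable description
print(spacy.explain('PART'))  # particle
```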
2. Dependency Parsing using spaCy
Every sentence has a grammatical structure, and with the help of dependency parsing we can extract this structure. It can be thought of as a directed graph, where the nodes correspond to the words in the sentence and the edges to the dependencies between the words.
Performing dependency parsing is again pretty easy in spaCy. We will use the same sentence here that we used for POS tagging:
He --> nsubj
went --> ROOT
to --> aux
play --> advcl
basketball --> dobj
You can look these dependency tags up with spacy.explain() as well:

```python
spacy.explain("nsubj"), spacy.explain("ROOT"), spacy.explain("aux"), spacy.explain("advcl"), spacy.explain("dobj")
```

Output: ('nominal subject', None, 'auxiliary', 'adverbial clause modifier', 'direct object')
3. Named Entity Recognition using spaCy
Let’s first understand what entities are. Entities are words or groups of words that represent real-world objects, such as persons, locations, and organizations. These entities usually have proper names.
For example, consider a sentence such as “Donald Trump visited the Google office in New York City”.
In this sentence, the entities are “Donald Trump”, “Google”, and “New York City”.
Let’s now see how spaCy recognizes named entities in a sentence.
Output: over $71 billion MONEY
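Entity labels can be looked up with spacy.explain() too, just like POS and dependency tags:

```python
import spacy

# Look up an unfamiliar entity label such as NORP
print(spacy.explain('NORP'))
```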
Output: ‘Nationalities or religious or political groups’
4. Rule-Based Matching using spaCy
Rule-based matching is a new addition to spaCy’s arsenal. With this spaCy matcher, you can find words and phrases in the text using user-defined rules.
It is like Regular Expressions on steroids.
While Regular Expressions use only text patterns to find words and phrases, the spaCy matcher can also use lexical properties of the tokens, such as POS tags, dependency labels, and lemmas.
Let’s see how it works:
So, in the code above:
- First, we import the spaCy matcher
- After that, we initialize the matcher object with the default spaCy vocabulary
- Then, we pass the input text to the nlp object as usual
- In the next step, we define the rule/pattern for what we want to extract from the text.
Let’s say we want to extract the phrase “lemon water” from the text. So, our objective is that whenever “lemon” is followed by the word “water”, then the matcher should be able to find this pattern in the text. That’s exactly what we have done while defining the pattern in the code above. Finally, we add the defined rule to the matcher object.
Now let’s see what the matcher has found out:
```python
# Each match is a (match_id, start, end) tuple; slicing the Doc gives the span
matches = matcher(doc)
for match_id, start, end in matches:
    print(doc[start:end])
```

Output: lemon water
So, the pattern is a list of token attributes. For example, ‘TEXT’ is a token attribute that matches on the exact text of the token. There are, in fact, many other useful token attributes in spaCy which can be used to define a variety of rules and patterns.
I’ve listed below the token attributes:

| Attribute | Type | Description |
| --- | --- | --- |
| `ORTH` | unicode | The exact verbatim text of a token. |
| `TEXT` | unicode | The exact verbatim text of a token. |
| `LOWER` | unicode | The lowercase form of the token text. |
| `LENGTH` | int | The length of the token text. |
| `IS_ALPHA`, `IS_ASCII`, `IS_DIGIT` | bool | Token text consists of alphabetic characters, ASCII characters, digits. |
| `IS_LOWER`, `IS_UPPER`, `IS_TITLE` | bool | Token text is in lowercase, uppercase, titlecase. |
| `IS_PUNCT`, `IS_SPACE`, `IS_STOP` | bool | Token is punctuation, whitespace, stop word. |
| `LIKE_NUM`, `LIKE_URL`, `LIKE_EMAIL` | bool | Token text resembles a number, URL, email. |
| `POS`, `TAG`, `DEP`, `LEMMA`, `SHAPE` | unicode | The token’s simple and extended part-of-speech tag, dependency label, lemma, shape. |
| `ENT_TYPE` | unicode | The token’s entity label. |
Let’s see another use case of the spaCy matcher. Consider the two sentences below:
- You can read this book
- I will book my ticket
Now we are interested in finding whether a sentence contains the word “book” in it or not. Seems pretty straightforward, right? But here is the catch: we have to find the word “book” only if it has been used in the sentence as a noun.
In the first sentence above, “book” has been used as a noun and in the second sentence, it has been used as a verb. So, the spaCy matcher should be able to extract the pattern from the first sentence only. Let’s try it out:
```python
# "book" is a noun here, so the matcher finds the pattern
matches = matcher(doc1)
matches

# "book" is a verb here, so no match is found
matches = matcher(doc2)
matches
```
This was a quick introduction to give you a taste of what spaCy can do. Trust me, you will find yourself using spaCy a lot for your NLP tasks. I encourage you to play around with the code, take up a dataset from DataHack and try your hand at it using spaCy.
And if you’re completely new to NLP and the various tasks you can do, I’ll again suggest going through the below comprehensive course: