A Brief Guide to Emotion Cause Pair Extraction in NLP

Nivetha B 21 Nov, 2022 • 7 min read

Introduction

With the rapid growth of social network platforms, more and more people share their experiences and emotions online, so emotion analysis of online texts has become a crucial task in Natural Language Processing. Sometimes it is also important to know the cause of an observed emotion. You may be wondering why we need to identify the cause of a particular emotion. Consider, for instance, that Samsung wants to know why people love or hate the Note 7, rather than merely the distribution of different emotions. In such situations, one must understand the cause behind the emotion.
As the name suggests, this task of extracting an emotion together with its cause is known as ECPE (Emotion Cause Pair Extraction). ECPE is an emergent natural language processing task that builds on ECE (Emotion Cause Extraction), which requires annotating the emotions in a document before extracting their causes. ECE and ECPE are thus two different approaches to extracting emotions and their causes. Without further delay, let's get into the details of ECE and ECPE.

Figure 1: Example of Emotion Cause Pair Extraction

Figure 1 depicts the general working of the ECPE task. A document is made up of clauses, so the first step is to number the clauses in the document before identifying the emotion and its cause; ECPE can therefore be considered a clause-level task. Figure 1 shows an example statement: "Last week, I lost my phone while shopping; I feel sad now." The first clause refers to last week, the second to losing the phone while shopping, and the third to how the person feels now. Once the numbering is complete, the next step is to extract the emotion and its cause: the person's current emotion is sadness, and the cause of that sadness is losing the phone while shopping. I hope this illustration explains ECPE in general. Without further ado, let's get deeper into the subject.
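
To make the task's input and output concrete, here is a toy Python illustration of the Figure 1 example; the clause numbering and the single (emotion, cause) pair simply mirror the figure and are not output from any real model.

```python
# Toy illustration of the ECPE input/output for the Figure 1 sentence.
# Clause indices are 1-based, matching the numbering used in the figure.
document = {
    1: "Last week",
    2: "I lost my phone while shopping",
    3: "I feel sad now",
}

# ECPE output: a set of (emotion clause, cause clause) index pairs.
# Here the emotion "sad" is expressed in clause 3 and caused by clause 2.
emotion_cause_pairs = {(3, 2)}

for emo_idx, cause_idx in emotion_cause_pairs:
    print(f"emotion clause: '{document[emo_idx]}'  <-  cause clause: '{document[cause_idx]}'")
```
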

What is the difference between ECE and Emotion Cause Pair Extraction?

Before getting into ECPE, let's first understand the earlier approach, ECE, and its drawbacks. ECE aims to extract the possible causes for given emotion clauses, and it requires the emotion signals to be annotated in the document. To get a clear idea about ECE and ECPE, let's look at the comparison example depicted in Figure 2.

Figure 2: Difference between ECE and Emotion Cause Pair Extraction

The document in Figure 2 contains the statement, “Yesterday morning, a policeman visited the old man with the lost money, and told him that the thief was caught. The old man was very happy and deposited the money in the bank.”, with “yesterday morning” as the first clause, “a policeman visited the old man with the lost money” as the second clause, and so on. In ECE, the emotions are already annotated and the task is to extract the corresponding cause clauses: here the emotion “happy” is marked in the document, and the goal is to identify the cause clauses associated with it. In ECPE, by contrast, the emotion “happy” is not annotated, so the task is to extract both the emotion clause and its cause clause.

Shortcomings of ECE

You may be wondering what ECE’s limitations are now that you know the distinctions between ECE and ECPE; here are two major shortcomings:

  • First, emotions must be annotated before cause extraction in the test set, which limits ECE’s applicability in real-world scenarios.
  • Second, annotating the emotion first and then extracting the causes ignores the fact that emotion and cause are mutually indicative.

These are the two main justifications for the new ECPE approach.

Workflow of Emotion Cause Pair Extraction

Let’s first look at the definition of the emotion-cause pair extraction task. Given a document consisting of multiple clauses d = [c1, c2, …, c|d|], the goal of ECPE is to extract a set of emotion-cause pairs in d:

P = {…, (c^e, c^c), …}

where c^e is an emotion clause and c^c is the corresponding cause clause. Having understood the problem statement, let’s get into the approach to the ECPE task.

The ECPE task has been proposed as a two-step framework, which performs individual emotion extraction and cause extraction and then performs emotion-cause pairing and filtering. Now, the goal of the first step is to extract a set of emotion clauses E = {c1e, …, cme} and a set of cause clauses C = {c1c, …, cnc} for each document. The next step is to pair the emotion set E and the cause set C by applying a Cartesian product to them, yielding a set of emotion-cause pairs. Then a filter is trained to eliminate the pairs that do not contain a causal relationship between the emotion and the cause. This is how ECPE operates as a whole.
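
As an illustration of this second step, here is a minimal Python sketch of the pairing-and-filtering logic; `is_causal_pair` is a hypothetical stand-in for the trained filter, which in practice is a classifier over features of each candidate pair, and the clause indices are assumed to come from step one.

```python
from itertools import product

# Minimal sketch of step 2 of the two-step ECPE framework.
# `emotion_clauses` and `cause_clauses` are assumed outputs of step 1
# (emotion extraction and cause extraction); `is_causal_pair` stands in
# for the trained pair filter.
def extract_pairs(emotion_clauses, cause_clauses, is_causal_pair):
    # Cartesian product: every emotion clause paired with every cause clause.
    candidate_pairs = product(emotion_clauses, cause_clauses)
    # Keep only the candidates the filter judges to be genuine emotion-cause pairs.
    return [(e, c) for e, c in candidate_pairs if is_causal_pair(e, c)]

# Example with a trivial stand-in filter that accepts adjacent clauses only.
emotions = [3]        # clause indices predicted as emotion clauses
causes = [1, 2]       # clause indices predicted as cause clauses
pairs = extract_pairs(emotions, causes, lambda e, c: abs(e - c) == 1)
print(pairs)          # [(3, 2)]
```
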

A Brief Overview of the Approach

Having discussed the workflow of ECPE, let’s now get into its approach. As already mentioned, a two-step approach has been proposed to extract the emotion-cause pairs: the first step extracts a set of emotions and a set of causes, and the second performs emotion-cause pairing and filtering. For the first step, two kinds of multi-task learning networks, Independent Multi-task Learning and Interactive Multi-task Learning, can be used. Let’s explore these approaches in more detail.

Independent Multi-task Learning

A document is a collection of clauses d = [c1, c2, …, c|d|]. A clause ci, in turn, is a collection of words ci = [wi,1, wi,2, …, wi,|ci|]. A hierarchical Bi-LSTM network with two layers is employed to capture this word-clause-document structure. The lower layer consists of a set of word-level Bi-LSTM modules, one per clause, which gather contextual information for each word: the hidden state hi,j of the jth word in the ith clause is obtained from a bi-directional LSTM, and an attention mechanism is then applied to obtain a clause representation si. This is all about the lower layer.
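
To make the lower layer concrete, here is a minimal PyTorch sketch (not the authors’ code) of a word-level Bi-LSTM with attention producing a clause representation si; the dimensions, the simple linear attention scorer, and the `ClauseEncoder` name are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the lower (word-level) layer: a Bi-LSTM over the words of one
# clause followed by attention, producing a clause representation s_i.
class ClauseEncoder(nn.Module):
    def __init__(self, embed_dim=100, hidden_dim=100):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # scores each word's hidden state

    def forward(self, word_embeddings):
        # word_embeddings: (batch, num_words, embed_dim)
        h, _ = self.bilstm(word_embeddings)          # (batch, num_words, 2*hidden_dim)
        scores = torch.softmax(self.attn(h), dim=1)  # attention weights over words
        s = (scores * h).sum(dim=1)                  # clause representation s_i
        return s

# One clause of 5 words, given as random embeddings, batch size 1.
s_i = ClauseEncoder()(torch.randn(1, 5, 100))
print(s_i.shape)  # torch.Size([1, 200])
```
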
Now, moving on to the upper layer: it comprises two components, one for emotion extraction and one for cause extraction. The outputs of the lower layer, i.e., the independent clause representations [s1, s2, …, s|d|], are fed as input to the upper layer.

Figure 3: Model for Independent Multi-task Learning (Indep)

Each component is a clause-level Bi-LSTM, and its hidden states ri^e and ri^c are context-aware representations of clause ci. These hidden states are then fed to softmax layers, which predict whether each clause is an emotion clause and whether it is a cause clause:

ŷi^e = softmax(We · ri^e + be),   ŷi^c = softmax(Wc · ri^c + bc)

The superscripts e and c denote emotion and cause, respectively. The loss of the model is then a weighted sum of two components:

L = λ · Le + (1 − λ) · Lc

where Le and Lc are the cross-entropy errors of emotion prediction and cause prediction, respectively, and λ is a tradeoff parameter.
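
Below is a hedged PyTorch sketch of the Indep upper layer and its weighted loss, assuming a binary yes/no label per clause; the class name `IndepUpperLayer`, the hidden sizes, and λ = 0.5 are illustrative choices rather than the paper’s settings.

```python
import torch
import torch.nn as nn

# Two independent clause-level Bi-LSTMs, one for emotion prediction and one
# for cause prediction, each followed by a linear classifier (softmax is
# applied inside the cross-entropy loss).
class IndepUpperLayer(nn.Module):
    def __init__(self, clause_dim=200, hidden_dim=100):
        super().__init__()
        self.emo_lstm = nn.LSTM(clause_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.cause_lstm = nn.LSTM(clause_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.emo_fc = nn.Linear(2 * hidden_dim, 2)    # emotion clause: yes / no
        self.cause_fc = nn.Linear(2 * hidden_dim, 2)  # cause clause: yes / no

    def forward(self, clause_reprs):
        # clause_reprs: (batch, num_clauses, clause_dim) = [s_1, ..., s_|d|]
        r_e, _ = self.emo_lstm(clause_reprs)
        r_c, _ = self.cause_lstm(clause_reprs)
        return self.emo_fc(r_e), self.cause_fc(r_c)

model = IndepUpperLayer()
emo_logits, cause_logits = model(torch.randn(1, 4, 200))  # a document of 4 clauses

# Weighted sum of the two cross-entropy losses: L = lam * Le + (1 - lam) * Lc.
lam = 0.5
criterion = nn.CrossEntropyLoss()
emo_labels, cause_labels = torch.tensor([[0, 0, 0, 1]]), torch.tensor([[0, 1, 0, 0]])
loss = lam * criterion(emo_logits.view(-1, 2), emo_labels.view(-1)) \
       + (1 - lam) * criterion(cause_logits.view(-1, 2), cause_labels.view(-1))
print(loss.item())
```
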

Interactive Multi-task Learning

In interactive multi-task learning, unlike independent multi-task learning, the two components in the upper layer are not independent of one another. Since emotion extraction and cause extraction are not mutually independent, modelling this interaction gives better results than independent multi-task learning: on the one hand, knowing the emotions helps better discover the causes; on the other hand, knowing the causes helps extract the emotions more accurately.

Two models for Interactive Multi-task Learning

Two methods are proposed: Inter-EC, which uses emotion extraction to improve cause extraction, and Inter-CE, which uses cause extraction to improve emotion extraction. Figure 4 depicts the two models proposed for interactive multi-task learning. The lower layer works in the same way as in independent multi-task learning, while the upper layer comprises two components that predict emotions and causes interactively.
Looking at Inter-EC in detail, the outputs of the lower layer, the independent clause representations [s1, s2, …, s|d|], are fed as input to the first component for emotion extraction. The hidden state ri^e of the clause-level Bi-LSTM is used as a feature to predict the emotion distribution ŷi^e of the ith clause. The predicted label of the ith clause is then embedded as a vector Yi^e, which is passed to the second component. For cause extraction, the second component takes si ⊕ Yi^e as input, where ⊕ represents the concatenation operation. The hidden state ri^c of its clause-level Bi-LSTM is used to predict the cause distribution ŷi^c of the ith clause. The loss function is the same as that used for independent multi-task learning.
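
The following PyTorch sketch illustrates the Inter-EC interaction just described: the predicted emotion label of each clause is embedded and concatenated with si before the cause-extraction Bi-LSTM. The names and dimensions (`InterEC`, `label_dim`, and so on) are illustrative assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn

# Inter-EC sketch: emotion extraction first, then its predicted labels feed
# the cause-extraction component.
class InterEC(nn.Module):
    def __init__(self, clause_dim=200, hidden_dim=100, label_dim=50):
        super().__init__()
        self.emo_lstm = nn.LSTM(clause_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.emo_fc = nn.Linear(2 * hidden_dim, 2)
        self.label_embed = nn.Embedding(2, label_dim)      # embeds the predicted label Y_i^e
        self.cause_lstm = nn.LSTM(clause_dim + label_dim, hidden_dim,
                                  bidirectional=True, batch_first=True)
        self.cause_fc = nn.Linear(2 * hidden_dim, 2)

    def forward(self, clause_reprs):
        # Component 1: emotion extraction.
        r_e, _ = self.emo_lstm(clause_reprs)
        emo_logits = self.emo_fc(r_e)
        y_e = self.label_embed(emo_logits.argmax(dim=-1))  # Y_i^e for each clause
        # Component 2: cause extraction, conditioned on the emotion predictions.
        cause_in = torch.cat([clause_reprs, y_e], dim=-1)  # s_i (+) Y_i^e
        r_c, _ = self.cause_lstm(cause_in)
        return emo_logits, self.cause_fc(r_c)

emo_logits, cause_logits = InterEC()(torch.randn(1, 4, 200))
print(emo_logits.shape, cause_logits.shape)  # (1, 4, 2) (1, 4, 2)
```

Note that embedding the argmax of the emotion logits is a simplification for the sketch; the idea it demonstrates is simply that the emotion predictions become extra input features for cause extraction.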


Conclusion

This blog has been a tutorial providing a basic overview of Emotion Cause Pair Extraction. To summarise, we first discussed what Emotion Cause Pair Extraction is, the earlier ECE approach, why ECPE is needed, and how it works. There are other techniques for extracting emotions and causes, such as Graph Convolutional Networks, but the two-step approach covered in this article is the most popular.
