The DataHour Synopsis: Text-Based Classification Using AI

Ankita Acharya 16 May, 2022 • 6 min read

Introduction

Analytics Vidhya has long been at the forefront of imparting data science knowledge to its community. With the intent to make learning data science more engaging for the community, we began our new initiative, “DataHour”.

DataHour is a series of webinars by top industry experts where they teach and democratize data science knowledge. On 23rd April 2022, we were joined by Ms. Ria Nag for a DataHour session on “Text-Based Classification Using Artificial Intelligence (AI)”.

Ria is a leader and a mentor in Data Science and Machine Learning. She has been working for the past three years at Oracle as a data scientist, leading the end-to-end development and launch of Oracle Construction Intelligent Cloud Services, a new suite of artificial intelligence (AI) and analytics applications that enable informed project decisions in the engineering and construction industry. She holds multiple US patents in the field of machine learning and NLP and has been a recipient of Analytics India Magazine's 40 Under 40 Data Scientists award.

 

Here is the recording of the session:

 

Are you excited to dive deeper into the world of Artificial Intelligence? We’ve got you covered. Here are the major highlights of this session: Text-Based Classification Using Artificial Intelligence.

Introduction

This session on Text-Based Classification Using Artificial Intelligence will focus on:

  • What is text-based classification (and that too, using AI)?
  • A real-life use case where we will apply this methodology (text-based classification).

 

Let’s first look into Text Classification:

Text classification is a machine learning technique that assigns a set of predefined categories to open-ended text. It is one of the fundamental tasks in Natural Language Processing (NLP), with broad applications such as sentiment analysis, topic labeling, spam detection, and intent detection. An open-ended text input is provided to the text classification model, the model labels the text with a particular category, and the result is shown in the UI (User Interface) as an output, with a category associated with every text.

The next thing to learn is:

Why use AI for text classification?

Manual classification is time-consuming and expensive: a person has to go through every text record and decide which label or category to assign to it. This becomes very costly in the long run, and it is also not scalable.

In contrast, AI-based classification is scalable, consistent, and faster, so this method is preferable for classifying text.

There are three types of AI-based text classification:

  • Rule-based systems 
  • Machine learning-based systems  
  • Hybrid systems 

 

Rule-based systems: A set of rules determines how the algorithm classifies an open-ended text into a particular category or label.

Machine learning-based systems: Here, you build a classifier and train it on a labeled dataset; the trained classifier then labels every text record.

Hybrid Systems: This is a combination of both rule-based and machine learning-based systems.
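To make the distinction concrete, here is a minimal sketch of a rule-based classifier in Python. The keyword rules and category names are purely illustrative, not from the session.

```python
# Hypothetical keyword rules mapping each category to indicative words.
RULES = {
    "sports": ["match", "tournament", "goal", "league"],
    "politics": ["election", "parliament", "minister", "policy"],
}

def rule_based_classify(text: str) -> str:
    """Assign the category whose keywords appear most often in the text."""
    tokens = text.lower().split()
    scores = {label: sum(tokens.count(word) for word in words)
              for label, words in RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(rule_based_classify("The minister announced a new election policy"))  # -> politics
```

A machine learning-based system replaces these hand-written rules with a classifier learned from labeled examples, as shown later in the use case.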

Popular Datasets for Training a Classifier

There are a few datasets you can consider for training a classifier:

(A) Topic classification: In this, the classifier assigns a particular topic to every record. Here you can use: 

  • Reuters news dataset: This is probably one of the most widely used datasets for text classification; it contains 21,578 news articles from Reuters, labeled with 135 categories according to their topics, such as politics, economics, sports, and business.
  • 20 Newsgroups: Another popular dataset that consists of ~20,000 documents across 20 different topics. 

(B) Sentiment analysis: In this, you’ll build classifiers for sentiment or star analysis. For this, you can use:

  • Amazon Product Reviews: a well-known dataset that contains ~143 million reviews and star ratings (1 to 5 stars) spanning May 1996 to July 2014. You can get an alternative dataset for Amazon product reviews here.
  • IMDB reviews: a much smaller dataset with 25,000 movie reviews labeled as positive or negative from the Internet Movie Database (IMDB).
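As a quick way to get started with one of the datasets above, scikit-learn ships a loader for 20 Newsgroups. A minimal sketch (the preprocessing options shown are just one reasonable choice):

```python
from sklearn.datasets import fetch_20newsgroups

# Downloads (on first use) the training split of the 20 Newsgroups dataset.
train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

print(len(train.data))         # roughly 11,000 training documents
print(train.target_names[:5])  # a few of the 20 topic labels
print(train.data[0][:200])     # start of the first document
```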

 

What are the Most Popular Programming Languages for Machine Learning?

These are:

  • Python: Libraries like scikit-learn, NLTK, and spaCy, along with deep learning libraries (like Keras, TensorFlow, and PyTorch), are popularly used for building classifiers that assign text to different categories.
  • R: This is another programming language used for ML. Its most used libraries include caret, text2vec, etc.
  • Java: Some ML experts use this language also.

It’s totally up to you which language you want to use for building your classifiers.
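For instance, a minimal end-to-end text classifier in Python with scikit-learn (a sketch with a tiny made-up corpus, not the speaker's setup) might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hypothetical labeled corpus, just to show the workflow.
texts = [
    "Great product, works exactly as described",
    "Terrible quality, broke after one day",
    "Absolutely love it, five stars",
    "Waste of money, very disappointed",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features followed by a linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Terrible, very disappointed, it broke quickly"]))  # expected: ['negative']
```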

USE CASE

Application of ML to identify Health and Safety Risks in the Construction Industry

Note: You can go through this case study in the Lattice journal. Here is the link to the journal: Lattice - Association of Data Scientists.

The focus or problem statement here is:

Health and Safety issues and incidents that occur in the construction industry lead to:

  •  Budget and cost overruns
  •  Delays in the project delivery schedule
  •  Harm to workers, organizations, society, and countries

So, it’s important to mitigate these issues at an early stage to reduce the risk of major accidents later on. For this, we’ll develop a solution that identifies any health and safety issues associated with open-ended text data as they arise. We’ll build a classifier that detects a health and safety issue at the first instance itself, consequently reducing the chances of major accidents in the future.

The solution for the same can be summed up as:

  • For this, we have applied NLP-based state-of-the-art Machine Learning (ML) models to classify text data from textual construction injury reports as well as correspondence data between construction project participants. 
  • The health and safety risk detection subsystem can predict whether the text data is associated with (any impending) risks with high accuracy.

 

The approach followed in the use case:

CRISP-DM (Cross-Industry Standard Process for Data Mining), which involves:

  •  Preparation of labeled text dataset for training and testing 
  •  Data preparation and cleaning
  •  Model building 
  •  Model evaluation

The CRISP-DM process

Source: Ms. Ria Nag’s presentation

Note: It’s a case-by-case decision; for different classifiers, you need to use different kinds of evaluation metrics.
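For example, with an imbalanced risk/non-risk dataset, accuracy alone can be misleading, so per-class precision, recall, and F1 are more informative. A minimal sketch with scikit-learn, using made-up labels and predictions:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical ground-truth labels and model predictions (1 = risk, 0 = non-risk).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=["non-risk", "risk"]))
```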

 

The Method Used in the Use-case:

Data Preparation

  • Here we prepare a labeled dataset of over 40,000 health-and-safety-risk-related text records and about 6,000 non-risk-related text records.
  • We’ll clean the correspondence text of each record by removing stop words, punctuation, numbers, and HTML tags; all words are stemmed to their root form and converted to lowercase.
  • We’ll vectorize each correspondence into a vector of 7k features using Gensim’s Doc2Vec in Python, producing a document embedding matrix in which each row represents a unique correspondence and each column represents a feature in the vector space. We do this because text data that is similar appears close together in the vector space, while text data that is not similar appears far apart (a rough sketch follows the note below).

Note: We need to vectorize the data because we can’t feed raw text directly to the classifier; we must convert it into a numerical dataset. There are two methods of doing this:

  1. Term frequency-inverse document frequency (TF-IDF) matrix
  2. Document embedding (here, we have used this one)
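Here is a rough sketch of the cleaning and document-embedding steps described above, using Gensim’s Doc2Vec. The sample texts are made up, stop-word removal and stemming are omitted for brevity, and the vector size is kept small for the toy example (the use case reportedly uses about 7k features).

```python
import re
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical correspondence records, just to illustrate the workflow.
raw_texts = [
    "<p>Worker reported a loose scaffold railing on level 3.</p>",
    "Please share the updated delivery schedule for the concrete supplier.",
]

def clean(text: str) -> str:
    """Remove HTML tags, punctuation, and numbers; lowercase the rest."""
    text = re.sub(r"<[^>]+>", " ", text)      # strip HTML tags
    text = re.sub(r"[^a-zA-Z\s]", " ", text)  # strip punctuation and numbers
    return text.lower()

docs = [TaggedDocument(words=clean(t).split(), tags=[i]) for i, t in enumerate(raw_texts)]

# Train Doc2Vec and build the document embedding matrix: one row per correspondence.
model = Doc2Vec(documents=docs, vector_size=100, window=5, min_count=1, epochs=40)
embedding_matrix = [model.dv[i] for i in range(len(docs))]

print(len(embedding_matrix), len(embedding_matrix[0]))  # 2 documents x 100 features
```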

Training and testing: Here, we use 90% of the data for training and 10% for testing, with the classes kept in the same ratio in both the training and testing datasets. We have developed three machine learning (ML) models based on the training dataset, which is the scaled document embedding matrix. Scaling is needed so that all features are on the same scale, and none of them has a disproportionate impact on the final model simply because of differences in scale.
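A sketch of the stratified 90/10 split and feature scaling with scikit-learn; the embedding matrix and labels below are random placeholders standing in for the real Doc2Vec output.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder embedding matrix and labels (1 = risk, 0 = non-risk), roughly imbalanced like the use case.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = rng.choice([0, 1], size=1000, p=[0.13, 0.87])

# 90/10 split, stratified so both sets keep the same class ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42
)

# Scale features so that no feature dominates purely because of its magnitude.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```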

The scaled document embedding matrix was used for building three base classifiers:

  • Logistic regression with L1 regularization: L1 regularization performs automatic feature selection because it shrinks the coefficients of the less contributive variables to zero.
  • Gradient boosting using XGBoost.
  • Random forest classifier: This relies on the features that cause the largest change in the Gini index or entropy.

Ensemble Majority Voting classification:

This is an ensemble classifier constructed over the three base classifiers discussed above; the base classifiers come from different theoretical backgrounds to avoid bias and redundancy. Based on majority voting across the base classifiers, the ensemble model predicts whether a record has an associated health and safety issue or not.

The ensemble model predicts a text record to be at risk if any two of the three base classifiers classify the record as a health and safety risk.
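Putting the pieces together, here is a rough sketch of the three base classifiers combined with hard (majority) voting in scikit-learn. It assumes the xgboost package is installed; the hyperparameters and the placeholder data are illustrative, not the values used in the actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Placeholder scaled embeddings and labels (in practice: the scaled Doc2Vec matrix from earlier).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(900, 50)), rng.choice([0, 1], size=900)
X_test = rng.normal(size=(100, 50))

# Three base classifiers from different theoretical families.
log_reg = LogisticRegression(penalty="l1", solver="liblinear")  # L1 shrinks weak coefficients to zero
xgb = XGBClassifier(n_estimators=300, learning_rate=0.1)
rf = RandomForestClassifier(n_estimators=300)

# Hard (majority) voting: a record is flagged as a health and safety risk
# only if at least two of the three base classifiers classify it as a risk.
ensemble = VotingClassifier(
    estimators=[("lr", log_reg), ("xgb", xgb), ("rf", rf)],
    voting="hard",
)
ensemble.fit(X_train, y_train)
y_pred = ensemble.predict(X_test)
```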

Result

Summary of the use case
Source: Ms. Ria Nag’s presentation

After deploying the best ensemble model, the predictions are made available to the customer in the form of dashboards, warnings, recommendations, etc., which explain the risks that occurred over a certain period. It is then up to the customer whether to agree with the model’s predictions or not; customers can add or correct tags according to their understanding of each case, creating a new set of labels. This acts as a check and creates a feedback loop in the system, and the model keeps updating with these changes until we arrive at the best predictions/results.

Future Case Requirements for Using AI:

  • Classify each health and safety risk email by the intensity of the risk into high, medium, and low (for example, a sudden fire).
  • We would also like to develop classifiers that would identify other types of risks associated with the construction industry using NLP-based state-of-the-art Machine Learning (ML) models. 

Conclusion

I hope you have thoroughly understood what text-based classification is and how to use AI for it. The use case should have made the concepts clearer and simpler for you. Try applying them in real-world scenarios according to your understanding.

Hope to see you there.

