
Guide to AUC ROC Curve in Machine Learning

Aniruddha Bhandari 30 Sep, 2024
11 min read

Introduction

You’ve built your machine learning model – so what’s next? You need to evaluate and validate how good (or bad) it is, so you can decide whether to implement it. That’s where the AUC ROC curve comes in.

The name might be a mouthful, but it is just saying that we are calculating the “Area Under the Curve” (AUC) of the “Receiver Operating Characteristic” (ROC). Confused? I feel you! I have been in your shoes. But don’t worry, we will see what these terms mean in detail, and everything will be a piece of cake!


For now, just know that the AUC ROC curve helps us visualize how well our machine learning classifier performs. Although it works only for binary classification problems, we will see how we can extend it to evaluate multi-class classification problems as well.

We’ll also cover sensitivity and specificity, since these are the key concepts behind the ROC curve.

I suggest going through the article on Confusion Matrix as it will introduce some important terms we will use in this article.

What is the AUC-ROC Curve?

You might have come across the terms ROC and AUC associated with artificial intelligence, deep learning, and convolutional neural networks (CNNs). So what exactly are they?

An ROC curve, or receiver operating characteristic curve, is a graph that shows how well a classification model performs. It helps us see how the model's decisions change at different levels of certainty. The curve is built from two quantities: how often the model correctly identifies positive cases (the true positive rate) and how often it mistakenly identifies negative cases as positive (the false positive rate). By looking at this graph, we can understand how good the model is and choose the threshold that gives us the right balance between correct and incorrect predictions.

The Receiver Operating Characteristic (ROC) curve is an evaluation metric for binary classification problems. It is a probability curve that plots the TPR against the FPR at various threshold values and essentially separates the ‘signal’ from the ‘noise.’ In other words, it shows the performance of a classification model at all classification thresholds. The Area Under the Curve (AUC) measures the ability of a binary classifier to distinguish between classes and is used as a summary of the ROC curve.

Area Under the Curve

When AUC = 1, the classifier can correctly distinguish between all the Positive and Negative class points. If, however, the AUC were 0, the classifier would predict all Negatives as Positives and all Positives as Negatives.


When 0.5 < AUC < 1, there is a high chance that the classifier will be able to distinguish the positive class values from the negative ones. This is because the classifier detects more True Positives and True Negatives than False Negatives and False Positives.


When AUC = 0.5, the classifier is not able to distinguish between the Positive and Negative class points, meaning it either predicts a random class or a constant class for all the data points.

So, the higher the AUC value for a classifier, the better its ability to distinguish between the positive and negative classes.
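
As a concrete illustration of what the AUC measures (it equals the probability that a randomly chosen positive point is ranked above a randomly chosen negative one), here is a minimal sketch comparing scikit-learn's roc_auc_score with a direct pairwise computation on some made-up labels and scores:

import numpy as np
from sklearn.metrics import roc_auc_score

# made-up labels and predicted scores, purely for illustration
y_true = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.2, 0.6, 0.4, 0.8, 0.9, 0.1])

# AUC as the fraction of (positive, negative) pairs where the positive
# point is scored higher (ties count as half)
pos = scores[y_true == 1]
neg = scores[y_true == 0]
pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]

print(np.mean(pairs))                 # manual pairwise estimate
print(roc_auc_score(y_true, scores))  # the same value from sklearn

Both prints give the same number, which is exactly this ranking interpretation of the AUC.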

What is ROC Curve?

The ROC (Receiver Operating Characteristic) curve is a graph that shows how good a model is at telling the two classes apart. It helps us see how often the model correctly identifies positive cases and how often it avoids labeling negative cases as positive. In short, it shows how well the model works for binary classification tasks.

Defining the Terms Used in the AUC and ROC Curve

  1. AUC (Area Under the Curve): A single metric representing the overall performance of a binary classification model based on the area under its ROC curve.
  2. ROC Curve (Receiver Operating Characteristic Curve): A graphical plot illustrating the trade-off between True Positive Rate and False Positive Rate at various classification thresholds.
  3. True Positive Rate (Sensitivity): Proportion of actual positives correctly identified by the model.
  4. False Positive Rate: Proportion of actual negatives that the model incorrectly classifies as positives.
  5. Specificity (True Negative Rate): Proportion of actual negatives correctly identified by the model.

What are Sensitivity and Specificity?

This is what a confusion matrix looks like for a binary classifier, with the actual classes as rows and the predicted classes as columns:

                    Predicted Positive     Predicted Negative
Actual Positive     True Positive (TP)     False Negative (FN)
Actual Negative     False Positive (FP)    True Negative (TN)

From the confusion matrix, we can derive some important metrics that were not discussed in the previous article. Let’s talk about them here.

Sensitivity / True Positive Rate / Recall

Sensitivity = TP / (TP + FN)

Sensitivity tells us what proportion of the positive class got correctly classified.

A simple example would be determining what proportion of the actual sick people were correctly detected by the model.

False Negative Rate

FNR = FN / (FN + TP)

False Negative Rate (FNR) tells us what proportion of the positive class got incorrectly classified by the classifier.

A higher TPR and a lower FNR are desirable since we want to classify the positive class correctly.

Specificity / True Negative Rate

Specificity = TN / (TN + FP)

Specificity tells us what proportion of the negative class got correctly classified.

Taking the same example as in Sensitivity, Specificity would mean determining the proportion of healthy people who were correctly identified by the model.

False Positive Rate

FPR = FP / (FP + TN) = 1 - Specificity

FPR tells us what proportion of the negative class got incorrectly classified by the classifier.

A higher TNR and a lower FPR are desirable since we want to classify the negative class correctly.
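
To make these four rates concrete, here is a minimal sketch that computes them from scikit-learn's confusion_matrix on some made-up labels and predictions:

from sklearn.metrics import confusion_matrix

# made-up true labels and predictions, purely for illustration
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# for binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # True Positive Rate / Recall
specificity = tn / (tn + fp)   # True Negative Rate
fnr = fn / (fn + tp)           # False Negative Rate = 1 - Sensitivity
fpr = fp / (fp + tn)           # False Positive Rate = 1 - Specificity

print(sensitivity, specificity, fnr, fpr)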

Out of these metrics, Sensitivity and Specificity are perhaps the most important, and we will see later on how these are used to build an evaluation metric. But before that, let’s understand why the probability of prediction is better than predicting the target class directly.

Probability of Predictions

A machine learning classification model can either predict a data point’s class directly or predict its probability of belonging to each class. The latter gives us more control over the result, because we can determine our own threshold for interpreting the classifier’s output. Tuning this threshold is sometimes more prudent than building a completely new model!

Setting different thresholds for classifying data points as positive will inevitably change the Sensitivity and Specificity of the model. One of these thresholds will probably give a better result than the others, depending on whether we are aiming to lower the number of False Negatives or False Positives.
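
To see this effect numerically, here is a small sketch that sweeps a few thresholds over some made-up labels and predicted probabilities (not the values from the table below) and prints the resulting Sensitivity and Specificity:

import numpy as np

# made-up true labels and predicted probabilities, for illustration only
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])

for threshold in [0.3, 0.5, 0.7]:
    # classify as positive when the predicted probability clears the threshold
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    print(threshold,
          "sensitivity:", tp / (tp + fn),
          "specificity:", tn / (tn + fp))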

Have a look at the table below:

[Table: TPR, FNR, TNR, and FPR at different classification threshold values]

The metrics change with the changing threshold values. We could generate a different confusion matrix for each threshold and compare the metrics we discussed in the previous section, but that would not be a prudent thing to do. Instead, we can plot the ROC curve to quickly visualize which threshold gives us a better result.

How Does the AUC-ROC Curve Work?

In an AUC-ROC curve, a higher X-axis value indicates a larger number of False Positives relative to True Negatives, while a higher Y-axis value indicates a larger number of True Positives relative to False Negatives. So, the choice of threshold depends on how we want to balance False Positives and False Negatives.

Let’s dig a bit deeper and understand what our ROC curve would look like for different threshold values and how the specificity and sensitivity would vary.

We can try to understand this graph by generating a confusion matrix for each point corresponding to a threshold and talking about the performance of our classifier:

[Figure: ROC curve with threshold points marked, and the confusion matrix at point A]

Point A is where the Sensitivity is the highest and Specificity the lowest. This means all the Positive class points are classified correctly, and all the Negative class points are classified incorrectly.

In fact, any point on the blue line corresponds to a situation where the True Positive Rate is equal to the False Positive Rate.

All points above this line correspond to the situation where the proportion of correctly classified points belonging to the Positive class is greater than the proportion of incorrectly classified points belonging to the Negative class.

[Figure: confusion matrix at point B]

Although Point B has the same Sensitivity as Point A, it has a higher Specificity, meaning the number of incorrectly classified Negative class points is lower than at the previous threshold. This indicates that this threshold is better than the previous one.

[Figure: confusion matrices at points C and D]

Between points C and D, the Sensitivity at point C is higher than point D for the same Specificity. This means, for the same number of incorrectly classified Negative class points, the classifier predicted a higher number of Positive class points. Therefore, the threshold at point C is better than point D.

Now, depending on how many incorrectly classified points we want to tolerate for our classifier, we would choose between point B or C to predict whether you can defeat me in PUBG or not.

“False hopes are more dangerous than fears.” – J.R.R. Tolkien

[Figure: confusion matrix at point E]

Point E is where the Specificity becomes highest, meaning the model produces no False Positives and correctly classifies all the Negative class points. We would choose this point if our problem was to give perfect song recommendations to our users.

Going by this logic, can you guess where the point corresponding to a perfect classifier would lie on the graph?

Yes! It would be on the top-left corner of the ROC Curve graph corresponding to the coordinate (0, 1) in the cartesian plane. Here, both the Sensitivity and Specificity would be the highest, and the classifier would correctly classify all the Positive and Negative class points.
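
As a quick sanity check on this, a set of made-up scores that perfectly separates the classes produces an ROC curve passing through (0, 1) and an AUC of 1:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# made-up, perfectly separable scores: every positive scores above every negative
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])

fpr, tpr, _ = roc_curve(y_true, scores)
print(list(zip(fpr, tpr)))            # includes the point (0.0, 1.0)
print(roc_auc_score(y_true, scores))  # 1.0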

Understanding the AUC-ROC Curve in Python

Now, either we can manually test the Sensitivity and Specificity for every threshold or let sklearn do the job for us. We’re definitely going with the latter!

Let’s create our arbitrary data using the sklearn make_classification method:

Python Code:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# generate two class dataset
X, y = make_classification(n_samples=1000, n_classes=2, n_features=20, random_state=27)

# split into train-test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=27)

print(pd.DataFrame(X))
print(pd.Series(y))

I will test the performance of two classifiers on this dataset:

# train models
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# logistic regression
model1 = LogisticRegression()
# knn
model2 = KNeighborsClassifier(n_neighbors=4)

# fit model
model1.fit(X_train, y_train)
model2.fit(X_train, y_train)

# predict probabilities
pred_prob1 = model1.predict_proba(X_test)
pred_prob2 = model2.predict_proba(X_test)

Sklearn has a very potent method, roc_curve(), which computes the ROC for your classifier in a matter of seconds! It returns the FPR, TPR, and threshold values:

from sklearn.metrics import roc_curve

# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(y_test, pred_prob1[:,1], pos_label=1)
fpr2, tpr2, thresh2 = roc_curve(y_test, pred_prob2[:,1], pos_label=1)

# roc curve for tpr = fpr 
random_probs = [0 for i in range(len(y_test))]
p_fpr, p_tpr, _ = roc_curve(y_test, random_probs, pos_label=1)
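
As a small aside that goes a bit beyond this article, the threshold array returned by roc_curve can be used to pick an operating point; one common heuristic is Youden's J statistic, which chooses the threshold that maximizes TPR minus FPR. Using fpr1, tpr1, and thresh1 from above:

import numpy as np

# pick the threshold that maximizes TPR - FPR (Youden's J statistic)
j_scores = tpr1 - fpr1
best_idx = np.argmax(j_scores)
print("best threshold:", thresh1[best_idx],
      "TPR:", tpr1[best_idx],
      "FPR:", fpr1[best_idx])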

The AUC score can be computed using the roc_auc_score() method of sklearn:

from sklearn.metrics import roc_auc_score

# auc scores
auc_score1 = roc_auc_score(y_test, pred_prob1[:,1])
auc_score2 = roc_auc_score(y_test, pred_prob2[:,1])
print(auc_score1, auc_score2)

This prints the two scores:

0.9761029411764707 0.9233769727403157


We can also plot the receiver operating characteristic curves for the two algorithms using matplotlib:

# matplotlib
import matplotlib.pyplot as plt
plt.style.use('seaborn-v0_8')  # use 'seaborn' on matplotlib versions older than 3.6

# plot roc curves
plt.plot(fpr1, tpr1, linestyle='--',color='orange', label='Logistic Regression')
plt.plot(fpr2, tpr2, linestyle='--',color='green', label='KNN')
plt.plot(p_fpr, p_tpr, linestyle='--', color='blue')
# title
plt.title('ROC curve')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')

plt.legend(loc='best')
plt.savefig('ROC',dpi=300)
plt.show();
[Figure: ROC curves for Logistic Regression and KNN, with the random-chance diagonal]

It is evident from the plot that the AUC for the Logistic Regression ROC curve is higher than that for the KNN ROC curve. Therefore, we can say that logistic regression did a better job of classifying the positive class in the dataset.

AUC-ROC Curve for Multi-Class Classification

As I said before, the AUC-ROC curve is only for binary classification problems. But we can extend it to multiclass classification problems using the One vs. All technique.

So, if we have three classes, 0, 1, and 2, the ROC for class 0 will be generated as classifying 0 against not 0, i.e., 1 and 2. The ROC for class 1 will be generated as classifying 1 against not 1, and so on.

The ROC curve for multi-class classification models can be determined as below:

# multi-class classification
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt

# generate a 3-class dataset
X, y = make_classification(n_samples=1000, n_classes=3, n_features=20, n_informative=3, random_state=42)

# split into train/test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# fit model
clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
pred_prob = clf.predict_proba(X_test)

# roc curve for classes
fpr = {}
tpr = {}
thresh ={}

n_class = 3

for i in range(n_class):    
    fpr[i], tpr[i], thresh[i] = roc_curve(y_test, pred_prob[:,i], pos_label=i)
    
# plotting    
plt.plot(fpr[0], tpr[0], linestyle='--',color='orange', label='Class 0 vs Rest')
plt.plot(fpr[1], tpr[1], linestyle='--',color='green', label='Class 1 vs Rest')
plt.plot(fpr[2], tpr[2], linestyle='--',color='blue', label='Class 2 vs Rest')
plt.title('Multiclass ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive rate')
plt.legend(loc='best')
plt.savefig('Multiclass ROC',dpi=300);    
[Figure: Multiclass ROC curve with one-vs-rest curves for classes 0, 1, and 2]
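
Note that the roc_auc_score import in the snippet above is never actually used. Assuming the same y_test and pred_prob from that code and scikit-learn 0.22 or newer, the one-vs-rest curves can also be summarized into a single macro-averaged AUC via the multi_class argument:

# macro-averaged one-vs-rest AUC across the three classes
macro_auc_ovr = roc_auc_score(y_test, pred_prob, multi_class='ovr', average='macro')
print(macro_auc_ovr)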

Conclusion

I hope you found this article helpful in understanding how powerful the AUC-ROC curve is for measuring the performance of a classifier. You’ll use it often in industry and in data science or machine learning hackathons, so it is well worth getting comfortable with the ROC curve and the AUC score.

Frequently Asked Questions

Q1. What is the AUC ROC curve?

A. AUC ROC stands for “Area Under the Curve” of the “Receiver Operating Characteristic” curve. The AUC ROC curve is basically a way of measuring the performance of an ML model. AUC measures the ability of a binary classifier to distinguish between classes and is used as a summary of the ROC curve.

Q2. What does the ROC AUC score tell you?

A. The ROC AUC score tells us how efficient the model is. The higher the AUC, the better the model’s performance at distinguishing between the positive and negative classes. An AUC score of 1 means the classifier can perfectly distinguish between all the Positive and the Negative class points. An AUC value of 0 shows that the classifier predicts all Negatives as Positives and vice versa.

Q3. What does ROC AUC 0.5 mean?

A. An ROC AUC of 0.5 means the classifier performs no better than random guessing; it cannot distinguish between the positive and negative classes. An AUC value above 0.5 means the classifier detects more True Positives and True Negatives than False Negatives and False Positives.

Q4. What is the ROC Curve method?

A. The ROC curve method is a way to see how good a model is at distinguishing between two classes. It plots sensitivity (how good the model is at catching the positives) against the false positive rate, i.e. 1 minus specificity (how often it wrongly flags negatives), at different thresholds. It helps us understand how well the model is performing.

Q5. What does a ROC curve of 0.8 mean?

A. An ROC AUC of 0.8 means the model is pretty good at distinguishing between the two classes. Specifically, it shows that there is an 80% chance the model will rank a randomly chosen positive example higher than a randomly chosen negative one. In simpler terms, it is a good sign that the model is working well.

Aniruddha Bhandari 30 Sep, 2024

Responses From Readers


Ogah 16 Jun, 2020

Thanks. It a good write up

Jon T 16 Jun, 2020

Hi Aniruddha, Thank you for writing such a comprehensive overview of AUC-ROC metrics. I just have a query regarding your example table in the Probability of Predictions section: at the bottom of the table, why is FPR=0, TNR=1 rather than the other way around? Thanks Jon

Mahtab 16 Jun, 2020

Very grateful post for me, I want to say you tha k you so much

Yami 16 Jun, 2020

p_fpr, p_tpr, _ = roc_curve(y_test, random_probs, pos_label=1) What to use inside random_probs, as it is throwing error

mk1332 17 Jul, 2020

Very Nice ! The best explanations for ROC curves and AUC I have found on the internet so far

Sunkara 05 Sep, 2020

Thanks. Very detailed explanation.

Fjona 01 Oct, 2020

Hi Aniruddha, Thank you for the post. While trying to implement the code, in this line: auc_score1 = roc_auc_score(testY1, pred_prob1[:,1]) I am getting an error like this: File "train_test_roc.py", line 566, in fpr1, tpr1, thresh1 = roc_curve(testY1, pred_prob1[:,1], pos_label=1) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\utils\validation.py", line 73, in inner_f return f(**kwargs) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\metrics\_ranking.py", line 776, in roc_curve y_true, y_score, pos_label=pos_label, sample_weight=sample_weight) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\metrics\_ranking.py", line 539, in _binary_clf_curve raise ValueError("{0} format is not supported".format(y_type)) ValueError: multilabel-indicator format is not supported Do you think you could help me out how to solve it? Thank you in advance

Sarthak Rana 07 Dec, 2020

Hi Aniruddha, Thanks for the detailed explanation. Just a quick question : What is the purpose and meaning of `pos_label` argument in roc_curve method ? Can you please explain? Thanks, Sarthak

Fernando 23 Jan, 2021

Thank you very much for your post. I have a doubt on the probability table: In a binary classifier, that value is always the probability to get a value of True, right?

Monika 02 Feb, 2021

Very good explanation

Akilu Rilwan 07 Dec, 2021

Thank you for this nice piece. I just can't understand where you mentioned "Point A is where the Sensitivity is the highest and Specificity the lowest." Isn't that point a [1,1]? i.e. TPR =1 and FPR=1 as well?

hamza 14 Dec, 2021

Very interresting article thanks a lot

jo 10 Jan, 2022

Great article. However, would like to check if OneVsRestClassifier is required? What is the difference between using OneVsRestClassifier and not using it? I tried out your code and not using it gave a slightly better score.

ken 10 Jan, 2022

Very good article, thanks

Mujeeb 23 Jan, 2022

Very useful, thank you very much.

Sumit 31 Jan, 2022

I am using AUC for landslide susceptibility mapping and have divided the data into two unit, 70% and 30%. So for validation I am only using 30% of the data which were randomly selected form the whole data. But the AUC value for validation has come greater than the testing value. Is it correct? Should the validation AUC value should be lesser than the testing AUC value?

Prashant 11 Apr, 2022

Thanks a lot Aniruddha. This was a very useful article in understanding ROC-AUC concept.

Jagdish N. Parate 04 Feb, 2023

Thank you......! Nicely Explained.

Joseph 02 Dec, 2023

Much thanks for this very insightful article. One question remains unanswered for me though: How do we get the auc_roc_scores for the multiclass? I have tried some codes based on your code for the LR & KN models plus some other searches but it's not working.