Mrinal Singh Walia — Published On June 21, 2021 and Last Modified On June 21st, 2021
Beginner Classification Machine Learning Maths

This article was published as a part of the Data Science Blogathon

In today’s post, we will discuss the merits of ROC curves vs. accuracy estimation.

Photo by @austindistel on Unsplash

Several questions may come up while you are evaluating your machine learning models, such as:

  • Is accuracy a more trustworthy evaluation metric than ROC?
  • What is the area under the ROC curve (AROC), and how do I use it?
  • If my data is heavily imbalanced, should I use AROC rather than accuracy, or vice versa?

 

Here is a quick summary of our discussion.

  • The “Receiver Operating Characteristic” (ROC) curve is an alternative to accuracy for evaluating learning algorithms on natural datasets.
  • The ROC curve is a curve, not a single-number statistic.
  • In particular, this means that comparing two algorithms on a dataset does not always produce an obvious ordering.
  • Accuracy (= 1 – error rate) is the standard way to evaluate learning algorithms. It is a single-number summary of performance.
  • AROC is the area under the ROC curve. It is also a single-number summary of performance.

As always, it depends, but understanding the trade-offs between the various metrics is crucial for making the right decision.

  1. Accuracy: It estimates how many observations, both positive and negative, were correctly classified. You shouldn’t rely on accuracy for imbalanced problems: there, it is easy to get a very high accuracy score simply by labelling every observation as the majority class (see the sketch after this list). Because accuracy is computed from predicted classes (not predicted probabilities), we must apply a particular threshold before measuring it. The obvious choice is a threshold of 0.5, but it can be suboptimal.
  2. ROC/AROC: For a classification problem, we can also compute an AROC. A ROC curve (receiver operating characteristic curve) is a plot of the performance of a classification model at every classification threshold. It is one of the most fundamental evaluation metrics for monitoring any classification model’s performance.
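To make the first point concrete, here is a minimal sketch (the numbers are made up, and scikit-learn is assumed) of how a “classifier” that always predicts the majority class on a heavily imbalanced dataset earns a high accuracy score while its AROC stays at an uninformative 0.5:

```python
# Illustrative sketch only: a majority-class "classifier" on imbalanced data.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)   # roughly 5% positives

y_score = np.zeros(len(y_true))                  # always score 0 (majority class)
y_pred = (y_score >= 0.5).astype(int)            # apply a 0.5 threshold

print("accuracy:", accuracy_score(y_true, y_pred))  # ~0.95, looks impressive
print("AROC:", roc_auc_score(y_true, y_score))      # 0.5, no better than chance
```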

Comparing these metrics is a complex matter because, in machine learning, each behaves differently on different natural datasets.

The comparison makes some sense if we accept the hypothesis “Performance on past learning problems (roughly) predicts performance on future learning problems.”

The ROC vs. accuracy discussion is often entangled with the question “is the goal classification or ranking?” because generating a ROC curve requires producing a ranking.

Here, we assume the goal is classification rather than ranking. (There are several natural problems where a ranking of instances is preferred to a classification. In addition, there are numerous natural problems where classification is the goal.)

How To Measure ROC Curve:

The ROC curve is generated by computing and plotting the true positive rate against the false positive rate for a single classifier at a family of thresholds (see the sketch below).

True Positive Rate = True Positives / (True Positives + False Negatives)
False Positive Rate = False Positives / (False Positives + True Negatives)
  • The true positive rate is also known as sensitivity (or recall).
  • The false positive rate is equal to 1 − specificity, not specificity itself.
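As a hedged illustration (the labels and scores below are invented), the whole procedure can be reproduced with scikit-learn’s roc_curve, which sweeps the family of thresholds and returns the false positive rate and true positive rate at each one:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])                     # true binary labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])   # predicted scores

# roc_curve returns one (FPR, TPR) point per threshold it sweeps.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for t, f, s in zip(thresholds, fpr, tpr):
    print(f"threshold={t:.2f}  FPR={f:.2f}  TPR={s:.2f}")
```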

 

How To Measure Accuracy Score:

Accuracy is calculated as the fraction of correct predictions on the test data. It is determined by dividing the number of correct predictions by the total number of predictions.

Accuracy = (True Positives + True Negatives) / (True Positives + True Negatives + False Positives + False Negatives)
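A minimal sketch of the same formula in code, with hypothetical confusion-matrix counts:

```python
# Hypothetical counts from a confusion matrix.
tp, tn, fp, fn = 40, 45, 5, 10

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.85

# Given predicted labels instead of counts, scikit-learn computes the same quantity:
# from sklearn.metrics import accuracy_score
# accuracy_score(y_true, y_pred)
```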

Arguments for ROC

Specification: The costs of choices are not well specified. The training examples are often not drawn from the same marginal distribution as the test examples. ROC curves allow for an effective comparison over a range of different choice costs and marginal distributions.

Dominance: Standard classification algorithms do not have a dominance structure as the costs vary. We should not say “algorithm A is better than algorithm B” when we do not know the choice costs well enough to be sure.

Just-in-Time use: Any system with a good ROC curve can efficiently be designed with a ‘knob’ that controls the rate of false positives vs. false negatives.
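One way to picture that “knob”, as a sketch under the assumption that the classifier outputs scores (the helper threshold_for_max_fpr is a hypothetical name, not a library function): pick the operating threshold from the ROC curve that keeps the false positive rate within a chosen budget.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_max_fpr(y_true, y_score, max_fpr=0.1):
    """Pick the decision threshold with the highest TPR whose FPR <= max_fpr."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    ok = fpr <= max_fpr            # operating points within the false-positive budget
    best = np.argmax(tpr[ok])      # among those, take the highest true positive rate
    return thresholds[ok][best]

# Turning the knob: a smaller max_fpr trades false positives for false negatives.
# threshold_for_max_fpr(y_true, y_score, max_fpr=0.01)
```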

 

Arguments for AROC

AROC inherits the arguments for ROC, except Dominance.

Summarization: Humans do not have the time to understand the complexities of a conditional comparison, so having a single number instead of a curve is valuable.

Robustness: Algorithms with a large AROC are robust against a variation in costs.

 

Arguments for Accuracy

Accuracy is the traditional approach.

Summarization: As for AROC.

Intuitiveness: People understand what accuracy means almost immediately. Unlike (A)ROC, it is obvious what happens when one additional example is classified incorrectly.

Statistical Stability: The basic test-set bound shows that accuracy is stable subject only to the IID assumption. This holds for AROC (and ROC) only when the number of examples in each class is not near zero.

Minimality: In the end, a classifier makes classification decisions. Accuracy directly measures this, while (A)ROC dilutes the measure with hypothetical alternative choice costs. For the same reason, computing (A)ROC may require significantly more work than solving the original problem.

Generality: Accuracy generalizes immediately to multiclass accuracy, importance-weighted accuracy, and general (per-example) cost-sensitive classification. ROC curves become problematic when there are just three classes.

 

Although the area under the ROC curve (AROC) is not an intuitive quantity in itself, its interpretation as a Wilcoxon-Mann-Whitney statistic, which measures the fraction of positive-negative instance pairs ranked correctly, makes the quantity easier to understand. This interpretation also has other benefits: while generalizing ROC curves to more than two classes is not straightforward, it enables graceful generalizations of the AROC statistic to multi-category ranking.
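A small sketch of that interpretation (the arrays are toy values, not from any real dataset): compute AROC directly as the fraction of (positive, negative) pairs ranked correctly, counting ties as one half.

```python
import numpy as np

def pairwise_aroc(y_true, y_score):
    """AROC via the Wilcoxon-Mann-Whitney interpretation."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Compare every positive score against every negative score.
    correctly_ranked = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correctly_ranked + 0.5 * ties) / (len(pos) * len(neg))

y_true  = np.array([0, 0, 1, 1, 1])
y_score = np.array([0.2, 0.6, 0.4, 0.7, 0.9])
print(pairwise_aroc(y_true, y_score))   # 0.8333..., same as sklearn's roc_auc_score
```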

Some additional points, more or less relevant to this discussion:

a) A subtle and interesting difference between AROC and the most common “standard” loss functions (including 0/1 loss, squared error, and “cost-sensitive classification”) is that each standard loss can be evaluated for every example independently of the others, whereas AROC is defined only over a set of examples.
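A quick sketch of the contrast (toy arrays, scikit-learn assumed): the per-example 0/1 losses below each depend only on their own example, while AROC can only be computed from the set as a whole.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true  = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.2, 0.9, 0.4, 0.6, 0.7])

# 0/1 loss: one term per example, independent of all other examples.
per_example_01 = ((y_score >= 0.5).astype(int) != y_true).astype(int)
print(per_example_01, per_example_01.mean())

# AROC: a single number defined over the whole set; there is no per-example term.
print(roc_auc_score(y_true, y_score))
```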

b) One neat use of AROC is as a base-rate-independent analogue of the Bayes rate. Specifically, datasets cannot be compared directly via their Bayes rates when their base rates differ (by base rate, I mean the usual notion of the marginal/unconditional probability of the most probable class). However, their “optimal” AROCs can be compared directly as estimates of how separable the classes are.

Summing-up

  • When your dataset is balanced and all classes are equally important to you, accuracy is usually a good starting point. A further advantage is that it is straightforward to explain to non-technical stakeholders in your project.
  • AROC is scale-invariant because it measures how well predictions are ranked rather than their absolute values. AROC is also classification-threshold-invariant: it measures the quality of the model’s predictions irrespective of which classification threshold is chosen (see the sketch after this list).
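A brief sketch of those two invariances (toy arrays, scikit-learn assumed): a monotonic transformation of the scores leaves AROC unchanged because the ranking is unchanged, while accuracy moves with the choice of threshold.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

y_true  = np.array([0, 1, 0, 1, 1, 0])
y_score = np.array([0.30, 0.45, 0.50, 0.80, 0.55, 0.40])

print(roc_auc_score(y_true, y_score))        # 0.888...
print(roc_auc_score(y_true, y_score ** 3))   # identical: the ranking did not change

print(accuracy_score(y_true, (y_score >= 0.5).astype(int)))   # accuracy at a 0.5 threshold
print(accuracy_score(y_true, (y_score >= 0.45).astype(int)))  # another threshold, possibly another accuracy
```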

 

What To Do Next

One crucial method not yet mentioned in this discussion is the elegant work by Provost and Fawcett on the ROC Convex Hull (ROCCH) as an alternative to both “vanilla” ROC curves and the area-under-curve summary. Within the ROCCH framework, the classifiers with the highest expected utility have curves sitting on the convex hull of all the candidate classifiers’ curves. Expected-cost-optimal regions of the hull’s upper boundary (parametrized by slope) correspond to the practitioner’s beliefs about utility and class priors.
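As a rough sketch of the idea (the operating points below are hypothetical, and the helper is illustrative rather than Provost and Fawcett’s own code), the ROCCH can be obtained by pooling the (FPR, TPR) points of all candidate classifiers and keeping only those on the upper convex hull; anything below the hull is never cost-optimal.

```python
def roc_convex_hull(points):
    """Upper convex hull of (FPR, TPR) operating points via Andrew's monotone chain."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})   # always include the trivial classifiers
    hull = []
    for x, y in pts:
        # Pop previous points that lie on or below the chord to the new point;
        # they cannot be on the upper hull.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((x, y))
    return hull

# Hypothetical operating points from several candidate classifiers.
candidates = [(0.1, 0.4), (0.2, 0.7), (0.35, 0.72), (0.5, 0.9), (0.8, 0.85)]
print(roc_convex_hull(candidates))
# [(0.0, 0.0), (0.1, 0.4), (0.2, 0.7), (0.5, 0.9), (1.0, 1.0)]
```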


Thanks for reading my article. Kindly comment, and do not forget to share this blog, as it will motivate me to deliver more quality blogs on ML- and DL-related topics. Thank you so much for your help, cooperation, and support!

 

About Author

Mrinal Walia is a professional Python Developer with a computer science background specializing in Machine Learning, Artificial Intelligence, and Computer Vision. In addition to this, Mrinal is an interactive blogger, author, and geek with over four years of experience in his work. With a background working through most areas of computer science, Mrinal currently works as a Testing and Automation Engineer at Versa Networks, India. He aims to reach his creative goals one step at a time and believes in doing everything with a smile.

Medium | LinkedIn | ModularML | DevCommunity | Github

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
