This article navigates the challenges of using Linear Regression for classification problems and explains why Mean Squared Error is unsuitable as a cost function for Logistic Regression. It explores the mathematics behind Log Loss, a pivotal metric for binary classification models, and how to interpret it. Through an example involving a clothing company’s predictive modeling, it shows why choosing the right model matters and what complications a linear line introduces in classification contexts. Along the way, you will gain insight into corrected probabilities and the benefits of logarithms, equipping both seasoned data scientists and newcomers for nuanced decision-making in classification tasks.
This article was published as a part of the Data Science Blogathon.
Log loss, also known as logarithmic loss or cross-entropy loss, is a common evaluation metric for binary classification models. It measures the performance of a model by quantifying the difference between predicted probabilities and actual values. Log-loss is indicative of how close the prediction probability is to the corresponding actual/true value (0 or 1 in case of binary classification), penalizing inaccurate predictions with higher values. Lower log-loss indicates better model performance.
Log Loss is the most important classification metric based on probabilities. It’s hard to interpret raw log-loss values, but log loss is still a good metric for comparing models. For any given problem, a lower log loss value means better predictions.
Mathematical interpretation:
Log Loss is the negative average of the log of corrected predicted probabilities for each instance.
Let us understand it with an example:
Log loss is a metric evaluating classification model performance by measuring the disparity between predicted probabilities and actual outcomes. Rooted in information theory, it penalizes deviations, offering a continuous metric for optimization during model training. Lower log loss values signify better alignment between predicted and actual outcomes, making it a valuable tool for assessing the accuracy of probability estimates in classification problems.
The log loss value is calculated by assessing the likelihood of predicted probabilities matching the actual outcomes in a classification task. For each instance, it computes the negative logarithm of the predicted probability assigned to the correct class. The average of these values across all instances in the dataset yields the final log-loss score. The formula is:
Log Loss = -(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij · log(p_ij)

Here, N is the number of instances, M is the number of classes, y_ij is a binary indicator (1 if instance i belongs to class j, 0 otherwise), and p_ij is the predicted probability that instance i belongs to class j.
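The formula above can be sketched directly in code. Here is a minimal NumPy implementation with a small made-up example (the probabilities and labels below are illustrative, not from any real dataset):

```python
import numpy as np

def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Negative average of y_ij * log(p_ij) over N instances and M classes.

    y_true: (N, M) one-hot indicators; y_pred: (N, M) predicted probabilities.
    """
    p = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(p), axis=1))

# Hypothetical 3-instance, 2-class example
y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(multiclass_log_loss(y_true, y_pred))  # ≈ 0.2798
```

Only the probability assigned to the true class contributes to each instance's term, because the one-hot indicator zeroes out every other class.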
`Winter is here`. Let’s welcome winter with a warm data science problem 😉
Let’s delve into a case study of a clothing company specializing in jackets and cardigans. The company aims to develop a model capable of predicting whether a customer will purchase a jacket (class 1) or a cardigan (class 0) based on their historical behavioral patterns. This predictive model is crucial for tailoring specific offers to meet individual customer needs. As a data scientist, your task is to assist in building this model, considering metrics such as log loss to ensure the accuracy and reliability of predictions.
When we start Machine Learning algorithms, the first algorithm we learn about is `Linear Regression` in which we predict a continuous target variable.
If we use Linear Regression in our classification problem, we will get a best-fit line like this:
Z = βX + b
When you extend this line, you will have values greater than 1 and less than 0, which do not make much sense in our classification problem. It will make model interpretation a challenge, introducing the risk of misjudging predictions. That is where Logistic Regression, specifically designed for classification tasks and minimizing the log loss, comes in. If we needed to predict sales for an outlet, then this model could be helpful. But here, where our goal is to classify customers, utilizing logistic regression becomes crucial for optimizing performance and minimizing log loss.
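To see the problem concretely, here is a minimal sketch, using made-up feature values and 0/1 purchase labels, of fitting a least-squares line to class labels and extending it beyond the data:

```python
import numpy as np

# Hypothetical behavioral feature with 0/1 purchase labels
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

# Best-fit line Z = beta * x + b via ordinary least squares
beta, b = np.polyfit(x, y, 1)

# Extending the line beyond the data yields values outside [0, 1],
# which cannot be read as class probabilities
print(beta * 10 + b)    # greater than 1
print(beta * (-3) + b)  # less than 0
```

This is exactly why a transformation into the (0, 1) range is needed before the output can be treated as a probability.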
We need a function to transform this straight line in such a way that values will be between 0 and 1:
Ŷ = Q(Z)

Q(Z) = 1 / (1 + e^{-z}) (Sigmoid Function)

Ŷ = 1 / (1 + e^{-z})
After transformation, we will get a curve that remains between 0 and 1. Another advantage of this function is that all the continuous values it produces lie between 0 and 1, so they can be used as probabilities for making predictions in metrics like log loss. For example, if the predicted value is on the extreme right, the probability will be close to 1, and if the predicted value is on the extreme left, the probability will be close to 0.
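The sigmoid transformation described above is a one-liner. The sketch below shows how it squashes any real-valued Z into (0, 1):

```python
import numpy as np

def sigmoid(z):
    """Squash any real value z into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))    # 0.5: the decision boundary
print(sigmoid(6))    # ~0.9975: far right, probability close to 1
print(sigmoid(-6))   # ~0.0025: far left, probability close to 0
```

No matter how far the linear score Z = βX + b extends, the output stays strictly between 0 and 1, so it can be interpreted as a probability.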
Selecting the right model is not enough. You need a function that measures the performance of a Machine Learning model for given data. Cost Function quantifies the error between predicted values and expected values.
`If you canâ€™t measure it, you canâ€™t improve it.`
Another thing that will change with this transformation is the Cost Function. In Linear Regression, we use `Mean Squared Error` as the cost function:

MSE = (1/N) Σ_{i=1}^{N} (y_i - Ŷ_i)^2

When this error function is plotted with respect to the weight parameters of the Linear Regression model, it forms a convex curve, which makes it possible to apply the Gradient Descent optimization algorithm to minimize the error by finding the global minimum and adjusting the weights.
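The convexity claim can be checked numerically. This sketch (with made-up data) evaluates the MSE of a one-parameter linear model over a grid of weights and confirms the curve bends upward everywhere:

```python
import numpy as np

# Toy data roughly following y = 2x (hypothetical values)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + np.array([0.1, -0.2, 0.05, 0.0])

# Evaluate MSE over a grid of candidate weights w for the model y_hat = w * x
ws = np.linspace(-2, 6, 81)
mse = np.array([np.mean((y - w * x) ** 2) for w in ws])

# A convex curve has nonnegative second differences everywhere
second_diff = np.diff(mse, 2)
print(np.all(second_diff >= -1e-12))  # True: MSE in w is convex
```

Because MSE is a quadratic in the weights with a positive leading coefficient, gradient descent on this surface cannot get stuck in a local minimum.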
In Logistic Regression, Ŷ_i is a nonlinear function (Ŷ = 1 / (1 + e^{-z})), so if we substitute it into the MSE equation above, we get a non-convex function with many local minima.
The cost function used in Logistic Regression is Log Loss.
By default, the output of the logistic regression model is the probability of the sample being positive (indicated by 1). For instance, if a logistic regression model is trained to classify a company dataset, the predicted probability column indicates the likelihood of a person buying a jacket. In the given dataset, the predicted probability that a person with ID6 will buy a jacket is 0.94.
In the same way, the predicted probability that a person with ID5 will buy a jacket (i.e. belong to class 1) is 0.1, but the actual class for ID5 is 0, so the probability of the actual class is (1 - 0.1) = 0.9. Thus 0.9 is the corrected probability for ID5.
We will then find the log of the corrected probability for each instance. As you can see, these log values are negative. To deal with the negative sign, and to maintain the common convention that lower loss scores are better, we take the negative average of these values.
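The steps above can be sketched in a few lines of NumPy. The probabilities below are illustrative, mirroring the ID5/ID6 discussion rather than an actual dataset:

```python
import numpy as np

# Hypothetical predicted probabilities of class 1, and the actual labels
p_class1 = np.array([0.94, 0.1, 0.8, 0.3])
actual   = np.array([1,    0,   1,   0])

# Step 1: corrected probability = p if the actual class is 1, else 1 - p
corrected = np.where(actual == 1, p_class1, 1 - p_class1)
# corrected probabilities: 0.94, 0.9, 0.8, 0.7

# Step 2: take the log (all values are negative since probabilities < 1)
logs = np.log(corrected)

# Step 3: the negative average of the logs is the log loss
log_loss_value = -np.mean(logs)
print(log_loss_value)  # ≈ 0.1868
```

Note how the confident correct prediction (0.94) contributes very little loss, while the less confident ones (0.7, 0.8) contribute more.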
In short, there are three steps to find Log Loss:

1. Find the corrected probabilities.
2. Take a log of the corrected probabilities.
3. Take the negative average of these log values.
If we summarize all the above steps, we can use the formula:

Log Loss = -(1/N) Σ_{i=1}^{N} log(p(y_i))

Here y_i represents the actual class, and log(p(y_i)) is the log of the corrected probability of that class.
Now let’s see how the above formula works in two cases:

1. When the actual class is 1: p(y_i) is the predicted probability Ŷ_i itself, so the instance contributes -log(Ŷ_i).
2. When the actual class is 0: p(y_i) is the corrected probability 1 - Ŷ_i, so the instance contributes -log(1 - Ŷ_i).

Combining both cases into a single expression gives:

Log Loss = -(1/N) Σ_{i=1}^{N} [y_i · log(Ŷ_i) + (1 - y_i) · log(1 - Ŷ_i)]

wow!! we got back to the original formula for binary cross-entropy/log loss 🙂
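We can verify in code that the two cases of the binary cross-entropy collapse as described. This is a minimal sketch with an arbitrary example probability:

```python
import numpy as np

def binary_log_loss_term(y, p):
    """Per-instance binary cross-entropy: -(y*log(p) + (1-y)*log(1-p))."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

p = 0.8  # an arbitrary predicted probability of class 1

# Case y = 1: the second term vanishes, leaving -log(p)
print(np.isclose(binary_log_loss_term(1, p), -np.log(p)))      # True
# Case y = 0: the first term vanishes, leaving -log(1 - p)
print(np.isclose(binary_log_loss_term(0, p), -np.log(1 - p)))  # True
```

Either way, each instance is penalized by the negative log of the probability the model assigned to the correct class, which is exactly the corrected-probability recipe from the three steps above.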
The benefits of taking the logarithm reveal themselves when you look at the cost function graphs for actual class 1 and actual class 0: as the predicted probability moves toward the wrong extreme, the loss grows without bound, so confident wrong predictions are penalized far more heavily than mildly uncertain ones.
Q. Why is log loss used as an evaluation metric for binary classification?

A. Log loss is commonly used as an evaluation metric for binary classification tasks for several reasons. Firstly, it provides a continuous and differentiable measure of the model’s performance, making it suitable for optimization algorithms. Secondly, log loss penalizes confident and incorrect predictions more heavily, incentivizing calibrated probability estimates. Finally, log loss can be interpreted as the logarithmic measure of the likelihood of the predicted probabilities aligning with the true labels.
Q. What is a good log loss value?

A. The interpretation of a “good” log loss value depends on the specific context and problem domain. In general, a lower log loss indicates better model performance. However, what constitutes a good log loss can vary depending on the complexity of the problem, the availability of data, and the desired level of accuracy. It is often useful to compare the log loss of different models or benchmarks to assess their relative performance.