Detecting and Treating Outliers | Treating the odd one out!
One of the most important steps in data preprocessing is detecting and treating outliers, as they can skew statistical analysis and degrade the training of a machine learning algorithm, resulting in lower accuracy.
1. What are Outliers? 🤔
We have all heard the idiom ‘odd one out’, which means something unusual in comparison to the others in a group.
Similarly, an Outlier is an observation in a given dataset that lies far from the rest of the observations. That means an outlier is vastly larger or smaller than the remaining values in the set.
2. Why do they occur?
An outlier may occur due to natural variability in the data, or due to experimental or human error.
They may indicate a measurement mistake or heavy skewness in the data (a heavy-tailed distribution).
3. What do they affect?
In statistics, we have three measures of central tendency namely Mean, Median, and Mode. They help us describe the data.
Mean is an accurate measure to describe the data when no outliers are present.
Median is used if there is an outlier in the dataset.
Mode is used if there is an outlier AND about half or more of the data points share the same value.
Of the three, the Mean is the only measure of central tendency that is strongly affected by outliers, which in turn distorts the Standard deviation.
Consider a small dataset, sample= [15, 101, 18, 7, 13, 16, 11, 21, 5, 15, 10, 9]. By looking at it, one can quickly say ‘101’ is an outlier that is much larger than the other values.
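The effect is easy to verify with a quick computation, comparing the mean and median with and without the outlier (a minimal sketch using NumPy):

```python
import numpy as np

sample = [15, 101, 18, 7, 13, 16, 11, 21, 5, 15, 10, 9]
without_outlier = [x for x in sample if x != 101]

# Mean jumps noticeably once 101 is included; the median barely moves
print(np.mean(sample), np.median(sample))                      # ~20.08, 14.0
print(np.mean(without_outlier), np.median(without_outlier))    # ~12.73, 13.0
```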
For this sample, the mean is about 20.1 with the outlier and about 12.7 without it, while the median only moves from 14 to 13. Clearly, the Mean is far more affected than the Median.
4. Detecting Outliers
If our dataset is small, we can detect the outlier by just looking at the dataset. But what if we have a huge dataset? How do we identify the outliers then? We need to use visualization and mathematical techniques.
Below are some of the techniques for detecting outliers:
- Boxplot
- Z-score
- Interquartile Range (IQR)
4.1 Detecting outliers using Boxplot:
Python code for the boxplot:
import matplotlib.pyplot as plt

plt.boxplot(sample, vert=False)
plt.title("Detecting outliers using Boxplot")
plt.xlabel('Sample')
plt.show()
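Beyond visual inspection, the dictionary returned by plt.boxplot exposes the plotted flier points directly, so the boxplot's own outliers (points beyond the 1.5×IQR whiskers) can also be read programmatically. A small sketch, reusing the same sample:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

sample = [15, 101, 18, 7, 13, 16, 11, 21, 5, 15, 10, 9]
bp = plt.boxplot(sample, vert=False)

# 'fliers' are the points drawn beyond the whiskers, i.e. the outliers;
# with vert=False the data values sit on the x-axis
outlier_points = bp["fliers"][0].get_xdata()
print(outlier_points)
```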
4.2 Detecting outliers using the Z-scores
Criteria: any data point whose Z-score lies beyond 3 standard deviations from the mean is an outlier.
- loop through all the data points and compute the Z-score using the formula (Xi-mean)/std.
- define a threshold value of 3 and mark the datapoints whose absolute value of Z-score is greater than the threshold as outliers.
import numpy as np

def detect_outliers_zscore(data):
    outliers = []
    thres = 3
    mean = np.mean(data)
    std = np.std(data)
    for i in data:
        z_score = (i - mean) / std
        if np.abs(z_score) > thres:
            outliers.append(i)
    return outliers

# Driver code
sample_outliers = detect_outliers_zscore(sample)
print("Outliers from Z-scores method: ", sample_outliers)
The above code outputs: Outliers from Z-scores method: [101]
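The same check can also be written without an explicit loop, using NumPy boolean indexing. A sketch on the same sample:

```python
import numpy as np

sample = [15, 101, 18, 7, 13, 16, 11, 21, 5, 15, 10, 9]
data = np.array(sample)

# Z-score of every element in one shot, then a boolean mask over |z| > 3
z_scores = (data - data.mean()) / data.std()
outliers = data[np.abs(z_scores) > 3]
print(outliers)  # [101]
```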
4.3 Detecting outliers using the Interquartile Range (IQR)
Criteria: data points that lie more than 1.5 times the IQR above Q3 or below Q1 are outliers.
- Sort the dataset in ascending order
- Calculate the 1st and 3rd quartiles (Q1, Q3)
- Compute IQR = Q3 - Q1
- Compute lower bound = Q1 - 1.5*IQR and upper bound = Q3 + 1.5*IQR
- Loop through the values of the dataset and mark any value below the lower bound or above the upper bound as an outlier
def detect_outliers_iqr(data):
    outliers = []
    data = sorted(data)
    q1 = np.percentile(data, 25)
    q3 = np.percentile(data, 75)
    IQR = q3 - q1
    lwr_bound = q1 - (1.5 * IQR)
    upr_bound = q3 + (1.5 * IQR)
    for i in data:
        if i < lwr_bound or i > upr_bound:
            outliers.append(i)
    return outliers

# Driver code
sample_outliers = detect_outliers_iqr(sample)
print("Outliers from IQR method: ", sample_outliers)
The above code outputs: Outliers from IQR method: [101]
5. Handling Outliers
Till now we learned about detecting the outliers. The main question is WHAT do we do with the outliers?
Below are some of the methods of treating the outliers
- Trimming/removing the outlier
- Quantile-based flooring and capping
- Mean/Median imputation
5.1 Trimming/Removing the outliers
In this technique, we simply remove the outliers from the dataset, although this is generally not a good practice because it discards data.
Python code to delete the outliers and copy the rest of the elements to another array:

# Trimming
a = np.array(sample)
for i in sample_outliers:
    a = np.delete(a, np.where(a == i))
print(a)
The outlier ‘101’ is deleted and the rest of the data points are copied to another array ‘a’.
5.2 Quantile-based flooring and capping
In this technique, values above the 90th percentile are capped at the 90th-percentile value, and values below the 10th percentile are floored at the 10th-percentile value.
# Computing the 10th and 90th percentiles and replacing the outliers
tenth_percentile = np.percentile(sample, 10)
ninetieth_percentile = np.percentile(sample, 90)

b = np.where(sample < tenth_percentile, tenth_percentile, sample)
b = np.where(b > ninetieth_percentile, ninetieth_percentile, b)
print("New array:", b)
The above code outputs: New array: [15, 20.7, 18, 7.2, 13, 16, 11, 20.7, 7.2, 15, 10, 9]
The data points that are less than the 10th percentile are replaced with the 10th-percentile value, and the data points that are greater than the 90th percentile are replaced with the 90th-percentile value.
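The two np.where calls above can be collapsed into a single call to np.clip, which floors and caps in one step. A sketch on the same sample:

```python
import numpy as np

sample = [15, 101, 18, 7, 13, 16, 11, 21, 5, 15, 10, 9]

# np.clip floors values below the 10th percentile and caps values
# above the 90th percentile in a single call
low, high = np.percentile(sample, [10, 90])
b = np.clip(sample, low, high)
print(b)
```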
5.3 Mean/Median imputation
As the mean value is highly influenced by the outliers, it is advised to replace the outliers with the median value.
# Replace the outliers with the median
median = np.median(sample)
c = np.array(sample)
for i in sample_outliers:
    c = np.where(c == i, median, c)
print("Sample: ", sample)
print("New array: ", c)
Visualizing the data after treating the outlier
plt.boxplot(c, vert=False)
plt.title("Boxplot of the sample after treating the outliers")
plt.xlabel("Sample")
plt.show()
In this blog, we learned about an important phase of data preprocessing which is treating outliers. We now know different methods of detecting and treating outliers.
I hope this blog helps understand the outliers concept. Please do upvote if you like it. Happy learning !! 😊
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.