eddie_4072 — May 19, 2021

This article was published as a part of the Data Science Blogathon.

Introduction

Data cleaning is the process of finding and correcting inaccurate or incorrect data in a dataset. One essential step is dealing with missing values. In real life, many datasets contain missing values, so handling them properly is important.

Why do you need to fill in the missing data? Because most machine learning models will raise an error if you pass NaN values to them. The easiest way is to just fill them with 0, but this can reduce your model's accuracy significantly.

There are many methods available for filling missing values. To choose the best one, you need to understand the type of missing value and its significance before you start filling or deleting data.

First Look at the Dataset

In this article, I will be working with the Titanic Dataset from Kaggle.

For downloading the dataset, use the following link – https://www.kaggle.com/c/titanic

  • Import the required libraries that you will be using – numpy and pandas.
import pandas as pd
import numpy as np
# load the dataset downloaded from Kaggle
df = pd.read_csv("titanic_dataset.csv")
df.head()

See that the dataset contains many columns like PassengerId, Name, Age, etc. We won't be working with all of them, so I am going to delete the columns I don't need.

 

df.drop(["Name", "Ticket", "PassengerId", "Cabin", "Embarked"], axis=1, inplace=True)

There are also categorical values in the dataset; for these, you need to use label encoding or one-hot encoding.

from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['Sex'] = le.fit_transform(df['Sex'])
newdf = df.copy()  # keep an independent copy (still containing Survived) for later sections
# splitting the data into X and y
y = df['Survived']
df.drop("Survived", axis=1, inplace=True)

How to know whether the data has missing values?

Missing values are usually represented as NaN, null, or None in a dataset.
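As a quick illustration (a toy Series, not the Titanic data), pandas treats both np.nan and None as missing in a numeric column:

```python
import numpy as np
import pandas as pd

# In a float Series, None is converted to NaN, and both count as missing
s = pd.Series([1.0, np.nan, None])
print(s.isnull().tolist())  # [False, True, True]
```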

The df.info() function gives information about the dataset: the column names along with the number of non-null values in each column.

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 6 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   Pclass  891 non-null    int64  
 1   Sex     891 non-null    int64  
 2   Age     714 non-null    float64
 3   SibSp   891 non-null    int64  
 4   Parch   891 non-null    int64  
 5   Fare    891 non-null    float64
dtypes: float64(2), int64(4)
memory usage: 41.9 KB

See that there are null values in the column Age.

The second way of finding whether we have null values in the data is by using the isnull() function.

print(df.isnull().sum())
Pclass      0
Sex         0
Age       177
SibSp       0
Parch       0
Fare        0
dtype: int64

See that all the null values in the dataset are in the column – Age.

Let’s try fitting the data using logistic regression.

from sklearn.model_selection import train_test_split
X_train, X_test,y_train,y_test = train_test_split(df,y,test_size=0.3)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
---------------------------------------------------------------------------


ValueError: Input contains NaN, infinity or a value too large for dtype('float64').

See that the logistic regression model does not work because we have NaN values in the dataset. Only a few algorithms can work with missing data natively, for example XGBoost and scikit-learn's histogram-based gradient boosting models, which handle NaN values internally.

Now let’s look at the different methods that you can use to deal with the missing data.

The methods I will be discussing are

  1. Deleting the columns with missing data
  2. Deleting the rows with missing data
  3. Filling the missing data with a value – Imputation
  4. Imputation with an additional column
  5. Filling with a Regression Model

1. Deleting the column with missing data


In this case, let's delete the Age column, then fit the model and check its accuracy.

But this is an extreme case and should only be used when there are many null values in the column.

updated_df = df.dropna(axis=1)
updated_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 5 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   Pclass  891 non-null    int64  
 1   Sex     891 non-null    int64  
 2   SibSp   891 non-null    int64  
 3   Parch   891 non-null    int64  
 4   Fare    891 non-null    float64
dtypes: float64(1), int64(4)
memory usage: 34.9 KB
from sklearn import metrics
from sklearn.model_selection import train_test_split
X_train, X_test,y_train,y_test = train_test_split(updated_df,y,test_size=0.3)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
pred = lr.predict(X_test)
print(metrics.accuracy_score(pred,y_test))
0.7947761194029851

See that we are able to achieve an accuracy of roughly 79.5%.

The problem with this method is that we may lose valuable information, since the entire feature is deleted because of some null values. It should only be used when a column has too many null values to be useful.
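If you want to drop only the columns that are mostly empty, dropna's thresh parameter lets you set a minimum number of non-null values (a toy frame here, not the Titanic data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"mostly_missing": [np.nan, np.nan, 3.0],
                    "complete": [1.0, 2.0, 3.0]})

# keep only columns with at least two non-null values
kept = toy.dropna(axis=1, thresh=2)
print(kept.columns.tolist())  # ['complete']
```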

2. Deleting the row with missing data

If a row has missing data, you can delete that entire row, along with all the features in it.

axis=1 is used to drop the column with `NaN` values.

axis=0 is used to drop the row with `NaN` values.
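The two axes can be seen side by side on a toy frame (not the Titanic data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, 6.0]})
print(toy.dropna(axis=0).shape)  # (2, 2): the row containing NaN is gone
print(toy.dropna(axis=1).shape)  # (3, 1): column "a" is gone
```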

updated_df = newdf.dropna(axis=0)
y1 = updated_df['Survived']
updated_df.drop("Survived",axis=1,inplace=True)
updated_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 714 entries, 0 to 890
Data columns (total 6 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   Pclass  714 non-null    int64  
 1   Sex     714 non-null    int64  
 2   Age     714 non-null    float64
 3   SibSp   714 non-null    int64  
 4   Parch   714 non-null    int64  
 5   Fare    714 non-null    float64
dtypes: float64(2), int64(4)
memory usage: 39.0 KB
from sklearn import metrics
from sklearn.model_selection import train_test_split
X_train, X_test,y_train,y_test = train_test_split(updated_df,y1,test_size=0.3)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
pred = lr.predict(X_test)
print(metrics.accuracy_score(pred,y_test))
0.8232558139534883

In this case, see that we achieve better accuracy than before. This is likely because the column Age contains more valuable information than we expected.

3. Filling the Missing Values – Imputation


In this case, we will be filling the missing values with a certain number.

The possible ways to do this are:

  1. Filling the missing data with the mean or median value if it’s a numerical variable.
  2. Filling the missing data with mode if it’s a categorical value.
  3. Filling the numerical value with 0, -999, or some other number that does not occur in the data, so the model can recognize that the value is not real or is different.
  4. Filling the categorical value with a new type for the missing values.
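Strategies 1 and 3 above can be sketched on a toy column (made-up values, not the Titanic data):

```python
import numpy as np
import pandas as pd

col = pd.Series([1.0, 2.0, np.nan, 4.0])
print(col.fillna(col.median()).tolist())  # [1.0, 2.0, 2.0, 4.0] - median imputation
print(col.fillna(-999).tolist())          # [1.0, 2.0, -999.0, 4.0] - sentinel value
```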

You can use the fillna() function to fill the null values in the dataset.

updated_df = newdf.copy()  # work on a copy that still contains the Survived column
updated_df['Age'] = updated_df['Age'].fillna(updated_df['Age'].mean())
updated_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 7 columns):
 #   Column    Non-Null Count  Dtype  
---  ------    --------------  -----  
 0   Survived  891 non-null    int64  
 1   Pclass    891 non-null    int64  
 2   Sex       891 non-null    int64  
 3   Age       891 non-null    float64
 4   SibSp     891 non-null    int64  
 5   Parch     891 non-null    int64  
 6   Fare      891 non-null    float64
dtypes: float64(2), int64(5)
memory usage: 48.9 KB
y1 = updated_df['Survived']
updated_df.drop("Survived",axis=1,inplace=True)
from sklearn import metrics
from sklearn.model_selection import train_test_split
X_train, X_test,y_train,y_test = train_test_split(updated_df,y1,test_size=0.3)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
pred = lr.predict(X_test)
print(metrics.accuracy_score(pred,y_test))
0.7798507462686567

The accuracy value comes out to be 77.98%, a reduction compared to the previous case.

This will not happen in general; here it means that the global mean was not a good fill value for the missing ages.
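One possible refinement (my suggestion, not from the article) is to fill Age with the mean within each passenger class rather than the global mean, shown here on a toy frame:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"Pclass": [1, 1, 3, 3],
                    "Age": [40.0, np.nan, 20.0, np.nan]})

# fill each missing Age with the mean Age of its Pclass group
toy["Age"] = toy["Age"].fillna(toy.groupby("Pclass")["Age"].transform("mean"))
print(toy["Age"].tolist())  # [40.0, 40.0, 20.0, 20.0]
```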

4. Imputation with an additional column


Use the SimpleImputer() class from the sklearn module to impute the values.

Pass the strategy as an argument; it can be 'mean', 'median', or 'most_frequent' (mode).

The problem with the previous approach is that the model cannot tell whether a value came from the original data or was imputed. To give it that information, we add an Ageismissing column, which is True where Age was null and False otherwise.

updated_df = df.copy()
updated_df['Ageismissing'] = updated_df['Age'].isnull()
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer(strategy = 'median')
# fit_transform returns a NumPy array; assign the imputed column back
updated_df['Age'] = my_imputer.fit_transform(updated_df[['Age']])
updated_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 7 columns):
 #   Column        Non-Null Count  Dtype  
---  ------        --------------  -----  
 0   Pclass        891 non-null    int64  
 1   Sex           891 non-null    int64  
 2   Age           891 non-null    float64
 3   SibSp         891 non-null    int64  
 4   Parch         891 non-null    int64  
 5   Fare          891 non-null    float64
 6   Ageismissing  891 non-null    bool   
dtypes: bool(1), float64(2), int64(4)
memory usage: 42.8 KB
from sklearn import metrics
from sklearn.model_selection import train_test_split
X_train, X_test,y_train,y_test = train_test_split(updated_df,y1,test_size=0.3)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
pred = lr.predict(X_test)
print(metrics.accuracy_score(pred,y_test))
0.7649253731343284
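As an aside, SimpleImputer can also append the missing-value flag itself via its add_indicator parameter (assuming scikit-learn 0.21 or later; toy data below):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0], [np.nan], [3.0]])
imp = SimpleImputer(strategy="median", add_indicator=True)
out = imp.fit_transform(X)
# out has two columns: the imputed value, and a 1.0 flag where it was missing
print(out)
```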

5. Filling with a Regression Model

In this case, the null values in one column are filled by fitting a regression model using other columns in the dataset.

That is, in this case, the regression model uses all the columns except Age as X, and Age as y.

Then, after filling in the Age column, we will fit a logistic regression and calculate the accuracy.

from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
# rows with a missing Age form the set to be imputed;
# rows with a known Age form the training set for the age model
testdf = newdf[newdf['Age'].isnull()].copy()
traindf = newdf[newdf['Age'].notnull()].copy()
y = traindf['Age']
traindf.drop("Age", axis=1, inplace=True)
linreg.fit(traindf, y)
testdf.drop("Age", axis=1, inplace=True)
pred = linreg.predict(testdf)
testdf['Age'] = pred


# restore the known ages, then train the final classifier
traindf['Age'] = y
y = traindf['Survived']
traindf.drop("Survived", axis=1, inplace=True)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(traindf,y)

LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=100,
                   multi_class='auto', n_jobs=None, penalty='l2',
                   random_state=None, solver='lbfgs', tol=0.0001, verbose=0,
                   warm_start=False)
y_test = testdf['Survived']
testdf.drop("Survived",axis=1,inplace=True)
pred = lr.predict(testdf)

print(metrics.accuracy_score(pred,y_test))
0.8361581920903954

See that this model achieves higher accuracy than the previous ones, as we used a dedicated regression model to fill in the missing values.

We can also use models such as KNN for filling the missing values. But sometimes, using models for imputation can result in overfitting.
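A KNN-based version can be sketched with scikit-learn's KNNImputer (toy data below, not the Titanic set): the missing entry is filled with the mean of that feature over the nearest rows, measured on the observed features.

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [np.nan, 6.0]])

# with n_neighbors=2, both complete rows are neighbours of the last row,
# so the NaN is replaced by the mean of their first feature: (1 + 3) / 2
imp = KNNImputer(n_neighbors=2)
out = imp.fit_transform(X)
print(out[2, 0])  # 2.0
```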

Imputing missing values using the regression model allowed us to improve our model compared to dropping those columns.

But you have to understand that there is no perfect way of filling the missing values in a dataset.

Each of the methods discussed in this blog may work well with different types of datasets. You have to experiment with different methods to check which one works best for your dataset.

Thanks for reading through the article. Hope you now have a clear understanding of how to deal with missing values in your dataset.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.
