*This article was published as a part of the Data Science Blogathon.*

Exploratory Data Analysis (EDA) is one of the most underrated and under-utilized approaches in any data science project. *EDA is the first step data scientists perform: they study the data and extract valuable information and non-obvious insights from it, which ultimately helps during model building.*

*Before you model the data and test it, you need to build a relationship with it. You build this relationship by exploring the data, plotting it against the target variable, and observing how it behaves. This process of analysis before modeling is called Exploratory Data Analysis.*

*In this article, we are going to perform a hands-on EDA on a complex dataset from Kaggle (House Prices: Advanced Regression Techniques). The link to the dataset is given below:*

https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data

A complete project on this data would go through the following stages:

1) Exploratory Data Analysis

2) Feature Engineering

3) Feature Selection

4) Hyperparameter tuning

5) Model Building and deployment

Let us get started on this complex dataset, which has around 80 independent features and 1 target variable (SalePrice). It is a regression problem statement.

*EDA will cover some basic steps: analyzing missing values, the distributions of numerical and categorical features, outliers, multicollinearity, etc.* We will go through these steps one by one.

Most of the time, the data we obtain contains missing values, and we need to find out whether there is any relationship between the missing data and the sale price (the target variable). Depending on that, we replace each missing value with something substantial, such as the median of that column.

In Python, we can collect the features of a large dataset that contain missing values in a list, replace each missing value with 1 and each non-missing value with 0, and plot the median sale price for the two groups to see whether there is a relationship between the null values and the target variable.
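A minimal sketch of that step, using a tiny synthetic frame in place of the Kaggle train.csv (only the column names are taken from the real dataset; the values are made up):

```python
import numpy as np
import pandas as pd

# Tiny synthetic stand-in for the Kaggle train.csv; only the column
# names (LotFrontage, SalePrice) match the real data.
df = pd.DataFrame({
    "LotFrontage": [65.0, np.nan, 80.0, np.nan, 70.0, np.nan],
    "SalePrice":   [208500, 181500, 223500, 140000, 250000, 143000],
})

# Features (other than the target) that contain missing values
features_with_na = [f for f in df.columns
                    if f != "SalePrice" and df[f].isnull().sum() > 0]

for feature in features_with_na:
    data = df.copy()
    # 1 where the value is missing, 0 where it is present
    data[feature] = np.where(data[feature].isnull(), 1, 0)
    # Median sale price for the two groups; a bar plot of this series
    # (e.g. with .plot.bar()) shows whether missingness tracks the target.
    print(data.groupby(feature)["SalePrice"].median())
```

On the real data, the same loop runs over every column with nulls and produces one plot per feature.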

Alley          0.9377 % missing values
MasVnrType     0.0055 % missing values
MasVnrArea     0.0055 % missing values
BsmtQual       0.0253 % missing values
BsmtCond       0.0253 % missing values
BsmtExposure   0.026  % missing values
BsmtFinType1   0.0253 % missing values
BsmtFinType2   0.026  % missing values
FireplaceQu    0.4726 % missing values
GarageType     0.0555 % missing values
GarageYrBlt    0.0555 % missing values
GarageFinish   0.0555 % missing values
GarageQual     0.0555 % missing values
GarageCond     0.0555 % missing values
PoolQC         0.9952 % missing values
Fence          0.8075 % missing values
MiscFeature    0.963  % missing values
LotFrontage    0.1774 % missing values

Since there are many missing values, we need to find the relationship between the null values and the target variable (SalePrice).

This is one of the plots showing that the null values of the LotFrontage feature have an impact on the target variable, since the median sale price differs between the missing and non-missing groups. So yes, there is a relationship between the two, and we need to replace the null values with something substantial, like the median of that particular feature.

Since this is a large dataset, we need to visualize the different types of variables: date-time (year) features, discrete and continuous numerical features, and categorical features, along with their behavior with respect to the target variable.

There are 39 numerical features in this dataset. Columns holding strings, or a mix of strings and numbers, are given the data type object, which we can check using the dtypes attribute of pandas.

In Python, we can find the year features and see how those four features behave with respect to the target variable.
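One way to sketch that step (again with a tiny synthetic frame; the real data's four year features are YearBuilt, YearRemodAdd, GarageYrBlt and YrSold):

```python
import pandas as pd

# Synthetic stand-in for the Kaggle data; values are made up.
df = pd.DataFrame({
    "YearBuilt":    [2003, 1976, 2001, 1915],
    "YearRemodAdd": [2003, 1976, 2002, 1970],
    "GarageYrBlt":  [2003, 1976, 2001, 1998],
    "YrSold":       [2008, 2007, 2008, 2006],
    "SalePrice":    [208500, 181500, 223500, 140000],
})

# Pick out the year features by name
year_features = [f for f in df.columns if "Yr" in f or "Year" in f]
print(year_features)

# Median sale price per selling year; a line plot of this series is
# what shows the apparent drop in price as YrSold increases.
print(df.groupby("YrSold")["SalePrice"].median())
```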

We can see here that as YrSold increases, the price decreases. This has to be an anomaly, since it should not normally be possible, so we need to do more analysis before drawing conclusions. This illustrates the importance of EDA and how it can affect our conclusions.

Instead of comparing the sale price with the YrSold feature alone, let us compare the sale price with the difference between YrSold and each of the other year features.
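A sketch of that transformation (synthetic values again; only the column names come from the dataset):

```python
import pandas as pd

df = pd.DataFrame({
    "YearBuilt":    [2003, 1976, 2001, 1915],
    "YearRemodAdd": [2003, 1976, 2002, 1970],
    "YrSold":       [2008, 2007, 2008, 2006],
    "SalePrice":    [208500, 181500, 223500, 140000],
})

# Replace each year feature with its difference from the selling year,
# i.e. the age of the house (or of its remodel) at the time of sale.
for feature in ["YearBuilt", "YearRemodAdd"]:
    data = df.copy()
    data[feature] = data["YrSold"] - data[feature]
    # A scatter of data[feature] against data["SalePrice"] then shows
    # the price falling as the age on the x-axis grows.
    print(data[[feature, "SalePrice"]])
```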

Now we can compare the median sale price with the age since construction and since remodification, and draw conclusions such as: as the value on the x-axis increases, the price decreases.

Discrete variables are variables whose values lie in a particular range or can be counted in a finite amount of time.

I have kept the threshold for the number of unique values in a feature at 25, and the year features are excluded. Now let us see whether there is a relationship between the discrete features and the target variable.
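That selection can be sketched like this (tiny synthetic frame; OverallQual is one of the real discrete features):

```python
import pandas as pd

# Synthetic stand-in; values are made up.
df = pd.DataFrame({
    "OverallQual": [7, 6, 7, 5, 8, 5],
    "YrSold":      [2008, 2007, 2008, 2006, 2008, 2007],
    "SalePrice":   [208500, 181500, 223500, 140000, 250000, 143000],
})

year_features = [f for f in df.columns if "Yr" in f or "Year" in f]

# Discrete numerical features: fewer than 25 unique values,
# excluding the year features and the target.
discrete_features = [f for f in df.columns
                     if df[f].dtype != "O"
                     and df[f].nunique() < 25
                     and f not in year_features + ["SalePrice"]]
print(discrete_features)

# Median sale price per level of each discrete feature; a bar plot of
# this shows e.g. OverallQual rising together with the price.
for feature in discrete_features:
    print(df.groupby(feature)["SalePrice"].median())
```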

We can see that features like OverallQual have a direct relationship with the target variable.

Continuous variables are features whose values can, in principle, take any value in a range. Using histograms, we analyze their distribution throughout the dataset.

*We can see that the distributions we obtain are skewed. In regression problem statements, it is often necessary to convert a skewed distribution to an approximately normal one, as this can improve the accuracy of the model.*

*Logarithmic transformation is one technique for converting a skewed distribution into a roughly normal one: we take the log of all values of that particular feature and use the result as a whole new log feature.*
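A minimal sketch of the transformation (synthetic values; GrLivArea is one of the real continuous features):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in; values are made up.
df = pd.DataFrame({
    "GrLivArea": [1710, 1262, 1786, 1717, 2198, 1362],
    "SalePrice": [208500, 181500, 223500, 140000, 250000, 143000],
})

for feature in ["GrLivArea", "SalePrice"]:
    # df[feature].hist(bins=25) would show the skewed distribution;
    # taking the log pulls the long right tail in toward a normal shape.
    if (df[feature] > 0).all():   # log is only defined for positive values
        df[feature] = np.log(df[feature])

print(df.head())
```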

An outlier is any data point that lies far outside the rest of the distribution of the dataset.

The presence of outliers in the dataset can hamper the accuracy of the model. Algorithms like linear regression are very sensitive to outliers, so they need to be handled carefully.

The standard deviation method is a common way to identify and replace outliers: any data point that lies more than 3 standard deviations from the mean is considered an outlier. That threshold can change depending on the size of the dataset.
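The 3-standard-deviation rule can be sketched as follows (synthetic prices with one planted outlier, standing in for the SalePrice column):

```python
import numpy as np
import pandas as pd

# 200 well-behaved points plus one planted outlier at 500
rng = np.random.default_rng(0)
prices = pd.Series(np.append(rng.normal(100, 10, 200), 500.0))

mean, std = prices.mean(), prices.std()
lower, upper = mean - 3 * std, mean + 3 * std

# Anything outside mean ± 3 standard deviations is flagged as an outlier
outliers = prices[(prices < lower) | (prices > upper)]
print(outliers)
```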

Here in EDA, let us analyze the outliers in the dataset using a boxplot.

The black dots denote the outliers, which lie away from the bulk of the distribution. The lower edge of the rectangular box is the 25th percentile and the upper edge is the 75th percentile.

So those black dots are the values that need to be removed or replaced, which we will cover in feature engineering.
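The same black dots can be computed directly: a boxplot draws as dots any point beyond 1.5 times the interquartile range from the box. A sketch with synthetic prices (`prices.plot.box()` draws the picture itself when matplotlib is available):

```python
import numpy as np
import pandas as pd

# 200 well-behaved points plus one planted outlier at 500
rng = np.random.default_rng(1)
prices = pd.Series(np.append(rng.normal(100, 10, 200), 500.0))

# The box spans the 25th to the 75th percentile; whiskers extend
# 1.5 * IQR beyond it, and everything outside is drawn as a dot.
q25, q75 = prices.quantile(0.25), prices.quantile(0.75)
iqr = q75 - q25
lower, upper = q25 - 1.5 * iqr, q75 + 1.5 * iqr

outliers = prices[(prices < lower) | (prices > upper)]
print(outliers)
```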

The data type of a categorical feature is object, which we can check with the dtypes attribute of pandas.

We generally convert the categorical values of a feature into dummy variables so that our algorithm can work with them; this is called one-hot encoding. If the cardinality of a particular categorical feature is very high, we do not use one-hot encoding, as it can lead to the curse of dimensionality.
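Counting the categories per feature can be sketched like this (tiny synthetic frame with two of the real categorical columns; on the full dataset this loop produces the listing that follows):

```python
import pandas as pd

# Synthetic stand-in; category values are made up but plausible.
df = pd.DataFrame({
    "MSZoning":  ["RL", "RM", "RL", "FV", "RH", "C (all)"],
    "Street":    ["Pave", "Pave", "Grvl", "Pave", "Pave", "Pave"],
    "SalePrice": [208500, 181500, 223500, 140000, 250000, 143000],
})

# Categorical features come through pandas as dtype object ("O")
categorical_features = [f for f in df.columns if df[f].dtype == "O"]

for feature in categorical_features:
    print(f"The feature is {feature} and number of categories are "
          f"{df[feature].nunique()}")
```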

Feature         Categories
MSZoning             5
Street               2
Alley                3
LotShape             4
LandContour          4
Utilities            2
LotConfig            5
LandSlope            3
Neighborhood        25
Condition1           9
Condition2           8
BldgType             5
HouseStyle           8
RoofStyle            6
RoofMatl             8
Exterior1st         15
Exterior2nd         16
MasVnrType           5
ExterCond            5
Foundation           6
BsmtQual             5
BsmtCond             5
BsmtExposure         5
BsmtFinType1         7
BsmtFinType2         7
Heating              6
HeatingQC            5
CentralAir           2
Electrical           6
KitchenQual          4
Functional           7
FireplaceQu          6
GarageType           7
GarageFinish         4
GarageQual           6
GarageCond           6
PavedDrive           3
PoolQC               4
Fence                5
SaleType             9
SaleCondition        6

**The threshold value of categories that I have chosen for this case to perform one-hot encoding is 10.**
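A sketch of that encoding step under the threshold of 10 (tiny made-up frame; `pd.get_dummies` performs the one-hot conversion):

```python
import pandas as pd

# Synthetic stand-in with two low-cardinality categorical columns
df = pd.DataFrame({
    "Street":     ["Pave", "Pave", "Grvl", "Pave"],
    "CentralAir": ["Y", "N", "Y", "Y"],
    "SalePrice":  [208500, 181500, 223500, 140000],
})

# One-hot encode only the categorical features whose cardinality
# stays under the chosen threshold of 10
low_card = [f for f in df.columns
            if df[f].dtype == "O" and df[f].nunique() < 10]
encoded = pd.get_dummies(df, columns=low_card, drop_first=True)
print(encoded.columns.tolist())
```

`drop_first=True` drops one dummy per feature, which avoids the redundant column that perfect multicollinearity would otherwise introduce.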

**Now let us check whether there exists any relationship between the categorical features and the median of the target variable(sale price).**
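That check can be sketched the same way as for the discrete features (synthetic values; CentralAir is one of the real categorical columns):

```python
import pandas as pd

# Synthetic stand-in; values are made up.
df = pd.DataFrame({
    "CentralAir": ["Y", "N", "Y", "N", "Y"],
    "SalePrice":  [208500, 181500, 223500, 140000, 250000],
})

# Median sale price per category; a bar plot of this series makes the
# relationship between each category and the target easy to read.
for feature in [f for f in df.columns if df[f].dtype == "O"]:
    print(df.groupby(feature)["SalePrice"].median())
```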

In any dataset, when the independent features are correlated with each other, it hampers the accuracy of the model because the individual contribution of each feature cannot be isolated. This is called multicollinearity.

This is a huge problem when it comes to algorithms like linear and logistic regression.

We use a correlation matrix with a heatmap to visualize the relationships between all the independent features via their correlation coefficients.

Generally, 0.7 is taken as the threshold: if any two features have a correlation above 0.7, one of the two can be dropped.
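A sketch of computing the matrix and applying that threshold (synthetic values; `seaborn.heatmap(corr)` would draw the heatmap from the same matrix):

```python
import pandas as pd

# Synthetic stand-in: GrLivArea and TotRmsAbvGrd are deliberately
# made to move together, YrSold is unrelated.
df = pd.DataFrame({
    "GrLivArea":    [1710, 1262, 1786, 2198, 1362],
    "TotRmsAbvGrd": [8, 6, 9, 10, 6],
    "YrSold":       [2008, 2007, 2008, 2006, 2007],
})

# Pairwise Pearson correlations between the independent features
corr = df.corr()
print(corr.round(2))

# Flag feature pairs whose absolute correlation exceeds the 0.7 threshold
high = [(a, b) for a in corr.columns for b in corr.columns
        if a < b and abs(corr.loc[a, b]) > 0.7]
print(high)
```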

These were some important steps to perform in Exploratory Data Analysis, and they also show the importance of EDA in real-life projects. I hope everyone applies these techniques while working on their own projects.

Happy Learning! 🙂


