Exploratory Data Analysis is a set of techniques developed by John W. Tukey in the 1970s. The philosophy behind this approach was to examine the data before building a model. Tukey encouraged statisticians to explore the data and, where possible, formulate hypotheses that could lead to new data collection and experiments. Today, data scientists and analysts spend most of their time on data wrangling and Exploratory Data Analysis, also known as EDA. But what is EDA, and why is it so important? This article explains what EDA is and how to apply EDA techniques to a dataset.

This article was published as a part of the Data Science Blogathon

Exploratory Data Analysis, or EDA, is used to draw insights from the data. Data scientists and analysts try to find different patterns, relationships, and anomalies in the data using statistical graphs and other visualization techniques. The following are part of EDA:

- Get maximum insights from a data set
- Uncover underlying structure
- Extract important variables from the dataset
- Detect outliers and anomalies (if any)
- Test underlying assumptions
- Determine the optimal factor settings

**Why is EDA Important?**

The main purpose of EDA is to detect any errors and outliers, as well as to understand different patterns in the data. It allows analysts to understand the data better before making any assumptions. The outcomes of EDA help businesses know their customers, expand their business, and make decisions accordingly.

To understand EDA better, let us take an example. We will be using the Automobile dataset for the analysis.

**Python Code:**
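The embedded code is not reproduced here, so below is a minimal setup sketch. The file name `imports-85.data` (the UCI Automobile dataset) is an assumption; adjust the path to wherever your copy of the data lives.

```python
# Minimal setup sketch (assumed file name; the original embedded code is not shown)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# The raw file ships without a header row, so read it with header=None
auto = pd.read_csv('imports-85.data', header=None)
auto.head()
```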

We can see that the dataset has 26 attributes and that the column names are missing. We can also observe ‘?’ symbols in some places, which means our data has missing values as well. We will fill in the column names first.

cols = ['symboling', 'normalized_losses', 'make', 'fuel_type', 'aspiration', 'num_of_doors', 'body_style', 'drive_wheels', 'engine_location', 'wheel_base', 'length', 'width', 'height', 'curb_weight', 'engine_type', 'num_of_cylinders', 'engine_size', 'fuel_system', 'bore', 'stroke', 'compression_ratio', 'horsepower', 'peak_rpm', 'city_mpg', 'highway_mpg', 'price']

auto.columns = cols

auto.head()

We got our column names. The price column is our target variable.

auto.isnull().sum()

It shows that we don’t have any null values in our dataset, but we observed earlier that there were ‘?’ symbols, which means those placeholders are stored as ordinary object (string) values rather than as nulls. Let us now check the data types of each attribute.

auto.info()

We can observe that the columns containing those symbols are of object type, and that some columns which should be numeric are stored as objects as well. Now let us detect which columns have such symbols and whether there are any other symbols too.

#Checking for wrong entries like symbols -,?,#,*,etc.
for col in auto.columns:
    print('{} : {}'.format(col, auto[col].unique()))

The null values in our dataset appear only in the form of ‘?’, which pandas does not recognize as missing, so we will replace them with *np.nan*.

for col in auto.columns:
    auto[col].replace({'?': np.nan}, inplace=True)

auto.head()

Now we can observe that the ‘?’ symbols have been converted into *NaN* form. Let us check for missing values again.

auto.isnull().sum()

We can observe that now there are missing values in some columns.

With the help of a heatmap, we can see how much data is missing from each attribute. With this, we can decide whether to drop the missing values or replace them. Usually, dropping the missing values is not advisable, but sometimes it can be helpful too.

sns.heatmap(auto.isnull(),cbar=False,cmap='viridis')

Now observe that there are many missing values in *normalized_losses* while other columns have fewer missing values. We can’t drop the *normalized_losses* column as it may be important for our prediction.
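As a complement to the heatmap (this check is not in the original walkthrough), the share of missing values per column can also be printed directly; a minimal sketch:

```python
# Percentage of missing values per column, largest first
missing_pct = auto.isnull().mean().mul(100).round(2)
print(missing_pct.sort_values(ascending=False).head(10))
```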

We will replace these missing values with the mean because the number of missing values is small (we could use the median too).

# Convert the numeric-looking columns and fill the missing values with the column mean
num_col = ['normalized_losses', 'bore', 'stroke', 'horsepower', 'peak_rpm', 'price']
for col in num_col:
    auto[col] = pd.to_numeric(auto[col])
    auto[col].fillna(auto[col].mean(), inplace=True)

auto.head()

We can observe that the missing values have now been replaced with the mean.

Asking questions of the data is the most important step in EDA. This step decides how deeply you can think as an analyst, and it varies from person to person depending on their questioning ability. Try to ask questions that relate the independent variables to the target variable. For example: how does fuel_type affect the price of the car?
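As a sketch of how such a question could be explored (this plot is not part of the original walkthrough), a grouped boxplot compares price across fuel types, assuming the cleaning steps above have already been applied:

```python
# Hypothetical illustration: distribution of price for each fuel type
sns.boxplot(x='fuel_type', y='price', data=auto)
plt.show()
```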

Before diving into such questions, let us check the correlation between the different variables; this will give us a roadmap for how to proceed.

plt.figure(figsize=(10,10))
# numeric_only=True restricts the correlation matrix to the numeric columns
sns.heatmap(auto.corr(numeric_only=True), cbar=True, annot=True, cmap='Blues')

From the heatmap, the variables with strong positive correlation are:

- price – wheel_base, length, width, curb_weight, engine_size, bore, horsepower
- wheel_base – length, width, height, curb_weight, engine_size, price
- horsepower – length, width, curb_weight, engine_size, bore, price
- highway_mpg – city_mpg

And the variables with strong negative correlation are:

- price – highway_mpg, city_mpg
- highway_mpg – wheel_base, length, width, curb_weight, engine_size, bore, horsepower, price
- city_mpg – wheel_base, length, width, curb_weight, engine_size, bore, horsepower, price

This heatmap has given us great insights into the data.

Now let us apply domain knowledge and ask questions about what affects the price of an automobile.

plt.figure(figsize=(10,10))
plt.scatter(x='horsepower', y='price', data=auto)
plt.xlabel('Horsepower')
plt.ylabel('Price')

We can see that most cars with horsepower between 50 and 150 are priced roughly between 5,000 and 25,000; there are also outliers with horsepower between 200 and 300.

Let’s look at the distribution of horsepower, i.e. a univariate analysis, and in particular the counts between 50 and 100.

sns.histplot(auto.horsepower,bins=10)

The bins between 50 and 100 have counts of around 50, and the distribution is positively skewed.
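To back up that visual impression (this check is not in the original code), pandas can compute the sample skewness directly; a positive value confirms the right skew:

```python
# Sample skewness of horsepower; positive means a longer right tail
print(auto['horsepower'].skew())
```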

plt.figure(figsize=(10,10))
plt.scatter(x='engine_size', y='price', data=auto)
plt.xlabel('Engine size')
plt.ylabel('Price')

We can observe that the pattern is similar to horsepower vs price.

plt.figure(figsize=(10,10))
plt.scatter(x='highway_mpg', y='price', data=auto)
plt.xlabel('Highway mpg')
plt.ylabel('Price')

We can see that price decreases as highway_mpg increases.
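A quick numeric check of that inverse relationship (not in the original code) is the Pearson correlation between the two columns; a minimal sketch:

```python
# A negative coefficient confirms that price tends to fall as highway_mpg rises
print(auto['highway_mpg'].corr(auto['price']))
```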

Let us check the number of doors.

#Unique values in num_of_doors
auto.num_of_doors.value_counts()

We will use a boxplot for this analysis.

sns.boxplot(x='price',y='num_of_doors',data=auto)

With this boxplot, we can conclude that the median price of a vehicle with two doors is around 10,000, and the median price of a vehicle with four doors is around 12,000.
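As a sanity check on that reading (not part of the original code), the medians can also be computed directly with a groupby:

```python
# Median price per door count; rows with a missing num_of_doors are ignored
print(auto.groupby('num_of_doors')['price'].median())
```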

With these plots and checks, we have gained enough insights from the data, and it is now ready for model building.

In conclusion, Exploratory Data Analysis (EDA) is a crucial step in data analysis. It helps to understand the nature of the data and identify any patterns or trends hidden within it. We can gain insights into the data using various visualization techniques and statistical methods, which can help make informed decisions. This article has covered some essential techniques for performing EDA, such as summary statistics, data visualization, and correlation analysis. However, EDA is not limited to these techniques, and several other methods can be used depending on the nature of the data. Mastering EDA is essential for building accurate models and making data-driven decisions as a data scientist. If you want to learn more about EDA, consider enrolling in our Blackbelt program for advanced data analysis techniques.

*The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.*


