**Sklearn (scikit-learn)** is arguably the most useful library for machine learning in **Python**. It provides a rich set of efficient tools for **Machine Learning** and **Statistical modeling**, including **Classification**, **Regression**, **Clustering**, and **Dimensionality reduction**.

In this article, we will learn about the different types of objects present in **Sklearn**. The main goal is a clear understanding of the methods these objects expose, and of when to use each method in our **Machine Learning Pipeline**.

Primarily, there are three types of objects in **scikit-learn design**:

**1.** Estimator

**2.** Predictor

**3.** Transformer

**Now, let’s see the usage of some important methods: fit(), transform(), fit_transform() and predict().**


fit()

– It is used for calculating the initial parameters on the training data and saves them as the internal state of the object.

– In the case of a scaler, this method calculates the parameters μ (mean) and σ (standard deviation) and saves them as internal state.

– Think of it as the learning step: it only performs the computation and stores the result; it does not return any transformed data.
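As a minimal sketch (using `StandardScaler`, which learns exactly these μ and σ parameters), `fit()` only learns and stores state:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

scaler = StandardScaler()
scaler.fit(X_train)  # computes the parameters and returns the fitted scaler itself

# the learned parameters are stored as internal state, not returned as data
print(scaler.mean_)   # [3.]
print(scaler.scale_)  # [1.41421356] -- the standard deviation
```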

transform()

– It uses the values calculated above by fit() and returns the modified (transformed) training data as output.

– Using these same learned parameters, we can transform any dataset with this method.

– Used for pre-processing before modeling.

fit_transform()

– It combines the above two steps. Internally, it first calls fit() and then transform() on the same data.

– It joins fit() and transform() into a single call for transforming the dataset.

– It is used on the training data so that we scale the training data and also learn the scaling parameters at the same time. Here, the model built will learn the mean and variance of the features of the training set. These learned parameters are then used to scale our test data.
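A small sketch to confirm that `fit_transform()` is equivalent to `fit()` followed by `transform()` on the same data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

# two-step version: fit() then transform()
two_step = StandardScaler().fit(X_train).transform(X_train)

# one-step version: fit_transform()
one_step = StandardScaler().fit_transform(X_train)

print(np.allclose(two_step, one_step))  # True
```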

fit()

– It calculates the parameters or weights on the training data (e.g. the parameters stored in the coef_ attribute in the case of Linear Regression) and saves them as the internal state of the object.

predict()

– It uses the weights calculated above on the test data to make predictions.
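A minimal sketch of this fit/predict pair with `LinearRegression` (the toy data here is made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy training data following y = 2x + 1
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([3.0, 5.0, 7.0, 9.0])

model = LinearRegression()
model.fit(X_train, y_train)           # learns the weights from the training data
print(model.coef_, model.intercept_)  # ~[2.] and ~1.0

# predict() applies the learned weights to unseen data
X_test = np.array([[5.0]])
print(model.predict(X_test))          # ~[11.]
```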

Let’s try to understand the difference with an example:

Suppose you have an array **arr = [1,2,3,x,5]**, where x is a missing value, and a sklearn-style class FillMyArray that fills your array.

When you declare an instance of your class:

my_filler = FillMyArray()

We have the methods fit(), transform() and fit_transform() at hand.

**fit(): my_filler.fit(arr)** will compute the value to assign to x to fill out the array and store it in our instance my_filler.

**transform():** After the value is computed and stored during the previous .fit() stage, we can call **my_filler.transform(arr)**, which will return the filled array [1,2,3,4,5].

**fit_transform():** If we call **my_filler.fit_transform(arr)**, we skip one line of code: the value is computed and the filled array is returned in a single step.
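FillMyArray is not a real sklearn class; the following is a hypothetical sketch of what such a transformer could look like. The fill rule chosen here (linear interpolation between the known neighbours) is an assumption made so that the example array comes out as [1,2,3,4,5]:

```python
import numpy as np

class FillMyArray:
    """Hypothetical sklearn-style transformer that fills missing (NaN)
    entries by linear interpolation between the known values."""

    def fit(self, arr):
        arr = np.asarray(arr, dtype=float)
        # fit(): compute the fill values and store them as internal state
        idx = np.arange(len(arr))
        known = ~np.isnan(arr)
        self.fill_values_ = np.interp(idx, idx[known], arr[known])
        return self  # like sklearn, fit() returns the fitted object itself

    def transform(self, arr):
        arr = np.asarray(arr, dtype=float)
        # transform(): apply the stored values to the missing slots
        return np.where(np.isnan(arr), self.fill_values_, arr)

    def fit_transform(self, arr):
        # fit_transform(): both steps combined in one call
        return self.fit(arr).transform(arr)

my_filler = FillMyArray()
print(my_filler.fit_transform([1, 2, 3, np.nan, 5]))  # [1. 2. 3. 4. 5.]
```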

**Question:** We know that we call fit_transform() on our training dataset and transform() on our test dataset. But why do we do so?

**The real deal here is “Data leakage”.**

**fit_transform()** performs certain calculations followed by a transformation: for example, it may calculate the mean of certain columns and then replace the missing values accordingly. For the training set, we need both the calculation and the transformation.

However, for the test set, the model makes predictions based on what it learned during training, so we must not recompute the parameters; we only apply the transformation.

If we call the **fit()** method on the test data as well, we will compute a new mean and variance, giving each feature a new scale and effectively letting the model learn from the test data too. The test set would then no longer be a surprise to our model, and it could not give us an honest estimate of model performance on unseen data, which is certainly our ultimate aim.

It is standard procedure to scale the data when building a machine learning model, so that the model is not biased toward a specific feature, while at the same time preventing the model from learning the trends of our test data.
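The leak-free pattern described above can be sketched with `StandardScaler`: fit on the training data only, then reuse the same learned parameters on the test data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test = np.array([[10.0]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on train only
X_test_scaled = scaler.transform(X_test)        # reuse train mean/std; no refitting

# the test point is scaled with the TRAINING parameters, not its own,
# so no information about the test set leaks into the preprocessing
print(X_test_scaled)  # ~[[6.708]] = (10 - 2.5) / std([1, 2, 3, 4])
```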

**Here we try to implement all the functions which we studied in the above part of the article.**

**Step-1: Import necessary python libraries and then read and load the “TITANIC” Dataset.**
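The article does not show the code for this step; a minimal sketch would look as follows (the file name `titanic.csv` is an assumption — adjust the path to wherever your copy of the dataset lives):

```python
import pandas as pd

# load the Titanic dataset; the file path is an assumption
df = pd.read_csv('titanic.csv')
df.head()
```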

**Step-2: Calculate the number of missing values per column.**

df.isnull().sum()

**Step-3: Fill the missing value of the “Age” column with their respective median.**

df['Age'].fillna(df.Age.median(),inplace=True)

**Step-4: Now, again check if there are missing values present or not in any column.**

df.isnull().sum()

**Step-5: Define our Independent(predictor) and Dependent(response) variables.**

X = df.iloc[:, 1:]
y = df.iloc[:, 0]

**Step-6: Split our dataset into train and test subsets.**

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

**Step-7: Now using standard scaler we first fit and then transform our dataset.**

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
pd.DataFrame(X_train_scaled)

**Step-8: Use fit_transform() function directly and verify the results.**

X_train_scaled = scaler.fit_transform(X_train)
pd.DataFrame(X_train_scaled)

**Step-9: Transform our test data.**

X_test_scaled = scaler.transform(X_test)
pd.DataFrame(X_test_scaled)

**–** Here we observe that the **fit_transform()** function gives the same result as calling **fit()** and then **transform()** separately.

**–** Remember: **fit_transform()** acts only on the training data, while **transform()** and **predict()** act on the test data.

**–** In summary, **fit()** performs the training step, **transform()** changes the data to pass it on to the next stage in the pipeline, and **fit_transform()** does both the fitting and the transforming in one step.

*Thanks for reading!*

If you liked this and want to know more, go visit my other articles on Data Science and Machine Learning by clicking on the Link

Please feel free to contact me on Linkedin, Email.

Something not mentioned or want to share your thoughts? Feel free to comment below and I’ll get back to you.

Till then, Stay Home, Stay Safe to prevent the spread of **COVID-19**, and Keep Learning!

Currently, I am pursuing my Bachelor of Technology (B.Tech) in Computer Science and Engineering at the **Indian Institute of Technology Jodhpur (IITJ)**. I am very enthusiastic about Machine Learning, Deep Learning, and Artificial Intelligence.
