Hi, enthusiastic readers!

I have a Master's degree in Statistics, and a year ago I stepped into the field of data science. Writing a blog has been on my bucket list for a long time, and here I am, making an attempt to share my knowledge.

The main focus of this article is to introduce hypothesis testing and illustrate it with a few examples in Python. Whatever the concept, its execution can be done easily with a programming language like Python. But the most important part is drawing inferences from the output, and for that it is highly recommended to know the math behind the executed code.

Hypothesis testing is important in statistics because it provides statistical evidence for the validity of a study. The null hypothesis states that there is no statistically significant difference between sets of data, which implies that the population parameter equals a hypothesized value. Usually, we state the alternative hypothesis, which is what we want to prove. For a null hypothesis H_{0}: μ = μ_{0} and its complementary alternative hypothesis H_{1}, there are three cases: under H_{1} the parameter satisfies μ ≠ μ_{0} (two-tailed), μ < μ_{0} (left-tailed), or μ > μ_{0} (right-tailed).
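The three forms of the alternative hypothesis map directly onto the `alternative` argument of scipy's t-test functions (available in scipy 1.6 and later). A minimal sketch on made-up sample data, just to show the mapping:

```python
from scipy import stats

# Hypothetical sample and hypothesized mean, for illustration only
sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]
mu0 = 12.0

# The three alternatives: mu != mu0, mu < mu0, mu > mu0
for alt in ("two-sided", "less", "greater"):
    t, p = stats.ttest_1samp(sample, popmean=mu0, alternative=alt)
    print("H1 (%s): t = %.4f, p = %.4f" % (alt, t, p))
```

The test statistic is identical in all three cases; only the p-value changes, because it is computed over different tails of the t distribution.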

Let's consider a few scenarios where I state the hypotheses, a relevant test statistic, and the Python code for your understanding, including the conclusion part. Rather than covering every case, in this blog I give examples of the one-sample t-test, the two-sample t-test, and the paired t-test using Python.

Systolic blood pressures of 14 patients are given below:

183, 152, 178, 157, 194, 163, 144, 114, 178, 152, 118, 158, 172, 138

Test whether the population mean is less than 165.

H_{0}: There is no significant mean difference in systolic blood pressure, i.e., μ = 165

H_{1}: The population mean is less than 165, i.e., μ < 165

The test statistic is

t = (x̄ − μ) / (s / √n)

where

x̄ is the sample mean,

μ is the population mean,

s is the sample standard deviation, and

n is the number of observations.

**Python Code:**
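A sketch of the code for this test, in the same style as the later examples, assuming `scipy.stats.ttest_1samp`; the p-value it returns is two-sided, so it is halved here for the left-tailed alternative:

```python
from scipy import stats

sbp = [183, 152, 178, 157, 194, 163, 144, 114, 178, 152, 118, 158, 172, 138]
mu0 = 165  # hypothesized population mean

# ttest_1samp returns a two-sided p-value; halve it for the left-tailed
# test (appropriate here because the sample mean lies below mu0)
t_value, p_value = stats.ttest_1samp(sbp, popmean=mu0)
one_tailed_p_value = p_value / 2

print('Test statistic is %f' % t_value)
print('p-value for one-tailed test is %f' % one_tailed_p_value)

alpha = 0.05
if one_tailed_p_value <= alpha:
    print('''Conclusion: since p-value(=%f) < alpha(=%.2f), we reject the null
hypothesis H0. So we conclude that there is a significant mean difference in
systolic blood pressure, i.e., μ < 165 at %.2f level of significance.'''
          % (one_tailed_p_value, alpha, alpha))
else:
    print('''Conclusion: since p-value(=%f) > alpha(=%.2f), we do not reject the
null hypothesis H0, i.e., the population mean is not significantly less than
165 at %.2f level of significance.''' % (one_tailed_p_value, alpha, alpha))
```

For these data the sample mean is about 157.2, but the one-tailed p-value comes out above 0.05, so H0 is not rejected at the 5% level.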

To compare the effectiveness of ammonium chloride and urea on the grain yield of paddy, an experiment was conducted. The results are given below:

Ammonium chloride (X_{1}) | 13.4 | 10.9 | 11.2 | 11.8 | 14 | 15.3 | 14.2 | 12.6 | 17 | 16.2 | 16.5 | 15.7 |

Urea (X_{2}) | 12 | 11.7 | 10.7 | 11.2 | 14.8 | 14.4 | 13.9 | 13.7 | 16.9 | 16 | 15.6 | 16 |

H_{0}: The effects of ammonium chloride and urea on the grain yield of paddy are equal, i.e., μ_{1} = μ_{2}

H_{1}: The effects of ammonium chloride and urea on the grain yield of paddy are not equal, i.e., μ_{1} ≠ μ_{2}

The test statistic is

t = (x̄_{1} − x̄_{2}) / √( s_{p}² (1/n_{1} + 1/n_{2}) ), with pooled variance s_{p}² = ((n_{1} − 1)s_{1}² + (n_{2} − 1)s_{2}²) / (n_{1} + n_{2} − 2)

where

x̄_{1} and x̄_{2} are the sample means of x_{1} and x_{2} respectively,

n_{1} and n_{2} are the numbers of observations in x_{1} and x_{2} respectively, and

s_{1} and s_{2} are the sample standard deviations of x_{1} and x_{2} respectively.
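Since it pays to know the math behind the executed code, the pooled statistic can be computed by hand and checked against `scipy.stats.ttest_ind`, which pools the variances by default (`equal_var=True`). A minimal sketch:

```python
import math
from scipy import stats

x1 = [13.4, 10.9, 11.2, 11.8, 14, 15.3, 14.2, 12.6, 17, 16.2, 16.5, 15.7]
x2 = [12, 11.7, 10.7, 11.2, 14.8, 14.4, 13.9, 13.7, 16.9, 16, 15.6, 16]
n1, n2 = len(x1), len(x2)

mean1 = sum(x1) / n1
mean2 = sum(x2) / n2
# unbiased sample variances (divide by n - 1)
var1 = sum((x - mean1) ** 2 for x in x1) / (n1 - 1)
var2 = sum((x - mean2) ** 2 for x in x2) / (n2 - 1)

# pooled variance, then the two-sample t statistic
sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
t_manual = (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# scipy pools variances by default, so the two values agree
t_scipy, p_scipy = stats.ttest_ind(x1, x2)
print(t_manual, t_scipy)
```

If the two groups had clearly unequal variances, `equal_var=False` would instead give Welch's t-test, which does not pool.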

```python
from scipy import stats

Ammonium_chloride = [13.4, 10.9, 11.2, 11.8, 14, 15.3, 14.2, 12.6, 17, 16.2, 16.5, 15.7]
Urea = [12, 11.7, 10.7, 11.2, 14.8, 14.4, 13.9, 13.7, 16.9, 16, 15.6, 16]

t_value, p_value = stats.ttest_ind(Ammonium_chloride, Urea)
print('Test statistic is %f' % t_value)
print('p-value for two-tailed test is %f' % p_value)

alpha = 0.05
if p_value <= alpha:
    print('Conclusion', '\n', 'Since p-value(=%f)' % p_value, '<', 'alpha(=%.2f)' % alpha,
          '''We reject the null hypothesis H0. So we conclude that the effects of
ammonium chloride and urea on grain yield of paddy are not equal,
i.e., μ1 ≠ μ2 at %.2f level of significance.''' % alpha)
else:
    print('Conclusion', '\n', 'Since p-value(=%f)' % p_value, '>', 'alpha(=%.2f)' % alpha,
          '''We do not reject the null hypothesis H0. So we conclude that the effects of
ammonium chloride and urea on grain yield of paddy are equal,
i.e., μ1 = μ2 at %.2f level of significance.''' % alpha)
```

Eleven schoolboys were given a test in Statistics. They were then given a month's tuition, and a second test was held at the end of it. Do the marks give evidence that the students have benefited from the coaching?

Marks in 1st test: 23 20 19 21 18 20 18 17 23 16 19

Marks in 2nd test: 24 19 22 18 20 22 20 20 23 20 18

H_{0}: The students have not benefited from the tuition class, i.e., d = 0

H_{1}: The students have benefited from the tuition class, i.e., d < 0

Where, d = x-y; d is the difference between marks in the first test (say x) and marks in the second test (say y).

The test statistic is

t = d̄ / (s / √n)

where d̄ is the mean of the differences, s is the sample standard deviation of the differences, and n is the number of pairs.
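As a check on the math, the paired statistic can be computed directly from the differences d = x − y and compared with `scipy.stats.ttest_rel`. A minimal sketch:

```python
import math
from scipy import stats

first_test  = [23, 20, 19, 21, 18, 20, 18, 17, 23, 16, 19]
second_test = [24, 19, 22, 18, 20, 22, 20, 20, 23, 20, 18]

# differences d = x - y between first and second test
d = [x - y for x, y in zip(first_test, second_test)]
n = len(d)
d_bar = sum(d) / n
# sample standard deviation of the differences
s_d = math.sqrt(sum((di - d_bar) ** 2 for di in d) / (n - 1))

# t = d_bar / (s_d / sqrt(n))
t_manual = d_bar / (s_d / math.sqrt(n))

t_scipy, p_scipy = stats.ttest_rel(first_test, second_test)
print(round(t_manual, 6), round(t_scipy, 6))  # both match the output below, -1.707331
```

A paired test on (x, y) is exactly a one-sample test on the differences d, which is why the formula reduces to the one-sample form.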

```python
from scipy import stats

first_test  = [23, 20, 19, 21, 18, 20, 18, 17, 23, 16, 19]
second_test = [24, 19, 22, 18, 20, 22, 20, 20, 23, 20, 18]

t_value, p_value = stats.ttest_rel(first_test, second_test)
one_tailed_p_value = float("{:.6f}".format(p_value / 2))
print('Test statistic is %f' % t_value)
print('p-value for one_tailed_test is %f' % one_tailed_p_value)

alpha = 0.05
if one_tailed_p_value <= alpha:
    print('Conclusion', '\n', 'Since p-value(=%f)' % one_tailed_p_value, '<', 'alpha(=%.2f)' % alpha,
          '''We reject the null hypothesis H0. So we conclude that the students
have benefited from the tuition class, i.e., d < 0
at %.2f level of significance.''' % alpha)
else:
    print('Conclusion', '\n', 'Since p-value(=%f)' % one_tailed_p_value, '>', 'alpha(=%.2f)' % alpha,
          '''We do not reject the null hypothesis H0. So we conclude that the students
have not benefited from the tuition class, i.e., d = 0
at %.2f level of significance.''' % alpha)
```

**Output:**

```
Test statistic is -1.707331
p-value for one_tailed_test is 0.059282
Conclusion
Since p-value(=0.059282) > alpha(=0.05) We do not reject the null hypothesis H0.
So we conclude that the students have not benefited from the tuition class,
i.e., d = 0 at 0.05 level of significance.
```


