Hypothesis testing is a cornerstone of statistics, vital for statisticians, machine learning engineers, and data scientists. It involves using statistical tests to determine whether to reject the null hypothesis, which assumes no relationship or difference between groups. These tests, whether parametric or non-parametric, are essential for analyzing data sets, handling outliers, and understanding p-values and statistical power. This article explores various statistical tests, including parametric tests like T-test and Z-test, and non-parametric tests, which do not assume a specific data distribution. Through these tests, we can draw meaningful conclusions from our data.

- Differentiate between parametric analysis and non-parametric methods, understanding their applications in data analysis.
- Apply regression techniques to analyze relationships between variables in data science.
- Conduct parametric analysis on both small and large sample sizes, ensuring accurate interpretations.
- Utilize non-parametric tests such as the Wilcoxon Signed Rank Test, Spearman correlation, and Chi-Square for data sets with ordinal data and non-normally distributed data.
- Analyze blood pressure data and other health metrics using appropriate statistical methods.
- Evaluate the differences in independent groups using both parametric and non-parametric methods.
- Understand the significance of the distribution of the data in choosing the right statistical test.
- Integrate statistical tests into broader data science projects for robust analysis and insights.

**This article was published as a part of the Data Science Blogathon.**

The basic principle behind the parametric tests is that we have a fixed set of parameters that are used to determine a probabilistic model that may be used in Machine Learning as well.

Parametric tests are those tests for which we have prior knowledge of the population distribution (i.e, normal), or if not then we can easily approximate it to a normal distribution which is possible with the help of the Central Limit Theorem.
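The Central Limit Theorem mentioned above can be illustrated with a short sketch (the population, sample sizes, and seed below are purely illustrative, and NumPy is assumed to be available): even when the population is heavily skewed, the distribution of sample means is approximately normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 5,000 samples of size 50 from a heavily skewed (exponential) population
# whose true mean is 2.0.
sample_means = rng.exponential(scale=2.0, size=(5000, 50)).mean(axis=1)

# The population is skewed, yet the sample means cluster around the population
# mean with spread roughly sigma / sqrt(n).
print(sample_means.mean())  # close to 2.0
print(sample_means.std())   # close to 2.0 / sqrt(50)
```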

The parameters of the normal distribution are:

- Mean
- Standard Deviation

Ultimately, whether a test is classified as parametric depends entirely on the assumptions made about the population. There are many parametric tests available; some common uses are as follows:

- To find the confidence interval for the population mean when the standard deviation is known.
- To determine the confidence interval for the population mean when the standard deviation is unknown.
- To find the confidence interval for the population variance.
- To find the confidence interval for the difference of two means when the standard deviation is unknown.
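The first two cases above can be sketched in Python. This is a minimal illustration, not a definitive recipe: the data and the "known" σ below are hypothetical, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

data = np.array([4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0])  # hypothetical sample
n, xbar = len(data), data.mean()

# Case 1: known population standard deviation -> normal (z) interval
sigma = 0.2
z = stats.norm.ppf(0.975)  # two-sided 95% critical value
ci_z = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# Case 2: unknown population standard deviation -> Student's t interval
s = data.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)
ci_t = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

print(ci_z, ci_t)  # the t interval is wider, reflecting the extra uncertainty
```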

In non-parametric tests, we make no assumptions about the parameters of the population we are studying. In fact, these tests do not depend on the population at all.

Hence, there is no fixed set of parameters, and no distribution of any kind (normal or otherwise) is assumed.

This is also the reason that non-parametric tests are referred to as **distribution-free tests**.

Non-parametric tests have been gaining popularity and influence, for several reasons:

- The main reason is that they impose none of the strict conditions that parametric tests require.
- The second reason is that we do not need to make assumptions about the population on which we are doing the analysis.
- Most of the non-parametric tests available are very easy to apply and to understand, i.e., their complexity is very low.

| Parameter | Parametric Test | Nonparametric Test |
|---|---|---|
| Assumptions | Assumes normal distribution and equal variance | No assumptions about distribution or variance |
| Data Types | Suitable for continuous data | Suitable for both continuous and categorical data |
| Test Statistics | Based on population parameters | Based on ranks or frequencies |
| Power | Generally more powerful when assumptions are met | More robust to violations of assumptions |
| Sample Size | Requires a larger sample size, especially when distributions are non-normal | Works with smaller sample sizes |
| Interpretation of Results | Straightforward interpretation of results | Results are based on ranks or frequencies and may require additional interpretation |


Let us explore types of parametric tests for hypothesis testing.

- The T-test is a parametric test of hypothesis testing based on Student's t-distribution.
- Essentially, it tests the significance of the difference between mean values when the sample size is small (i.e., less than 30) and the population standard deviation is not available.

**Assumptions of this test:**

- Population distribution is normal.
- Samples are random and independent
- The sample size is small.
- Population standard deviation is not known.

Mann-Whitney ‘U’ test is a non-parametric counterpart of the T-test.

A T-test can be a:

**One Sample T-test:** To compare a sample mean with the population mean.

**t = (x̄ - μ) / (s / √n)**

where:

- **x̄** is the sample mean
- **s** is the sample standard deviation
- **n** is the sample size
- **μ** is the population mean

**Two-Sample T-test:** To compare the means of two different samples.

**t = (x̄₁ - x̄₂) / √(s₁²/n₁ + s₂²/n₂)**

where:

- **x̄₁** is the sample mean of the first group
- **x̄₂** is the sample mean of the second group
- **s₁** is the standard deviation of sample 1
- **s₂** is the standard deviation of sample 2
- **n₁, n₂** are the sample sizes

**Note:**

- If the value of the test statistic is greater than the table value → **reject the null hypothesis**.
- If the value of the test statistic is less than the table value → **do not reject the null hypothesis**.
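Both T-tests can be run with SciPy. The sketch below uses simulated data, purely for illustration; with real data you would pass your own arrays.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# One-sample: does this small sample's mean differ from mu = 50?
sample = rng.normal(loc=52, scale=5, size=20)
t_one, p_one = stats.ttest_1samp(sample, popmean=50)

# Two-sample: do two small independent samples share the same mean?
a = rng.normal(loc=50, scale=5, size=15)
b = rng.normal(loc=55, scale=5, size=15)
t_two, p_two = stats.ttest_ind(a, b)

print(p_one, p_two)  # reject H0 when p < 0.05
```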

- The Z-test is a parametric test of hypothesis testing.
- It is used to determine whether the means are different when the population variance is known and the sample size is large (i.e., greater than 30).

**Assumptions of this test:**

- Population distribution is normal
- Samples are random and independent.
- The sample size is large.
- Population standard deviation is known.

**A Z-test can be:**

**One Sample Z-test:** To compare a sample mean with the population mean.

**z = (x̄ - μ) / (σ / √n)**

**Two Sample Z-test:** To compare the means of two different samples.

**z = (x̄₁ - x̄₂) / √(σ₁²/n₁ + σ₂²/n₂)**

where:

- **x̄₁** is the sample mean of the 1st group
- **x̄₂** is the sample mean of the 2nd group
- **σ₁** is the population standard deviation of group 1
- **σ₂** is the population standard deviation of group 2
- **n₁, n₂** are the sample sizes
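A one-sample Z-test can be computed directly from the formula above. The data and the "known" σ in this sketch are simulated assumptions, not a prescription.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Large sample (n > 30) with a KNOWN population standard deviation
sigma, mu0 = 10.0, 100.0
sample = rng.normal(loc=103, scale=sigma, size=100)

# One-sample z statistic: z = (x_bar - mu) / (sigma / sqrt(n))
z = (sample.mean() - mu0) / (sigma / np.sqrt(len(sample)))
p = 2 * stats.norm.sf(abs(z))  # two-sided p-value from the normal tail
print(z, p)
```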

- The F-test is a parametric test of hypothesis testing based on Snedecor's F-distribution.
- It tests the null hypothesis that two normal populations have the same variance.
- An F-test is a comparison of the equality of sample variances.

The F-statistic is simply a ratio of two variances:

**F = s₁² / s₂²**

By changing which variances enter the ratio, the F-test becomes a very flexible test. It can be used to:

- test the overall significance of a regression model,
- compare the fits of different models, and
- test the equality of means.

**Assumptions of this test:**

- Population distribution is normal, and
- Researchers draw samples randomly and independently.

- ANOVA, or Analysis of Variance, is a parametric test of hypothesis testing.
- It is an extension of the T-test and Z-test.
- It tests the significance of the differences in the mean values among more than two sample groups.
- It uses the F-test to statistically test the equality of means and the relative variance between them.

**Assumptions of this test:**

- Population distribution is normal.
- Samples are random and independent.
- Homogeneity of sample variance.

One-way ANOVA and Two-way ANOVA are its two types.

F-statistic = variance between the sample means/variance within the sample
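The ratio above can be computed with SciPy's one-way ANOVA. The three groups below are simulated for illustration; one group is deliberately shifted so the null hypothesis should be rejected.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Three independent groups; H0: all group means are equal
g1 = rng.normal(10, 2, size=30)
g2 = rng.normal(10, 2, size=30)
g3 = rng.normal(14, 2, size=30)  # shifted group should trigger rejection

F, p = stats.f_oneway(g1, g2, g3)
print(F, p)
```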


Let us now explore types of non-parametric tests.

- The Chi-Square test is a non-parametric test of hypothesis testing.
- It helps in assessing the goodness of fit between a set of observed values and those expected theoretically.
- It compares the expected frequencies with the observed frequencies.
- The greater the difference, the greater the value of chi-square.
- If there is no difference between the expected and observed frequencies, the value of chi-square equals zero. It is therefore also known as the "Goodness of Fit" test, which determines whether a particular distribution fits the observed data.

As a non-parametric test, chi-square can be used:

- as a test of goodness of fit, and
- as a test of independence of two variables.

Conditions for the chi-square test:

- Observations must be collected and recorded randomly.
- All entities in the sample must be independent.
- No group should contain very few items (say, fewer than 10).
- The overall number of items should be reasonably large: normally at least 50, however small the number of groups may be.

Chi-square can also be used as a parametric test for the population variance based on the sample variance. If we take each of a collection of sample variances, divide them by the known population variance, and multiply the quotients by (n - 1), where n is the number of items in the sample, we get the values of chi-square.

It is calculated as:

**χ² = (n - 1)s² / σ²**
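Both uses of chi-square can be sketched as follows; the die-roll counts and the hypothesized variance are hypothetical, and SciPy is assumed available.

```python
import numpy as np
from scipy import stats

# Goodness of fit: do these observed die-roll counts differ from a fair die?
observed = np.array([22, 17, 19, 25, 16, 21])  # hypothetical counts
chi2, p = stats.chisquare(observed)            # expected = uniform by default
print(chi2, p)

# Chi-square as a test for a population variance: chi2 = (n - 1) * s^2 / sigma^2
sample = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9])
stat = (len(sample) - 1) * sample.var(ddof=1) / 0.04  # H0: sigma^2 = 0.04
p_var = stats.chi2.sf(stat, df=len(sample) - 1)       # upper-tail p-value
print(stat, p_var)
```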

- The Mann-Whitney U test is a non-parametric test of hypothesis testing.
- This test investigates whether two independent samples were selected from populations having the same distribution.
- It serves as a true non-parametric counterpart of the T-test and provides the most accurate estimates of significance, especially when sample sizes are small and the population is not normally distributed.
- It is based on the comparison of every observation in the first sample with every observation in the other sample.
- The test statistic used here is "U".
- The maximum value of U is n₁·n₂, and the minimum value is zero.

**It is also known as:**

- Mann-Whitney Wilcoxon Test.
- Mann-Whitney Wilcoxon Rank Test.

Mathematically, U is given by:

**U₁ = R₁ - n₁(n₁ + 1)/2**

where n₁ is the sample size for sample 1, and R₁ is the sum of ranks in sample 1.

**U₂ = R₂ - n₂(n₂ + 1)/2**

When you consult the significance tables, use the smaller of U₁ and U₂. The sum of the two values is given by

**U₁ + U₂ = {R₁ - n₁(n₁ + 1)/2} + {R₂ - n₂(n₂ + 1)/2}**

Knowing that R₁ + R₂ = N(N + 1)/2 and N = n₁ + n₂, and doing some algebra, we find that the sum is

**U₁ + U₂ = n₁·n₂**
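The identity U₁ + U₂ = n₁·n₂ can be checked with SciPy on simulated, non-normal data (a sketch, not a prescription):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Two small, independent, non-normal (exponential) samples
a = rng.exponential(scale=1.0, size=12)
b = rng.exponential(scale=3.0, size=15)

res = stats.mannwhitneyu(a, b, alternative='two-sided')
U1 = res.statistic          # U for the first sample
U2 = len(a) * len(b) - U1   # identity: U1 + U2 = n1 * n2
print(U1, U2, res.pvalue)
```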

- The Kruskal-Wallis H test is a non-parametric test of hypothesis testing.
- Researchers use this test to compare two or more independent samples of equal or different sizes.
- It extends the Mann-Whitney U test, which compares only two groups.
- One-Way ANOVA is its parametric equivalent, which is why it is also known as 'One-Way ANOVA on ranks'.
- It uses ranks instead of the actual data values.
- It does not assume the population to be normally distributed.
- The test statistic used here is "H".
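A minimal Kruskal-Wallis sketch on three simulated skewed samples (values and seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Three independent samples from skewed (non-normal) populations
g1 = rng.exponential(1.0, size=20)
g2 = rng.exponential(1.0, size=25)
g3 = rng.exponential(3.0, size=22)  # stochastically larger group

H, p = stats.kruskal(g1, g2, g3)    # rank-based analogue of one-way ANOVA
print(H, p)
```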


Understanding the distinctions and applications of parametric and non-parametric methods is crucial in quantitative data analysis. The choice between these methods depends on factors such as sample size, data distribution, and the presence of outliers. Techniques like the permutation test and the sign test provide robust alternatives when traditional assumptions are not met. Knowledge of standard deviation and other statistical measures enhances the reliability of your findings. For further reading and deeper insights into these topics, consult reputable sources such as Wiley publications.

**Q1. What is the difference between parametric and non-parametric tests?**

A. Parametric tests assume that the data is normally distributed and that the groups being compared have equal variances. Non-parametric tests make no assumptions about the distribution of the data or the equality of variances.

**Q2. What are four common parametric tests?**

A. Four common parametric tests are the t-test, ANOVA (Analysis of Variance), the Pearson correlation coefficient, and linear regression.

**Q3. What are four common non-parametric tests?**

A. Four common non-parametric tests include the Wilcoxon signed-rank test, Mann-Whitney U test, Kruskal-Wallis test, and Spearman correlation coefficient.

**The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.**
