
The Hackathon Practice Guide by Analytics Vidhya



Data hackathons are a platform where you get an intense workout for the knowledge and techniques you have learnt in analytics. It is a place where you can evaluate yourself by competing with and learning from fellow data science experts.

This time, your tiny workout can win you Amazon vouchers worth Rs. 10,000 (~$200).

Here is an exclusive guide to help you prepare for the Data Hackathon Online on 11th July 2015. It lists the important techniques you should practice before stepping onto the playing ground.

Considering our series of online hackathons, we'll keep building this guide into a one-stop, exhaustive resource for data science techniques and algorithms. For the time being, this guide is good enough to help you prepare for the upcoming data hackathon on 11th July 2015.

1. Framework of Model Building Process

This is how the framework for model building works – you get data from multiple sources, which you extract and transform. Once transformed, you apply your knowledge of predictive modeling and business understanding to build predictive models.

Figure: the model building process

2. Hypothesis Generation

  • List down all possible variables which might influence the chances of survival of a passenger
  • Download the dataset from Kaggle
  • Next, look at the dataset and see which variables are available

Make sure you always do this in this order: generate your hypotheses before you look at the data.


3.  Data Exploration and Feature Engineering

  • Import data set
  • Variable identification
  • Univariate, Bivariate and Multivariate analysis
  • Identify and Treat missing and outlier values
  • Create new variables or transform existing variables
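To make these steps concrete, here is a minimal sketch in Python with pandas. It assumes a Titanic-style train.csv with columns such as Age, Fare, Sex, SibSp, Parch and Survived; the file name and column names are assumptions, not part of the actual hackathon data.

```python
import pandas as pd

# Import the data set (file name is an assumption; replace with the hackathon file)
train = pd.read_csv("train.csv")

# Variable identification: data types and non-null counts
train.info()

# Univariate analysis: distributions of numeric and categorical variables
print(train.describe())
print(train["Sex"].value_counts())

# Bivariate analysis: relationship between a predictor and the target
print(train.groupby("Sex")["Survived"].mean())

# Treat missing values: impute a numeric column with its median
train["Age"] = train["Age"].fillna(train["Age"].median())

# Treat outliers: cap a numeric column at its 1st and 99th percentiles
low, high = train["Fare"].quantile([0.01, 0.99])
train["Fare"] = train["Fare"].clip(low, high)

# Feature engineering: create a new variable from existing ones
train["FamilySize"] = train["SibSp"] + train["Parch"] + 1
```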



Modelling Techniques

1) Logistic Regression

  • Logistic regression is a form of regression analysis in which the outcome variable is binary or dichotomous
  • Used when the focus is on whether or not an event occurred, rather than when it occurred
  • Here, instead of modelling the outcome Y directly, the method models the log odds of Y using the logistic function
  • Like ANOVA and linear regression, logistic regression is a special case of the Generalized Linear Model (GLM)
  • The probability of success falls between 0 and 1 for all possible values of X


a) Logit Transformation

  • The logit of a probability $p$ is the natural log of the odds of success:

$$\text{logit}(p) = \ln\left(\frac{p}{1-p}\right)$$


b) Logit is directly related to Odds

  • The logistic model can be written as:

$$\text{logit}(p) = \ln\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 X_1 + \dots + \beta_k X_k$$

  • This implies that the odds for success can be expressed as:

$$\frac{p}{1-p} = e^{\beta_0 + \beta_1 X_1 + \dots + \beta_k X_k}$$

  • This relationship is the key to interpreting the coefficients in a logistic regression model: a one unit increase in $X_i$ multiplies the odds of success by $e^{\beta_i}$
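As a quick illustration, here is a minimal sketch of fitting a logistic regression with scikit-learn on the same assumed Titanic-style data; the file name and column names are assumptions carried over from the exploration example above.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed Titanic-style data with a binary outcome 'Survived'
train = pd.read_csv("train.csv")
train["Age"] = train["Age"].fillna(train["Age"].median())
X = pd.get_dummies(train[["Age", "Fare", "Sex", "Pclass"]], drop_first=True)
y = train["Survived"]

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=42)

# Coefficients are on the log odds scale; exponentiate them to get odds ratios
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(dict(zip(X.columns, model.coef_[0])))
print("Validation accuracy:", accuracy_score(y_valid, model.predict(X_valid)))
```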



2) Decision Tree

  • Decision tree is a type of supervised learning algorithm
  • It works for both categorical and continuous input and output variables
  • It is a classification technique that splits the population or sample into two or more homogeneous sets (sub-populations) based on the most significant splitter / differentiator among the input variables


Decision Tree – Example


Types of Decision Trees

  • Binary Variable Decision Tree: A decision tree with a binary target variable is called a binary variable decision tree. Example: in the student problem above, the target variable is “Student will play cricket or not”, i.e. YES or NO.
  • Continuous Variable Decision Tree: A decision tree with a continuous target variable is called a continuous variable decision tree.


Decision Tree – Terminology




Decision Tree – Advantages and Disadvantages

Advantages:

  • Easy to understand
  • Useful in data exploration
  • Less data cleaning required
  • Data type is not a constraint
  • Not sensitive to skewed distributions

Disadvantages:

  • Prone to overfitting
  • Not fit for continuous variables
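To see these points in practice, here is a minimal sketch of training a decision tree with scikit-learn on the same assumed Titanic-style data; limiting the tree depth is one simple way to control the overfitting mentioned above.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed Titanic-style data, as in the earlier examples
train = pd.read_csv("train.csv")
train["Age"] = train["Age"].fillna(train["Age"].median())
X = pd.get_dummies(train[["Age", "Fare", "Sex", "Pclass"]], drop_first=True)
y = train["Survived"]

# A shallow tree is easier to interpret and less prone to overfitting
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X, y)

# Print the learned splits: the most significant splitters appear near the root
print(export_text(tree, feature_names=list(X.columns)))
```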




3) Random Forest

  • Random Forest is an ensemble algorithm that grows a large number of decision trees and combines their predictions; it involves very intensive calculations.
  • Random forest is like a bootstrapping algorithm with the decision tree (CART) model.
  • Random forest gives much more accurate predictions than simple CART / CHAID or regression models in many scenarios.
  • It captures the variance of several input variables at the same time and enables a high number of observations to participate in the prediction.
  • A different bootstrap sample of the training data and a random subset of variables are selected for each tree.
  • The remaining (out-of-bag) training data are used to estimate error and variable importance.


Random Forest – Advantages and Disadvantages

Advantages:

  • No need for pruning trees
  • Accuracy and variable importance are generated automatically
  • Overfitting is not a problem
  • Not very sensitive to outliers in the training data
  • Easy to set parameters

Disadvantages:

  • It is a black box: the rules behind the model cannot be explained
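Here is a minimal sketch of a random forest with scikit-learn on the same assumed Titanic-style data; the out-of-bag score and feature importances correspond to the points above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Assumed Titanic-style data, as in the earlier examples
train = pd.read_csv("train.csv")
train["Age"] = train["Age"].fillna(train["Age"].median())
X = pd.get_dummies(train[["Age", "Fare", "Sex", "Pclass"]], drop_first=True)
y = train["Survived"]

# Each tree is grown on a bootstrap sample of rows and a random subset of features
forest = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
forest.fit(X, y)

# Out-of-bag data give an error estimate; variable importance comes for free
print("OOB score:", forest.oob_score_)
print(dict(zip(X.columns, forest.feature_importances_)))
```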



4) Support Vector Machine(SVM)

  • It is a classification technique.
  • Support vectors are simply the coordinates of individual observations.
  • A Support Vector Machine finds the frontier (hyperplane) that best segregates one class from the other.
  • Training an SVM is a quadratic programming problem.
  • It is seen by many as one of the most successful text classification methods.


Case Study 1

We have a population of 50% males and 50% females. Here, we want to create a set of rules which will predict the gender class for the rest of the population.

The blue circles in the plot represent females and the green squares represent males.

Males in our population have a higher average height.

Females in our population have longer scalp hair.
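Here is a minimal sketch of this case study with scikit-learn, using a small made-up sample of heights and hair lengths; the numbers are illustrative assumptions, not real data.

```python
import numpy as np
from sklearn.svm import SVC

# Made-up observations: [height_cm, hair_length_cm]; 1 = male, 0 = female
X = np.array([[175, 5], [180, 3], [170, 7], [178, 4],
              [160, 30], [155, 35], [165, 25], [158, 28]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# A linear kernel looks for the frontier (hyperplane) that best separates the classes
clf = SVC(kernel="linear")
clf.fit(X, y)

# The support vectors are the observations that define this frontier
print(clf.support_vectors_)
print(clf.predict([[172, 10]]))  # classify a new person
```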






Text Mining:

Text mining is the analysis of data contained in natural language text. Text mining works by transforming words and phrases of unstructured data into numerical values which can then be linked with structured data in a database and analysed with traditional data mining techniques.
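For example, here is a minimal sketch of turning a few text snippets into numerical features with scikit-learn's TF-IDF vectorizer; the snippets themselves are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A few unstructured text snippets (made-up examples)
docs = [
    "data hackathons are a great way to practice",
    "practice predictive modelling before the hackathon",
    "text mining turns words and phrases into numbers",
]

# Transform words and phrases into numerical values (TF-IDF weights)
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
features = vectorizer.fit_transform(docs)

# Each document is now a numeric row that can be joined with structured data
print(vectorizer.get_feature_names_out())
print(features.toarray())
```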


End Notes

In this guide, we talked about various modelling techniques, text analytics and the stages necessary for sound model building. For now, this guide is focused on the upcoming data hackathon on 11th July 2015, but we will keep updating it with more useful content to help you fight harder at future hackathons.

If you like what you just read & want to continue your analytics learning, subscribe to our emails, follow us on twitter or like our facebook page.




