## Introduction

In the last few months, we have started conducting data science hackathons. These hackathons are contests with a well-defined data problem, which has to be solved in a short time frame. They typically last anywhere between 2 and 7 days.

If month-long competitions on Kaggle are like marathons, then these hackathons are the shorter format of the game – a 100-meter sprint. They are high-energy events where data scientists bring a lot of energy, the leaderboard changes almost every hour, and the speed of solving the data science problem matters a lot more than in Kaggle competitions.

One of the best tips I can give to data scientists participating in these hackathons (or even in longer competitions) is to quickly build the first solution and submit it. The first few submissions should be really quick. I have created modules in Python and R which take in tabular data and the name of the target variable and BOOM! I have my first model in less than 10 minutes (assuming your data has more than 100,000 observations). For smaller data sets, this can be even faster. The reason for submitting this super-fast solution is to create a benchmark for yourself, on which you then need to improve. I will talk about my methodology in this article.

## Breaking Down the process of Predictive Modeling

To understand the strategic areas, let’s first break down the process of predictive analysis into its essential components. Broadly, it can be divided into 4 parts. Every component demands a certain amount of time to execute. Let’s evaluate these aspects (with the time taken):

- **Descriptive analysis on the data – 50% of the time**
- **Data treatment (missing value and outlier fixing) – 40% of the time**
- **Data modelling – 4% of the time**
- **Estimation of performance – 6% of the time**

*Note:* The percentages are based on a sample of 40 competitions I have participated in previously (rounded off).

Now we know where we need to cut down time. Let’s go step by step through the process (with time estimates):

1. **Descriptive Analysis**: When I started my career in analytics, we used to primarily build models based on Logistic Regression and Decision Trees. Most of the algorithms we used involved greedy approaches, which can subset the number of features to focus on.

With advanced machine learning tools coming into the race, the time taken to perform this task can be significantly reduced. For your initial analysis, you probably do not need to do any kind of feature engineering. Hence, the time you need for descriptive analysis is restricted to finding missing values and the big features which are directly visible. In my methodology, you will *need 2 minutes* to complete this step (assuming a data set with 100,000 observations).
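For illustration, the whole two-minute pass can be as little as the following (a sketch assuming the data is already loaded into a data frame called complete, as in the steps later in this article):

```
str(complete)             # variable types at a glance
summary(complete)         # distributions and obvious outliers
colSums(is.na(complete))  # missing-value count per column
```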

2. **Data Treatment**: Since this is considered the most time-consuming step, we need to find smart techniques to get through this phase quickly. Here are two simple tricks which you can implement:

- Create dummy flags for missing value(s): In general, I have discovered that missing values in a variable sometimes carry a good amount of information. For instance, if you are analyzing clickstream data, you probably won’t have many values in the variables corresponding to mobile usage.
- Impute missing values with the mean / any other easy method: I have found that the ‘mean’ works just fine for the first iteration. Only in cases where there is an obvious trend coming from the descriptive analysis do you need a more intelligent method.

With such simple methods of data treatment, you can reduce the time to treat data to *3-4 minutes*.

3. **Data Modelling**: I have found GBM to be extremely effective for cases with 100,000 observations. For bigger data, you can consider running a Random Forest instead. This step will take the maximum amount of time (*~4-5 minutes*).
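For reference, here is a minimal sketch of what such a GBM fit could look like, using the gbm package; trainData, the Disbursed target, and the hyper-parameters are placeholders, not tuned values:

```
library(gbm)

# Minimal GBM sketch for a binary target
fit <- gbm(Disbursed ~ ., data = trainData,
           distribution = "bernoulli",   # 0/1 target
           n.trees = 100,
           interaction.depth = 3,
           shrinkage = 0.1)
```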

4. **Estimation of Performance**: I find k-fold cross-validation with k = 7 highly effective for making my initial bet. This finally takes *1-2 minutes* to execute and document.
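For illustration, here is a base-R sketch of such a 7-fold estimate; the glm call, trainData, and the Disbursed target are placeholders, and auc() is assumed to come from the Metrics package:

```
library(Metrics)  # assumed source of auc()

set.seed(120)
k <- 7
folds <- sample(rep(1:k, length.out = nrow(trainData)))  # random fold assignment

cv_auc <- sapply(1:k, function(i) {
  fit  <- glm(Disbursed ~ ., data = trainData[folds != i, ], family = "binomial")
  pred <- predict(fit, newdata = trainData[folds == i, ], type = "response")
  auc(trainData$Disbursed[folds == i], pred)
})
mean(cv_auc)  # average out-of-fold AUC as the initial benchmark
```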

The reason to build this model is not to win the competition, but to establish a benchmark for ourselves. Let me take a deeper dive into my algorithm. I have also included a few snippets of my code in this article.

### Let’s start putting this into action

I will not include my entire function, to give you space to innovate. Here is a skeleton of my algorithm (in R):

**Step 1**: Append both the train and test data sets together
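Here is a sketch of what this step could look like; train and test are the raw input files (assumed names), and the train flag is what lets you split the combined set apart again later:

```
# Tag each set before stacking so they can be separated later
train$train <- 1
test$train  <- 0
test$Disbursed <- NA  # test has no target; add a placeholder column
complete <- rbind(train, test)
write.csv(complete, "complete_data.csv", row.names = FALSE)
```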

**Step 2**: Read the data set into memory

setwd("C:\\Users\\Tavish\\Desktop\\Kagg\\AV") complete <- read.csv("complete_data.csv", stringsAsFactors = TRUE)

**Step 3**: View the column names/summary of the dataset

```
colnames(complete)
[1] "ID" "Gender" "City" "Monthly_Income" "Disbursed" "train"
```

**Step 4**: Identify the a) numeric variables, b) ID variables, c) factor variables, and d) target variable
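One simple way to bucket the columns (the ID and target names are taken from the colnames output above):

```
numeric_vars <- names(complete)[sapply(complete, is.numeric)]
factor_vars  <- names(complete)[sapply(complete, is.factor)]
id_var       <- "ID"
target_var   <- "Disbursed"
```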

**Step 5**: Create flags for missing values

```
missing_val_var <- function(data, variable, new_var_name) {
  # Flag observations where the given column is missing
  data[[new_var_name]] <- ifelse(is.na(data[[variable]]), 1, 0)
  return(data)
}
```

**Step 6**: Impute numeric missing values

```
numeric_impute <- function(data, variable) {
  # Replace missing values with the column mean
  mean1 <- mean(data[[variable]], na.rm = TRUE)
  data[[variable]] <- ifelse(is.na(data[[variable]]), mean1, data[[variable]])
  return(data)
}
```
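For example, applied to the Monthly_Income column from the data set above:

```
complete <- missing_val_var(complete, "Monthly_Income", "Monthly_Income_missing")
complete <- numeric_impute(complete, "Monthly_Income")
```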

Similarly, impute categorical variables so that all missing values are coded as a single value, say “Null”.
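A minimal sketch of such a categorical imputer, mirroring the numeric one above:

```
categorical_impute <- function(data, variable) {
  # Recode every missing value in a factor column as the single level "Null"
  x <- as.character(data[[variable]])
  x[is.na(x)] <- "Null"
  data[[variable]] <- as.factor(x)
  return(data)
}
```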

**Step 7**: Pass the imputed variables into the modelling process

*Challenge*: Try to integrate a k-fold methodology in this step

```
create_model <- function(trainData, target) {
  set.seed(120)
  # Build the formula from the target column name, then fit a logistic regression
  myglm <- glm(as.formula(paste(target, "~ .")), data = trainData, family = "binomial")
  return(myglm)
}
```

**Step 8**: Make predictions

```
score <- predict(myglm, newdata = testData, type = "response")
score_train <- predict(myglm, newdata = complete, type = "response")
```

**Step 9** : Check performance

```
library(Metrics)  # one possible source of auc(); any AUC implementation works
auc(complete$Disbursed, score_train)
```

And Submit!

## End Notes

Hopefully, this article has given you enough motivation to make your own 10-minute scoring code. Most of the masters on Kaggle and the best scientists in our hackathons have these codes ready and fire their first submission before doing a detailed analysis. Once they have some estimate of the benchmark, they start improving on it. Share your complete codes in the comments box below.

Did you find this article helpful? Please share your opinions / thoughts in the comments section below.
