
# Boosting

### Tree Based Algorithms: A Complete Tutorial from Scratch (in R & Python)

Overview: Explanation of tree-based algorithms from scratch in R and Python. Learn machine learning concepts like decision trees, random forest, boosting, bagging, ensemble …

### Winning Solutions of DYD Competition – R and XGBoost Ruled

Introduction: It’s all about the extra mile one is willing to walk! Winning a data science competition requires two things: persistence and a willingness to try …

### Complete Machine Learning Guide to Parameter Tuning in Gradient Boosting (GBM) in Python

Overview: Learn parameter tuning in the gradient boosting algorithm (GBM) using Python. Understand how to adjust the bias-variance trade-off in machine learning for gradient boosting. Introduction …
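As a taste of what parameter tuning for GBM looks like in practice, here is a minimal sketch using scikit-learn's `GradientBoostingClassifier` with `GridSearchCV`. The library choice, dataset, and grid values are illustrative assumptions, not the article's recommended settings.

```python
# A minimal GBM tuning sketch (assumed setup, not the article's code):
# search a small grid over the parameters that most affect bias/variance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

param_grid = {
    "n_estimators": [50, 100],       # more trees -> lower bias, higher cost
    "learning_rate": [0.05, 0.1],    # smaller rate needs more trees but generalizes better
    "max_depth": [2, 3],             # deeper trees -> higher variance
}

search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

A typical workflow tunes tree-shape parameters first at a fixed learning rate, then lowers the rate while raising `n_estimators`.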

### Mini DataHack and the tactics of the three “Last Man Standing”!

Introduction: February started on a high for us. “Last Man Standing” saw more than 1,600 data scientists compete from all over the world, making more …

### Secrets from winners of our best ever Data Hackathon!

Introduction: One of the books I read in the initial days of my career was titled “What Got You Here, Won’t Get You There”. While …

### Quick Introduction to Boosting Algorithms in Machine Learning

Introduction: Many analysts misinterpret the term ‘boosting’ used in data science. Let me provide an interesting explanation of this term. Boosting grants power to machine …
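The core idea the article introduces can be shown in a few lines: boosting combines many weak learners (here, depth-1 decision stumps) into a stronger model. This sketch assumes scikit-learn's `AdaBoostClassifier`, whose default base learner is a decision stump; the article's own code may differ.

```python
# Boosting in a nutshell (assumed scikit-learn setup): compare one weak
# stump against an AdaBoost ensemble of 100 boosted stumps.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_informative=6, random_state=1)

stump = DecisionTreeClassifier(max_depth=1)                     # a single weak learner
boosted = AdaBoostClassifier(n_estimators=100, random_state=1)  # 100 stumps, reweighted

stump_score = cross_val_score(stump, X, y, cv=5).mean()
boosted_score = cross_val_score(boosted, X, y, cv=5).mean()
print(stump_score, boosted_score)
```

Each boosting round reweights the training points the previous stumps got wrong, so later stumps focus on the hard cases.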

### 5 Easy questions on Ensemble Modeling everyone should know

Introduction: If you’ve ever participated in data science competitions, you must be aware of the pivotal role that ensemble modeling plays. In fact, it is …

### Learn Gradient Boosting Algorithm for better predictions (with codes in R)

Introduction: The accuracy of a predictive model can be boosted in two ways: either by embracing feature engineering or by applying boosting algorithms straight …
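The gradient boosting algorithm the article teaches (in R) can be sketched from scratch in a few lines of Python: each stage fits a small tree to the current residuals, which are the negative gradient of squared error. The stage count, learning rate, and toy data here are illustrative assumptions.

```python
# A from-scratch sketch of gradient boosting for squared-error regression
# (illustrative settings; the article's own code is in R).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

pred = np.full_like(y, y.mean())   # stage 0: constant model
lr = 0.1                           # shrinkage on each stage's correction
for _ in range(100):
    residuals = y - pred                              # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    pred += lr * tree.predict(X)                      # add the shrunken correction

print(np.mean((y - pred) ** 2))    # training MSE after 100 stages
```

The shrinkage factor `lr` is what trades off bias against variance: a smaller value needs more stages but overfits less.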

### Basics of Ensemble Learning Explained in Simple English

Introduction: Ensemble modeling is a powerful way to improve the performance of your model. It usually pays off to apply ensemble learning over and …
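The simplest form of the ensemble idea the article explains is to combine the predictions of several different models. This sketch assumes scikit-learn's `VotingClassifier` as a stand-in; the article may build ensembles differently.

```python
# A minimal ensemble sketch (assumed scikit-learn setup): soft voting
# averages the predicted probabilities of two dissimilar models.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=2)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=3)),
    ],
    voting="soft",  # average class probabilities instead of hard votes
)

ensemble_score = cross_val_score(ensemble, X, y, cv=5).mean()
print(ensemble_score)
```

Ensembles help most when the base models make different kinds of errors, so it pays to combine dissimilar model families.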
