What makes you more money – A/B Testing or Multi-Armed Bandits?

Every company whose core business depends on its website or app runs many modifications and experiments on its UI and web pages, testing these changes to find out which ones lead to increased engagement. Similarly, ad tech companies serve several different banners for the same ad to optimize click-through rates. A/B testing, or split testing, already provides a structured way to conduct these experiments. However, there is an alternative, lesser-known approach to the same task called 'multi-armed bandits', or MAB. This method can be more beneficial than A/B testing, especially when the experiment carries a high opportunity cost.

In this hack session, the speaker will explain the problem setting of MABs and discuss the following techniques for solving a multi-armed bandit problem:
  1. Epsilon Greedy Method

  2. Upper Confidence Bound (UCB)

  3. Softmax

The session will also cover an implementation in Python and a quantitative comparison of these techniques against A/B testing on a real dataset to solidify the idea; a minimal illustrative sketch is shown below.
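For readers who want a preview, here is a minimal, self-contained sketch (not the session's actual code) of what these three selection rules can look like in Python. It assumes a simulated Bernoulli click/no-click environment; the arm probabilities, random seed, and horizon are illustrative placeholders.

```python
import numpy as np

# Hypothetical setup: each "arm" is a page/banner variant with an unknown
# click-through rate. The probabilities below are made up for illustration.
TRUE_CTR = [0.04, 0.05, 0.07]
N_ARMS = len(TRUE_CTR)
rng = np.random.default_rng(42)


def pull(arm):
    """Show variant `arm` once and return 1 for a click, 0 otherwise."""
    return int(rng.random() < TRUE_CTR[arm])


def epsilon_greedy(counts, values, t, epsilon=0.1):
    """Explore a random arm with probability epsilon, otherwise exploit the best estimate."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ARMS))
    return int(np.argmax(values))


def ucb1(counts, values, t):
    """Pick the arm with the highest optimistic estimate (mean + confidence bonus)."""
    if 0 in counts:
        return int(np.argmin(counts))  # make sure every arm is tried at least once
    return int(np.argmax(values + np.sqrt(2 * np.log(t) / counts)))


def softmax(counts, values, t, tau=0.05):
    """Sample an arm with probability proportional to exp(estimated value / tau)."""
    prefs = np.exp((values - values.max()) / tau)  # subtract max for numerical stability
    return int(rng.choice(N_ARMS, p=prefs / prefs.sum()))


def run(select, horizon=10_000):
    """Run one strategy for `horizon` impressions and return total clicks."""
    counts = np.zeros(N_ARMS)  # pulls per arm
    values = np.zeros(N_ARMS)  # running mean reward per arm
    total = 0
    for t in range(1, horizon + 1):
        arm = select(counts, values, t)
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        total += reward
    return total


for name, strategy in [("epsilon-greedy", epsilon_greedy),
                       ("UCB1", ucb1),
                       ("softmax", softmax)]:
    print(f"{name}: {run(strategy)} clicks out of 10,000 impressions")
```

By contrast, a classic A/B test would split traffic evenly among the variants for the full horizon; the bandit strategies shift impressions toward the better-performing variant as evidence accumulates, which is where the reduced opportunity cost comes from.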


Hackers

Ankit Choudhary

An IIT Bombay graduate with a Master's and Bachelor's in Electrical Engineering. He previously worked as a lead decision scientist for the Indian National Congress, deploying statistical models (segmentation, k-nearest neighbours) to help the party leadership and team make data-driven decisions. His interest lies in putting data at the heart of business for data-driven decision making.


Duration of Hack-Session: 1 hour