“I just followed my interests and focussed on learning machine learning as much as I could”- Philip Margolis
Age is just a number when it comes to switching careers late in life, or simply learning new things. There is no cap on when you can start or how much you can achieve, and data science is no exception.
To prove this to you, Philip Margolis joins us in this edition of the Kaggle Grandmaster Series.
Philip is a Kaggle Competitions Grandmaster and ranks 47th with 17 gold medals to his name. He is also an expert in the Kaggle Discussions category.
Philip was a technical writer for nearly 20 years, and then an entrepreneur for about 15 years. He authored two books about computer science and was a founder or co-founder of several companies involved in web publishing and search engine technologies. Since 2012, he has been learning and applying machine learning techniques.
You can go through the previous Kaggle Grandmaster Series Interviews here.
In this interview, we cover a range of topics, including:
- Philip’s Education and Work
- Philip’s Kaggle Journey from Scratch to becoming a Kaggle Grandmaster
- Philip’s Advice for Beginners in Data Science
- Philip’s Inspiration
So let’s begin without any further ado.
Philip’s Education and Work
Analytics Vidhya (AV): You did your undergrad in Journalism in 1979, and then around 30 years later, in 2012, you took a course in Data Science. This is really interesting! After so many years, what motivated you to learn a new technology and enter a new field? What resources and tools helped you bridge this gap?
Philip Margolis (PM): Although I majored in Journalism in college, I was always fascinated by computers and began programming as a teen. My early career as a technical writer was a good combination of my writing and technical interests. After being a technical author for about 15 years, I spent the next 20 years as a serial entrepreneur. Almost all the start-ups in which I was involved were data-driven companies, so I maintained a connection with programming and data even though my roles were at the management and strategic level.
I discovered machine learning during a transition period after I had sold my last company, Cozio Publishing. I don’t remember exactly how I landed on the Kaggle website, but once I realized what ML was capable of, I felt as though I had finally arrived home after a long trek in the wilderness. I think what appeals to me most about ML is that it can be used to answer such an enormous range of real-life questions. I’ve always been much more interested in solving practical problems than in theoretical research.
As I hadn’t done any programming in years, and my statistics knowledge was rudimentary, I had a very steep learning curve at first. I took a lot of online courses and followed the Kaggle discussion forums to pick up tips. The Kaggle leaderboards were very useful in motivating me to keep learning more.
AV: Being a technical writer for a long time, did you face any difficulties in transitioning to a Machine Learning specialist?
PM: Transitioning from tech writer and entrepreneur to ML practitioner was definitely a challenge. I was fortunate to be financially secure so I didn’t feel any pressure to make a living as a data scientist. I just followed my interests and focussed on learning as much as I could.
AV: You’ve got considerable experience in Freelancing in the field of Machine Learning as well. What are the pitfalls that a beginner should avoid while freelancing in this field?
PM: The thing I like most about freelancing is that every data set and data problem is unique and requires a customized solution. I think the biggest pitfall for beginners is to assume that each new project will be similar to something they’ve already encountered. My experience is that this is the exception, and it’s more likely that the project will require one to learn new techniques.
AV: How did you get the idea of founding Cozio Publishing? Did you have experience with coding beforehand?
PM: Cozio Publishing started when my wife, who is a professional violinist, was looking to buy a ‘new’ violin. New for her, that is, but it’s generally accepted that the best violins were produced in Italy in the 18th century, so ‘new’ actually means ‘old’ in this context. There was a lot of information about antique stringed instruments, but it was scattered in different print publications — books, magazines, auction catalogs, etc.
As I was helping my wife collect information, I started entering data into a custom database so that we could keep track of the different instruments she was considering. At some point, I realized that other musicians might find the information useful so I published the database on a website called cozio.com. As people discovered the website, it became increasingly popular because it helped lend some transparency to what was, and still is, a very non-transparent market.
I found I enjoyed collecting and publishing the violin data and decided to try to turn it into a business by charging a small subscription for access to some of the data. Later, the company also published several books about antique violins. To my great surprise, I kept doing this for nearly 10 years, much longer than I had devoted to any other enterprise. Eventually, I decided to move on to something new and I sold the company to tarisio.com, the leading online auction site for stringed instruments.
AV: Data journalism – it’s a popular buzzword these days. Do you have any kind of experience in this field? How does it leverage the field of machine learning?
PM: Even though I studied journalism and was a technical author, I haven’t actually done any writing in many years, so I can’t really comment on data journalism, but it sounds interesting.
Philip’s Kaggle Journey from Scratch to becoming a Grandmaster
AV: You’re a Kaggle Competitions Grandmaster and currently ranked 47th. You’ve participated in over 80 competitions. This is amazing! If we talk about your Kaggle journey specifically, then what are the challenges you faced, and how did you overcome them?
PM: I basically started at ground zero, with no knowledge of machine learning, probability, statistics, or matrix algebra. All I had was a little knowledge of database design and some very rusty C programming skills. So I had to learn everything at once — R, Python, probability and statistics, and ML. I was lucky that I started this journey exactly when MOOCs were becoming popular. One of my first online courses, which was really inspiring, was the famous Andrew Ng ML course. Another that was fantastic was the Stanford Statistical Learning course by Tibshirani and Hastie.
I took many other courses online via Coursera, MIT, Stanford, etc. The overall high quality of these courses is really amazing.
AV: What are the five tips you would give to beginners to get into the top 1% tier in Kaggle Competitions?
PM: I guess my first tip would be to set a different goal. If a high Kaggle ranking is your primary goal, you might be tempted to look for shortcuts, like blending lots of public kernels. This might help for a specific competition but it won’t help in the long run. So I would recommend setting the goal of learning as many practical ML techniques as possible. Use each Kaggle competition as a learning opportunity, even if it doesn’t lead to a high ranking in that competition. If I had to specify 5 tips, I guess they would be:
- Try to solve the problem yourself before looking at the public kernels.
- Start early. It’s really hard to catch up in a competition if you start after it has already been running for a month or more.
- Try to team up with people who have more knowledge/experience.
- Follow the discussion forums. These often contain really important information for understanding the special characteristics of the data and the competition.
- Be patient and don’t get frustrated if your ranking falls. Everything you learn will help in future competitions.
AV: Which have been the two most challenging competitions so far, and how did you come up with their solutions?
PM: That’s a tough one because every competition has unique challenges. In recent kernel competitions, for example, the biggest challenge is usually fitting the model within the memory and CPU/GPU limits. The competition where I invested the most time and energy was the $1 million Zillow challenge. The main challenge here was a very large and rich data set and extremely motivated competitors.
I really had to stretch myself to remain competitive. My final solution, which finished in 2nd place, was a blend of LGB models. Most of my effort was devoted to feature engineering and avoiding overfitting.
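The idea of blending several models, as Philip describes, can be sketched in a few lines. This is a minimal illustration with made-up numbers, not his actual solution: in the competition the predictions would come from LightGBM models, whereas here two synthetic prediction arrays stand in, and a blend weight is tuned on a validation fold.

```python
import numpy as np

# Hypothetical out-of-fold predictions from two models on a validation fold.
# In a real competition these would come from trained LightGBM models;
# here they are simulated as the target plus noise of different strengths.
rng = np.random.default_rng(0)
y_valid = rng.normal(size=200)
pred_a = y_valid + rng.normal(scale=0.30, size=200)  # model A (noisier)
pred_b = y_valid + rng.normal(scale=0.25, size=200)  # model B

def mae(y, p):
    """Mean absolute error between targets and predictions."""
    return float(np.mean(np.abs(y - p)))

# Grid-search the blend weight w on the validation fold:
# blended prediction = w * pred_a + (1 - w) * pred_b.
best_w, best_score = 0.0, float("inf")
for w in np.linspace(0.0, 1.0, 101):
    score = mae(y_valid, w * pred_a + (1 - w) * pred_b)
    if score < best_score:
        best_w, best_score = w, score

print(f"best weight: {best_w:.2f}, blended MAE: {best_score:.4f}")
```

Because the grid includes w = 0 and w = 1, the blend can never score worse on the validation fold than either model alone; the gain comes from the two models making partly uncorrelated errors.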
Philip’s Advice for Beginners in Data Science
AV: We would like to know: what steps do you follow while building a machine learning model?
PM: I don’t really have a systematic methodology, but I usually start with some very basic EDA and a simple model to set a baseline. For many competitions, I spend a lot of time, in the beginning, making sure that I have a validation setup that is compatible with the train/test set split. Once I have confidence in the validation setup, I’ll start trying to improve the model incrementally.
When I hit a wall and the model stops improving, or if my validation scores are not in sync with the leaderboard scores, I’ll do a deeper analysis of the data to figure out what’s going on. I generally avoid looking at any public kernels until I’m stuck.
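The validation setup Philip stresses, one that mirrors the train/test split, can be sketched concretely. The scenario below is hypothetical: if a competition's test set covers a future time period, a random shuffle would leak information, so the validation fold should also be a later slice of time. The month labels and array shapes here are invented for illustration.

```python
import numpy as np

# Toy dataset: 12 months of data, 50 rows per month. Suppose the
# (hypothetical) competition test set covers the last 2 months, so
# validation should mimic that split rather than use a random shuffle.
rng = np.random.default_rng(42)
months = np.repeat(np.arange(1, 13), 50)   # month label for each row
X = rng.normal(size=(months.size, 3))      # 3 dummy features
y = rng.normal(size=months.size)           # dummy target

# Time-based holdout: train on months 1-8, validate on months 9-10,
# leaving months 11-12 to play the role of the unseen test period.
train_idx = np.where(months <= 8)[0]
valid_idx = np.where((months == 9) | (months == 10))[0]

X_train, y_train = X[train_idx], y[train_idx]
X_valid, y_valid = X[valid_idx], y[valid_idx]

print(X_train.shape, X_valid.shape)
```

If the validation score tracks the leaderboard score under such a setup, incremental improvements can be trusted; if the two diverge, the split itself is usually the first thing to re-examine.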
AV: You’re active in Discussions as well. Which of your discussion threads would you recommend to the beginners?
PM: If I finish a competition with a good ranking, I’ll generally post a summary of my solution, and occasionally I’ll post some general notes about the competition in the early stages. I would strongly recommend that beginners (and experts) read all the solution summaries that are posted after the competition ends. These are incredibly valuable because they often contain very inventive techniques that can be used in future projects.
AV: You are an example that it’s never too late to start learning something new. What advice would you give to people who want to transition into this field and have little background experience/knowledge coming into this domain?
PM: I generally avoid this type of advice because people are so different, and what worked for me might not necessarily work for someone else. My philosophy of life has always been to follow my interests and be open to learning new things.
This is essential in ML, which is evolving so fast. What motivates me most to learn new skills are specific problems, whether they be Kaggle challenges or freelance projects. I know some people who can motivate themselves to learn just for the sake of learning. I envy them, but I’m not one of them.
AV: Which are the Data Science experts whose work you always look forward to?
PM: Within the Kaggle community there are many experts whom I admire, and whose posts I read especially carefully, but I don’t want to name them all because I’m sure I would forget someone. I’ve had the opportunity to team up with many of them in Kaggle competitions, which is a fantastic experience. Outside of Kaggle, the one expert I follow is an ex-Kaggler, Jeremy Howard, who leads the FastAI project.
Even when he explains something that all data scientists think they understand, like L2 regularization, he finds a new angle that gives additional insight. I think he’s a brilliant ML practitioner, and also one of the best teachers I’ve ever encountered.
Passion matters more than age when starting something new, and Philip’s journey is a testament to that. I hope this interview helps you realize that it is never too late to begin your data science journey.
This is the 16th interview in the Kaggle Grandmaster Series. You can read the previous interviews at the following links:
What did you learn from this interview? Are there other data science leaders you would want us to interview for the Kaggle Grandmaster Series? Let me know in the comments section below!