Kaggle Grandmaster Series – Exclusive Interview with Kaggle Competition Grandmaster Oleg Yaroshevskiy (#Rank 32)

avcontentteam 17 Feb, 2021 • 6 min read

Welcome back to the 22nd edition of the Kaggle Grandmaster Series

Today, we are thrilled to be joined by Oleg Yaroshevskiy to share his data science journey with the AV community.


Oleg is a Kaggle Competitions Grandmaster. He ranks 17th in the category and has 5 Gold Medals to his name. Also, he is an Expert in the Kaggle Discussion Category.

Oleg has a Master’s degree in Computer Science and Applied Statistics from the Taras Shevchenko National University of Kyiv. He currently works as a Machine Learning Research Engineer at a stealth-mode startup.

You can go through the previous Kaggle Grandmaster Series Interviews here.

 

In this interview, we cover a range of topics, including:

  • Oleg’s Education and Work
  • Oleg’s Kaggle Journey
  • Oleg’s Advice to Beginners in Data Science
  • Oleg’s Inspiration

So without any further ado, let’s begin.

 

Oleg’s Education and Work


Analytics Vidhya (AV): Your educational background is in the field of Computer Science. How has your Master’s degree in CS and Applied Statistics helped you in your career and in Kaggle competitions, given that it is often said that higher education is mandatory in the field of Data Science and ML?

Oleg Yaroshevskiy (OY): Before I learned about Data Science, I had been working as a software engineer. Back in 2016, I left my job and was curious about new opportunities, and I found that a whole new field had evolved, built largely on what we’d learned in university. I was very comfortable with it, so I believe the fundamentals helped me a lot. Like it or not, many opportunities are closed to you if you don’t have a Master’s or even a Ph.D. Is it justified? To some extent.

But there are also many talented folks solving sophisticated problems with no degree.

 

AV: You were a Deep Learning Engineer at your previous job. This job role is not too common in the industry. So, can you tell us more about the role of a DL Engineer and what skill sets are required to become one?

OY: I recently saw a claim on Twitter that “the vast majority of data scientists have never worked with anything except tabular data”. I have no idea if it’s accurate, but it’s funny because I can say exactly the opposite – I had already worked with all the major data types before I ever got to tabular data. Working with text, audio, and images, I found there are many cross-domain things you can, and probably should, learn.

So for me, a DL Engineer is an engineer who is very comfortable solving various problems with deep models. At some point you’ll find yourself saying “let’s feed a ResNet some tabular data!” and it feels so good. It’s also about your natural curiosity.

 

Oleg’s Kaggle Journey


AV: You’re a Kaggle Competitions Grandmaster, currently ranked 24th. This is really amazing! What is your strategy/framework for tackling any DL problem in a competition?

OY: Thanks. I always build a PyTorch baseline asap to see my first results. By a baseline, I mean a very straightforward solution, the kind you build in courses. Then, with results in hand, I can figure out what the problem really is. Sometimes it’s very different from what it seems to be. You need to understand why the score is what it is, and what target metric you really need to optimize.

In one Kaggle competition, error analysis showed that the model failed to detect some hand-labeling noise, but the noise itself looked very deterministic, so some post-processing and pre-processing boosted the score. Many might say that has nothing to do with machine learning – I disagree. Dealing with and “understanding” data is a real business task. Simply building a neural architecture is never enough. So for me, error analysis is an important part of EDA.
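For readers who want a concrete picture of the kind of “very straightforward” PyTorch baseline Oleg describes, here is a minimal sketch. The data, model size, and training setup are hypothetical placeholders chosen for illustration, not details from the interview:

```python
# Minimal PyTorch baseline sketch (hypothetical binary-classification setup).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 1,000 rows of 20 numeric features with a binary target.
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,)).float()
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# A deliberately simple model -- the point is to get a first score quickly.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb).squeeze(1), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```

Once a baseline like this produces its first score, the error analysis Oleg mentions (inspecting where and why it fails) tells you what the problem really is.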

 

AV: What were the challenges you faced when you participated in your first Kaggle competition and how did you overcome them?

OY: Oh, my first Kaggle competition (TalkingData fraud detection) was a big mess that turned out to be my first gold medal. By that point, I’d already built complicated TensorFlow models, but I’d had no real experience with pandas! So I spent most of my time trying to craft at least some features from a table with a few hundred million rows. I was so hot-tempered, competing with myself. But we showed good results because of team effort.

 

AV: Can you tell us about your top five favorite DL competitions so far that have shaped your Kaggle journey? And what were the challenges you faced while coming up with their solutions?

OY: I would name a few.

Avito Demand Prediction – This one was really interesting, and there were all kinds of data, so you could come up with very creative solutions. We trained transformers, dense and sparse feedforward networks, RNNs, CNNs, and factorization machines. My friend and teammate Dmytro Danevskiy even trained a denoising autoencoder on unlabeled data. We didn’t get into gold, and we almost burnt our laptops (at that time we didn’t have computational resources), but it was so much fun!

Airbus Ship Detection Challenge – That was the competition where I probably learned most of what I know about Computer Vision. At some point, we got stuck in the silver zone and couldn’t climb higher on the public leaderboard even though our local validation showed better and better results. Imagine our surprise when we woke up in 4th position (I didn’t sleep). That’s how we learned to trust our own cross-validation. Probably the most valuable Kaggle lesson.

Two times I finished one position short of gold (Quora Insincere Questions Classification and TensorFlow 2.0 Question Answering), both times in NLP competitions, which I believed I was good at. I can’t describe how dejected I felt after the second time. We were all burnt out but also reckless – we couldn’t stop. So from the very next day, we joined another race (the Google Quest Challenge), and that happened to be a huge win – 1st place. In terms of NLP Kaggle competitions, we really did our best, no doubt. Surprisingly, I got into the top-20 ranking and learned to love my failures.

 

AV: How has your industry experience in DL helped you in competitions?

OY: It works both ways. Industry experience gives you deeper intuition about specific problems, while Kaggle competitions help you build a wider range of tools and skills. I’m more comfortable with NLP tasks; for example, I took my solo gold in speech processing, which I had worked with at my first job.

 

Oleg’s Advice to Beginners in Data Science


AV: Could you name some lesser-known DL Frameworks which you feel everyone should know?

OY: If you are a beginner, I’m not sure there is much sense in investing in anything except PyTorch or TensorFlow. But of course, there might be exceptions for those who work in other programming languages like Swift or C#.

 

AV: How do you keep yourself updated with the latest technologies/frameworks related to DL? And how do you use them in the industry as well as in the hackathons?

OY: Unfortunately, I can’t keep an eye on everything, as the field is growing exponentially. I read some news on Twitter and some in Open Data Science (ods.ai). When I have time, I check Kaggle solutions or Papers with Code, where you can learn about recent state-of-the-art approaches to a specific task. You might feel stressed as more and more papers come out every day, but not many of them have a real impact. So simply by searching for “top machine learning articles of 2020” or “NeurIPS 2020 best”, you can stay up-to-date.

 

AV: What advice would you give to all the beginners who are fascinated by Deep Learning? What are the things they should avoid and what things should they follow?

OY:

Be curious. Be patient with your failures. Never walk alone.

If we talk about data science in general – learn the basics. Learn how to build strong validation. You can’t imagine how many people fail at this!
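As one concrete illustration of what “strong validation” can look like in practice, here is a small scikit-learn sketch of stratified K-fold cross-validation. It is our own example with a synthetic dataset and a logistic-regression model, not something Oleg shared:

```python
# Stratified K-fold cross-validation sketch with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic placeholder data standing in for a real competition dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

scores = []
for fold, (train_idx, valid_idx) in enumerate(cv.split(X, y)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict_proba(X[valid_idx])[:, 1]
    score = roc_auc_score(y[valid_idx], preds)
    scores.append(score)
    print(f"fold {fold}: AUC {score:.4f}")

# Trust the mean of the out-of-fold scores rather than a single split
# (or, on Kaggle, the public leaderboard).
print(f"CV AUC: {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```

The point of a scheme like this is exactly the lesson from the Airbus competition above: a stable local cross-validation score is what you trust, not a single lucky split.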

 

Oleg’s Inspiration

AV: Who are the five Data Science experts whose work you always look forward to?

OY: I’d say nowadays it’s more about teams than personalities: Facebook AI, DeepMind, and other big or small labs. OpenAI and Ilya Sutskever for what they’ve done recently (GPT and DALL-E). Also Thomas Wolf and Hugging Face for democratizing the NLP field. Andrej Karpathy for inspiring me back in the day with his famous article on RNNs. Richard Socher and his Stanford NLP lectures. And my friends and colleagues whom I was lucky to meet.

 

End Notes

His thoughts and words are enough to get anyone started on their data science journey and to keep them focused. I hope this edition of the Kaggle Grandmaster Series with Oleg adds value to yours.

This is the 21st interview in the Kaggle Grandmaster Series. You can read the previous few in the following links-

What did you learn from this interview? Are there other data science leaders you would want us to interview for the Kaggle Grandmaster Series? Let me know in the comments section below!

