How the Data Science Team at Netflix uses Jupyter Notebooks
- The Netflix data team has published a blog post on how they use Jupyter Notebooks
- There are broadly three use cases: data access, notebook templates, and scheduling notebooks
- Thanks to their flexibility and ease of use, Jupyter Notebooks have rapidly become the most popular tool within the organization
Jupyter notebooks are powerful, compelling tools that have become hugely popular among data scientists. I have personally tried a number of IDEs, from Spyder to RStudio, but I keep coming back to Jupyter Notebooks because of how neat, multi-faceted, and easy to use the whole environment is. In fact, I wrote an entire article on how to get started with them!
So when the data team at Netflix posted a blog about how they use these wonderful notebooks, my interest was piqued. In this AVBytes article, I have briefly summarised their main use cases for Jupyter (if you are interested in reading their entire post, the link is at the bottom of this section).
Netflix’s use of Jupyter notebooks can broadly be classified into three use cases:
- Data access
- Notebook templates
- Scheduling notebooks
Notebooks were initially used at Netflix to support data science workflows. But as their popularity grew, the team realised there were many benefits to be gained by extending their usage to general data access.
According to the team, this transition began in Q3 2017. They also created and actively maintain a Python library that “consolidates access to platform APIs”. This gives their users access to the entire platform from within their own notebooks!
As the team expanded the use of these notebooks, brand new tasks emerged to meet the different use cases. Out of this effort came parametrised notebooks, which allow you to declare parameters in your code cells and supply input values at execution time.
Data scientists use them to run experiments with different parameter sets, data engineers use them as part of the deployment process, and data analysts use them to run queries and build visualizations.
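Parameter injection of this kind is what open-source tools such as papermill implement: a new code cell carrying the supplied values is inserted right after the cell tagged `parameters`, so the rest of the notebook picks up the overrides. Here is a minimal sketch of that idea, operating directly on a notebook's JSON (nbformat-style) structure — the helper name is mine, not Netflix's:

```python
import json

def inject_parameters(nb, params):
    """Return a copy of a notebook (nbformat-style dict) with a new code
    cell assigning the given parameters, inserted right after the cell
    tagged 'parameters' (or at the top if no such cell exists)."""
    nb = json.loads(json.dumps(nb))  # cheap deep copy via JSON round-trip
    source = "\n".join(f"{k} = {v!r}" for k, v in params.items())
    new_cell = {
        "cell_type": "code",
        "metadata": {"tags": ["injected-parameters"]},
        "source": source,
        "outputs": [],
        "execution_count": None,
    }
    insert_at = 0
    for i, cell in enumerate(nb.get("cells", [])):
        if "parameters" in cell.get("metadata", {}).get("tags", []):
            insert_at = i + 1  # place overrides just after the defaults
            break
    nb["cells"].insert(insert_at, new_cell)
    return nb
```

Because the injected cell runs after the defaults, the supplied values win — which is exactly what lets the same notebook serve as a template for many different runs.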
Netflix uses notebooks as a unifying layer for scheduling workflows. This helps bridge the gap between constructing a workflow and getting it into deployment. When a Spark job is executed, its source code is injected into a brand new notebook and run there. That notebook then serves as an immutable record of the run, which can be referred to during troubleshooting.
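The "notebook as a record of the run" pattern is easy to sketch: each scheduled execution writes its output notebook to a dated, run-specific path that is never overwritten. The helper and directory layout below are illustrative only — they are not Netflix's actual scheduler:

```python
from datetime import datetime, timezone
from pathlib import Path
import shutil

def archive_run(notebook_path, archive_root):
    """Copy an executed notebook into a per-run archive directory so each
    scheduled execution leaves an immutable artifact for troubleshooting.
    Layout (hypothetical): <archive_root>/<notebook-stem>/<utc-timestamp>/."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%fZ")
    src = Path(notebook_path)
    dest_dir = Path(archive_root) / src.stem / stamp
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # preserve the output exactly as produced
    return dest
```

Because every run lands in its own timestamped directory, re-running a job never clobbers the evidence from a failed run — you can open the archived notebook and see exactly what executed and what it produced.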
If you’re curious about how the data science team is structured at Netflix, below are the different roles there:
- Business Analyst
- Data Analyst
- Quantitative Analyst
- Algorithm Engineer
- Analytics Engineer
- Data Engineer
- Data Scientist
- Machine Learning Scientist
- Research Scientist
You can check out Netflix’s blog post in full here.
Our take on this
Netflix has emerged as a leader in scaling up data science workflows for copious amounts of data. Their blog post contains a lot more detail about other aspects of their data science process, so make sure you go through it.
Coming back to Jupyter notebooks, I cannot stress enough how useful they are. If you haven’t used them yet (where have you been?), you should give them a try immediately. They transformed the way I code and have saved me a lot of time and headache with their really useful keyboard shortcuts.
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!