All About AI-powered Jupyter Notebooks with JupyterAI

Nikhil1e9 22 May, 2024
15 min read


Generative AI has been at the forefront of recent advancements in artificial intelligence. It has become a part of every major sector, from tech and healthcare to finance and entertainment, and continues transforming our work. It has enabled us to create high-quality content and perform complex tasks in minutes.

Now, imagine a world where you can use simple text prompts to harness the power of generative AI, allowing you to write high-quality code or analyze complex data directly from a Jupyter Notebook. Welcome to Jupyter AI, which seamlessly integrates cutting-edge generative AI models into your notebooks, allowing you to perform all these complex tasks effortlessly while increasing productivity and efficiency.

Jupyter AI

Learning Objectives

By the end of this article, you will have a clear understanding of

  • The differences between traditional Jupyter notebooks and Jupyter AI
  • How to effectively use Jupyter AI to perform complex tasks and boost productivity
  • Using text prompts to generate code, visualize data, and automate manual tasks in Jupyter AI
  • Data and privacy concerns when using Jupyter AI
  • Limitations and drawbacks of using Jupyter AI

This article was published as a part of the Data Science Blogathon.

What is Jupyter AI?

Unlike traditional Jupyter notebooks, which require the user to perform all tasks manually, Jupyter AI can easily automate tedious and repetitive tasks. It allows users to write high-quality code and analyze data more effectively than ever by using simple text prompts. It has access to several large language model providers, including OpenAI, Google, Anthropic, and Cohere. The interface is simple, user-friendly, and accessible directly from a Jupyter Notebook.

In this article, I will walk you through the entire process of using Jupyter AI to become a more productive data scientist and boost your efficiency. Jupyter AI can be used in two different ways. The first method is to interact with an AI chatbot through JupyterLab, and the second is to run the `jupyter_ai_magics` command in a Jupyter notebook. We will be looking at both of these options in this article. So, let’s get started right away.

Generate API Keys

To use Jupyter AI with a specific model provider, we first need to provide the API keys so that the model provider can serve our requests. There are options for open-source models that won’t require an API key. However, you must download the model weights and configuration files to your system to run them, which may require significant additional storage space. Furthermore, in this case, inference would run on your CPU, which is much slower and can take a long time to generate a response to even a single prompt. Unless you are dealing with highly confidential data, I recommend using cloud providers because they are beginner-friendly and handle all the complex stuff.

I will be using Together AI and Google Gemini for this tutorial. Together AI provides seamless integration with several major LLMs and fast inference. Also, signing up for a new account gives you $25 in free credits, enough to run about 110 million tokens on the Llama-2 13B model. To put that in perspective, 1 million tokens is roughly equivalent to 700,000 words, while the massive Lord of the Rings trilogy has a combined word count of only about 500,000. You would need more than 150 copies of the trilogy to use up all of the tokens, so the free credits will be more than sufficient for any use case.
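As a quick sanity check, the free-credit math above works out. This is a rough back-of-the-envelope calculation using the approximate figures quoted in the text:

```python
# Approximations from the text: ~110M tokens of free credit on
# Llama-2 13B, ~700,000 words per 1M tokens, and ~500,000 words
# in the Lord of the Rings trilogy.
free_tokens_millions = 110
words_per_million_tokens = 700_000
total_words = free_tokens_millions * words_per_million_tokens  # 77,000,000 words
trilogy_words = 500_000
trilogies = total_words // trilogy_words
print(trilogies)  # 154, i.e. "more than 150" copies of the trilogy
```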

If you use a different model provider and already have an API key, feel free to skip this section.

TogetherAI API key

To generate a TogetherAI API key, follow the steps below:

  1. Create an account on the Together AI platform
  2. Sign in to your account
  3. Go to the API keys page in your account settings to see your API keys

Google API key

You must create an API key to use the Google Gemini model. The steps are:

  1. Go to Google Dev 
  2. Select the “Get API key in Google AI Studio” option
  3. Sign in with your Google account
  4. In Google AI Studio, click on Get API key and generate your API key

Cohere API key

To fine-tune the model to our local data, we would also need to have access to an embedding model. I will be using Cohere’s text embeddings for this. Follow the steps below to generate a Cohere API key:

  1. Go to Cohere API
  2. Create your Cohere account
  3. Go to Trial keys and create your API key

Install necessary dependencies

Jupyter AI is compatible with any system that supports Python versions 3.8 to 3.11, including Windows, macOS, and Linux machines. You will also need a conda distribution to install the necessary packages. If you don’t already have one installed on your computer, you must first install conda from the official conda website. I prefer Anaconda, but Miniconda and Miniforge are also viable options.

Create virtual environment

The next step is to create a virtual environment for our project. Before starting any project, you should create a virtual environment to avoid piling up packages in the default Python environment and potential conflicts with other packages. Copy the code below into your shell to create an isolated environment with Python 3.11.

$ conda create -n jupyter-ai-env python=3.11

This will create a new conda environment called `jupyter-ai-env` and install Python version 3.11 to this environment. Next, activate this environment using the command

$ conda activate jupyter-ai-env

Install JupyterLab and Jupyter AI

Next, install JupyterLab and Jupyter AI with the `conda install` command

$ conda install -c conda-forge jupyter-ai

This will install JupyterLab and Jupyter AI, along with all the other necessary dependencies, into our environment.

To use some of the model providers, such as OpenAI, Google, Anthropic, and NVIDIA, you must first install their required langchain dependencies. We also need to install two additional packages: `pypdf` for PDF support and `cohere` for the embedding model. To install these, write

$ pip install langchain-google-genai
$ pip install langchain-openai
$ pip install langchain-anthropic
$ pip install langchain_nvidia_ai_endpoints
$ pip install pypdf cohere

You don’t need to install all of them. Simply select the ones you require based on your needs and the availability of an API key. Then start an instance of JupyterLab:

$ jupyter lab

Jupyter AI in JupyterLab

On startup, the JupyterLab interface will look like this:


Chat Interface

On the left side is Jupyternaut, the chatbot with which we will interact. In addition to the basic chat functionality, it offers a variety of other features. It can also learn about our local data and then provide tailored responses to our prompts. As we will see in the later sections of this tutorial, it can even generate a complete Jupyter notebook from just a single text prompt. You can select the models by clicking on the settings icon at the top right of the Jupyternaut interface.

Language Model vs Embedding Model

There are two types of models here: language model and embedding model. Let’s understand the difference between the two. The language model is the one that powers the chat UI, which we will use to chat and generate responses to prompts. The embedding model, on the other hand, generates vector embeddings of our local data files and stores them in a vector database. This allows the language model to retrieve relevant information when asked specific questions about the data. Using Retrieval-Augmented Generation (RAG), the model can extract relevant information from the vector database and combine it with its existing knowledge to answer questions about a specific topic in a detailed and accurate manner. 
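To make the retrieval step of RAG concrete, here is a minimal, self-contained sketch. It uses toy word-count vectors in place of a real embedding model such as Cohere’s, and a plain list in place of a vector database; the documents and query are made up for illustration:

```python
# Minimal sketch of RAG retrieval: embed document chunks, embed the
# query, and return the chunk most similar to the query.
from collections import Counter
import math

def embed(text):
    # Stand-in for an embedding model: a simple word-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# "Vector database": embeddings of our local document chunks.
docs = [
    "sleep paralysis is a temporary inability to move while waking",
    "the titanic dataset contains passenger survival records",
]
index = [(doc, embed(doc)) for doc in docs]

# Retrieve the most relevant chunk; a real RAG pipeline would then pass
# it to the language model alongside the question.
query = embed("what is sleep paralysis")
best = max(index, key=lambda pair: cosine(query, pair[1]))[0]
print(best)
```

A production setup swaps the word-count vectors for learned embeddings and the list scan for an indexed vector store, but the retrieve-then-generate flow is the same.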

Jupyter AI supports a wide range of model providers and their models. You can see the list of all the model providers in the dropdown.


Select your preferred model from the dropdown, enter your API keys into the appropriate boxes and save the changes.

Simple Task

Let’s chat with our AI assistant and test its knowledge with a simple question.


It pretty much nailed it. Along with the definitions, it correctly gives image classification as an example of supervised learning and clustering for customer segmentation as an example of unsupervised learning.

Code Generation

Now, let us see how it performs on a coding problem. 


The code above looks efficient and logically correct. Let us ask some follow-up questions to see if it knows what it is discussing.


It surely knows its concepts well. To test it further, we can add a notebook to the right-side panel and have it optimize our code for us. 

Code Optimization

To do this, highlight a section of your notebook and choose the include selection option so that the code is visible to the chatbot. You can then ask questions about the selected code, as depicted in the image below


Jupyternaut can even replace your selection with its own response via the replace selection option. Let us ask it to produce a more optimized version of the code, along with comments explaining it.


Jupyternaut sends the code to your chosen language model and then replaces the selection with the model’s response. It optimizes the code correctly by using a set rather than a list and explains the change with proper comments, as shown above.
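The set-versus-list change is a classic optimization worth spelling out. This small timing sketch (my own illustration, not the chatbot’s output) shows why: membership tests on a list are O(n), while a set offers O(1) average-case lookups.

```python
# Compare membership-test speed: list (linear scan) vs set (hash lookup).
import timeit

items_list = list(range(100_000))
items_set = set(items_list)

# Look up the worst-case element (last in the list) 100 times each.
slow = timeit.timeit(lambda: 99_999 in items_list, number=100)
fast = timeit.timeit(lambda: 99_999 in items_set, number=100)
print(fast < slow)  # the set lookup is dramatically faster
```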

Learn from local data

So far, so good, but let us take it one step further and ask a few questions about our local data. To use this feature, we must upload some documents, such as .pdf or .txt files, to the current directory. Create a new folder named docs, upload your data files to this folder, and then use the `/learn docs` command as depicted below:


I fed it a research paper on sleep paralysis. Now, use the `/ask` command to ask any specific questions about the data. You will notice a significant difference between the AI’s responses before and after learning from the documents. Here’s an example of me asking it about sleep paralysis:


Before learning the specifics of the document, the chatbot provided a vague and generic response that conveyed no useful information. However, after learning the embeddings, it provided a much better response. This is the power of retrieval-augmented generation (RAG). It allows the language model to cater to the very specifics of the data, providing highly accurate results. The one thing to note here is that the accuracy and correctness of the responses depend entirely on the quality of the data we are feeding into the model. As famously said in data science, “Garbage in, garbage out.”

You can also delete the learned information with the `/learn -d` command, which makes it forget everything it has learned about the local data.

Generate notebooks from scratch

To demonstrate the full potential of Jupyter AI, we will now let it create a complete notebook from scratch. As this is such a complex task, it requires a highly capable model like GPT-4 or Gemini Pro; Jupyter AI relies on the corresponding LangChain integrations to handle complex scenarios like this. I am choosing Gemini Pro for the task. To generate a Jupyter notebook from a text prompt, start the prompt with the `/generate` command. Let’s take a look at an example:


It created a notebook demonstrating a classification use case from scratch in just one minute. You can check the time stamps for yourself to verify this. This is what the generated notebook looks like.


I was amazed to see this level of detail in the generated notebook; I wasn’t expecting this from Gemini, and after testing different models on the same task, nothing else even came close. The notebook generated by Gemini is simply excellent, and it followed all of the instructions I provided in the prompt. This truly unleashes the power of LLMs. Data scientists, beware!

Export chat history

JupyterLab provides yet another useful feature. You can also save your chat data using the /export command. This command exports the entire chat history to a Markdown file and saves it in the current directory. This makes JupyterAI an extremely versatile tool.

Jupyter AI in Jupyter notebooks

The chat interface is truly remarkable, but there is more to JupyterAI. If you cannot install JupyterLab or it does not work properly on your system, there is one more alternative for using JupyterAI. It can also be used in notebooks via JupyterAI magics with the `%%ai` command. This means you can utilize JupyterAI’s features without relying solely on JupyterLab. This works with any IPython interface, such as Google Colab, VSCode, or your local Jupyter installation.

Enable Jupyter AI magics

If you already have `jupyter_ai` installed, the magics package `jupyter_ai_magics` will be installed automatically. Otherwise, use the following command to install it:

pip install jupyter_ai_magics

To load Jupyter AI into your IPython interface, run the command below, and the magics extension will be loaded into your notebook.

%load_ext jupyter_ai_magics

To take a look at the different model providers, type `%ai list`, or list only the models from a specific provider using `%ai list <provider-id>`. You will now see a long list of all the different model providers and their models.

Again, I will be using the TogetherAI models and Gemini Pro. But before going further, we need to provide our API keys again and store them in environment variables.


If you are using a different model provider, simply change the model provider name above, and you’ll be good to go.
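As a sketch of this step, the keys can be set from within the notebook as shown below. The environment-variable names here are assumptions based on the providers’ conventions; the `%ai list` output shows the exact variable each provider expects, so confirm it there.

```python
# Store provider API keys in environment variables so the %%ai magics
# can find them. The variable names and values below are placeholders;
# check `%ai list` for the exact name your provider requires.
import os

os.environ["TOGETHER_API_KEY"] = "<your-togetherai-api-key>"
os.environ["GOOGLE_API_KEY"] = "<your-google-api-key>"
```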

The model’s full name contains the model provider, followed by the model name. We can use an alias instead of writing the full name every time before calling a cell. To set an alias for our model name, use the code below:

%ai register raven togetherai:Nexusflow/NexusRaven-V2-13B
%ai register llama-guard togetherai:Meta-Llama/Llama-Guard-7b 
%ai register hermes togetherai:Austism/chronos-hermes-13b
%ai register mytho-max togetherai:Gryphe/MythoMax-L2-13b
%ai register llama2 togetherai:NousResearch/Nous-Hermes-Llama2-13b
%ai register gemini gemini:gemini-pro

You can now use these aliases like any other model name with the `%%ai` magic command. To enable Jupyter AI for a specific cell and send text prompts to our model, we first invoke the `%%ai` magic command with the model name and then provide the prompt below it:

%%ai llama2
{Write your prompt here}

Jupyter AI assumes that a model outputs markdown by default, so the output of a `%%ai` command is rendered as markdown. This can sometimes cause problems, with some models outputting nothing at all. You can change this by adding the `-f` or `--format` flag to your magic command. Other valid formats include `code`, `math`, `html`, `text`, `images`, and `json`.
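For instance, assuming the `gemini` alias registered earlier, forcing a plain-text response looks like this (a sketch; any registered model name works in its place):

```
%%ai gemini -f text
Explain overfitting in machine learning in two sentences.
```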

Text Generation

Therefore, setting the flag to `text` is better whenever you want a plain-text output. An example of this is shown below:


Mathematical Equations

We can also use it to write mathematical equations by changing the format to `math`.


HTML Tables

It can also generate good-looking HTML tables when the format is set to `html`.


Language Translation

Using curly braces, we can also include variables and other Python expressions in the prompt. Let’s understand this with an example of translating text from English to Hindi:


Similar to f-strings, the `{lang}` and `{name}` placeholders are replaced with the values assigned to those variables. It did not spell my name correctly, but I will let it get away with that.

Error Correction

It is good at writing and optimizing code. Let us see how well it does at correcting errors in code.


Jupyter AI provides a special `Err` variable that captures errors from previous cell executions. It can be interpolated into a prompt in another cell to ask questions about the error. In the example above, the model correctly detects the error and rewrites the corrected code.
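A sketch of this pattern is shown below, assuming the failing cell had execution count 3 (a hypothetical number; substitute the actual count from your notebook):

```
%%ai gemini -f code
Explain the following Python error and rewrite the corrected code:
--
{Err[3]}
```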

Generating a report

Let’s now give it a comparatively more complicated task to test its caliber again. Here is an example where I instructed it to generate a report on COVID-19 and its impact on the world.


As shown in the image above, the report is well-structured, with distinct sections for introduction, global health impact, economic impact, and social impact. It also elaborated on ongoing challenges and how nations worldwide are addressing them.

Text Summarization

The interpolation functionality can be extended further by combining the input/output of a specific cell with our prompt. Here’s an example where I asked it to create a brief summary of the COVID-19 report.


It lists the summary of the report in crisp bullet points. Interpolation also allows the model to read the report directly from the notebook, saving us the pain of copying and pasting the text each time.
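As a sketch of this interpolation, assuming the report was produced by the cell with execution count 12 (again a hypothetical number), IPython’s `Out[12]` pulls that cell’s output straight into the prompt:

```
%%ai gemini -f text
Summarize the following report in five bullet points:
--
{Out[12]}
```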

Data Visualization

Now, let’s put it to a final test. For this, I uploaded the Titanic CSV file and instructed it to write the code for univariate analysis on the Titanic dataset.


Wow! Not bad at all. There is not even a single error in the code. Every time the AI generates code, it is labeled as AI-generated, as shown in the image above. The code it provided worked and resulted in some lovely plots.


It also used subplots in the implementation, as specified in the prompt. It is amazing how well it adapts to the specifics of the prompt.

Limitations and Challenges

So far, we have looked at the positive aspects of Jupyter AI, but like anything else, it has limitations, too. Let’s look at these limitations one by one.

Biased Response

Because LLMs are trained on massive amounts of text data from all over the internet, they commonly produce biased responses to questions. Let’s look at an example of this:


First, it did not point out that the question itself was biased before answering it. Second, it did not even consider the possibility that its points could be incorrect. This is typical biased behavior.


Hallucination

When the model simply invents something nonexistent or makes stuff up, it is said to be hallucinating. Hallucinations are one of the most prominent problems with LLMs, greatly hampering their reliability.


It does not ask for clarification and completes the sentence according to its own preference. That is why it is always recommended to fact-check every piece of information an LLM generates rather than blindly trusting everything it says.

Factual Inconsistency

When asked about a person who has been to Mars, this was the response:


This is yet another example of the AI confidently stating wrong facts.

Jupyter AI poses some other challenges as well. These are:

  • It is difficult to select a single reliable model for each task because models that perform well on one task may perform poorly on others. 
  • If the prompt is not well-structured, the model may misinterpret it, resulting in a suboptimal or hallucinated response.

Additional Information

Apart from these, here are some additional points to keep in mind when using Jupyter AI:

  • Jupyter AI sends data to third-party model providers. Review the provider’s privacy policy and pricing options to understand data usage and payment obligations better.
  • Including additional context in messages can increase token count and costs. Therefore, it is advised to check cost policies before making large requests.
  • AI-generated code may contain errors, so it is always best to carefully review all generated code before running it.
  • Review the provider’s policies for third-party embedding models before sending any confidential or sensitive data.


Conclusion

In this article, we looked at the incredible power of Jupyter AI and how it can assist in various tasks, freeing us from tedious, repetitive work and allowing us to focus on the more creative aspects of our jobs. This is just a glimpse of what Jupyter AI, and LLMs in general, are capable of; they have limitless potential yet to be unlocked.

I hope you enjoyed this article. As always, thank you for reading, and I look forward to seeing you at another AI tutorial.

Key Takeaways

  • Jupyter AI provides chat assistance through a conversational assistant. This assistant can help summarize text, write good-quality code, and provide more specific information by learning about local data. 
  • We solved highly complex tasks by writing simple text prompts, such as creating an entire notebook from scratch.
  • Then, we examined how to transform our Jupyter notebooks into generative AI playgrounds using the `%%ai` magic command. 
  • We used different models for various tasks, such as code optimization, data visualization, and generating a well-structured report. 
  • Finally, we examined some of the language models’ limitations, including their tendency to occasionally generate biased, inconsistent, or hallucinated responses.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Frequently Asked Questions

Q1. What are the different ways to access Jupyter AI?

A. Jupyter AI can be accessed in two main ways. The first way is to use the chatbot as a conversational assistant in JupyterLab. The second method is to use the %%ai magic command in an IPython kernel such as Google Colab, Visual Studio Code or your local Jupyter installation.

Q2. What are the different model providers supported by Jupyter AI?

A. Jupyter AI supports a wide range of model providers and models, including OpenAI, Cohere, Hugging Face, Anthropic, and Google (Gemini). Visit the official documentation to see the complete list of supported model providers.

Q3. How does Jupyter AI ensure data privacy?

A. Jupyter AI only contacts an LLM when you specifically request it to. It does not read your data or transmit it to models without your explicit consent.

Q4. What are the different tasks that Jupyter AI can be used for?

A. Jupyter AI can be used for a wide range of tasks, ranging from answering simple questions to generating code, creating complex data visualizations, summarizing documents, composing creative content like stories or articles, translating text between languages, and many more.

Q5. Should I choose cloud models or host them locally?

A. The choice between cloud and locally hosted models boils down to the trade-off between privacy and faster inference. In other words, if you have sensitive or highly confidential data and want to ensure maximum privacy, you should use local models. If data privacy is not a major concern for you and you want quick inference, you should opt for cloud model providers.
