A Comprehensive Guide to Implement HuggingFace Models Using Langchain

Tarun R Jain Last Updated : 08 Aug, 2024
7 min read

Introduction

Large Language Models have been the backbone of advancement in the AI domain. With the release of various open-source LLMs, demand for chatbot-specific use cases has grown. HuggingFace is the primary provider of open-source LLMs, where the model parameters are available to the public and anyone can use them for inference. Langchain, on the other hand, is a robust large language model framework that helps integrate AI seamlessly into your application. By combining HuggingFace and Langchain, one can easily build domain-specific chatbots.

Langchain vs Hugging Face

The integration of LangChain and Hugging Face enhances natural language processing capabilities by combining Hugging Face’s pre-trained models with LangChain’s linguistic toolkit. This partnership simplifies workflows, enabling efficient model deployment and advanced text analysis.

Learning Objectives

  • Understand the need for open-source large language models and how HuggingFace is one of the most important providers.
  • Explore three methods to implement Large Language Models with the help of the Langchain framework and HuggingFace open-source models.
  • Learn how to implement the HuggingFace task pipeline with Langchain using T4 GPU for free.
  • Learn how to implement models from HuggingFace Hub using Inference API on the CPU without downloading the model parameters.
  • Implement LlamaCPP to run large language models packaged in the gguf format.

This article was published as a part of the Data Science Blogathon.

HuggingFace and Open Source Large Language models

HuggingFace is the cornerstone for developing AI and deep learning models. The extensive collection of open-source models in the Transformers repository by HuggingFace makes it a go-to choice for many practitioners. Open-source large language models, such as LLaMA, Falcon, and Mistral, are characterized by publicly accessible learning parameters. In contrast, closed-source large language models have private learning parameters; utilizing such models typically means interacting with API endpoints, as with GPT-4 and GPT-3.5.

Hugging Face is a top platform that offers pre-trained models and libraries for understanding natural language. It is well-known for its Transformers library, which includes a wide variety of pre-trained models that can be adjusted for different NLP tasks.

This is where HuggingFace comes in handy. HuggingFace provides the HuggingFace Hub, a platform with over 120k models, 20k datasets, and 50k Spaces (demo AI applications).
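If you want to explore what is available, the Hub can also be browsed programmatically with the huggingface_hub client library. Below is a minimal sketch (the search term is just an example, and attribute names can differ slightly between huggingface_hub versions):

from huggingface_hub import HfApi

api = HfApi()
# List a few models on the Hub whose name matches "zephyr"
for model in api.list_models(search="zephyr", limit=5):
    print(model.id)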

What is Langchain?

With the advancement of Large Language Models in AI, the need for informative chatbots is in high demand. Let’s say you founded a new gaming company with many user manuals and shortcut documentation, and you need a ChatGPT-like chatbot that can answer questions over this company’s data. How do we achieve this?

This is where Langchain comes in. Langchain is a robust large language model framework that integrates components such as embeddings, vector databases, LLMs, and more. Using these components, we can provide external documents to the large language models and build AI applications seamlessly.
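To give a feel for how these components compose, here is a tiny illustrative sketch (the template text and variable name are made up for illustration) showing a LangChain prompt template being filled in before it is sent to an LLM:

from langchain.prompts import PromptTemplate

# A reusable prompt template; {topic} is filled in at query time
template = PromptTemplate.from_template(
    "Summarise the user manual section about {topic} in two sentences."
)
print(template.format(topic="keyboard shortcuts"))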

Installation

We need to install the required libraries to get started with different ways to use HuggingFace on Langchain.

To use Langchain components, we can install Langchain directly with the following command:

!pip install langchain

To use HuggingFace Models and embeddings, we need to install transformers and sentence transformers. In the latest update of Google Colab, you don’t need to install transformers.

!pip install transformers
!pip install sentence-transformers
!pip install bitsandbytes accelerate
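The 4-bit quantization used in Approach 1 relies on bitsandbytes, which needs a CUDA GPU (Colab’s free T4 is enough). A quick sanity check before loading any model:

import torch

# Confirm that a CUDA-capable GPU (e.g. Colab's free T4) is visible to PyTorch
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))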

To run GenAI applications on edge devices, Georgi Gerganov developed LLamaCPP. LLamaCPP implements Meta’s LLaMA architecture in efficient C/C++.

!pip install llama-cpp-python

Approach 1: HuggingFace Pipeline

Pipelines are a great and easy way to use models for inference. HuggingFace provides a pipeline wrapper class that can integrate tasks like text generation and summarization in just one line of code: you instantiate the pipeline with the model, the tokenizer, and the task name.

We must load the large language model and its tokenizer to implement this. Since not everyone has access to A100 or V100 GPUs, we will proceed with the free T4 GPU. To run the large language model for inference using the pipeline, we will use the orca-mini 3-billion-parameter LLM with a quantization configuration to reduce the model size.

from langchain.llms.huggingface_pipeline import HuggingFacePipeline
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
   load_in_4bit=True,
   bnb_4bit_quant_type="nf4",
   bnb_4bit_use_double_quant=True,
   bnb_4bit_compute_dtype=torch.bfloat16
)

In the provided code snippet, we use AutoModelForCausalLM to load the model and AutoTokenizer to load the tokenizer. Once both are loaded, we pass them to the pipeline and specify the task as text-generation. The pipeline also lets you adjust the output sequence length via max_new_tokens.

model_id = "pankajmathur/orca_mini_3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
                 model_id,
                 quantization_config=nf4_config
                 )
pipe = pipeline("text-generation", 
               model=model, 
               tokenizer=tokenizer, 
               max_new_tokens=512
               )

With the pipeline running, the HuggingFacePipeline wrapper class integrates the Transformers pipeline into Langchain. The code snippet below defines the prompt template for the orca-mini model and runs a query through it.

hf = HuggingFacePipeline(pipeline=pipe)

query = "Who is Shah Rukh Khan?"

prompt = f"""
### System:
You are an AI assistant that follows instruction extremely well. 
Help as much as you can. Please be truthful and give direct answers

### User:
{query}

### Response:
"""

response = hf.predict(prompt)
print(response)
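Instead of hand-building the prompt string for every query, the same pipeline LLM can also be dropped into a LangChain chain. Here is a minimal sketch using the classic LLMChain and PromptTemplate APIs (the template simply mirrors the orca-mini format shown above):

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Wrap the orca-mini prompt format in a reusable template with a {query} placeholder
orca_template = PromptTemplate.from_template(
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can. Please be truthful and give direct answers\n\n"
    "### User:\n{query}\n\n### Response:\n"
)
chain = LLMChain(llm=hf, prompt=orca_template)
print(chain.run(query="Who is Shah Rukh Khan?"))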

Approach 2: HuggingFace Hub using Inference API

In approach one, you might have noticed that the pipeline downloads the model weights and tokenizer and loads them locally. This can be time-consuming when the model is large. This is where the HuggingFace Hub Inference API comes in handy: inference runs on HuggingFace’s servers, so no weights need to be downloaded. To integrate HuggingFace Hub with Langchain, you need a HuggingFace Access Token.

Steps to get HuggingFace Access Token

  • Log in to HuggingFace.co.
  • Click on your profile icon at the top-right corner, then choose “Settings.”
  • In the left sidebar, navigate to “Access Tokens.”
  • Generate a new access token, assigning it the “write” role.

from langchain.llms import HuggingFaceHub
import os
from getpass import getpass

os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass("HF Token:")

Once you have your Access Token, use HuggingFaceHub to integrate the Transformers model with Langchain. In this case, we use Zephyr, a model fine-tuned on top of Mistral 7B.

llm = HuggingFaceHub(
    repo_id="huggingfaceh4/zephyr-7b-alpha", 
    model_kwargs={"temperature": 0.5, "max_length": 64,"max_new_tokens":512}
)

query = "What is capital of India and UAE?"

prompt = f"""
 <|system|>
You are an AI assistant that follows instruction extremely well.
Please be truthful and give direct answers
</s>
 <|user|>
 {query}
 </s>
 <|assistant|>
"""

response = llm.predict(prompt)
print(response)

Since we are using the free Inference API, there are limitations on larger language models, such as the 13B, 34B, and 70B variants.
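If you want to sanity-check whether a hosted model actually responds before wiring it into Langchain, the same Inference API can be called directly through the huggingface_hub client. A rough sketch (the model id and parameters are only examples):

import os
from huggingface_hub import InferenceClient

# Query the hosted Zephyr model directly over the free Inference API
client = InferenceClient(
    model="HuggingFaceH4/zephyr-7b-alpha",
    token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
)
print(client.text_generation("What is the capital of India?", max_new_tokens=64))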

Approach 3: LlamaCPP

LLamaCPP allows the use of models packaged as .gguf files, which run efficiently in CPU-only and mixed CPU/GPU environments.

To use LlamaCPP, we specifically need a model whose model_path ends with .gguf. You can download the model from here: zephyr-7b-beta.Q4_K_M.gguf. Once this model is downloaded, you can upload it to your Drive or any other local storage.
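Alternatively, instead of downloading the file in a browser and uploading it to Drive, the quantized weights can be fetched straight from the Hub. A small sketch, assuming the file is hosted in the TheBloke/zephyr-7B-beta-GGUF repository:

from huggingface_hub import hf_hub_download

# Download the 4-bit quantized Zephyr GGUF file and get its local path
model_path = hf_hub_download(
    repo_id="TheBloke/zephyr-7B-beta-GGUF",
    filename="zephyr-7b-beta.Q4_K_M.gguf",
)
print(model_path)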

from langchain.llms import LlamaCpp

from google.colab import drive
drive.mount('/content/drive')

llm_cpp = LlamaCpp(
            streaming = True,
            model_path="/content/drive/MyDrive/LLM_Model/zephyr-7b-beta.Q4_K_M.gguf",
            n_gpu_layers=2,
            n_batch=512,
            temperature=0.75,
            top_p=1,
            verbose=True,
            n_ctx=4096
            )

The prompt template remains the same since we are using the Zephyr model.

query = "Who is Elon Musk?"

prompt = f"""
 <|system|>
You are an AI assistant that follows instruction extremely well.
Please be truthful and give direct answers
</s>
 <|user|>
 {query}
 </s>
 <|assistant|>
"""

response = llm_cpp.predict(prompt)
print(response)
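Since streaming=True is set on the LlamaCpp object, tokens can also be printed as they are generated by attaching LangChain’s standard streaming callback. A short sketch re-creating the LLM with that callback:

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Re-create the LlamaCpp LLM with a callback that writes tokens to stdout as they arrive
streaming_llm = LlamaCpp(
    model_path="/content/drive/MyDrive/LLM_Model/zephyr-7b-beta.Q4_K_M.gguf",
    n_ctx=4096,
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
)
streaming_llm.predict(prompt)  # tokens stream to stdout as they are produced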

Conclusion

To conclude, we successfully implemented HuggingFace open-source models with Langchain. Using these approaches, one can easily avoid paying for OpenAI API credits. This guide mainly focused on open-source LLMs, which are a major component of the RAG pipeline.

Hope you enjoyed the article and gained a deeper understanding of LangChain and Hugging Face, two innovative tools that are transforming the landscape of natural language processing and application development.

Key Takeaways

  • Using HuggingFace’s Transformers pipeline, one can easily pick any top-performing large language model, such as Llama 2 70B, Falcon 180B, or Mistral 7B. The inference script is less than five lines of code.
  • As not everyone can afford A100 or V100 GPUs, HuggingFace provides a free Inference API (via an Access Token) to run selected models from the HuggingFace Hub. The models best suited to this approach are around 7B parameters.
  • LLamaCPP is used when you need to run large language models on the CPU. Currently, LlamaCPP only supports gguf model files.
  • It is recommended to follow the model’s prompt template when running the predict() method on a user query.


Frequently Asked Questions

Q1. What is the difference between LangChain and Hugging Face?

A. Hugging Face is a platform for sharing and using pre-trained models, with a strong focus on NLP. LangChain is a framework for building applications around language models, providing components such as prompts, chains, memory, and vector store integrations.

Q2. Is LangChain better than Hugging Face?

A. They serve different purposes. Hugging Face is for finding and sharing pre-trained models, while LangChain is for building applications using those models. They often work together: think of Hugging Face as the model store and LangChain as the builder.

Q3. What is LangChain used for?

A. LangChain is used for building LLM-powered applications, such as chatbots, question answering over documents, and agents, by chaining together models, prompts, memory, and external data sources.

Q4. What exactly does Hugging Face do?

A. Hugging Face offers a platform for sharing, discovering, and using pretrained AI models, primarily focusing on natural language processing (NLP) tasks such as text generation, translation, and sentiment analysis.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Data Scientist at AI Planet || YouTube- AIWithTarun || Google Developer Expert in ML || Won 5 AI hackathons || Co-organizer of TensorFlow User Group Bangalore || Pie & AI Ambassador at DeepLearningAI
