A Comprehensive Guide to Fine-Tuning Large Language Models

Babina Banjara 25 Apr, 2024 • 15 min read

Introduction

Over the past few years, the landscape of natural language processing (NLP) has undergone a remarkable transformation, all thanks to the advent of fine-tuning large language models. These sophisticated models have opened the doors to a wide array of applications, ranging from language translation to sentiment analysis and even the creation of intelligent chatbots.

Fine-tuning large language models

But what truly sets these models apart is their versatility: fine-tuning them to tackle specific tasks and domains has become a standard practice, unlocking their true potential and elevating their performance to new heights. In this comprehensive guide, we’ll delve into the world of fine-tuning large language models, covering everything from the basics to advanced techniques.

Learning Objectives

  • Understand the concept and importance of fine-tuning in adapting large language models to specific tasks.
  • Discover advanced fine-tuning techniques like multitasking, instruction fine-tuning, and parameter-efficient fine-tuning.
  • Gain practical knowledge of real-world applications where fine-tuned language models revolutionize industries.
  • Learn the step-by-step process of fine-tuning large language models.
  • Implement PEFT (parameter-efficient fine-tuning).
  • Understand the difference between standard fine-tuning and instruction fine-tuning.

This article was published as a part of the Data Science Blogathon.

Understanding Fine-Tuning of Large Language Models

Pre-trained language models are large neural networks trained on vast corpora of text data, usually sourced from the internet. The training process involves predicting missing words or tokens in a given sentence or sequence, which imbues the model with a profound understanding of grammar, context, and semantics. By processing billions of sentences, these models can grasp the intricacies of language and effectively capture its nuances. Fine-tuning, a crucial step in optimizing these models for specific tasks, allows for further customization and specialization based on specific datasets or domains.

Examples of popular pre-trained language models include BERT (Bidirectional Encoder Representations from Transformers), GPT-3 (Generative Pre-trained Transformer 3), RoBERTa (A Robustly Optimized BERT Pretraining Approach), and many more. These models are known for their ability to perform tasks such as text generation, sentiment classification, and language understanding at an impressive level of proficiency.

Let’s discuss one of the language models in detail.

GPT-3

GPT-3 (Generative Pre-trained Transformer 3) is a groundbreaking language model architecture that has transformed natural language generation and understanding. The Transformer model is the foundation of the GPT-3 architecture, which scales it up to an enormous number of parameters to produce exceptional performance.

The Architecture of GPT-3

A stack of Transformer decoder layers makes up GPT-3. Each layer contains masked multi-head self-attention mechanisms and feed-forward neural networks. The attention mechanism enables the model to recognize dependencies and relationships between words, while the feed-forward networks process and transform the encoded representations.

The main innovation of GPT-3 is its enormous size, which allows it to capture a huge amount of language knowledge thanks to its astounding 175 billion parameters.
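To make the architecture concrete, here is a minimal, illustrative PyTorch sketch of a single decoder block of the kind GPT-style models stack. The dimensions are placeholders at roughly GPT-2-small scale, not the actual GPT-3 configuration, and the real GPT-3 implementation is not publicly available.

import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    # One GPT-style Transformer decoder block: masked self-attention + feed-forward
    def __init__(self, d_model=768, n_heads=12, d_ff=3072, dropout=0.1):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        # Causal mask: each position may only attend to itself and earlier positions
        seq_len = x.size(1)
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), diagonal=1
        )
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal_mask)
        x = x + attn_out              # residual connection around attention
        x = x + self.ff(self.ln2(x))  # residual connection around feed-forward
        return x

# A GPT-3-like model stacks many such blocks (96 in the 175-billion-parameter version)
x = torch.randn(1, 10, 768)        # (batch size, sequence length, hidden size)
print(DecoderBlock()(x).shape)     # torch.Size([1, 10, 768])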

"

Implementation of Code

You can use the OpenAI API to interact with OpenAI’s GPT-3 models. Here is an example of text generation using GPT-3 via the (legacy) Completions endpoint.

import openai

# Set up your OpenAI API credentials
openai.api_key = 'YOUR_API_KEY'

# Define the prompt for text generation
prompt = "A quick brown fox jumps"

# Make a request to GPT-3 for text generation
response = openai.Completion.create(
  engine="text-davinci-003",
  prompt=prompt,
  max_tokens=100,
  temperature=0.6
)

# Retrieve the generated text from the API response
generated_text = response.choices[0].text

# Print the generated text
print(generated_text)

Fine-Tuning: Tailoring Models to Our Needs

Here’s the twist: while pre-trained language models are prodigious, they are not inherently experts in any specific task. They may have an incredible grasp of language, but they need fine-tuning, a process where developers enhance their performance in tasks like sentiment analysis, language translation, or answering questions about specific domains. Fine-tuning large language models is the key to unlocking their full potential and tailoring their capabilities to specific applications.

Fine-tuning is like providing a finishing touch to these versatile models. Imagine having a multi-talented friend who excels in various areas, but you need them to master one particular skill for a special occasion. You would give them some specific training in that area, right? That’s precisely what we do with pre-trained language models during fine-tuning.


Fine-tuning large language models involves training the pre-trained model on a smaller, task-specific dataset. This new dataset is labeled with examples relevant to the target task. By exposing the model to these labeled examples, it can adjust its parameters and internal representations to become well-suited for the target task.

The Need for Fine-Tuning LLMs

While pre-trained language models are remarkable, they are not task-specific by default. Fine-tuning large language models means adapting these general-purpose models to perform specialized tasks more accurately and efficiently. When we encounter a specific NLP task, like sentiment analysis for customer reviews or question-answering for a particular domain, we need to fine-tune the pre-trained model so that it understands the nuances of that specific task and domain.

The benefits of fine-tuning are manifold. Firstly, it leverages the knowledge learned during pre-training, saving substantial time and computational resources that would otherwise be required to train a model from scratch. Secondly, fine-tuning allows us to perform better on specific tasks, as the model is now attuned to the intricacies and nuances of the domain it was fine-tuned for.

Fine-Tuning LLMs Process: A Step-by-step Guide

The LLM fine-tuning process typically involves feeding the task-specific dataset to the pre-trained model and adjusting its parameters through backpropagation. The goal is to minimize the loss function, which measures the difference between the model’s predictions and the ground-truth labels in the dataset. This fine-tuning process updates the model’s parameters, making it more specialized for your target task.

Here we will walk through the process of fine-tuning a large language model for sentiment analysis. We’ll use the Hugging Face Transformers library, which provides easy access to pre-trained models and utilities for fine-tuning.

Step 1: Load the Pre-trained Language Model and Tokenizer

The first step is to load the pre-trained language model and its corresponding tokenizer. For this example, we’ll use the ‘distilbert-base-uncased’ model, a lighter, distilled version of BERT.

from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

# Load the pre-trained tokenizer
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')

# Load the pre-trained model for sequence classification
model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')

Step 2: Prepare the Sentiment Analysis Dataset

We need a labeled dataset with text samples and corresponding sentiments for sentiment analysis. Let’s create a small dataset for illustration purposes:

texts = ["I loved the movie. It was great!",
         "The food was terrible.",
         "The weather is okay."]
sentiments = ["positive", "negative", "neutral"]

Next, we’ll use the tokenizer to convert the text samples into the token IDs and attention masks that the model requires.

# Tokenize the text samples
encoded_texts = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

# Extract the input IDs and attention masks
input_ids = encoded_texts['input_ids']
attention_mask = encoded_texts['attention_mask']

# Map the sentiment labels to numerical IDs
label2id = {"positive": 0, "negative": 1, "neutral": 2}
sentiment_labels = [label2id[sentiment] for sentiment in sentiments]

Step 3: Add a Custom Classification Head

The sequence-classification head that ships with the pre-trained checkpoint is randomly initialized and sized for two labels by default. To perform three-class sentiment analysis, we replace it with our own linear layer and tell the model how many classes to expect.

import torch.nn as nn

# Add a custom classification head on top of the pre-trained model
num_classes = len(set(sentiment_labels))
classification_head = nn.Linear(model.config.hidden_size, num_classes)

# Replace the pre-trained model's classification head with our custom head
model.classifier = classification_head

# Make sure the model's built-in loss computation expects three classes
model.num_labels = num_classes
model.config.num_labels = num_classes

Step 4: Fine-Tune the Model

With the custom classification head in place, we can now fine-tune the model on the sentiment analysis dataset. We’ll use the AdamW optimizer; because we pass the labels directly to the model, it computes the cross-entropy loss internally.

import torch
import torch.optim as optim

# Define the optimizer (the model computes the cross-entropy loss itself
# when labels are supplied)
optimizer = optim.AdamW(model.parameters(), lr=2e-5)
labels = torch.tensor(sentiment_labels)

# Fine-tune the model
num_epochs = 3
model.train()
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
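As a quick sanity check, here is a minimal, illustrative snippet for running the freshly fine-tuned classifier on a new sentence. The example review is hypothetical, and the index-to-label mapping mirrors the label2id dictionary defined earlier.

# Run the fine-tuned classifier on an unseen sentence
model.eval()
id2label = {0: "positive", 1: "negative", 2: "neutral"}
sample = tokenizer("The plot was dull, but the acting was superb.", return_tensors='pt')
with torch.no_grad():
    logits = model(**sample).logits
print(id2label[logits.argmax(dim=-1).item()])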

What is Instruction Fine-Tuning?

Instruction fine-tuning is a specialized technique to tailor large language models to perform specific tasks based on explicit instructions. While traditional fine-tuning involves training a model on task-specific data, instruction fine-tuning goes further by incorporating high-level instructions or demonstrations to guide the model’s behavior.

"pre trained LLM , Fine-tuned LLM

This approach allows developers to specify desired outputs, encourage certain behaviors, or achieve better control over the model’s responses. In this comprehensive guide, we will explore the concept of instruction fine-tuning and its implementation step-by-step.

The Instruction Fine-Tuning Process

What if we could go beyond traditional fine-tuning and provide explicit instructions to guide the model’s behavior? Instruction fine-tuning does that, offering a new level of control and precision over model outputs. Here we will explore the process of instruction fine-tuning large language models for sentiment analysis.

Step 1: Load the Pre-trained Language Model and Tokenizer

To begin, let’s load the pre-trained language model and its tokenizer. For this example we’ll use GPT-2, an openly available decoder-only model from the same family as GPT-3.

from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

# Load the pre-trained tokenizer (GPT-2 has no padding token by default,
# so we reuse the end-of-sequence token for padding)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token

# Load the pre-trained model for sequence classification with three labels
model = GPT2ForSequenceClassification.from_pretrained('gpt2', num_labels=3)
model.config.pad_token_id = tokenizer.pad_token_id

Step 2: Prepare the Instruction Data and Sentiment Analysis Dataset

For instruction fine-tuning, we need to augment the sentiment analysis dataset with explicit instructions for the model. Let’s create a small dataset for demonstration:

texts = ["I loved the movie. It was great!",
         "The food was terrible.",
         "The weather is okay."]
sentiments = ["positive", "negative", "neutral"]
instructions = ["Analyze the sentiment of the text and identify if it is positive.",
                "Analyze the sentiment of the text and identify if it is negative.",
                "Analyze the sentiment of the text and identify if it is neutral."]

Next, let’s tokenize the texts and instructions, and convert the sentiment labels to numerical IDs:

# Tokenize the texts and instructions
encoded_texts = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
encoded_instructions = tokenizer(instructions, padding=True, truncation=True, return_tensors='pt')

# Extract input IDs, attention masks, and instruction IDs
input_ids = encoded_texts['input_ids']
attention_mask = encoded_texts['attention_mask']
instruction_ids = encoded_instructions['input_ids']

# Convert the sentiment labels to numerical form
label2id = {"positive": 0, "negative": 1, "neutral": 2}
sentiment_labels = [label2id[sentiment] for sentiment in sentiments]

Step 3: Customize the Model Architecture with Instructions

To incorporate instructions during fine-tuning, we need to customize the model architecture. We can do this by concatenating the instruction IDs with the input IDs:

import torch

# Concatenate instruction IDs with input IDs and adjust attention mask
input_ids = torch.cat([instruction_ids, input_ids], dim=1)
attention_mask = torch.cat([torch.ones_like(instruction_ids), attention_mask], dim=1)

Step 4: Fine-Tune the Model with Instructions

With the instructions incorporated, we can now fine-tune the GPT-2 model on the augmented dataset. During fine-tuning, the instructions will guide the model’s sentiment analysis behavior.

import torch.optim as optim

# Define the optimizer (the model computes the cross-entropy loss itself
# when labels are supplied)
optimizer = optim.AdamW(model.parameters(), lr=2e-5)
labels = torch.tensor(sentiment_labels)

# Fine-tune the model
num_epochs = 3
model.train()
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
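To see the effect at inference time, here is a minimal, illustrative snippet: we prepend the same kind of instruction used during training to a new review (the review text is hypothetical) and read off the predicted class index.

# Query the instruction-tuned classifier with an instruction-prefixed input
model.eval()
instruction = "Analyze the sentiment of the text and identify if it is positive."
review = "The soundtrack was wonderful."
encoded = tokenizer(instruction + " " + review, return_tensors='pt')
with torch.no_grad():
    logits = model(**encoded).logits
predicted_id = logits.argmax(dim=-1).item()  # index into the label2id mapping above
print(predicted_id)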

Instruction fine-tuning takes the power of traditional fine-tuning to the next level, allowing us to control the behavior of large language models precisely. By providing explicit instructions, we can guide the model’s output and achieve more accurate and tailored results.

Key Differences Between the Two Approaches

Standard fine-tuning involves training a model on a labeled dataset, honing its abilities to perform specific tasks effectively. However, when we want to provide explicit instructions to guide the behavior of a large language model such as GPT-3.5, instruction fine-tuning comes into play. This approach offers greater control and adaptability, allowing us to tailor the model’s responses to meet specific criteria or address nuanced requirements.

Here are the key differences between instruction fine-tuning and standard fine-tuning.

  • Data Requirements: Standard fine-tuning relies on a significant amount of labeled data for the specific task, whereas instruction fine-tuning benefits from the guidance provided by explicit instructions, making it more adaptable with limited labeled data.
  • Control and Precision: Instruction fine-tuning allows developers to specify desired outputs, encourage certain behaviors, or achieve better control over the model’s responses. Standard fine-tuning may not offer this level of control.
  • Learning from Instructions: Instruction fine-tuning requires an additional step of incorporating instructions into the model’s architecture, which standard fine-tuning does not.

Introducing Catastrophic Forgetting: A Perilous Challenge

As we navigate the vast realm of fine-tuning large language models, we inevitably face the daunting challenge of catastrophic forgetting. This phenomenon arises when the model undergoes fine-tuning for a new task, causing it to inadvertently erase or ‘forget’ the valuable knowledge acquired during pre-training. In this intricate process, the model risks losing its grasp on the broader language structure, concentrating its focus solely on the intricacies of the new task at hand.

Imagine our language model as a ship’s cargo hold filled with various knowledge containers, each representing different linguistic nuances. During pre-training, these containers are carefully filled with language understanding. The ship’s crew rearranges the containers when we approach a new task and begin fine-tuning. They empty some to make space for new task-specific knowledge. Unfortunately, some original knowledge is lost, leading to catastrophic forgetting.

Mitigating Catastrophic Forgetting: Safeguarding Knowledge

To navigate the waters of catastrophic forgetting, we need strategies to safeguard the valuable knowledge captured during pre-training. There are two possible approaches.

Multitask Fine-Tuning: Progressive Learning

Here we gradually introduce the new task to the model. Initially, the model focuses on pre-training knowledge and slowly incorporates the new task data, minimizing the risk of catastrophic forgetting.

Multitask instruction fine-tuning embraces a new paradigm: instead of fine-tuning the model for one task at a time, we train it on multiple tasks simultaneously, providing explicit instructions for each task to guide the model’s behavior during fine-tuning. This not only improves training efficiency but also lets the model integrate diverse tasks more seamlessly, showcasing the adaptability and versatility of fine-tuned large language models. A rough sketch of what such training data can look like is shown below.
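The snippet below assembles a tiny multitask instruction dataset for illustration; the tasks, instructions, and texts are purely hypothetical.

import random

# Each example pairs an explicit instruction with its input and target output
sentiment_examples = [
    {"instruction": "Classify the sentiment of the review.",
     "input": "I loved the movie. It was great!",
     "output": "positive"},
]
summarization_examples = [
    {"instruction": "Summarize the following sentence.",
     "input": "The quarterly report shows that revenue grew while costs stayed flat.",
     "output": "Revenue grew; costs stayed flat."},
]

# Shuffle examples from different tasks into a single training set
multitask_dataset = sentiment_examples + summarization_examples
random.shuffle(multitask_dataset)

# Render each example as one training string the model learns to complete
for example in multitask_dataset:
    print(f"Instruction: {example['instruction']}\n"
          f"Input: {example['input']}\n"
          f"Output: {example['output']}\n")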

Fine-tuning large language models with progressive learning

Benefits of Multitask Instruction Fine-Tuning

  • Knowledge Transfer: The model gains insights and knowledge from different domains by training on multiple tasks, enhancing its overall language understanding.
  • Shared Representations: Multitask instruction fine-tuning allows the model to share representations across tasks. This sharing of knowledge improves the model’s generalization capabilities.
  • Efficiency: Training on multiple tasks concurrently reduces the computational cost and time compared to fine-tuning each task individually.

Parameter-Efficient Fine-Tuning: Transfer Learning

Here we freeze certain layers of the model during fine-tuning. By freezing the early layers, which are responsible for fundamental language understanding, we preserve the core knowledge while fine-tuning only the later layers for the specific task. A minimal sketch of this idea follows.
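For illustration, here is a minimal sketch using the DistilBERT model from the earlier walkthrough. Freezing the embeddings and the first four of its six transformer layers is an arbitrary choice for demonstration, not a recommendation.

from transformers import DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained(
    'distilbert-base-uncased', num_labels=3)

# Freeze the embeddings and the first four transformer layers
for param in model.distilbert.embeddings.parameters():
    param.requires_grad = False
for layer in model.distilbert.transformer.layer[:4]:
    for param in layer.parameters():
        param.requires_grad = False

# Only the last two layers and the classification head remain trainable
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable} / {total}")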

Understanding PEFT

Full fine-tuning is memory-hungry: besides storing the model itself, you must allocate memory for optimizer states, gradients, forward activations, and temporary buffers throughout training. For the largest models, whose weights alone can reach hundreds of gigabytes, these extra components may be several times bigger than the model and quickly outgrow the capabilities of consumer hardware.

LLM with additional layers for PEFT

Parameter-efficient fine-tuning techniques update only a small subset of parameters, in contrast to full fine-tuning, which updates every model weight during supervised learning. Some PEFT techniques concentrate on fine-tuning a portion of the existing model parameters, such as specific layers or components, while freezing the majority of the model weights. Other methods add a few new parameters or layers and fine-tune only the new components, leaving the original model weights untouched. With PEFT, most, if not all, LLM weights are kept frozen, so the number of trained parameters is a tiny fraction of those in the original LLM.

Why PEFT?

PEFT empowers parameter-efficient models with impressive performance, revolutionizing the landscape of NLP. Here are a few reasons why we use PEFT.

  • Reduced Computational Costs: PEFT requires fewer GPUs and GPU time, making it more accessible and cost-effective for training large language models.
  • Faster Training Times: With PEFT, models finish training faster, enabling rapid iterations and quicker deployment in real-world applications.
  • Lower Hardware Requirements: PEFT works efficiently with smaller GPUs and requires less memory, making it feasible for resource-constrained environments.
  • Improved Modeling Performance: PEFT produces more robust and accurate models for diverse tasks by reducing overfitting.
  • Space-Efficient Storage: With shared weights across tasks, PEFT minimizes storage requirements, optimizing model deployment and management.

Fine-Tuning with PEFT

PEFT approaches fine-tune only a small number of model parameters while freezing most of the pre-trained LLM, significantly lowering computational and storage costs. This also mitigates catastrophic forgetting, which is observed during full fine-tuning of LLMs.

In low-data regimes, PEFT approaches have also been shown to outperform full fine-tuning and to generalize better to out-of-domain scenarios.

Loading the Model

Let’s load the opt-6.7b model here; its weights on the Hub are roughly 13 GB in half precision (float16). Loading them in 8-bit requires about 7 GB of memory.

import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

# Load OPT-6.7B in 8-bit precision so it fits on a single consumer GPU
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    load_in_8bit=True,
    device_map='auto',
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

Postprocessing On the Model

Before training the 8-bit model, we apply some post-processing: we freeze all the layers and cast the layer norms to float32 for stability. We also cast the final layer’s output to float32 for the same reason.

for param in model.parameters():
  param.requires_grad = False  # freeze the model - train adapters later
  if param.ndim == 1:
    param.data = param.data.to(torch.float32)

model.gradient_checkpointing_enable()  # reduce number of stored activations
model.enable_input_require_grads()

class CastOutputToFloat(nn.Sequential):
  def forward(self, x): return super().forward(x).to(torch.float32)
model.lm_head = CastOutputToFloat(model.lm_head)

Using LoRA

To load a PeftModel, we will use low-rank adapters (LoRA) via the get_peft_model utility function from peft.

The helper function below calculates and prints the number of trainable parameters and the total number of parameters in a given model, along with the percentage of parameters that are trainable, giving an overview of the model’s complexity and resource requirements for training.

def print_trainable_parameters(model):
    """Prints the number of trainable parameters in the model."""
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || "
        f"trainable%: {100 * trainable_params / all_param}"
    )

This uses the peft library to create a LoRA model with specific configuration settings, including the rank, dropout, bias, and task type. It then wraps the base model and prints the number of trainable parameters and the total number of parameters, along with the percentage of parameters that are trainable.

from peft import LoraConfig, get_peft_model 

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)
print_trainable_parameters(model)

Training the Model

This uses the Hugging Face Transformers and Datasets libraries to train the language model on a given dataset. It utilizes the ‘transformers.Trainer’ class to define the training setup, including batch size, learning rate, and other training-related configurations, and then trains the model on the specified dataset.

import transformers
from datasets import load_dataset
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)

trainer = transformers.Trainer(
    model=model, 
    train_dataset=data['train'],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=4, 
        gradient_accumulation_steps=4,
        warmup_steps=100, 
        max_steps=200, 
        learning_rate=2e-4, 
        fp16=True,
        logging_steps=1, 
        output_dir='outputs'
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
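Once training finishes, we can run a quick, illustrative generation check with the LoRA-adapted model; the prompt below is arbitrary.

# Re-enable the cache for inference and generate a short completion
model.config.use_cache = True
batch = tokenizer("Two things are infinite: ", return_tensors='pt').to(model.device)
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))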

Real-world Applications of Fine-tuning LLMs

We will look closer at some exciting real-world use cases of fine-tuning large language models, where NLP advancements are transforming industries and empowering innovative solutions.

  • Sentiment Analysis: Fine-tuning language models for sentiment analysis allows businesses to analyze customer feedback, product reviews, and social media sentiments to understand public perception and make data-driven decisions.
  • Named Entity Recognition (NER): By fine-tuning models for NER, entities like names, dates, and locations can be automatically extracted from text, enabling applications like information retrieval and document categorization.
  • Language Translation: Fine-tuned models can be used for machine translation, breaking language barriers and enabling seamless communication across different languages.
  • Chatbots and Virtual Assistants: By fine-tuning LLMs, chatbots and virtual assistants can provide more accurate and contextually relevant responses, enhancing user experiences.
  • Medical Text Analysis: Fine-tuned models can aid in analyzing medical documents, electronic health records, and medical literature, assisting healthcare professionals in diagnosis and research.
  • Financial Analysis: Fine-tuning language models can be utilized in financial sentiment analysis, predicting market trends, and generating financial reports from vast datasets.
  • Legal Document Analysis: Fine-tuned models can help in legal document analysis, contract review, and automated document summarization, saving time and effort for legal professionals.

In the real world, fine-tuning large language models has found applications across diverse industries, empowering businesses and researchers to harness the capabilities of NLP for a wide range of tasks, leading to enhanced efficiency, improved decision-making, and enriched user experiences.

Conclusion

Fine-tuning large language models has emerged as a powerful technique to adapt these pre-trained models to specific tasks and domains. As the field of NLP advances, fine-tuning will remain crucial to developing cutting-edge language models and applications.

This comprehensive guide has taken us on an enlightening journey through the world of fine-tuning large language models. We started by understanding the significance of fine-tuning, which complements pre-training and empowers language models to excel at specific tasks. Choosing the right pre-trained model is crucial, and we explored popular models. We dived into advanced techniques like multitask fine-tuning, parameter-efficient fine-tuning, and instruction fine-tuning, which push the boundaries of efficiency and control in NLP. Additionally, we explored real-world applications, witnessing how fine-tuned models revolutionize sentiment analysis, language translation, virtual assistants, medical analysis, financial predictions, and more.

Key Takeaways

  • Fine-tuning complements pre-training, empowering language models for specific tasks, making it crucial for cutting-edge applications.
  • Advanced techniques like multitasking, parameter-efficient, and instruction fine-tuning push NLP’s boundaries, enhancing model performance and adaptability.
  • Embracing fine-tuning revolutionizes real-world applications, transforming how we understand textual data, from sentiment analysis to virtual assistants.

With the power of fine-tuning, we navigate the vast ocean of language with precision and creativity, transforming how we interact with and understand the world of text. So, embrace the possibilities and unleash the full potential of language models through fine-tuning, where the future of NLP is shaped with each finely tuned model.

Frequently Asked Questions

Q1: What is fine-tuning, and why is it essential for large language models?

A1: Fine-tuning is the process of adapting pre-trained language models to specific tasks and domains. It complements pre-training and enables models to excel in particular contexts, making them more powerful and effective for real-world applications.

Q2: What are multitask fine-tuning and instruction fine-tuning?

A2: Multitask fine-tuning involves training a model on multiple related tasks simultaneously, enhancing its ability to transfer knowledge across tasks. Instruction fine-tuning introduces prompts or instructions during training, allowing fine-grained control over the model’s behavior.

Q3: How can parameter-efficient fine-tuning benefit NLP tasks?

A3: Parameter-efficient fine-tuning reduces the computational resources required, making it more accessible for low-resource environments while maintaining comparable performance to standard fine-tuning.

Q4: Will fine-tuning a model cause overfitting to my specific dataset?

A4: While fine-tuning can lead to overfitting on small datasets, techniques like early stopping, dropout, and data augmentation can mitigate this risk and promote generalization to new data.

Q5: How can I fine-tune a language model for my task if labeled data is limited?

A5: In scenarios with limited labeled data, transfer learning from related tasks or leveraging pre-training on similar datasets can help improve the model’s performance and adaptability. Also, few-shot learning and data augmentation techniques can be useful for low-resource scenarios.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion. 
