Mastering LLMs: Training, Fine-tuning, and Best Practices

Date: 5th August, 2023 | Time: 9:30 am – 5:30 pm | Venue: RENAISSANCE, Race Course Rd, Madhava Nagar Extension

Are you a frequent user of ChatGPT or Bard? Ever wondered what powers these remarkable technologies? The answer is Large Language Models (LLMs). LLMs have revolutionized the field of Natural Language Processing (NLP) and are the driving force behind countless NLP applications, with the community buzzing with groundbreaking research and rapid advances. Join us for an immersive workshop where we delve into the world of LLMs, explore their different types, and unpack the cutting-edge architectures that define the current state of the art.

Here is the module-wise agenda:

Module 0: Introduction to LLMs

  • History of LLMs
  • What are LLMs?
  • Why LLMs?
  • What are the different types of LLMs?
    • Text-continuation (base) models
    • Dialogue-optimized (chat) models
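
To make the two types concrete, here is a minimal sketch using the Hugging Face transformers library; the model choices (gpt2 and google/flan-t5-small) are illustrative assumptions, not part of the workshop materials:

```python
from transformers import pipeline  # assumes `transformers` is installed

# 1) "Text continuation": a base model simply predicts likely next tokens.
base_lm = pipeline("text-generation", model="gpt2")
print(base_lm("The capital of France is", max_new_tokens=8)[0]["generated_text"])

# 2) "Dialogue optimized": an instruction-tuned model answers the question
#    instead of merely continuing the prompt.
chat_lm = pipeline("text2text-generation", model="google/flan-t5-small")
print(chat_lm("What is the capital of France?")[0]["generated_text"])
```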

Module 1: Understand the current state of the art of LLMs

  • Transformers
  • BERT
  • GPT and its variants
  • ChatGPT
  • Bard
  • LIMA
  • Falcon
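
As a taste of this module, the openly available models in the list can be loaded in a few lines. The sketch below (with illustrative model names) contrasts BERT's masked-token prediction with GPT-style left-to-right generation; ChatGPT and Bard, by contrast, are hosted services reachable only through their vendors' interfaces:

```python
from transformers import pipeline  # assumes `transformers` is installed

# BERT is an encoder-only model: it fills in masked tokens in place.
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Transformers are the [MASK] behind modern LLMs.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))

# GPT-style (and Falcon-style) models are decoder-only: they generate
# text left-to-right, one token at a time.
gen = pipeline("text-generation", model="gpt2")
print(gen("Transformers changed NLP because", max_new_tokens=12)[0]["generated_text"])
```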

Module 2: Training LLMs and their Best Practices

  • Build or buy: should you train your own LLM or use a pretrained one?
  • Understand the scaling laws
  • What are the challenges while training LLMs from scratch?
  • How to train LLMs from scratch?
  • Pretrain an LLM on domain-specific datasets
  • How to evaluate LLMs?
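
One common evaluation covered by the last point is perplexity on held-out text. Here is a minimal sketch, assuming transformers and torch are installed; gpt2 and the sample text are illustrative stand-ins for whatever model and data you actually use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in for your pretrained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "Held-out, domain-specific evaluation text goes here."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the model returns the mean cross-entropy
    # of its next-token predictions over the sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity = {torch.exp(loss).item():.2f}")  # lower is better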

Module 3: Finetuning LLMs

  • How can we use LLMs on downstream tasks?
    • Prompting
    • Finetuning
  • Learn about prompt engineering and its techniques
  • Learn about different finetuning techniques
  • Finetune an LLM on a downstream task
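
To preview the finetuning route, here is a minimal full-finetuning sketch for a downstream sentiment task, assuming transformers, datasets, and torch are installed; the toy dataset and hyperparameters are purely illustrative:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Tiny stand-in dataset: 1 = positive, 0 = negative.
raw = Dataset.from_dict({
    "text": ["great workshop!", "terrible talk", "loved it", "waste of time"],
    "label": [1, 0, 1, 0],
})
tokenized = raw.map(lambda ex: tokenizer(
    ex["text"], truncation=True, padding="max_length", max_length=32))

# Every weight in the model is updated here; contrast this with Module 4,
# where only small adapter weights are trained.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()
```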

Module 4: Parameter-Efficient Fine-Tuning

  • What is Parameter-Efficient Fine-Tuning (PEFT), and why use it?
  • Understanding different PEFT techniques
    • Prefix Tuning
    • LoRA
    • QLoRA
  • Finetune an LLM on a single GPU using PEFT techniques
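
As a preview, here is a minimal LoRA sketch using Hugging Face's peft library (assumed installed alongside transformers); the rank, scaling, and target module are illustrative defaults for GPT-2, not recommended values:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
# The base weights stay frozen; only the tiny adapters train, which is
# what makes single-GPU finetuning feasible. QLoRA goes further by also
# loading the frozen base model in 4-bit precision.
model.print_trainable_parameters()
```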

Module 5: AutoGPT, LangChain, Vector DBs

  • Work hands-on with popular LLM tools and frameworks such as AutoGPT and LangChain, as well as vector DBs
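
To show the vector-DB idea underlying retrieval frameworks like LangChain, here is a minimal similarity-search sketch. It assumes the sentence-transformers and faiss-cpu packages; the documents and embedding model name are illustrative:

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "LoRA adds small low-rank adapters to a frozen model.",
    "Scaling laws relate model size, data, and compute to loss.",
    "Vector databases store embeddings for similarity search.",
]

# Embed the documents and index them for nearest-neighbour lookup.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode(docs)                # (n_docs, dim) float32
index = faiss.IndexFlatL2(vectors.shape[1])   # exact L2-distance index
index.add(vectors)

# Retrieval: embed the query and fetch the closest document.
query = encoder.encode(["How does PEFT keep training cheap?"])
_, ids = index.search(query, 1)
print(docs[ids[0][0]])                        # -> the LoRA document
```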

Pre-requisites:

  • System Requirement and Setup
    • Laptop with at least 4 GB of RAM (8 GB recommended)
    • We will use a GPU-powered cloud Jupyter notebook for the workshop
  • Offline Setup [Optional]
    • A local GPU is good to have, but not required
    • Install Python 3.9 or a higher version (Resource)
    • Install Jupyter Notebook (Resource)
  • Pre-reads

Note: These are tentative details and are subject to change.
