From LLMs to VLMs: Building Multimodal AI for Enterprise Use Cases

About the Workshop

Do you want to go beyond text and build models that can understand documents, images, charts, screens, and visual workflows? 

This full-day workshop is designed for learners and practitioners who want to build and fine-tune multimodal models, rather than only rely on ready-made APIs. Participants will get a hands-on introduction to how modern Vision Language Models (VLMs) are built, trained, and adapted for real-world applications. 

Many high-value enterprise workflows are visual by nature. Teams deal with invoices, contracts, scanned PDFs, handwritten forms, product images, dashboards, medical records, screenshots, and software interfaces every day. Traditional OCR pipelines can extract text, but they often fail when the task requires understanding the domain, layout, visual relationships, charts, tables, handwritten content, or interface structure. Text-only LLMs also struggle when the meaning depends on what is seen, not just what is written. Vision Language Models solve this by combining visual understanding with the language reasoning and pretrained world knowledge of a large language model.

In this workshop, participants will learn the full lifecycle of a practical VLM: building a VLM from an LLM, fine-tuning open-source models for tasks such as document understanding, OCR, object tracking, and computer use, and improving performance through reinforcement learning. With a strong hands-on emphasis, participants will leave with both conceptual clarity and practical intuition for working with modern multimodal AI systems. 

Why this workshop matters 

This workshop will help participants build skills that are directly relevant to modern enterprise AI use cases where text-only systems are not enough. 

In many business settings, the challenge is to convert the information in unstructured documents, screenshots, UIs, and images into a structured format and derive insights from it. That is where VLMs become far more useful than standard LLM workflows.

For example: 

  • in enterprise automation, a VLM can interact with a software interface, making it useful for computer-use agents and UI automation 
  • in retail and manufacturing, VLMs can support product image checks, packaging verification, defect review, and visual inspection tasks that a standard LLM cannot perform directly 
  • in healthcare and biomedicine, a VLM can handle visually rich records, forms, handwritten notes, and diagnostic imagery in ways that are difficult to solve with text-only models 
  • in financial operations, a text-only LLM can read OCR-extracted text from an invoice, but a VLM can both extract the text and interpret the overall context: layout, key-value placement, stamps, tables, signatures, and multi-page document structure 
  • in legal and compliance workflows, a VLM can reason over scanned contracts, annotations, clause placement, and tabular annexures more effectively than a text-only system working on noisy OCR output 

In short, this workshop teaches participants how to build AI systems that can see, reason, and act on visual information in practical business environments. 

Key Takeaways 

With the knowledge from this workshop, participants will be able to: 

  • continue pretraining a VLM for domain adaptation in focus areas such as finance, biomedicine, and legal workflows  
  • adapt open-source VLMs such as Qwen or Gemma for downstream enterprise tasks  
  • build document understanding systems for OCR, information extraction, and document question answering  
  • develop computer-use agents that can interpret and act on software interfaces  
  • apply VLMs to image reasoning tasks such as product image quality assessment and visual inspection  
  • improve multimodal systems for tasks where layout, visuals, and structure matter as much as text 


*Note: These are tentative details and are subject to change. 

Prerequisites

  • Participants should be comfortable with Python 
  • Participants should have a basic understanding of deep learning fundamentals and transformers 
  • Prior experience with computer vision is helpful, but not required 

Workshop Modules

Module 1: Introduction and Motivation

This module introduces the motivation behind VLMs through real-world examples such as image understanding, handwritten document interpretation, and computer-use agents. It establishes why multimodal models are becoming a foundational building block for modern AI products. 

Module 2: VLM Architecture

This module explains how vision encoders and large language models are connected, how multimodal representations are formed, and which architectural choices influence performance, efficiency, and usability in practice. 
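
To make this concrete, here is a minimal, illustrative sketch of one common design choice: a LLaVA-style connector that projects vision-encoder patch embeddings into the LLM's embedding space so they can be consumed as extra tokens. The dimensions, module names, and tensors below are toy assumptions, not the exact models used in the workshop.

```python
# Minimal sketch (toy dimensions, assumed names): a LLaVA-style connector
# that maps vision-encoder patch features into the LLM embedding space.
import torch
import torch.nn as nn

class VisionToTextConnector(nn.Module):
    """Projects patch embeddings (d_vision) into the LLM space (d_llm)."""
    def __init__(self, d_vision: int = 1024, d_llm: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_vision, d_llm),
            nn.GELU(),
            nn.Linear(d_llm, d_llm),
        )

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (batch, num_patches, d_vision) from a frozen vision encoder
        return self.proj(patch_embeds)

# Image patches become "soft tokens" that are concatenated with the embedded
# text tokens before being fed to the LLM's transformer layers.
connector = VisionToTextConnector()
patch_embeds = torch.randn(1, 256, 1024)   # e.g. 16x16 patches from a ViT
text_embeds = torch.randn(1, 32, 4096)     # embedded prompt tokens
visual_tokens = connector(patch_embeds)    # (1, 256, 4096)
llm_inputs = torch.cat([visual_tokens, text_embeds], dim=1)
print(llm_inputs.shape)                    # torch.Size([1, 288, 4096])
```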

Module 3: Building a VLM from an LLM

This hands-on module walks participants through the process of building a VLM starting from an existing LLM and a vision encoder. A small but capable VLM will be trained so participants can apply and reinforce the concepts learnt so far.

This will also help participants understand continued pretraining of an existing model for domain adaptation. 
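
As a preview of the training objective involved, the short sketch below shows the standard causal language-modelling loss used for pretraining and continued pretraining, with positions occupied by visual tokens masked out so that only text tokens contribute to the loss. All tensors and the placement of the image tokens are made-up toy values.

```python
# Illustrative sketch: causal LM loss for multimodal (continued) pretraining,
# where positions occupied by image tokens are excluded from the loss (-100).
import torch
import torch.nn.functional as F

vocab_size = 32000
batch, seq_len = 2, 10

logits = torch.randn(batch, seq_len, vocab_size)     # model outputs (toy values)
input_ids = torch.randint(0, vocab_size, (batch, seq_len))
is_image_token = torch.zeros(batch, seq_len, dtype=torch.bool)
is_image_token[:, :4] = True                         # first 4 positions hold visual tokens

labels = input_ids.clone()
labels[is_image_token] = -100                        # ignored by cross_entropy

# Shift so each position predicts the *next* token, as in standard causal LM training.
shift_logits = logits[:, :-1, :].reshape(-1, vocab_size)
shift_labels = labels[:, 1:].reshape(-1)
loss = F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
print(loss.item())
```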

Module 4: Supervised Fine-Tuning of VLMs

This hands-on module focuses on adapting existing VLMs using supervised fine-tuning for downstream tasks such as document image understanding and object tracking. 

We will fine-tune a VLM on a real-world task to understand visual documents such as scanned PDFs, images, charts, and tables, and use it to extract information and answer questions from those documents. 

The module covers task formulation, data considerations, and practical trade-offs, while giving participants direct experience fine-tuning a VLM for a real application task. 
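
For a sense of what this looks like in code, the hedged sketch below runs a single supervised fine-tuning step on a document question-answer pair with an open VLM via Hugging Face transformers. The checkpoint name, image path, and answer string are illustrative assumptions rather than the workshop's exact setup, and a real pipeline would also mask prompt and image positions out of the labels.

```python
# Hedged sketch: one SFT step on a document-QA example with an open VLM.
# Checkpoint, image path, and answer below are illustrative assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-2B-Instruct"               # assumed open checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)

image = Image.open("invoice_page.png")               # hypothetical scanned document
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What is the invoice total?"},
    ]},
    {"role": "assistant", "content": [
        {"type": "text", "text": "USD 1,250.00"},    # ground-truth answer from the dataset
    ]},
]

text = processor.apply_chat_template(messages, tokenize=False)
batch = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# Simplest possible objective: predict every token; a real pipeline would
# set prompt and image positions in the labels to -100.
batch["labels"] = batch["input_ids"].clone()
loss = model(**batch).loss
loss.backward()                                      # an optimizer.step() would follow
```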

Module 5: Reinforcement Learning for VLMs

This module introduces reinforcement learning in the multimodal setting, including cases where a VLM acts as the policy model and where it is used as a judge or evaluator. It also covers why RL can be useful for improving multimodal reasoning and task performance. 

Participants will work through a hands-on example of applying reinforcement learning with verifiable rewards (RLVR) to improve the performance of the fine-tuned model from Module 4 on the visual document understanding task. 
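
To make the idea of a verifiable reward concrete, the sketch below scores a group of sampled answers against a ground-truth field value and turns the scores into group-relative advantages, the kind of signal a GRPO-style policy update would use. The normalization rule and sample strings are illustrative assumptions, not the workshop's exact reward design.

```python
# Illustrative RLVR reward: exact match (after light normalization) against a
# ground-truth field value, plus a simplified group-relative baseline as used
# in GRPO-style updates. Strings and rules below are made-up examples.
import re
import torch

def _norm(s: str) -> str:
    # Drop whitespace and thousands separators, lowercase the rest.
    return re.sub(r"[\s,]", "", s).lower()

def extraction_reward(prediction: str, target: str) -> float:
    """Return 1.0 if the predicted field value matches the target, else 0.0."""
    return 1.0 if _norm(prediction) == _norm(target) else 0.0

# A group of sampled completions for one training prompt.
target = "USD 1,250.00"
samples = ["USD 1250.00", "USD 1,250.00", "1,250", "USD 1,250.00"]
rewards = torch.tensor([extraction_reward(s, target) for s in samples])

# Group-relative advantages: reward minus the group mean; these would weight
# the log-probabilities of each sampled completion in the policy update.
advantages = rewards - rewards.mean()
print(rewards.tolist(), advantages.tolist())
```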

Instructor

Workshop Details