Logesh Kumar Umapathi

Machine Learning Consultant

About

Logesh Kumar Umapathi is a Machine Learning Engineer at Blackbox.ai. His work focuses on building agentic systems and models that automate software development and improve developer productivity. He has led the development of state-of-the-art software engineering agents, and his research has been widely cited by leading ML labs, including OpenAI, Meta, and Microsoft. His interests include code-generation LLMs, reinforcement learning, synthetic data generation with LLMs, and alignment of code LLMs to human preferences.

Beyond core LLM and agent work, he also speaks and gives demos on applied AI: at DataHack Summit (DHS) 2025, he presented “From Language to Robotics”, featuring a live robotic-arm demo that highlighted the convergence of LLMs and reinforcement learning for embodied systems. He was also recognized with the Top AI Scientist award in Analytics Vidhya’s AV Luminary Awards at DHS 2025.

He writes and documents his work publicly at logeshumapathi.com.

About the Workshop 

Do you want to go beyond text and build models that can understand documents, images, charts, screens, and visual workflows? 

This full-day workshop is designed for learners and practitioners who want to build and fine-tune multimodal models rather than rely only on ready-made APIs. Participants will get a hands-on introduction to how modern Vision Language Models (VLMs) are built, trained, and adapted for real-world applications.

Many high-value enterprise workflows are visual by nature. Teams deal with invoices, contracts, scanned PDFs, handwritten forms, product images, dashboards, medical records, screenshots, and software interfaces every day. Traditional OCR pipelines can extract text, but they often fail when the task requires understanding the domain, layout, visual relationships, charts, tables, handwritten content, or interface structure. Text-only LLMs also struggle when the meaning depends on what is seen, not just what is written. Vision Language Models solve this by combining visual understanding with language reasoning and pretrained world knowledge.

In this workshop, participants will learn the full lifecycle of a practical VLM: building a VLM from an LLM, fine-tuning open-source models for tasks such as document understanding, OCR, object tracking, and computer use, and improving performance through reinforcement learning. With a strong hands-on emphasis, participants will leave with both conceptual clarity and practical intuition for working with modern multimodal AI systems. 
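As a taste of the first stage of that lifecycle, here is a minimal sketch of the common recipe for building a VLM from a text-only LLM (the pattern popularized by LLaVA-style models): a vision encoder turns the image into patch embeddings, a small projector maps them into the LLM's embedding space, and the projected "image tokens" are consumed by the LLM alongside the text tokens. The module names and dimensions below are illustrative placeholders, not the exact architecture used in the workshop.

    # Minimal sketch of the "vision encoder + projector + LLM" recipe.
    # Assumes PyTorch; vision_encoder and llm are stand-ins for pretrained
    # modules (e.g. a ViT/CLIP encoder and a Hugging Face decoder-only LM).
    import torch
    import torch.nn as nn

    class TinyVLM(nn.Module):
        def __init__(self, vision_encoder, llm, vision_dim=768, llm_dim=2048):
            super().__init__()
            self.vision_encoder = vision_encoder  # maps pixels -> patch features
            self.llm = llm                        # pretrained text-only LLM
            # the projector maps patch features into the LLM's embedding space
            self.projector = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, pixel_values, input_ids):
            # 1) encode the image into a sequence of patch embeddings
            patch_feats = self.vision_encoder(pixel_values)   # (B, P, vision_dim)
            # 2) project them so they "look like" token embeddings to the LLM
            image_tokens = self.projector(patch_feats)        # (B, P, llm_dim)
            # 3) prepend the image tokens to the text embeddings and run the LLM
            text_embeds = self.llm.get_input_embeddings()(input_ids)  # (B, T, llm_dim)
            inputs_embeds = torch.cat([image_tokens, text_embeds], dim=1)
            return self.llm(inputs_embeds=inputs_embeds)

In practice, training such a model often proceeds in stages: first the projector is trained on image-caption pairs while the encoder and LLM stay frozen, then parts of the model are unfrozen for instruction tuning.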

Why this workshop matters 

This workshop will help participants build skills that are directly relevant to modern enterprise AI use cases where text-only systems are not enough. 

In many business settings, the challenge is to convert the information in unstructured documents, screenshots, UIs, and images into a structured format and derive insights from it. That is where VLMs become far more useful than standard text-only LLM workflows.

For example: 

  • in enterprise automation, a VLM can interact with a software interface, making it useful for computer-use agents and UI automation
  • in retail and manufacturing, VLMs can support product image checks, packaging verification, defect review, and visual inspection tasks that a standard LLM cannot perform directly
  • in healthcare and biomedicine, a VLM can handle visually rich records, forms, handwritten notes, and diagnostic imagery in ways that are difficult to solve with text-only models
  • in financial operations, a text-only LLM may read extracted OCR text from an invoice, but a VLM can not only extract the text, it can also interpret the overall context: layout, key-value placement, stamps, tables, signatures, and multi-page document structure (see the extraction sketch after this list)
  • in legal and compliance workflows, a VLM can reason over scanned contracts, annotations, clause placement, and tabular annexures more effectively than a text-only system working on noisy OCR output
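
To make the invoice example concrete, here is a minimal extraction sketch using an open-weights VLM. It assumes the Hugging Face transformers library and the Qwen/Qwen2-VL-2B-Instruct checkpoint; the file name invoice.png and the requested fields are placeholders, not workshop-prescribed choices.

    # Minimal structured-extraction sketch with an open-weights VLM.
    # Assumes transformers with Qwen2-VL support; invoice.png is a placeholder.
    from PIL import Image
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

    model_id = "Qwen/Qwen2-VL-2B-Instruct"
    model = Qwen2VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("invoice.png")  # placeholder scanned invoice
    messages = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": (
                "Extract the invoice number, vendor name, and total amount. "
                "Reply with a single JSON object using exactly those keys."
            )},
        ],
    }]

    prompt = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=128)
    # decode only the newly generated tokens, dropping the prompt
    answer = processor.batch_decode(
        output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
    print(answer)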

In short, this workshop teaches participants how to build AI systems that can see, reason, and act on visual information in practical business environments. 

Key Takeaways 

With the knowledge from this workshop, participants will be able to: 

  • continue pretraining a VLM for domain adaptation in focus areas such as finance, biomedicine, and legal workflows  
  • adapt open-source VLMs such as Qwen or Gemma for downstream enterprise tasks (a fine-tuning sketch follows this list)
  • build document understanding systems for OCR, information extraction, and document question answering  
  • develop computer-use agents that can interpret and act on software interfaces  
  • apply VLMs to image reasoning tasks such as product image quality assessment and visual inspection  
  • improve multimodal systems for tasks where layout, visuals, and structure matter as much as text 
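
As one concrete illustration of the adaptation step, the sketch below attaches LoRA adapters to an open VLM for parameter-efficient supervised fine-tuning. It assumes the Hugging Face transformers and peft libraries and the Qwen/Qwen2-VL-2B-Instruct checkpoint; the rank, target modules, and other hyperparameters are illustrative defaults, not the workshop's prescribed recipe.

    # Illustrative LoRA setup for parameter-efficient VLM fine-tuning.
    # Assumes transformers + peft; all hyperparameters here are placeholders.
    from transformers import Qwen2VLForConditionalGeneration
    from peft import LoraConfig, get_peft_model

    model = Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto"
    )

    lora_config = LoraConfig(
        r=16,            # adapter rank: lower rank = fewer trainable parameters
        lora_alpha=32,   # scaling factor applied to the adapter updates
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all weights
    # From here, a standard supervised fine-tuning loop (or TRL's SFTTrainer)
    # can train the adapters on image + text pairs for the target task.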

 

*Note: These are tentative details and are subject to change. 
