Kartik Nighania

MLOps Engineer

Typewise

Kartik Nighania specializes in MLOps and is currently an engineer at Typewise in Europe. With over seven years of industry experience, his expertise spans domains such as computer vision, reinforcement learning, NLP, and generative AI systems. Previously, as Head of Engineering at a YCombinator-backed startup, he led AI initiatives focused on infrastructure scaling, team leadership, and MLOps implementation. His contributions to academia include publications in top journals and projects undertaken for the Ministry of Human Resource Development (MHRD).

In this hands-on workshop, participants will learn how to build, deploy, scale, and monitor multi-agent workflows exactly the way it's done in a production environment at scale. 

We'll cover agent architecture fundamentals such as multi-agent systems, the Model Context Protocol (MCP), and context engineering, then build real-world agents with tool calling, RAG, memory, and human-in-the-loop workflows.
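As a taste of the tool-calling pattern the workshop builds on, here is a minimal, framework-free Python sketch. The `call_llm` placeholder and the `get_weather` tool are purely illustrative stand-ins for a real chat model API and real tools, not part of the workshop materials.

```python
# Minimal sketch of a tool-calling agent loop (illustrative only).
import json
from typing import Callable

def get_weather(city: str) -> str:
    """Toy tool: return canned weather data for a city."""
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 21})

TOOLS: dict[str, Callable[..., str]] = {"get_weather": get_weather}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder model call: asks for the weather tool once, then answers.
    A real agent would call an LLM API here."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Zurich"}}
    return {"answer": "It is sunny and 21°C in Zurich."}

def run_agent(user_prompt: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" in reply:                      # model requested a tool call
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                                    # model produced a final answer
            return reply["answer"]
    return "Stopped: step budget exceeded."

if __name__ == "__main__":
    print(run_agent("What's the weather in Zurich?"))
```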

The workshop goes beyond development, covering evaluation strategies, CI/CD integration, prompt versioning, cloud scaling, and production monitoring and alerting.
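To give a sense of what a CI-friendly evaluation step can look like, here is a small pytest-style sketch. The module name `agent_sketch` and the `run_agent` entry point are hypothetical, referring to the agent loop sketched above, and the regex check is just one simple evaluation strategy among those the workshop covers.

```python
# Illustrative CI evaluation sketch (pytest style).
import re

from agent_sketch import run_agent  # hypothetical module containing the agent loop above

EVAL_CASES = [
    # (prompt, regex the final answer is expected to match)
    ("What's the weather in Zurich?", r"(?i)sunny"),
]

def test_agent_answers_match_expectations():
    for prompt, expected in EVAL_CASES:
        answer = run_agent(prompt)
        assert re.search(expected, answer), (
            f"Answer {answer!r} did not match {expected!r} for prompt {prompt!r}"
        )
```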

By the end, attendees will have the skills to take an AI agent from a local prototype to a fully monitored, production-ready system.

Prerequisites

  • Familiarity with Python
  • Basic understanding of API integrations 
  • Prior exposure to agent frameworks is helpful but not required

Managing and scaling ML workloads has never been a bigger challenge. Data scientists want to collaborate while building, training, and iterating on thousands of AI experiments; on the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
