Kartik Nighania

MLOps Engineer

Typewise

Kartik Nighania is an MLOps specialist, currently an engineer at Typewise in Europe. With over seven years of industry experience, his expertise spans diverse domains such as computer vision, reinforcement learning, NLP, and GenAI systems. Previously, as Head of Engineering at a YCombinator-backed startup, Kartik led successful AI ventures focused on infrastructure scaling, team leadership, and MLOps implementation. His contributions to academia include publications in top journals and projects undertaken for the Ministry of Human Resource Development (MHRD).

Ready to go from experimentation to production with LLMs? This hands-on session will guide you through training language models using HuggingFace, building Retrieval Augmented Generation (RAG) pipelines with Qdrant, and deploying automated training workflows on Amazon SageMaker. You’ll also learn how to orchestrate multi-agent workflows using LangGraph and test, monitor, and evaluate your models with LangSmith. Through practical labs, participants will build end-to-end, production-ready GenAI systems that prioritize scalability, reliability, and real-world performance, equipping you with the tools to operationalize LLMs with confidence.

Prerequisite: Basic Python programming skills, basic understanding of machine learning concepts, and familiarity with AWS services.

*Note: These are tentative details and are subject to change.
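To make the retrieval step of a RAG pipeline like the one described above concrete, here is a minimal sketch. It uses plain cosine similarity over toy vectors in place of a real embedding model and a Qdrant vector store; the document texts and vectors are hypothetical stand-ins, not part of the session materials.

```python
import math

# Toy "embeddings": in a real pipeline these would come from an
# embedding model and be stored and queried in Qdrant.
docs = {
    "doc1": ([0.9, 0.1, 0.0], "LLMs can be fine-tuned with HuggingFace."),
    "doc2": ([0.1, 0.9, 0.0], "SageMaker automates training workflows."),
    "doc3": ([0.0, 0.2, 0.9], "LangGraph orchestrates multi-agent workflows."),
}

def cosine(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    # Rank documents by similarity to the query vector; return top-k texts.
    ranked = sorted(docs.values(), key=lambda d: cosine(query_vec, d[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# A query vector close to doc2's embedding retrieves doc2 first.
print(retrieve([0.2, 0.8, 0.1], k=1))
```

The retrieved texts would then be concatenated into the prompt for the generation step; a production setup would swap the dictionary and `retrieve` for Qdrant collection queries.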


This hands-on session reveals battle-tested strategies for scaling AI agents from prototype to production. We'll cover critical engineering practices including robust monitoring systems, comprehensive logging frameworks, automated testing pipelines, and CI/CD workflows optimized for agent deployments. Participants will learn concrete techniques to detect hallucinations, measure reliability metrics, and implement guardrails that ensure consistent agent performance under real-world conditions. Join us for practical insights on building GenAI systems that don't just work in demos, but deliver dependable value in production environments.
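As a rough illustration of the kind of guardrail discussed above, here is a minimal grounding check: it flags answers whose tokens overlap too little with the retrieved context, a cheap (and deliberately simplistic) hallucination signal. The function names, threshold, and fallback message are illustrative assumptions, not the session's actual implementation.

```python
def grounding_score(answer: str, context: str) -> float:
    # Fraction of answer tokens that also appear in the retrieved context.
    # A low score is a cheap signal that the answer may be hallucinated.
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

def guardrail(answer: str, context: str, threshold: float = 0.5) -> str:
    # Block answers whose token overlap with the context falls below threshold.
    if grounding_score(answer, context) < threshold:
        return "I don't have enough supported information to answer that."
    return answer

context = "the invoice total is 420 euros due on friday"
print(guardrail("the invoice total is 420 euros", context))      # grounded: passes through
print(guardrail("your refund was approved yesterday", context))  # ungrounded: blocked
```

Production guardrails typically replace token overlap with NLI models or LLM-as-judge evaluators (e.g. via LangSmith), but the pattern of score-then-gate is the same.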


Managing and scaling ML workloads has never been more challenging. Data scientists need collaboration and tooling for building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers need distributed training, artifact management, and automated deployment for high performance.

