Kartik Nighania

MLOps Engineer

Typewise

Kartik Nighania, an esteemed figure in AI, specializes in MLOps and is currently an engineer at Typewise in Europe. With over seven years of industry experience, his expertise spans diverse domains such as computer vision, reinforcement learning, NLP, and GenAI systems. Previously, as Head of Engineering at a Y Combinator-backed startup, Kartik spearheaded successful AI ventures focused on infrastructure scaling, team leadership, and MLOps implementation. His contributions to academia include publications in top journals and projects undertaken for India's Ministry of Human Resource Development (MHRD).

Train LLMs with Hugging Face and build RAG with Qdrant

Deploy and create automated pipelines for LLMs with SageMaker

Build multi-agent workflows with LangGraph

Test, monitor, and evaluate LLMs with LangSmith

Through practical labs, participants will build production-ready LLM systems incorporating best practices for scalability and reliability.
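As a taste of the retrieval step behind RAG, here is a minimal sketch in plain Python. The bag-of-words "embedding", the toy document store, and the example documents are all illustrative assumptions; a real lab would use a sentence-transformer model for embeddings and Qdrant as the vector store.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would embed with a
    # sentence-transformer model and store the vectors in Qdrant.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical document store (in Qdrant this would be a collection of points).
docs = [
    "SageMaker pipelines automate model training and deployment",
    "Qdrant is a vector database for similarity search",
    "LangGraph coordinates multi-agent workflows",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top-k
    # passages, which would then be injected into the LLM prompt.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(retrieve("which database does similarity search?"))
```

The pattern is the same at production scale: embed the query, run a nearest-neighbor search over the stored vectors, and pass the retrieved context to the model.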

Prerequisite: Basic Python programming skills, basic understanding of machine learning concepts, and familiarity with AWS services.

*Note: These are tentative details and are subject to change.


This hands-on session reveals battle-tested strategies for scaling AI agents from prototype to production. We'll cover critical engineering practices including robust monitoring systems, comprehensive logging frameworks, automated testing pipelines, and CI/CD workflows optimized for agent deployments. Participants will learn concrete techniques to detect hallucinations, measure reliability metrics, and implement guardrails that ensure consistent agent performance under real-world conditions. Join us for practical insights on building GenAI systems that don't just work in demos, but deliver dependable value in production environments.
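One of the simplest guardrails in this family, checking that an answer is grounded in its retrieved context, can be sketched as a token-overlap heuristic. This is a stand-in for the model-based evaluators in tools like LangSmith, and the 0.6 threshold below is an arbitrary assumption:

```python
def groundedness(answer: str, context: str) -> float:
    # Fraction of answer tokens that also appear in the retrieved context.
    # Production systems typically use LLM-as-judge or NLI-based evaluators
    # instead of raw token overlap.
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

def guardrail(answer: str, context: str, threshold: float = 0.6) -> bool:
    # Pass the response through only when enough of its tokens are
    # supported by the context; otherwise flag it for review or fallback.
    return groundedness(answer, context) >= threshold

context = "the invoice was paid on 3 march and the account is now settled"
print(guardrail("the invoice was paid on 3 march", context))  # grounded
print(guardrail("the refund was issued yesterday", context))  # likely hallucinated
```

Logging the groundedness score alongside each response is one way to turn this check into the kind of reliability metric the session covers.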


Managing and scaling ML workloads has never been a bigger challenge. Data scientists want to collaborate, build, train, and iterate on thousands of AI experiments. On the flip side, ML engineers need distributed training, artifact management, and automated deployment for high performance.
