
Kartik Nighania

MLOps Engineer


Kartik Nighania is an MLOps Engineer at Typewise in Europe. With over seven years of industry experience, his expertise spans computer vision, reinforcement learning, NLP, and generative systems. Previously, as Head of Engineering at YCombinator-backed Pibit.ai, he led AI-driven insurance automation, focusing on infrastructure scaling, team leadership, and MLOps implementation. During his tenure in HSBC Technology's global DevOps department, he built and managed CI/CD pipelines at scale, improving software delivery processes. His academic contributions include publications in biometrics and projects such as compact networks for face super-resolution and ML-driven crop health detection sponsored by the Ministry of Human Resource Development (MHRD).

LLMs have taken the world by storm since their inception, and the past year has marked a significant shift in the AI industry and its impact on our day-to-day lives.

For engineers working on LLMs, the challenges of collaborating on, training, scaling, and monitoring such massive models have become increasingly complex. LLMOps encompasses the practices, techniques, and tools necessary for the operational management of large language models in production. The infrastructure built through LLMOps drives efficiency, agility, security, and scalability for both engineers and end users.

Join us in this immersive LLMOps workshop, where we'll embark on a day-long journey, delving into various modules crafted to equip you with actionable insights and hands-on skills to harness the full potential of LLMs.

Prerequisite: an AWS account with full access to SageMaker, EKS, and Bedrock


Managing and scaling ML workloads has never been more challenging. Data scientists need to collaborate on building, training, and iterating over thousands of AI experiments. On the flip side, ML engineers need distributed training, artifact management, and automated deployment to achieve high performance.

