Vignesh Kumar

AI Engineering Manager

Ford Motor Company

Vignesh is the AI Services Lead at Ford, where he focuses on translating cutting-edge AI concepts into tangible products and integrated system features. His expertise spans a decade in data science, bridging advanced technical execution with strategic business objectives. He specialises in areas like advanced machine learning (CNNs, RNNs, Transformers), NLP (from sentiment analysis to LLM-powered applications), and building robust, scalable end-to-end MLOps pipelines on GCP. He is deeply engaged with the latest advancements in Generative AI and Explainable AI, ensuring model transparency and responsible AI practices. Beyond his role at Ford, he actively contributes to the AI community as a speaker and mentor, particularly within the Great Lakes ecosystem. Currently, he is expanding his skillset through a dual Master's program at IIT and IIM Indore, driven by a passion for shaping the future of AI through innovation and collaboration.

Ensuring customer transparency through electronic Video Health Checks (eVHC) is crucial in the automotive service sector, yet processing millions of videos annually presents a significant scaling challenge for manual review. This session explores leveraging multimodal Generative AI, specifically Google's Gemini models on GCP, to automate the analysis of high-volume eVHC videos within the automotive industry. We will dissect a practical implementation, showcasing an end-to-end serverless architecture built on Google Cloud for this use case. Learn how to handle data ingestion, video retrieval, and utilize Vertex AI and Gemini Flash for automated content extraction and summarization, deployed efficiently via Cloud Run. We'll discuss the potential for improved operational efficiency, scalability, cost reductions, and significant uplifts in key customer metrics like satisfaction scores and value per service visit. Join this session for actionable insights into deploying multimodal AI for video analysis, building robust serverless AI workflows on GCP, and translating AI capabilities into measurable business impact across the automotive service landscape.
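To give a concrete feel for the kind of call such a pipeline makes, below is a minimal sketch using the Vertex AI Python SDK to send a Cloud Storage video to a Gemini Flash model for summarization. The project ID, bucket path, model version, and prompt are illustrative assumptions, not details taken from the session.

```python
# Minimal sketch (not the session's actual code): summarize an eVHC video
# stored in Cloud Storage with a Gemini Flash model on Vertex AI.
# Project ID, bucket path, model version, and prompt are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Assumed GCP project and region; replace with your own settings.
vertexai.init(project="your-gcp-project", location="us-central1")

# A Gemini Flash model; the exact version used in production may differ.
model = GenerativeModel("gemini-1.5-flash-002")

# The eVHC video is assumed to have been landed in Cloud Storage
# by an earlier ingestion/retrieval step.
video = Part.from_uri(
    "gs://your-evhc-bucket/videos/inspection-12345.mp4",
    mime_type="video/mp4",
)

prompt = (
    "You are reviewing a vehicle health check video recorded by a service "
    "technician. Summarize the findings, list any recommended repairs, and "
    "flag anything the technician marked as urgent."
)

# Multimodal request: the video part and the text prompt go in together.
response = model.generate_content([video, prompt])
print(response.text)
```

In the serverless architecture the session describes, a request like this would typically run inside the Cloud Run service, triggered once a new eVHC video has been ingested and retrieved.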

Managing and scaling ML workloads has never been a bigger challenge. Data scientists are looking for ways to collaborate and to build, train, and iterate on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
