Jayita Bhattacharyya

Data Scientist

Deloitte

Jayita Bhattacharyya is a Data Scientist at Deloitte, where she builds AI-driven enterprise applications that optimise workflows across industry verticals. She fondly refers to herself as a “glorified if-else coder” who thrives within the dynamic world of Jupyter Notebooks. As a seasoned technical speaker and active member of the open-source community, Jayita is one of the organisers of BangPypers (Bangalore Python User Group). She frequently mentors at hackathons, including the recent Great Bangalore Hackathon, and is passionate about fostering collaboration and innovation through community engagement.

Enabling LLMs to enhance their outputs through increased test-time computation is a crucial step toward building self-improving agents capable of handling open-ended natural language tasks. This session explores how allowing a fixed but non-trivial amount of inference-time compute can impact performance on challenging prompts—an area with significant implications for LLM pretraining strategies and the trade-offs between inference-time and pretraining compute.

Reasoning-focused LLMs, particularly open-source ones, are now challenging closed models with comparable performance using less compute. We’ll explore the mechanisms behind this shift, including Chain-of-Thought (CoT) prompting and reinforcement learning-based reward modeling.
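
As a quick illustration of zero-shot CoT, appending a reasoning trigger such as "Let's think step by step" to a prompt is often enough to elicit intermediate reasoning, with no RL or fine-tuning involved. The sketch below is not from the session materials; it assumes an OpenAI-compatible client and a placeholder model name.

```python
# Minimal zero-shot Chain-of-Thought sketch (illustrative only).
# Assumptions: OPENAI_API_KEY is set in the environment, and the
# model name is a placeholder -- any chat model works here.
from openai import OpenAI

client = OpenAI()

question = "A shop sells pens in packs of 12. How many packs are needed for 150 pens?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: placeholder model name
    messages=[
        # Zero-shot CoT: no worked examples, just the reasoning trigger.
        {"role": "user", "content": f"{question}\nLet's think step by step."}
    ],
)
print(response.choices[0].message.content)
```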

The session will cover the architectures, benchmarks, and performance of next-gen reasoning models through hands-on code walkthroughs. Topics include foundational LLM architectures (pre/post-training and inference), zero-shot CoT prompting (without RL), RL-based reasoning enhancements (beam search, Best-of-N, lookahead), and a comparison of fine-tuning strategies: Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO). Finally, we'll demonstrate how to run and fine-tune models efficiently using the Unsloth.ai framework on limited compute setups.
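
To make Best-of-N concrete, here is a minimal sketch (not the session's code) using Hugging Face transformers: sample N completions, score each, and keep the best. The model name is a placeholder, and mean token log-probability stands in for the trained reward or verifier model a real setup would use.

```python
# Minimal Best-of-N sampling sketch (illustrative only).
# Assumptions: model name is a placeholder; mean log-probability
# is a stand-in scorer for a real reward/verifier model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # assumption: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id

prompt = "Q: A train covers 60 km in 40 minutes. What is its speed in km/h?\nA: Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample N candidate completions in one batch.
N = 8
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    max_new_tokens=128,
    num_return_sequences=N,
    pad_token_id=tokenizer.pad_token_id,
    return_dict_in_generate=True,
    output_scores=True,
)

# Score each candidate by the mean log-probability of its generated tokens.
# A real Best-of-N setup would replace this with a reward model's score.
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
finite = transition_scores.masked_fill(transition_scores.isinf(), 0.0)
lengths = (~transition_scores.isinf()).sum(dim=1).clamp(min=1)
mean_logprob = finite.sum(dim=1) / lengths

best = mean_logprob.argmax()
print(tokenizer.decode(outputs.sequences[best], skip_special_tokens=True))
```

Beam search and lookahead search follow the same spend-more-compute-at-inference pattern, but score partial sequences during decoding rather than only finished candidates.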

Managing and scaling ML workloads has never been a bigger challenge. Data scientists want to collaborate while building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
