Bhaskarjit Sarmah

Bhaskarjit Sarmah

Head of Financial Services AI Research

Domyn

Bhaskarjit Sarmah, Head of Financial Services AI Research at Domyn, leverages over 11 years of data science expertise across diverse industries. Previously, at BlackRock, he pioneered machine learning solutions to bolster liquidity risk analytics, uncover pricing opportunities in securities lending, and develop market regime change detection systems using network science. Bhaskarjit's proficiency extends to natural language processing and computer vision, enabling him to extract insights from unstructured data and deliver actionable reports. Committed to empowering investors and fostering superior financial outcomes, he embodies a strategic fusion of data-driven innovation and domain knowledge in the world's largest asset management firm.

This workshop introduces AgentOps, a subcategory of GenAIOps, which focuses on the operationalization of AI agents. It dives into how we can create, manage, and scale generative AI agents effectively within production environments. You’ll learn the essential principles of AgentOps, from external tool integration and memory management to task orchestration, multi-agent systems, and Agentic RAG. By the end of the workshop, participants will have the skills to build and deploy intelligent agents that can automate complex tasks, handle multi-step processes, and operate within enterprise environments.

Prerequisites:

  • Basic understanding of AI/ML and LLMs (Large Language Models)
  • Familiarity with Python programming and using frameworks like LangChain or LangGraph
  • Experience with APIs and web-based tool integrations (e.g., basic knowledge of calling external APIs)
  • Familiarity with cloud-based environments (e.g., AWS, Google Cloud) is a plus, but not required

*Note: These are tentative details and are subject to change.

Read More

Autonomous AI agents promise super-charged productivity but without the right guardrails they can also jailbreak, leak data, or go off-topic. In this session we will discuss about:

  • building a lightweight agentic workflow from scratch
  • probing real-world vulnerabilities, AI risks
  • mapping your agent against the four-axis “agentic profile” framework for alignment and governance
  • applying risk-mitigation checklists distilled from risk management frameworks

What we will build -
In the hands-on segment we will build a complete agent to go from blank notebook to governed production prototype. We’ll begin by bootstrapping a one-file Python agent with LangChain and OpenAI Functions that can plan, call external APIs, and write concise summaries. Next, we’ll wrap that agent with the open-source Python libraries, layering in rate-limits, PII scrubbing, and role-based tool permissions so you can see policy enforcement in action. With guardrails in place, we’ll shift to offense - running an automated PyTest suite populated with the red-team prompts to expose prompt-injection and tool-abuse weak spots. We’ll then quantify how well the patched agent stays on-mission by applying a lightweight PRISM-style alignment rubric that emits a JSON scorecard. Finally, we’ll wire everything into a Streamlit mini-dashboard that streams agent actions, policy hits, and manual override controls in real time, giving a turnkey template we can fork for our next project.

Read More

Managing and scaling ML workloads have never been a bigger challenge in the past. Data scientists are looking for collaboration, building, training, and re-iterating thousands of AI experiments. On the flip side ML engineers are looking for distributed training, artifact management, and automated deployment for high performance

Read More

Managing and scaling ML workloads have never been a bigger challenge in the past. Data scientists are looking for collaboration, building, training, and re-iterating thousands of AI experiments. On the flip side ML engineers are looking for distributed training, artifact management, and automated deployment for high performance

Read More