Quantifying Our Confidence in Neural Networks and AI

About

Although Large Language Models and AI systems are known to generate false and misleading responses to prompts, relatively little effort has gone into quantifying how much confidence we should have in their outputs. In this hack session, the speaker will illustrate the problem using a simple neural network and then demonstrate two methods for quantifying our confidence in the model's outputs. He will then show how these methods can be applied to Large Language Models and other AI systems.

Key Takeaways:

  • Understand why trusting AI outputs blindly can be risky—and how to measure model confidence.
  • Learn two practical methods to quantify confidence in neural network predictions.
  • Explore how these confidence techniques scale from simple models to LLMs.
  • Gain hands-on insights into building more reliable and trustworthy AI systems.
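The session abstract does not name its two confidence methods, but one widely used technique for quantifying uncertainty in neural network predictions is Monte Carlo dropout: keep dropout active at inference, run many stochastic forward passes, and treat the spread of the predictions as an uncertainty estimate. The sketch below is purely illustrative; the toy network, its weights, and the dropout rate are all made-up examples, not material from the talk.

```python
# Illustrative sketch: Monte Carlo dropout as one way to quantify a
# network's confidence. All weights and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed one-hidden-layer network (weights are made up).
W1 = rng.normal(size=(1, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept on at inference."""
    h = np.maximum(0.0, x @ W1)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate   # random dropout mask
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return h @ W2

x = np.array([[0.7]])
samples = np.concatenate([forward(x) for _ in range(200)])

mean = samples.mean()   # point prediction
std = samples.std()     # spread across passes = uncertainty estimate
print(f"prediction {mean:.3f} +/- {std:.3f}")
```

A small standard deviation across the stochastic passes suggests the network is confident in its prediction; a large one flags an output that should not be trusted blindly.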
