LLMs Are Boring. How Can We Make Them More Interesting?

About

Today's LLMs, and more broadly agentic workflows and RAG, are excellent at retrieval, summarization, and conversation. They have also gained quantitative skills through access to tools. Yet their outputs are rarely novel or surprising; in other words, they are generally boring. This talk focuses on ways to make those outputs more interesting. We will review well-known approaches such as training-data diversity and higher sampling temperature, then go further, exploring ways to inject novelty more organically through sources of directed randomness. The north star of this effort is to enable generative AI to perform genuine discovery rather than stick to the beaten path.
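For context on the "higher temperature" baseline the talk moves beyond: temperature works by rescaling a model's logits before sampling, so low values sharpen the distribution toward the most likely token and high values flatten it toward uniform. A minimal, self-contained sketch (plain Python, not tied to any particular LLM API; the function name and logit values are illustrative only):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from logits softened by a temperature.

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it (more diverse, more surprising picks).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

The limitation the abstract points at is visible here: temperature adds *undirected* noise. It makes every unlikely token more probable, rather than steering generation toward specific novel regions, which is what "directed randomness" aims to improve on.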

Key Takeaways:

  • Explore why today's LLM outputs often lack novelty, and how to make them more engaging.
  • Go beyond temperature tweaks to discover organic ways of injecting directed randomness.
  • Learn techniques that move LLMs from safe summarizers to creative, discovery-driven generators.
  • Understand how to reimagine agentic workflows and RAG for more surprising and valuable outputs.

Speaker
