Customer-facing conversational AI assistants don’t operate in a vacuum. They are embedded within well-defined business processes. That’s why these systems are expected to reliably and consistently guide users through each step of a predetermined workflow.
However, existing agentic frameworks that leverage a concept of tool calling or function calling to interact with systems (such as APIs or databases) often fall short of this goal. They lack the robustness, controllability, and built-in support for complex processes required by enterprise-grade applications.
In this article, we’ll explore why this is the case and introduce an alternative approach: process calling. This enables the creation of reliable, process-aware, and easily debuggable conversational agents. We’ll also share code examples and walk you through how to get started with the Rasa platform.
In the current paradigm, AI agents are equipped with tools that enable them to solve specific tasks. These tools typically perform atomic actions, such as calling an API to read or write data, updating or fetching data from a database, or similar operations. The limitation of such an approach is that it often lacks state, which makes AI agents unpredictable and sometimes even unreliable.
On the other hand, businesses have well-established processes, and AI assistants are expected to follow them, not improvise or create their own. A conversational AI agent deployed for customer service must understand users’ needs, connect them to the right company processes, clearly explain how it can assist, and guide them through each step to achieve their goals, all while maintaining a smooth and natural conversation flow.
This is where process calling comes in. With process calling, the LLM invokes and collaborates with a stateful process. The user asks the assistant a question, and the LLM predicts which specific, defined business process to trigger. The process and the LLM then work together to drive the conversation forward.
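To make this concrete, here is a rough sketch of what such a defined process looks like in Rasa’s flow syntax, which we’ll walk through step by step later in this article. The flow name and steps are purely illustrative:

flows:
  transfer_money:
    description: Send money to one of the user's saved contacts.
    steps:
      - collect: recipient
      - collect: amount
      - action: utter_transfer_complete

The LLM’s job is to recognize that the user wants to transfer money and start this flow; the flow then dictates which details to collect and in what order.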
Let’s dive into how to build a reliable AI assistant using the process-calling approach in practice. We will develop a banking AI agent capable of handling simple processes, including transferring money, opening a savings account, responding to frequently asked questions (FAQs), and addressing off-topic requests.
The Rasa Platform is a conversational AI framework that offers an end-to-end solution for building AI assistants. At the heart of the Rasa Platform is CALM (Conversational AI with Language Models), Rasa’s AI-driven dialogue orchestration engine. CALM is designed to integrate business logic with adaptive conversation management. Its core components are dialogue understanding, the dialogue manager, and the contextual response rephraser.
With Rasa, you can build enterprise-grade, fluent text and voice AI assistants. Let’s set up the environment to begin building your AI banking assistant.
First, you need to get a free Developer Edition key here. A confirmation email will be sent to the email address you provided, and you’ll need to copy your token from that message.
There are two ways to get started with Rasa: installing it locally or using GitHub Codespaces.
In this tutorial, we’ll use GitHub Codespaces because it lets you start building an agent directly in your browser, no local installation required. This option is ideal for beginners and anyone new to Rasa.
What you’ll need: a GitHub account and the Rasa Pro Developer Edition license key from the previous step.
To create your first AI agent using Rasa, go through the following steps. First, create a .env file in your project and add your license key:
RASA_PRO_LICENSE="your-rasa-pro-license-key"
Load the environment variables using:
source .env
To activate the virtual environment:
source .venv/bin/activate
Create your first agent using the tutorial template provided by Rasa. Throughout the installation, press Enter or say yes to each question.
Execute the following command in the terminal:
rasa init --template tutorial
A new tab with the Rasa Inspector will open. Try asking your agent a few questions, such as “I want to transfer some money.”
You can also try the command:
Transferring money is an example of a transactional flow, where the agent follows a predefined sequence of actions, such as asking for missing information, calling an API, or updating a record in a database.
Remember how we talked about reliable, deterministic execution of business logic at the beginning? You can create exactly that kind of process in Rasa using flows. We’ll add functionality to our agent for opening a savings account to demonstrate how process calling works in practice.
Flows allow you to build a predefined sequence of steps that must be followed to achieve a specific outcome. Of course, opening a real savings account at a bank would involve many more steps, such as authenticating the user, checking account eligibility, etc. All of this can be implemented in Rasa using custom actions, which are essentially Python functions.
We’ll build a simplified version where we ask the user for a few details before opening the new savings account: the account name, the currency, and the savings duration.
Once these steps are defined, the AI agent will consistently follow them and execute the business logic as prescribed, while still leveraging the LLM for dialogue understanding.
We’ll now add this flow to our assistant by editing the flows.yml file in the data directory and placing it under the flows: key:
open_savings_account:
  description: Collect details to open a savings account.
  steps:
    - collect: account_name
      description: The name the user wants to give their savings account.
    - collect: currency
      description: The currency for the savings account.
    - collect: duration
      description: The amount of time (e.g., months or years) for the savings account.
    - action: utter_confirm_savings_account_opened
As you can see, flows are written in YAML. If you want to understand the flow syntax in more detail, you can read the official documentation at Rasa Docs here.
Next, update the domain.yml file to define the necessary slots and responses. Think of domain.yml as the universe of your conversational AI assistant: whenever you add new slots or responses, you need to include them here so your assistant knows about them.
Add new slots to the slots section. The from_llm mapping tells Rasa to fill each slot with a value extracted from the conversation by the LLM:
account_name:
  type: text
  mappings:
    - type: from_llm
currency:
  type: text
  mappings:
    - type: from_llm
duration:
  type: text
  mappings:
    - type: from_llm
Add new responses to the responses section. Each collect step prompts the user with the matching utter_ask_<slot_name> response, and slot values such as {account_name} are filled in automatically:
utter_ask_account_name:
  - text: "What would you like to call your new account?"
utter_ask_currency:
  - text: "Which currency would you like to use?"
utter_ask_duration:
  - text: "How many months or years would you like to save for?"
utter_confirm_savings_account_opened:
  - text: "Your savings account '{account_name}' has been successfully opened."
Finally, run the following commands to train your assistant and open the Rasa Inspector:
rasa train
rasa inspect
You can now test the new savings account flow by chatting with your agent and saying something like:
I want to open a savings account
The assistant will follow the process you defined and collect the required details step by step.
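For example, a conversation with the new flow might look roughly like this (the exact wording can vary, since responses may be rephrased by the LLM):

User: I want to open a savings account
Assistant: What would you like to call your new account?
User: Vacation fund
Assistant: Which currency would you like to use?
User: EUR
Assistant: How many months or years would you like to save for?
User: Two years
Assistant: Your savings account 'Vacation fund' has been successfully opened.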
Some of the benefits of using flows are deterministic execution of your business logic, predictable and easily debuggable agent behavior, and tight control over how each process unfolds.
Flows make it easier to manage structured conversations, especially when you need to execute business logic consistently and reliably.
Now that you know how to add a flow, you can expand your agent’s functionality to handle any number of tasks, each executed precisely according to your instructions. Whether you have 10 flows or 100, Rasa will leverage the power of LLMs to trigger the correct one.
But what if the user asks an informational question instead of a transactional one?
You don’t want to create a dedicated flow for every question, such as “How long does a money transfer take?” or “What’s the commission for international transfers?”
To handle such questions, Rasa includes a component called Enterprise Search, which lets users chat with your docs. There are several ways to set it up; in this tutorial, we’ll use the simplest option: the default FAISS vector store. Here are the steps to make your AI agent understand informational queries:
By default, Enterprise Search uses OpenAI as the LLM provider, so you’ll need to add your OPENAI_API_KEY to the .env file.
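Your .env file should now contain both keys, for example:

RASA_PRO_LICENSE="your-rasa-pro-license-key"
OPENAI_API_KEY="your-openai-api-key"

Remember to reload it with source .env after editing.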
Prepare your data in .txt format and add it to docs/faq.txt so that your AI agent can answer any questions based on the provided data, without being explicitly programmed to do so.
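For example, docs/faq.txt could contain a handful of question-and-answer pairs like the following (the content here is purely made up for illustration):

How long does a money transfer take?
Domestic transfers are usually completed within one business day. International transfers can take three to five business days.

What is the commission for international transfers?
International transfers are charged a fee of 0.5% of the transferred amount, with a minimum fee of 5 EUR.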
Next, in your config.yml, uncomment EnterpriseSearchPolicy:

- name: EnterpriseSearchPolicy
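For reference, the policies section of config.yml will then look roughly like this. The vector_store block is optional and only makes the defaults explicit; treat the exact keys as an assumption to double-check against the Rasa documentation for your version:

policies:
  - name: FlowPolicy
  - name: EnterpriseSearchPolicy
    vector_store:
      type: "faiss"
      source: "./docs"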
Edit the patterns.yml file in the data folder to include the following search pattern:
pattern_search:
  description: Flow for handling knowledge-based questions
  name: pattern search
  steps:
    - action: action_trigger_search
Retrain and re-run your agent. You can now test both transactional and informational queries.
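With the sample faq.txt shown earlier, an informational exchange might look something like this (the exact answer will vary, since it is generated by the LLM from the retrieved documents):

User: How long does a money transfer take?
Assistant: Domestic transfers are usually completed within one business day, while international transfers can take three to five business days.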
The last thing we want to cover is what to do when users ask questions that can’t be answered with a flow or Enterprise Search.
In Rasa, there is a default pattern, pattern_chitchat, designed to handle such situations. All out-of-scope queries are routed there, and you have a couple of options: answer with a static response or generate a dynamic reply with an LLM. For example, you can override the pattern to call a custom action:
pattern_chitchat:
  description: Handles off-topic or general small talk
  name: pattern chitchat
  steps:
    - action: action_handle_chitchat
You can then implement action_handle_chitchat as a custom action that returns a static response or connects to an LLM for dynamic replies.
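If a static reply is all you need, you can instead point the pattern at a plain response; utter_out_of_scope below is a hypothetical response name you would define in domain.yml:

pattern_chitchat:
  description: Handles off-topic or general small talk
  name: pattern chitchat
  steps:
    - action: utter_out_of_scope

responses:
  utter_out_of_scope:
    - text: "I'm afraid I can only help with banking-related questions. Is there anything else I can do for you?"

The first snippet goes in your patterns file, the second in domain.yml.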
This ensures that your assistant always responds gracefully, even when the question falls outside its core business logic or knowledge base.
In this article, we explored the conversational AI framework Rasa and how to use it to build a reliable and scalable AI Agent that strictly follows well-defined business processes. We demonstrated how to implement the process calling approach, which ensures predictability, control, and alignment with real-world business requirements.
You learned how to set up your environment and create your first agent from the tutorial template, define a transactional process as a flow with its slots and responses, answer informational questions with Enterprise Search, and handle off-topic requests with pattern_chitchat.
Now you have all the tools to build AI Assistants that can confidently operate within clearly defined business logic. Try it yourself, get your developer edition license key, and create your first assistant today.