How It Works
When a subagent node has tools / functions attached, the LLM receives both the node instruction and the list of available tools / functions. During the conversation, the LLM determines when a tool / function should be called based on context, extracts the required parameters, and invokes it while maintaining the dialogue with the user.

- Multiple tools / functions can be added to a single subagent node
- The agent can continue talking while a tool / function executes
- Tool / function results are available to the LLM for generating follow-up responses
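The loop described above can be sketched in a few lines of JavaScript. This is an illustrative simplification, not Retell's actual internals; the tool name, argument shape, and dispatch structure are all hypothetical.

```javascript
// Hypothetical tool registry: maps a tool name to its handler.
const tools = {
  look_up_order: ({ orderId }) => ({ orderId, status: "shipped" }),
};

// Simplified dispatch: the "LLM output" is either plain text for the user
// or a tool-call request with extracted arguments.
function handleLlmOutput(output) {
  if (output.type === "tool_call") {
    const result = tools[output.name](output.args); // invoke with extracted args
    return { role: "tool", name: output.name, result }; // fed back to the LLM
  }
  return { role: "assistant", content: output.text }; // normal reply to the user
}

const reply = handleLlmOutput({
  type: "tool_call",
  name: "look_up_order",
  args: { orderId: "A123" },
});
console.log(reply.result.status); // "shipped"
```

The key point is that the dialogue and the tool calls share one loop: a tool result re-enters the conversation as context for the next LLM turn rather than ending it.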
Write Instruction
Subagent nodes only support Prompt instructions. Unlike conversation nodes, subagent nodes do not support Static Sentence.
Write the instruction to define the task, what information the agent should gather, and when it should use the available tools / functions.
For example:
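(The tool names below are hypothetical, for illustration only.)

```
You help callers with questions about their orders. Gather the caller's
order number and email address. If the caller asks about order status,
use the look_up_order tool with the order number. If the caller asks to
cancel, confirm the cancellation first, then use the cancel_order tool.
```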
Subagent Node vs Function Node
Subagent nodes and function nodes serve different purposes:

| | Function Node | Subagent Node |
|---|---|---|
| Execution | Deterministic — executes on node entry | LLM-driven — called when the LLM decides it’s appropriate |
| Tools per node | One | Multiple |
| Conversation | Not intended for dialogue | Full dialogue with tools available |
| Best for | Always-execute actions (e.g. always look up an order on entry) | Context-dependent actions during dialogue (e.g. look up an order only if the user asks) |
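The execution difference in the table can be sketched in JavaScript. This is a conceptual illustration with hypothetical names, not Retell's API:

```javascript
const lookUpOrder = (id) => ({ id, status: "shipped" });

// Function node: the tool runs deterministically the moment the node is entered.
function enterFunctionNode(orderId) {
  return lookUpOrder(orderId); // always executes
}

// Subagent node: the tool runs only if the LLM decides this turn calls for it.
function subagentTurn(llmDecision, orderId) {
  return llmDecision.callTool ? lookUpOrder(orderId) : null;
}

console.log(enterFunctionNode("A1").status);          // "shipped"
console.log(subagentTurn({ callTool: false }, "A1")); // null
```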
Add Tools / Functions
Add a tool / function
In the settings panel, find the Tools section and click + Add. Select the tool / function type from the dropdown menu.
Configure the tool / function
Configure the tool / function based on its type. See the Available Tool / Function Types table below for configuration details for each type.
Available Tool / Function Types
| Tool Type | Description | Configuration Guide |
|---|---|---|
| Custom Function | Make HTTP requests to your external APIs | Custom Function |
| Code Tool | Run JavaScript code directly without an external server | Code Tool |
| Check Calendar Availability | Query available time slots via Cal.com | Check Availability |
| Book Appointment | Book calendar events via Cal.com | Book Calendar |
| End Call | Terminate the call | End Call |
| Transfer Call | Transfer to a phone number | Transfer Call |
| Transfer Agent | Transfer to another Retell agent | Transfer Agent |
| Press Digit | Send DTMF tones | Press Digit |
| Send SMS | Send a text message | Send SMS |
| Extract Dynamic Variable | Extract variables from the conversation | Extract Dynamic Variable |
| MCP Tool | Call tools on your MCP server | MCP Node |
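For a sense of what a Code Tool can do, here is a minimal sketch of the kind of JavaScript logic one might contain — in this case, checking whether a requested appointment time falls within business hours. The function name, input format, and return shape are illustrative assumptions; the actual input/output contract is defined in the Code Tool guide.

```javascript
// Hypothetical Code Tool body: returns whether an ISO timestamp falls
// within assumed 9:00-17:00 UTC business hours.
function isWithinBusinessHours(isoTimestamp) {
  const hour = new Date(isoTimestamp).getUTCHours();
  return hour >= 9 && hour < 17;
}

console.log(isWithinBusinessHours("2024-05-01T10:30:00Z")); // true
console.log(isWithinBusinessHours("2024-05-01T20:00:00Z")); // false
```

Because a Code Tool runs without an external server, this kind of small, deterministic check avoids the network latency of a Custom Function.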
Execution Speech Settings
Each tool / function has settings that control what the agent says while it is running and after it completes.

Speak During Execution
When enabled, the agent says a message while the tool / function is executing, for example "One moment, let me check that for you." This is recommended when the tool / function takes longer than 1 second (including network latency), so the agent remains responsive.
You can configure how the message is generated:
- Prompt: The LLM dynamically generates what to say based on a description you provide.
- Static Sentence: The agent speaks the exact text you provide.
Speak After Execution
When enabled, the agent calls the LLM after the tool / function returns a result so it can speak about the outcome to the user. Turn this off if you want the tool / function to run silently.

- Speak During Execution is available on: Custom Function, Code Tool, End Call, Transfer Call, Transfer Agent, and MCP Tool.
- Speak After Execution is available on: Custom Function, Code Tool, and MCP Tool.
When Can Transition Happen
- When the user is done speaking
- When Skip Response is enabled and the agent finishes speaking
Node Settings
- Tools: attach the tools / functions this subagent can use during the conversation.
- Skip Response: when enabled, the node exposes a single edge you can connect, and once the agent is done talking, it transitions to the next node via that edge.
- Knowledge Base: configure node-level knowledge bases to combine topic-specific knowledge with the agent-level knowledge base. Read more at Knowledge Base.
- Global Node: read more at Global Node
- Block Interruptions: when enabled, the agent will not be interrupted by the user while speaking.
- LLM: choose a different model for this particular node. It will be used for response generation, tool / function selection, and tool / function argument generation.
- Fine-tuning Examples: fine-tune conversation responses and transitions. Read more at Finetune Examples
Best Practices
- Be explicit in your node instruction. Tell the agent when each tool / function should be used.
- Use function nodes for guaranteed execution. If a tool / function must always run at a certain point in the flow, use a function node instead.
- Avoid adding too many tools / functions to one subagent node. If you have many tools / functions, consider splitting them across multiple subagent nodes.