You can add functions directly to a conversation node so the agent can perform actions during the conversation. The LLM decides when to call functions based on the conversation context — the agent converses with the user while also being able to perform tasks like calling APIs, transferring calls, or sending SMS.

How It Works

When a conversation node has functions attached, the LLM receives both the node instruction and the list of available functions. During the conversation, the LLM determines when a function should be called based on context, extracts the required parameters, and invokes the function — all while maintaining the dialogue with the user.
  • Multiple functions can be added to a single conversation node
  • The agent can continue talking while a function executes
  • Function results are available to the LLM for generating follow-up responses
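Conceptually, this is standard LLM tool calling: each turn, the model either speaks or emits a function call with extracted arguments, and the result is fed back for the follow-up response. A minimal sketch of that dispatch loop (all names are illustrative, not the Retell API):

```python
import json

# Hypothetical registry of functions attached to one conversation node.
NODE_FUNCTIONS = {
    "check_order_status": lambda order_number: {"status": "shipped", "order": order_number},
    "send_sms": lambda to, body: {"delivered": True},
}

def handle_llm_turn(llm_output):
    """Dispatch one simulated LLM turn: either plain speech or a function call.

    In the real flow, the LLM decides from the node instruction and the
    function descriptions whether to emit text or a tool call.
    """
    if llm_output.get("type") == "function_call":
        fn = NODE_FUNCTIONS[llm_output["name"]]
        result = fn(**llm_output["arguments"])  # parameters extracted by the LLM
        # The result is returned to the LLM so it can generate a follow-up response.
        return {"role": "function", "content": json.dumps(result)}
    return {"role": "assistant", "content": llm_output["text"]}

# A simulated turn where the LLM chose to call a function.
turn = handle_llm_turn({
    "type": "function_call",
    "name": "check_order_status",
    "arguments": {"order_number": "A1001"},
})
```

Because several functions can live on one node, the registry holds many entries and the LLM picks among them per turn.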

Node Functions vs Function Nodes

Node functions and function nodes serve different purposes:
|                    | Function Node | Node Functions |
|--------------------|---------------|----------------|
| Execution          | Deterministic — executes on node entry | LLM-driven — called when the LLM decides it’s appropriate |
| Functions per node | One | Multiple |
| Conversation       | Not intended for dialogue | Full dialogue with functions available |
| Best for           | Always-execute actions (e.g., always look up order on entry) | Context-dependent actions during dialogue (e.g., look up order only if the user asks) |
Use function nodes when you want guaranteed execution every time the flow reaches that step. Use node functions when the agent should decide whether and when to call a function based on what the user says.

Add Functions

1. Select a conversation node
   Click on a conversation node to open its settings panel on the right side.

2. Add a function
   In the settings panel, find the Tools section and click + Add to add a new function. Select the function type from the dropdown menu.

3. Configure the function
   Configure the function based on its type. See Available Function Types below for configuration details for each type.

4. Update the node instruction
   Update the node’s prompt instruction to guide the LLM on when to use the function. For example:

   Help the user check their order status. If the user provides an order number,
   call the check_order_status function to look up the order.
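The instruction above assumes a matching function definition with a name, a description, and a parameter schema the LLM fills in from the conversation. One plausible shape for check_order_status, using JSON-Schema-style parameters (illustrative only, not the exact Retell configuration format):

```python
# Hypothetical definition of the check_order_status function referenced in
# the instruction above. The field names mirror common LLM tool-calling
# conventions; the exact Retell format may differ.
CHECK_ORDER_STATUS = {
    "name": "check_order_status",
    "description": "Look up an order by its order number.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_number": {
                "type": "string",
                "description": "The order number provided by the user.",
            }
        },
        "required": ["order_number"],
    },
}

def missing_required_arguments(schema, arguments):
    """Minimal required-field check, standing in for full JSON Schema validation."""
    return [k for k in schema["parameters"]["required"] if k not in arguments]
```

Clear descriptions matter: the LLM relies on them, together with the node instruction, to decide when to call the function and which values to extract.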

Available Function Types

| Function Type | Description | Configuration Guide |
|---------------|-------------|---------------------|
| Custom Function | Make HTTP requests to your external APIs | Custom Function |
| Check Calendar Availability | Query available time slots via Cal.com | Check Availability |
| Book Appointment | Book calendar events via Cal.com | Book Calendar |
| End Call | Terminate the call | End Call |
| Transfer Call | Transfer to a phone number | Transfer Call |
| Transfer Agent | Transfer to another Retell agent | Transfer Agent |
| Press Digit | Send DTMF tones | Press Digit |
| Send SMS | Send a text message | Send SMS |
| Extract Dynamic Variable | Extract variables from the conversation | Extract Dynamic Variable |
| MCP Tool | Call tools on your MCP server | MCP Node |
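For a Custom Function, your server receives the arguments the LLM extracted and returns JSON that becomes the function result. A sketch of that server-side handler (payload field names are illustrative, not the exact Retell request format):

```python
# Toy backing store for the example; in practice this would be a database
# or an upstream API call.
ORDERS = {"A1001": "shipped"}

def handle_custom_function(payload: dict) -> dict:
    """Handle a hypothetical Custom Function request.

    `payload` stands in for the HTTP request body your endpoint receives;
    the returned dict is the function result the LLM can speak about.
    """
    args = payload.get("args", {})
    status = ORDERS.get(args.get("order_number"), "not found")
    return {"order_status": status}
```

Keeping the response small and descriptive helps the LLM summarize the outcome accurately in its follow-up turn.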

Execution Speech Settings

Each function has settings that control what the agent says while the function is running and after it completes.

Speak During Execution

When enabled, the agent says a message while the function is executing (e.g., “One moment, let me check that for you.”). Recommended when your function takes over 1 second (including network latency) to complete, so the agent remains responsive. You can configure how the message is generated:
  • Prompt: The LLM dynamically generates what to say based on a description you provide.
  • Static Text: The agent speaks the exact text you provide.
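The point of the filler message is to avoid dead air while a slow function runs. A sketch of that concurrency (not how Retell is implemented internally; the names and timings are illustrative):

```python
import threading
import time

spoken = []  # records what the agent "says", in order

def slow_function():
    time.sleep(0.2)  # stands in for >1 second of API + network latency
    spoken.append("Your order A1001 has shipped.")

# Start the function, then immediately speak the filler message while it runs.
worker = threading.Thread(target=slow_function)
worker.start()
spoken.append("One moment, let me check that for you.")  # Speak During Execution
worker.join()
```

The filler line lands before the function result because the function is still sleeping when the main thread speaks, mirroring how the agent stays responsive during a slow call.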

Speak After Execution

When enabled, the agent calls the LLM after the function returns a result, so it can speak about the outcome to the user. Turn this off if you want to run the function silently (e.g., logging data to your server without informing the user).
  • Speak During Execution is available on: Custom Function, End Call, Transfer Call, Transfer Agent, and MCP Tool.
  • Speak After Execution is available on: Custom Function and MCP Tool.

Best Practices

  • Be explicit in your node instruction about when each function should be called. The LLM uses the instruction and function descriptions to decide when to invoke a function.
  • Use function nodes for guaranteed execution. If a function must always run at a certain point in the flow (e.g., always fetch user data before greeting), use a function node instead.
  • Avoid adding too many functions to a single node. Adding many functions can make it harder for the LLM to choose the right one. If you have many functions, consider splitting them across multiple conversation nodes.