Combine conversation flow structure with single prompt flexibility for dynamic task handling
Flex Mode combines the best of both worlds:
Conversation Flow: clear, visual business logic that’s easy to manage.
Single Prompt Agent: flexible, natural handling of varied user behavior.
You design your conversation flow as usual (nodes, edges, tools). At
runtime, Flex Mode compiles that flow into one structured prompt made of Tasks
and available Tools. The agent then navigates Tasks dynamically while still
following your global prompt.
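To make the compilation idea concrete, here is a rough sketch in TypeScript. The node shape, function name, and prompt wording are illustrative assumptions; the actual compiled prompt format is internal to the platform and will differ.

```typescript
// Rough illustration only: the real compiled prompt format is internal to the
// platform. This just shows the idea of collapsing nodes into enumerated Tasks
// inside one structured prompt.
interface FlowNode {
  name: string;        // node name from your conversation flow
  instruction: string; // node instruction (keep it concise)
  tools: string[];     // tools available on this node
}

function compileToFlexPrompt(globalPrompt: string, nodes: FlowNode[]): string {
  const tasks = nodes
    .map((n, i) => {
      const tools = n.tools.length ? ` [Tools: ${n.tools.join(", ")}]` : "";
      return `Task ${i + 1} (${n.name}): ${n.instruction}${tools}`;
    })
    .join("\n");
  return `${globalPrompt}\n\nWork through the following tasks dynamically:\n${tasks}`;
}

// Two nodes collapse into one structured prompt:
console.log(
  compileToFlexPrompt("You are a scheduling assistant.", [
    { name: "collect_date", instruction: "Ask for the preferred date.", tools: [] },
    { name: "book", instruction: "Book the appointment.", tools: ["book_appointment"] },
  ])
);
```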
You can enable Flex Mode at either the component level or the agent level.
When enabled at the agent level, all nodes are converted into a single flex
node; the agent stays on that flex node and behaves like a single prompt agent
until it reaches ‘End Call’. When enabled on a component, only that component’s
nodes are converted into a single prompt; the rest stays as a standard
conversation flow.
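As a conceptual sketch of the two scopes (the field names below are hypothetical, for illustration only, and are not the platform’s actual configuration schema):

```typescript
// Hypothetical shape, for illustration only.
type FlexScope =
  | { level: "agent" }                          // every node collapses into one flex node
  | { level: "component"; component: string };  // only that component's nodes collapse

// Agent level: the whole flow behaves like a single prompt agent until 'End Call'.
const agentWide: FlexScope = { level: "agent" };

// Component level: only the named component is merged; the rest of the flow
// keeps its node-by-node transitions.
const billingOnly: FlexScope = { level: "component", component: "billing" };
```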
There are some differences in how Flex Mode (single prompt) and a traditional
conversation flow handle tool/function calls:
Speak During Execution: the execution message still works the same way.
Speak After Execution: there is no ‘Speak After Execution’ setting in Flex
Mode; the agent always speaks after the function executes.
Wait For Result: there is no ‘waitForResult’ setting in Flex Mode; the agent
always waits for the function to complete (similar to a Single Prompt agent).
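A small sketch of how these settings play out on a tool definition in Flex Mode (property names are assumptions for illustration, not the exact platform schema):

```typescript
// Illustrative only: property names are hypothetical.
interface FlexToolBehavior {
  speakDuringExecution?: string; // still honored: spoken while the tool runs
  // speakAfterExecution: not configurable in Flex Mode; the agent always
  //   speaks once the tool has finished.
  // waitForResult: not configurable in Flex Mode; the agent always waits for
  //   the tool to complete, like a single prompt agent.
}

const bookAppointment: FlexToolBehavior = {
  speakDuringExecution: "One moment while I check that date for you.",
};
```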
Write node instructions concisely so the LLM can better focus on the task.
Only use Prompt edges; avoid Equation edges, as the LLM is poor at
interpreting equation conditions and you may see very odd behavior.
Be explicit about transitions: write crisp, observable conditions (see the
sketch after these tips).
If you use Flex Mode with more than 20 nodes, performance might degrade and
the agent has a higher risk of hallucination. We recommend splitting the flow
into smaller components.
The LLM might not always follow static text instructions exactly.
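Here is a small, hypothetical component sketch that applies these tips: concise node instructions, prompt edges only, and crisp, observable transition conditions. The node and edge shapes are illustrative, not the platform’s exact schema.

```typescript
// Hypothetical shapes, for illustration only.
const collectPayment = {
  nodes: [
    { name: "ask_amount", instruction: "Ask how much the caller wants to pay." },
    { name: "confirm", instruction: "Repeat the amount back and ask for a yes/no confirmation." },
    { name: "charge", instruction: "Call the charge tool, then summarize the result." },
  ],
  edges: [
    // Prompt edges with observable conditions, not equations.
    { from: "ask_amount", to: "confirm", condition: "Caller has stated a specific payment amount." },
    { from: "confirm", to: "charge", condition: "Caller has explicitly confirmed the amount." },
    { from: "confirm", to: "ask_amount", condition: "Caller wants to change the amount." },
  ],
};
```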