Flex Mode combines the best of both worlds:
  • Conversation Flow: clear, visual business logic that’s easy to manage.
  • Single Prompt Agent: flexible, natural handling of varied user behavior.
You design your conversation flow as usual (nodes, edges, tools). At runtime, Flex Mode compiles that flow into one structured prompt made of Tasks and available Tools. The agent then navigates Tasks dynamically while still following your global prompt.
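To make the compilation step concrete, here is a minimal illustrative sketch of how a flow of nodes, edges, and tools could be flattened into a single Tasks-and-Tools prompt. The data shapes and the compile function below are assumptions for illustration only, not the platform's actual compiler or schema.

```python
# Illustrative sketch only — the node/edge shapes and prompt layout here are
# assumptions, not the platform's real data model or compiler.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    instruction: str
    tools: list[str] = field(default_factory=list)

@dataclass
class Edge:
    source: str
    destination: str
    condition: str  # a prompt-style transition condition

def compile_flex_prompt(nodes: list[Node], edges: list[Edge]) -> str:
    """Flatten a conversation flow into one structured prompt of Tasks + Tools."""
    lines = ["## Tasks"]
    for node in nodes:
        lines.append(f"### Task: {node.name}")
        lines.append(node.instruction)
        if node.tools:
            lines.append("Available tools: " + ", ".join(node.tools))
        # Transitions become plain-language conditions the agent can follow
        # while it navigates tasks dynamically.
        for edge in edges:
            if edge.source == node.name:
                lines.append(f"- When {edge.condition}, move to task '{edge.destination}'.")
    return "\n".join(lines)
```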

When To Use

  • You want the clarity of a flowchart (business steps) but need the freedom of a single prompt:
    • The agent can switch context between tasks at any point, as if every node were a global node.
    • It can jump straight to the appropriate task when the user completes multiple tasks in a single turn.
    • After switching context to another task, the agent can resume the previous task without repeating steps that are already completed.

How It Works

You can enable Flex Mode at either the component level or the agent level.
When enabled at the agent level, all nodes are converted into a single flex node. The agent stays on that flex node and behaves like a Single Prompt agent until it reaches ‘End Call’. When enabled on a component, only that component’s nodes are converted into a single prompt; the rest of the flow remains a standard conversation flow.
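As a rough illustration of the two scopes, here is a hypothetical configuration sketch; the field names (flex_mode, components) are assumptions made for this example, not the real settings or API schema.

```python
# Hypothetical configuration sketch — field names are assumptions, not the real schema.

# Agent level: every node in the flow is compiled into one flex node.
agent_level = {
    "flex_mode": True,
}

# Component level: only the flagged component is flattened into a single prompt;
# the other components keep their standard conversation-flow behavior.
component_level = {
    "components": [
        {"name": "order_lookup", "flex_mode": True},
        {"name": "returns", "flex_mode": False},
    ],
}
```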

Tool Call / Function

There are some differences in how Flex Mode (single prompt) and the traditional conversation flow handle tool calls / functions.
  • Speak During Execution: the execution message still works the same way.
  • Speak After Execution: there is no ‘Speak After Execution’ setting in Flex Mode. The agent always speaks after function execution.
  • Wait For Result: there is no ‘waitForResult’ setting in Flex Mode. The agent always waits for the function to complete (similar to a Single Prompt agent).
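To summarize the difference, here is an illustrative side-by-side sketch of a tool definition in each mode. The field names mirror the settings described above but are not the exact tool schema.

```python
# Illustrative only — field names mirror the settings described above,
# not the exact tool schema.
conversation_flow_tool = {
    "name": "check_order_status",
    "speak_during_execution": True,
    "execution_message": "Let me pull up your order.",
    "speak_after_execution": True,   # configurable in standard conversation flow
    "wait_for_result": True,         # configurable in standard conversation flow
}

flex_mode_tool = {
    "name": "check_order_status",
    "speak_during_execution": True,  # execution message works the same way
    "execution_message": "Let me pull up your order.",
    # No speak_after_execution: the agent always speaks after the function returns.
    # No wait_for_result: the agent always waits for the function to complete.
}
```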

Knowledge Base

Node-level KB is ignored in Flex Mode. You will need to configure the KB at the agent level.
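For example, a hedged sketch of attaching the KB at the agent level rather than on a node; the field name knowledge_base_ids is an assumption, not the real schema.

```python
# Hedged sketch — knowledge_base_ids is an assumed field name, not the real schema.
agent_config = {
    "flex_mode": True,
    # Attach knowledge bases here, at the agent level.
    "knowledge_base_ids": ["kb_return_policy", "kb_product_faq"],
}
# Any KB attached to individual nodes is ignored once Flex Mode is enabled.
```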

Best Practices & Known Issues

  • Write node instructions concisely so the LLM can stay focused on the task.
  • Use only Prompt edges; avoid Equation edges, as the LLM interprets equation conditions poorly and you may see erratic behavior.
  • Be explicit about transitions: write crisp, observable conditions.
  • If you use Flex Mode with more than 20 nodes, performance may degrade and the agent has a higher risk of hallucination. We recommend splitting the flow into smaller components.
  • The LLM may not always follow static text instructions.