The conversation node is the most commonly used node type in a conversation flow. It's used to hold a conversation with the user. You can optionally add functions to this node so the agent can perform actions during the conversation. Note that the agent can have a multi-turn conversation inside a single node, so you don't need to create a new conversation node for every sentence the agent says. It's recommended to split nodes where the logic branches or when the instruction gets too long.

Write Instruction

Inside the node, you pick how to write the instruction for the agent to follow:
  • Prompt: Write a prompt the agent uses to dynamically generate what to say.
  • Static Sentence: The agent says a fixed sentence first; if the conversation is still inside this node afterwards, it generates content dynamically, consistent with the static sentence.
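The two instruction modes can be illustrated with a minimal sketch. These dict-based node definitions are hypothetical, for illustration only, and do not reflect the product's actual schema:

```python
# Hypothetical node definitions illustrating the two instruction modes.
# The field names here are illustrative, not the real configuration schema.

prompt_node = {
    "type": "conversation",
    "instruction": {
        "mode": "prompt",
        # The agent generates its own wording from this prompt each turn.
        "text": "Greet the caller and ask how you can help today.",
    },
}

static_node = {
    "type": "conversation",
    "instruction": {
        "mode": "static_sentence",
        # Said verbatim on entry; later turns inside this node are generated
        # dynamically, consistent with this opening sentence.
        "text": "Thanks for calling. This call may be recorded.",
    },
}
```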

When Transitions Can Happen

  • When the user is done speaking
  • When Skip Response is enabled and the agent finishes speaking
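The two triggers above can be sketched as a simple check. This is illustrative pseudologic, not the product's internals:

```python
def can_transition(user_done_speaking: bool,
                   skip_response_enabled: bool,
                   agent_done_speaking: bool) -> bool:
    """Return True when a conversation node may hand off to the next node."""
    # Normal case: wait for the user to finish their turn.
    if user_done_speaking:
        return True
    # Skip Response case: transition as soon as the agent finishes speaking,
    # without waiting for a user reply (e.g. after a disclaimer).
    if skip_response_enabled and agent_done_speaking:
        return True
    return False
```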

Node Functions (Optional)

You can add functions to a conversation node so the LLM can call them during the conversation when appropriate. This combines dialogue with action: the agent converses with the user while also being able to perform tasks like calling APIs, transferring calls, or sending SMS. Unlike function nodes, where functions execute deterministically on node entry, functions on a conversation node are invoked by the LLM based on the conversation context, similar to how function calling works in single/multi prompt agents. Learn more about node functions →
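The contrast between the two execution styles can be sketched as follows. Everything here (the node dicts, the stub LLM, the `send_sms` helper) is hypothetical and only illustrates the idea: a function node runs its functions unconditionally on entry, while a conversation node exposes them as optional tools the LLM may or may not call.

```python
from dataclasses import dataclass, field

def run_function_node(node, context):
    """Function node: every attached function runs deterministically on entry."""
    for fn in node["functions"]:
        fn(context)

@dataclass
class Reply:
    text: str
    tool_calls: list = field(default_factory=list)

def run_conversation_turn(node, context, llm):
    """Conversation node: the LLM sees attached functions as optional tools."""
    reply = llm(node, context)
    for call in reply.tool_calls:   # may be empty, the LLM chooses
        call(context)
    return reply.text

# Demo with a stub "LLM" that only calls the tool when the user asks for SMS.
def send_sms(ctx):
    ctx["sms_sent"] = True

def stub_llm(node, context):
    if "text me" in context["last_user_message"]:
        return Reply("Sure, sending that now.", tool_calls=[send_sms])
    return Reply("How else can I help?")

ctx = {"last_user_message": "can you text me the details?", "sms_sent": False}
node = {"functions": [send_sms]}
print(run_conversation_turn(node, ctx, stub_llm))  # prints "Sure, sending that now."
print(ctx["sms_sent"])                             # prints True
```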

Node Settings

  • Skip Response: when enabled, the node has only one outgoing edge you can connect; when the agent is done talking, it transitions to the next node via that edge. This is useful when the agent should say something like a disclaimer, where no user response is needed before moving on to another node.
  • Global Node: read more at Global Node
  • Block Interruptions: when enabled, the agent will not be interrupted by the user while speaking.
  • LLM: choose a different model for this particular node; it will be used for response generation.
  • Fine-tuning Examples: fine-tune conversation responses and transitions. Read more at Finetune Examples