What Are LLM States

A conversation is often structured like a state machine or a tree. For example, a customer service representative might ask for personal information at the beginning of a call, provide product information in the middle, and book an appointment at the end. Each part requires a drastically different script and different external API calls.

To ensure that your agent follows the correct script, you can use LLM States to define the different states of the LLM. During each state, the prompt is shorter and more focused, and the tool choices are confined and more relevant.
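
As a rough sketch, the customer service call above might map to three states, each with its own short, focused prompt. The state names and prompts here are illustrative, and the field names (states, state_prompt, starting_state) follow the Create Retell LLM spec covered in Define LLM States below; confirm the exact shape against the API reference.

    {
        "states": [
            { "name": "information_collection", "state_prompt": "Collect the caller's name and contact details." },
            { "name": "product_information", "state_prompt": "Answer questions about the product." },
            { "name": "appointment_booking", "state_prompt": "Book an appointment for the caller." }
        ],
        "starting_state": "information_collection"
    }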

However, if your conversation is pretty straightforward and tool choice is already limited, you probably don’t need this.

When to Use LLM States

You should use LLM States if you encounter any of the following issues while using a single prompt:

  1. Tools or functions are triggered unexpectedly.
  2. The flow is too complex for a single prompt.
  3. The agent frequently produces inaccurate or hallucinated responses.
  4. The agent does not follow instructions well.

Define LLM States

Check out Create Retell LLM for the detailed API spec. A combined example covering all five steps is shown after the list below.
  1. Define a state name.
    • Must consist of a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 64 (no spaces allowed).
    • It has to be unique.
    • It is best to use a natural language string that the LLM can understand, like information_collection.
  2. Define the prompt for the state. The prompt fed into the LLM will be the general prompt, followed by this state prompt. Generally, you describe the personality, conversation style, and other things about the agent that don’t change in the general prompt, and describe the specific task and goals in the state prompt.
  3. Define the tools that are available in this state. The tools available to the LLM during this state are the combination of general tools and state tools. You usually put universally used tools like end_call in the general tools, and tools that are only relevant to a specific state, like book_appointment, in the state tools. Read more about Tool Calling.
  4. Define the edges. This is where you define the transitions between states. You can transition from one state to other states, but we generally don’t recommend having too many edges, as that can confuse the LLM and lead to wrong state transitions.
    • The edges are implemented as a function call in the LLM, so to make sure the transition happens at the right time, it’s best to explicitly mention when to transition in the state prompt. For example, if you have a state called information_collection and you want to transition to product_information after the customer has provided their personal information, you can write in the state prompt: Ask the following questions: ..... After all the questions are answered by the user, transition to product_information.
    • Sometimes you want to carry some information from the current state into the new state (to use in the prompt). You can do that by defining the parameters in the edge. For example, if you want to carry the user’s name into future states, you can define the edge.parameters like this:
      {
          "type": "object",
          "properties": {
              "user_name": {
                  "type": "string",
                  "description": "User full name."
              }
          },
          "required": ["user_name"]
      }
      
      And in the future state, you can use user_name dynamically to modify the prompts and tool descriptions, just like the dynamic variables you pass when initiating the call. Read more about Dynamic Variables.
  5. Define the starting state. This is the state your LLM will begin with.
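
Putting the steps together, here is a minimal sketch of a Create Retell LLM request body with states. The field names (general_prompt, general_tools, states, state_prompt, edges, destination_state_name, starting_state) follow the Create Retell LLM spec, but the prompts, the custom tool shape, the URL, and the {{user_name}} reference are illustrative placeholders; confirm the exact fields against the API reference and the Tool Calling and Dynamic Variables pages.

    {
        "general_prompt": "You are a friendly customer service agent. Keep responses short and conversational.",
        "general_tools": [
            {
                "type": "end_call",
                "name": "end_call",
                "description": "End the call when the user says goodbye."
            }
        ],
        "states": [
            {
                "name": "information_collection",
                "state_prompt": "Ask the following questions: full name, email, reason for the call. After all the questions are answered by the user, transition to product_information.",
                "edges": [
                    {
                        "destination_state_name": "product_information",
                        "description": "Transition after the user has provided their personal information.",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "user_name": {
                                    "type": "string",
                                    "description": "User full name."
                                }
                            },
                            "required": ["user_name"]
                        }
                    }
                ]
            },
            {
                "name": "product_information",
                "state_prompt": "Address the user as {{user_name}}. Answer questions about the product. When the user wants to book an appointment, transition to appointment_booking.",
                "edges": [
                    {
                        "destination_state_name": "appointment_booking",
                        "description": "Transition when the user asks to book an appointment."
                    }
                ]
            },
            {
                "name": "appointment_booking",
                "state_prompt": "Help {{user_name}} pick a time and book the appointment.",
                "tools": [
                    {
                        "type": "custom",
                        "name": "book_appointment",
                        "description": "Book an appointment for the user.",
                        "url": "https://example.com/book-appointment"
                    }
                ],
                "edges": []
            }
        ],
        "starting_state": "information_collection"
    }

In this sketch, end_call is usable in every state because it lives in general_tools, while book_appointment only becomes available once the conversation reaches appointment_booking. The user_name captured on the first edge can then be referenced in later state prompts the same way as a dynamic variable.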

LLM States Best Practices

  • Group steps that are closely related and share the same set of tools into one state. This way you have fewer states, which is easier to manage and leaves less chance of error in state transitions.
  • Don’t create too many edges. With excessive edges, a wrong transition becomes more likely.
  • Make sure the state transitions are clear and explicit. Mention in the prompt when to transition to the next state.
  • If you find your agent gets confused and does not follow the steps after entering a new state, that’s probably because the existing call and the new state don’t have any correlation. Consider adding the first few steps of the new state to the previous state (basically duplicating the steps), so that the LLM knows exactly where it is.