PATCH /update-retell-llm/{llm_id}

Example request:
import Retell from 'retell-sdk';

const client = new Retell({
  apiKey: 'YOUR_RETELL_API_KEY',
});

async function main() {
  const llmResponse = await client.llm.update('16b980523634a6dc504898cda492e939', {
    begin_message: 'Hey I am a virtual assistant calling from Retell Hospital.',
  });

  console.log(llmResponse.llm_id);
}

main();

Example response (200):

{
  "llm_id": "oBeDLoLOeuAbiuaMFXRtDOLriTJ5tSxD",
  "model": "gpt-4o",
  "s2s_model": "gpt-4o-realtime",
  "model_temperature": 0,
  "model_high_priority": true,
  "tool_call_strict_mode": true,
  "general_prompt": "You are ...",
  "general_tools": [
    {
      "type": "end_call",
      "name": "end_call",
      "description": "End the call with user."
    }
  ],
  "states": [
    {
      "name": "information_collection",
      "state_prompt": "You will follow the steps below to collect information...",
      "edges": [
        {
          "destination_state_name": "appointment_booking",
          "description": "Transition to book an appointment."
        }
      ],
      "tools": [
        {
          "type": "transfer_call",
          "name": "transfer_to_support",
          "description": "Transfer to the support team.",
          "number": "16175551212"
        }
      ]
    },
    {
      "name": "appointment_booking",
      "state_prompt": "You will follow the steps below to book an appointment...",
      "tools": [
        {
          "type": "book_appointment_cal",
          "name": "book_appointment",
          "description": "Book an annual check up.",
          "cal_api_key": "cal_live_xxxxxxxxxxxx",
          "event_type_id": 60444,
          "timezone": "America/Los_Angeles"
        }
      ]
    }
  ],
  "starting_state": "information_collection",
  "begin_message": "Hey I am a virtual assistant calling from Retell Hospital.",
  "knowledge_base_ids": [
    "<string>"
  ],
  "last_modification_timestamp": 1703413636133
}

Authorizations

Authorization
string
header
required

Authentication header containing the API key (find it in the dashboard). The format is "Bearer YOUR_API_KEY".
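
For reference, the same update can be made without the SDK. The sketch below assumes the default API base URL https://api.retellai.com and a runtime with built-in fetch (Node 18+); the llm_id is the placeholder from the SDK example above.

// Raw PATCH request carrying the Authorization header described above
// (run inside an async function, as in the main() example at the top).
const response = await fetch(
  'https://api.retellai.com/update-retell-llm/16b980523634a6dc504898cda492e939',
  {
    method: 'PATCH',
    headers: {
      Authorization: 'Bearer YOUR_RETELL_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      begin_message: 'Hey I am a virtual assistant calling from Retell Hospital.',
    }),
  },
);
const updatedLlm = await response.json();
console.log(updatedLlm.llm_id);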

Path Parameters

llm_id
string
required

Unique id of the Retell LLM to be updated.

Body

application/json
model
enum<string> | null

Select the underlying text LLM. If not set, defaults to gpt-4o.

Available options: gpt-4o, gpt-4o-mini, claude-3.5-sonnet, claude-3-haiku, claude-3.5-haiku

s2s_model
enum<string> | null

Select the underlying speech-to-speech model. You can set either this or model, not both.

Available options: gpt-4o-realtime, gpt-4o-mini-realtime

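Because model and s2s_model are mutually exclusive, set only one of them per update. A minimal sketch, reusing the client constructed in the SDK example above:

// `client` is the Retell instance constructed in the example request above.
// Switch to a speech-to-speech model: set s2s_model and leave model unset.
await client.llm.update('16b980523634a6dc504898cda492e939', {
  s2s_model: 'gpt-4o-realtime',
});

// Or pick a text model instead: set model and leave s2s_model unset.
await client.llm.update('16b980523634a6dc504898cda492e939', {
  model: 'claude-3.5-sonnet',
});
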
model_temperature
number

If set, controls the randomness of the response. Values range from 0 to 1: lower values are more deterministic, higher values are more random. If unset, the default value 0 applies. Note that for tool calling, a lower value is recommended.

model_high_priority
boolean

If set to true, uses a high-priority pool with more dedicated resources to ensure lower and more consistent latency. Defaults to false. This feature usually comes at a higher cost.

tool_call_strict_mode
boolean

Only applicable when model is gpt-4o or gpt-4o-mini. If set to true, uses structured output to ensure tool call arguments follow the JSON schema. Saving a new tool or changing an existing tool takes longer, as additional processing is needed. Defaults to false.
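
These settings can be sent together or individually, in the same way the example request at the top sends only begin_message. A minimal sketch, reusing the client from the SDK example above:

// `client` is the Retell instance constructed in the example request above.
await client.llm.update('16b980523634a6dc504898cda492e939', {
  model_temperature: 0,        // most deterministic; recommended for tool calling
  model_high_priority: true,   // dedicated pool for lower, more consistent latency (higher cost)
  tool_call_strict_mode: true, // structured output for tool arguments (gpt-4o / gpt-4o-mini only)
});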

general_prompt
string | null

General prompt appended to system prompt no matter what state the agent is in.

  • System prompt (with state) = general prompt + state prompt.

  • System prompt (no state) = general prompt.

general_tools
object[] | null

A list of tools the model may call (to get external knowledge, call an API, etc.). You can select from common predefined tools like end call and transfer call, or create your own custom tool (last option) for the LLM to use; see the sketch after the list below.

  • Tools of LLM (with state) = general tools + state tools + state transitions

  • Tools of LLM (no state) = general tools
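
As an illustration, the update below replaces the general tools with the two predefined tools that appear in the example response above (end_call and transfer_call); the tool shapes and phone number are copied from that response. A sketch reusing the client from the SDK example above:

// `client` is the Retell instance constructed in the example request above.
await client.llm.update('16b980523634a6dc504898cda492e939', {
  general_tools: [
    {
      type: 'end_call',
      name: 'end_call',
      description: 'End the call with user.',
    },
    {
      type: 'transfer_call',
      name: 'transfer_to_support',
      description: 'Transfer to the support team.',
      number: '16175551212',
    },
  ],
});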

states
object[] | null

States of the LLM. This helps reduce prompt length and tool choices when the call can be broken into distinct states. With shorter prompts and fewer tools, the LLM can better focus and follow the rules, minimizing hallucination. If this field is not set, the agent only has the general prompt and general tools (essentially a single state).

starting_state
string | null

Name of the starting state. Required if states is not empty.
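
Putting states and starting_state together, the sketch below mirrors the two-state flow from the example response above (field shapes are taken from that response), reusing the client from the SDK example at the top.

// `client` is the Retell instance constructed in the example request above.
await client.llm.update('16b980523634a6dc504898cda492e939', {
  states: [
    {
      name: 'information_collection',
      state_prompt: 'You will follow the steps below to collect information...',
      edges: [
        {
          destination_state_name: 'appointment_booking',
          description: 'Transition to book an appointment.',
        },
      ],
    },
    {
      name: 'appointment_booking',
      state_prompt: 'You will follow the steps below to book an appointment...',
    },
  ],
  starting_state: 'information_collection',
});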

begin_message
string | null

First utterance said by the agent in the call. If not set, the LLM dynamically generates a message. If set to an empty string (""), the agent waits for the user to speak first.

knowledge_base_ids
string[] | null

A list of knowledge base ids to use for this resource. Set to null to remove all knowledge bases.
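
For example, the update below makes the agent wait for the user to speak first and detaches all knowledge bases, reusing the client from the SDK example above.

// `client` is the Retell instance constructed in the example request above.
await client.llm.update('16b980523634a6dc504898cda492e939', {
  begin_message: '',        // empty string: the agent waits for the user to speak first
  knowledge_base_ids: null, // null: remove all knowledge bases from this LLM
});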

Response

200
application/json
Successfully updated a Retell LLM.
llm_id
string
required

Unique id of the Retell LLM.

last_modification_timestamp
integer
required

Last modification timestamp (milliseconds since epoch). Either the time of the last update, or the creation time if no updates have been made.
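
Both required response fields can be read directly from the SDK's return value; a minimal sketch:

// `llmResponse` is the return value of client.llm.update(...) in the example request above.
console.log(llmResponse.llm_id);
const modifiedAt = new Date(llmResponse.last_modification_timestamp); // ms since epoch
console.log(modifiedAt.toISOString());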

model
enum<string> | null

Select the underlying text LLM. If not set, defaults to gpt-4o.

Available options: gpt-4o, gpt-4o-mini, claude-3.5-sonnet, claude-3-haiku, claude-3.5-haiku

s2s_model
enum<string> | null

Select the underlying speech-to-speech model. You can set either this or model, not both.

Available options: gpt-4o-realtime, gpt-4o-mini-realtime

model_temperature
number

If set, controls the randomness of the response. Values range from 0 to 1: lower values are more deterministic, higher values are more random. If unset, the default value 0 applies. Note that for tool calling, a lower value is recommended.

model_high_priority
boolean

If set to true, uses a high-priority pool with more dedicated resources to ensure lower and more consistent latency. Defaults to false. This feature usually comes at a higher cost.

tool_call_strict_mode
boolean

Only applicable when model is gpt-4o or gpt-4o-mini. If set to true, uses structured output to ensure tool call arguments follow the JSON schema. Saving a new tool or changing an existing tool takes longer, as additional processing is needed. Defaults to false.

general_prompt
string | null

General prompt appended to system prompt no matter what state the agent is in.

  • System prompt (with state) = general prompt + state prompt.

  • System prompt (no state) = general prompt.

general_tools
object[] | null

A list of tools the model may call (to get external knowledge, call an API, etc.). You can select from common predefined tools like end call and transfer call, or create your own custom tool (last option) for the LLM to use.

  • Tools of LLM (with state) = general tools + state tools + state transitions

  • Tools of LLM (no state) = general tools

states
object[] | null

States of the LLM. This helps reduce prompt length and tool choices when the call can be broken into distinct states. With shorter prompts and fewer tools, the LLM can better focus and follow the rules, minimizing hallucination. If this field is not set, the agent only has the general prompt and general tools (essentially a single state).

starting_state
string | null

Name of the starting state. Required if states is not empty.

begin_message
string | null

First utterance said by the agent in the call. If not set, the LLM dynamically generates a message. If set to an empty string (""), the agent waits for the user to speak first.

knowledge_base_ids
string[] | null

A list of knowledge base ids to use for this resource. Set to null to remove all knowledge bases.