Create Retell LLM
Create a new Retell LLM
Authorizations
Authentication header containing the API key (found in the dashboard). The format is "Bearer YOUR_API_KEY".
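As a quick illustration, a minimal sketch of building this header in Python. The environment variable name RETELL_API_KEY is an assumption for the example, not something defined on this page:

```python
import os

# Read the API key from an environment variable (variable name is illustrative).
api_key = os.environ["RETELL_API_KEY"]

# Authorization header in the documented "Bearer YOUR_API_KEY" format.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```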
Body
Select the underlying text LLM. If not set, defaults to gpt-4o.
Available options: gpt-4o, gpt-4o-mini, claude-3.5-sonnet, claude-3-haiku
Select the underlying speech-to-speech model. Can only set this or model, not both.
Available options: gpt-4o-realtime
If set, controls the randomness of the response. Value ranges from [0, 1]. A lower value means more deterministic output, while a higher value means more random output. If unset, the default value 0 will apply. Note that for tool calling, a lower value is recommended.
Only applicable when model is gpt-4o or gpt-4o-mini. If set to true, structured output will be used to make sure tool call arguments follow the JSON schema. Saving a new tool or changing a tool will take longer, as additional processing is needed. Defaults to false.
General prompt appended to the system prompt no matter what state the agent is in.
- System prompt (with state) = general prompt + state prompt.
- System prompt (no state) = general prompt.
A list of tools the model may call (to get external knowledge, call an API, etc.). You can select from common predefined tools like end call, transfer call, etc., or you can create your own custom tool (last option) for the LLM to use.
- Tools of LLM (with state) = general tools + state tools + state transitions
- Tools of LLM (no state) = general tools
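For illustration, a hedged sketch of what a general tools list might look like in the request body. The field names below (type, name, description, url, parameters) and the hypothetical endpoint are assumptions for the example, not definitions from this page; check the tool schema in the full API reference for the exact shape.

```python
# Illustrative only: field names here are assumptions, not confirmed by this page.
general_tools = [
    {
        # Predefined tool: let the agent end the call.
        "type": "end_call",
        "name": "end_call",
        "description": "End the call when the user says goodbye.",
    },
    {
        # Custom tool: the LLM can call your own API endpoint.
        "type": "custom",
        "name": "book_appointment",
        "description": "Book an appointment for the caller.",
        "url": "https://example.com/book",  # hypothetical endpoint
        "parameters": {
            "type": "object",
            "properties": {"time": {"type": "string"}},
            "required": ["time"],
        },
    },
]
```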
States of the LLM. This helps reduce prompt length and tool choices when the call can be broken into distinct states. With shorter prompts and fewer tools, the LLM can better focus and follow the rules, minimizing hallucination. If this field is not set, the agent will only have the general prompt and general tools (essentially one state).
Name of the starting state. Required if states is not empty.
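To illustrate how states and the starting state fit together, here is a hedged sketch. The nested field names (state_prompt, edges, destination_state_name) are assumptions for the example rather than definitions from this page:

```python
# Illustrative only: nested field names are assumptions, not confirmed by this page.
states = [
    {
        "name": "collect_info",
        "state_prompt": "Collect the caller's name and reason for calling.",
        "edges": [
            {
                "destination_state_name": "book_appointment",
                "description": "Transition once the caller's info is collected.",
            }
        ],
    },
    {
        "name": "book_appointment",
        "state_prompt": "Help the caller pick an appointment time.",
    },
]

# Name of the starting state; required because states is not empty.
starting_state = "collect_info"
```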
First utterance said by the agent in the call. If not set, the LLM will dynamically generate a message. If set to "", the agent will wait for the user to speak first.
For inbound phone calls, if this webhook is set, Retell will POST to it to retrieve dynamic variables to use for the call. Without this, there is no way to pass dynamic variables for inbound calls.
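A minimal sketch of such a webhook, assuming it should return a flat JSON object mapping dynamic variable names to string values; the exact request and response payloads are not specified on this page, so treat the shapes and variable names below as assumptions. Uses Flask for brevity:

```python
# Illustrative only: request/response shapes are assumptions, not confirmed by this page.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/retell/dynamic-variables", methods=["POST"])
def dynamic_variables():
    payload = request.get_json(silent=True) or {}
    # Look up caller-specific values, e.g. by the caller's phone number if present.
    caller_number = payload.get("from_number", "unknown")
    return jsonify({
        "customer_name": "Jane Doe",      # hypothetical dynamic variable
        "caller_number": caller_number,   # hypothetical dynamic variable
    })

if __name__ == "__main__":
    app.run(port=8000)
```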
A list of knowledge base ids to use for this resource. Set to null to remove all knowledge bases.
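Putting the body fields together, a hedged end-to-end sketch of the request using Python's requests library. The endpoint URL and the exact field names (model_temperature, general_prompt, begin_message, llm_id) are assumptions for the example, not confirmed by this page:

```python
# Illustrative only: the URL and field names below are assumptions, not confirmed by this page.
import os
import requests

api_key = os.environ["RETELL_API_KEY"]  # illustrative environment variable name

body = {
    "model": "gpt-4o",                  # underlying text LLM
    "model_temperature": 0,             # assumed field name for the temperature setting
    "general_prompt": "You are a helpful scheduling assistant.",
    "begin_message": "Hi, thanks for calling. How can I help?",  # assumed field name
}

response = requests.post(
    "https://api.retellai.com/create-retell-llm",  # assumed endpoint URL
    headers={
        "Authorization": f"Bearer {api_key}",      # documented header format
        "Content-Type": "application/json",
    },
    json=body,
)
response.raise_for_status()

created = response.json()
print(created.get("llm_id"))  # unique id of the created Retell LLM (assumed field name)
```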
Response
Unique id of Retell LLM.
Last modification timestamp (milliseconds since epoch). Either the time of the last update, or the creation time if no updates have been made.
Select the underlying text LLM. If not set, defaults to gpt-4o.
Available options: gpt-4o, gpt-4o-mini, claude-3.5-sonnet, claude-3-haiku
Select the underlying speech-to-speech model. Can only set this or model, not both.
Available options: gpt-4o-realtime
If set, controls the randomness of the response. Value ranges from [0, 1]. A lower value means more deterministic output, while a higher value means more random output. If unset, the default value 0 will apply. Note that for tool calling, a lower value is recommended.
Only applicable when model is gpt-4o or gpt-4o-mini. If set to true, structured output will be used to make sure tool call arguments follow the JSON schema. Saving a new tool or changing a tool will take longer, as additional processing is needed. Defaults to false.
General prompt appended to the system prompt no matter what state the agent is in.
- System prompt (with state) = general prompt + state prompt.
- System prompt (no state) = general prompt.
A list of tools the model may call (to get external knowledge, call an API, etc.). You can select from common predefined tools like end call, transfer call, etc., or you can create your own custom tool (last option) for the LLM to use.
- Tools of LLM (with state) = general tools + state tools + state transitions
- Tools of LLM (no state) = general tools
States of the LLM. This helps reduce prompt length and tool choices when the call can be broken into distinct states. With shorter prompts and fewer tools, the LLM can better focus and follow the rules, minimizing hallucination. If this field is not set, the agent will only have the general prompt and general tools (essentially one state).
Name of the starting state. Required if states is not empty.
First utterance said by the agent in the call. If not set, the LLM will dynamically generate a message. If set to "", the agent will wait for the user to speak first.
For inbound phone calls, if this webhook is set, Retell will POST to it to retrieve dynamic variables to use for the call. Without this, there is no way to pass dynamic variables for inbound calls.
A list of knowledge base ids to use for this resource. Set to null to remove all knowledge bases.