You can easily create an agent and a Retell LLM object in the dashboard. Right now the dashboard enforces a one-to-one mapping between agent and LLM, but
through the API multiple agents can share the same LLM.
The API exposes the agent_id, llm_id, and llm_url for programmatic use.
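To make the many-to-one relationship concrete, here is a minimal sketch; the `AgentRef` type and the agent names are purely illustrative (not part of the SDK), and only `llm_id` mirrors an identifier mentioned in this guide:

```typescript
// Illustrative only: two agent configurations pointing at the same LLM.
// AgentRef is our own sketch type, not a retell-sdk type.
interface AgentRef {
  agent_name: string;
  llm_id: string;
}

const sharedLlmId = "YOUR_LLM_ID";

const supportAgent: AgentRef = { agent_name: "Support Line", llm_id: sharedLlmId };
const salesAgent: AgentRef = { agent_name: "Sales Line", llm_id: sharedLlmId };

// Both agents reuse one LLM, which the dashboard's one-to-one mapping can't express.
console.log(supportAgent.llm_id === salesAgent.llm_id); // → true
```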
A single-prompt LLM is suitable for simple use cases where the call has one core task or topic and only a few tools the LLM needs to access.
Read more about how to define tools in Tool Calling.
```typescript
// Install the SDK: https://docs.retellai.com/get-started/sdk
import Retell from "retell-sdk";
import { LlmResponse } from "retell-sdk/resources/llm.mjs";

const retellClient = new Retell({
  apiKey: "YOUR_API_KEY",
});

const llm: LlmResponse = await retellClient.llm.create({
  general_prompt:
    "You are a friendly agent that helps people retrieve weather information.",
  begin_message: "Hi, I'm your virtual weather assistant, how can I help you?",
  general_tools: [
    {
      type: "end_call",
      name: "end_call",
      description:
        "Hang up the call, triggered only after the weather information has been delivered.",
    },
    {
      type: "custom",
      name: "get_weather",
      description:
        "Get the current weather, called when the user asks about the weather in a specific city.",
      parameters: {
        type: "object",
        properties: {
          city: {
            type: "string",
            description: "The city for which the weather is to be fetched.",
          },
        },
        required: ["city"],
      },
      speak_during_execution: true,
      speak_after_execution: true,
      url: "http://your-server-url-here/get_weather",
    },
  ],
});
console.log(llm);
```
Alternatively, you can create a stateful multi-prompt LLM, which suits more complicated use cases where the call has multiple stages or themes, each with access to a different set of tools.
Read more about how to define states in LLM States.
```typescript
// Install the SDK: https://docs.retellai.com/get-started/sdk
import Retell from "retell-sdk";
import { LlmResponse } from "retell-sdk/resources/llm.mjs";

const retellClient = new Retell({
  // Find the key in the dashboard
  apiKey: "YOUR_API_KEY",
});

const llm: LlmResponse = await retellClient.llm.create({
  general_prompt: "You are ...",
  general_tools: [
    {
      type: "end_call",
      name: "end_call",
      description: "End the call with the user only when the user explicitly requests it.",
    },
  ],
  states: [
    {
      name: "information_collection",
      state_prompt: "You will follow the steps below to collect information...",
      edges: [
        {
          destination_state_name: "appointment_booking",
          description:
            "Transition to book an appointment if the user is due for an annual checkup based on the last checkup time collected.",
        },
      ],
      tools: [
        {
          type: "transfer_call",
          name: "transfer_to_support",
          description:
            "Transfer to the support team when the user seems angry or explicitly requests a human agent.",
          number: "16175551212",
        },
      ],
    },
    {
      name: "appointment_booking",
      state_prompt: "You will follow the steps below to book an appointment...",
      tools: [
        {
          type: "book_appointment_cal",
          name: "book_appointment",
          description:
            "Book an annual checkup once the user has provided name, email, and phone number, and has selected a time.",
          cal_api_key: "cal_live_xxxxxxxxxxxx",
          event_type_id: 60444,
          timezone: "America/Los_Angeles",
        },
      ],
    },
  ],
  starting_state: "information_collection",
  begin_message: "Hey, I am a virtual assistant calling from Retell Hospital.",
});
console.log(llm);
```
To use the LLM you've created, obtain the llm_websocket_url either from the API response or
by copying it from the dashboard. Then supply it to an agent, the entity that actually conducts
voice conversations and that you can further customize. Read more in the Agent Guide.
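As a sketch of that wiring, the payload below shows how the websocket URL links an agent to your LLM; the `CreateAgentPayload` type is our own illustrative shape, and the voice_id and agent_name values are placeholders, not the definitive create-agent schema:

```typescript
// Sketch: the agent creation payload that links an agent to your Retell LLM.
// CreateAgentPayload is an illustrative type, not exported by retell-sdk.
interface CreateAgentPayload {
  llm_websocket_url: string; // from the LLM API response or the dashboard
  voice_id: string;          // any voice available in your dashboard
  agent_name?: string;       // optional display name
}

const payload: CreateAgentPayload = {
  llm_websocket_url: "YOUR_LLM_WEBSOCKET_URL",
  voice_id: "YOUR_VOICE_ID",
  agent_name: "Weather Agent",
};

// With the SDK, a payload like this would be passed to the create-agent endpoint.
console.log(payload.llm_websocket_url);
```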
Set begin_message in Create Retell LLM to a string.
You can use Dynamic Variables to customize this message for each call (e.g. Hey {{user_name}}, welcome to Retell.).
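To illustrate what that substitution produces: Retell fills in the variables server-side for each call, so `renderBeginMessage` below is only a local stand-in for that behavior, not an SDK function.

```typescript
// Illustrative sketch: how dynamic variables in begin_message get filled in.
// Retell performs this substitution server-side per call; renderBeginMessage
// is our own helper name, used here only to show the idea.
function renderBeginMessage(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in vars ? vars[key] : match, // unknown variables are left untouched
  );
}

const beginMessage = "Hey {{user_name}}, welcome to Retell.";
console.log(renderBeginMessage(beginMessage, { user_name: "Ada" }));
// → "Hey Ada, welcome to Retell."
```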