The LLM Playground provides a convenient environment for testing your AI agents without making actual web or phone calls. This interactive testing interface enables:

  • Rapid prototyping and debugging of agent responses
  • Testing different conversation scenarios
  • Immediate feedback on agent behavior
  • Faster development iterations

Step 1: Access the LLM Playground

  1. Navigate to your agent’s detail page
  2. Click on the “Test LLM” tab
  3. (Optional) Choose “Manual Chat” if you are using a conversation flow agent
  4. You’ll see the chat interface where you can start testing
[Screenshot: LLM Playground interface]

Step 2: Test Basic Conversations

  1. Type your message in the input field
  2. Observe the agent’s response

Step 3: Test Function Calling

  1. Use prompts that should trigger specific functions
  2. Verify that functions are called with the correct parameters (see the sketch below)
[Screenshot: function calling test]
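
To make this concrete, here is a minimal sketch of a function-calling check. The `book_appointment` function, its parameter schema, and the trigger phrase are all illustrative assumptions, not part of the product:

```python
# Hypothetical function attached to the agent. The name and parameter
# schema below are illustrative assumptions, not a documented contract.
book_appointment = {
    "name": "book_appointment",
    "description": "Book an appointment for the caller.",
    "parameters": {
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "ISO date, e.g. 2024-07-01"},
            "time": {"type": "string", "description": "24-hour time, e.g. 14:30"},
        },
        "required": ["date", "time"],
    },
}

# Prompt to type in the playground:
#   "Can you book me in for July 1st at 2:30 pm?"
#
# In the playground's function-call output, verify both the function name
# and that the arguments were parsed into the expected formats:
expected_call = {
    "name": "book_appointment",
    "arguments": {"date": "2024-07-01", "time": "14:30"},
}
```

Checking the argument values, not just that a function fired, catches prompts that invoke the right function with the wrong parameters.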

Step 4: Iterate and Refine

  1. Monitor agent behavior and responses
  2. Update prompts or functions as needed
  3. Click the “delete” button to reset the conversation
  4. Test the updated behavior
[Screenshot: iterate and test]

Step 5: Save Test Cases

  1. Click the “Save” button to store your test conversation
  2. Add a descriptive name for the test case
  3. Access saved tests from the agent detail page
[Screenshot: save a test case]

[Screenshot: saved test cases]

Step 6: Test Dynamic Variables

  1. Use dynamic variables in your prompts
  2. Verify that variables are properly interpolated (see the sketch below)
  3. Test different variable values and scenarios
[Screenshot: dynamic variables]
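
As a rough illustration of what to verify, here is a minimal interpolation sketch. The `{{name}}` placeholder syntax and the variable names are assumptions for illustration, so check your agent’s prompt for the syntax it actually uses:

```python
# A minimal sketch of prompt interpolation. The {{name}} placeholder syntax
# and the variable names are illustrative assumptions, not a documented
# contract.
prompt_template = (
    "You are a support agent for {{company_name}}. "
    "Greet the caller by name: {{customer_name}}."
)

def interpolate(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

# Two scenarios to try in the playground:
print(interpolate(prompt_template, {"company_name": "Acme", "customer_name": "Ada"}))
print(interpolate(prompt_template, {"company_name": "Acme"}))  # missing variable
```

The second call shows the failure mode worth testing for: a missing variable leaving raw `{{...}}` text in the prompt.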

Best Practices

  • Start with simple conversations and gradually test more complex scenarios
  • Save important test cases for regression testing (one way to replay them in code is sketched below)
  • Test edge cases and error handling
  • Document unexpected behaviors for future reference
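
If your agent is also reachable programmatically, saved test cases can double as an automated regression suite. In the minimal sketch below, everything is hypothetical: `send_message` stands in for whatever chat endpoint you expose, and the test-case format is an assumption rather than the platform’s saved-test format:

```python
# A minimal regression-replay sketch. send_message() is a hypothetical
# helper you would implement against your own chat endpoint; the test-case
# format is likewise an assumption.
TEST_CASES = [
    {"name": "greeting", "prompt": "Hi there", "expect_substring": "help"},
    {"name": "booking", "prompt": "Book me for July 1st at 2:30 pm",
     "expect_substring": "July 1"},
]

def send_message(agent_id: str, text: str) -> str:
    """Hypothetical helper: call your own chat endpoint and return the reply.
    Stubbed with a canned reply so the sketch runs end to end."""
    return "Hello! How can I help you today?"

def run_regression(agent_id: str) -> None:
    """Replay each saved prompt and flag replies missing the expected text."""
    for case in TEST_CASES:
        reply = send_message(agent_id, case["prompt"])
        status = "PASS" if case["expect_substring"] in reply else "FAIL"
        print(f"{status}: {case['name']}")

run_regression("agent_123")
```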