Testing Agents
The SipPulse AI platform offers two testing environments to validate your agents before integrating them with production channels like WhatsApp, SIP or web widgets:
- Chat Playground: Test text-based interactions
- Voice Playground: Test real-time voice interactions
Accessing the Playgrounds
- Navigate to Agents and select an agent
- To access the Chat Playground: click on the agent name or ⋮ → Open Chat
- To access the Voice Playground: click ⋮ → Voice Playground
Switching Between Playgrounds
On the agent page, use the Chat and Voice buttons at the top of the conversation area to switch between modes.
Chat Playground
The Chat Playground is your sandbox for testing text-based interactions, checking responses and validating agent behavior step by step.
Interface Anatomy
| Region | Location | Function |
|---|---|---|
| Conversation List | Left side | Independent histories; useful for testing variations. |
| Message Flow | Center | Where you chat with the Agent in real time. |
| Control Bar | Top section | Manages conversation (new, refresh, close, reopen, information). |
| Information Sidebar | Right side (visible only with the ℹ️ button active) | Context metrics, costs, status, model and tools at that moment. |
Control Bar
| Icon | Action | Notes |
|---|---|---|
| ➕ | New conversation | Creates a blank history. |
| 🔄 | Refresh | Reloads messages. |
| ⏹ | Close conversation | Marks conversation as Closed. Does not accept new messages until reopened. If there is a Post-Analysis, it runs now. |
| ▶️ | Reopen conversation | Appears only when status is Closed. Allows continuing the interaction. |
| ℹ️ | Information | Shows/hides the Information Sidebar. |
Conversation List
- Retention: Free plan users keep conversations for 7 days; then they are removed.
- Visible status: The badge on the right indicates one of four possible states:
  - Pending: waiting for the first external interaction
  - Active: open and waiting for a response
  - Running: the Agent is generating a response
  - Closed: ended by the user or the system
Information Sidebar
The Sidebar displays a snapshot of the configurations that the Agent had at the moment the conversation was created. Later changes to the Agent's configurations (model, tools, parameters, etc.) are not applied retroactively — they will only take effect in new conversations.
| Field | What it shows |
|---|---|
| Triggered by | Origin channel (for tests: UI). If the channel is external (WhatsApp, SIP), the input field is disabled and chat becomes read-only. |
| Status | One of the four states listed above. |
| Model | LLM used in that conversation (does not change even if the Agent is edited). |
| Context | Tokens used / model limit. Useful to predict context cuts. |
| Total cost | Sum of input + output tokens converted to currency. |
| Speed | Average tokens/s during generation. |
| Tools | List of available tool calls at the moment the conversation was created. |
| Post-Analysis | Structured result, if configured and the conversation is closed. |
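The Context, Total cost and Speed fields can be reproduced from raw token counts. A minimal sketch, where the per-1k-token prices, context limit and generation time are hypothetical inputs (the real values depend on the model chosen for the conversation):

```python
def conversation_metrics(input_tokens, output_tokens, context_limit,
                         generation_seconds,
                         price_in_per_1k, price_out_per_1k):
    """Derive the sidebar metrics from raw token counts.

    Prices and the context limit are hypothetical here; the real
    values come from the model selected for the conversation.
    """
    used = input_tokens + output_tokens
    return {
        # Context: tokens used as a percentage of the model limit
        "context_pct": round(100 * used / context_limit, 1),
        # Total cost: input + output tokens converted to currency
        "total_cost": round(input_tokens / 1000 * price_in_per_1k
                            + output_tokens / 1000 * price_out_per_1k, 4),
        # Speed: average tokens/s during generation
        "tokens_per_s": round(output_tokens / generation_seconds, 1),
    }

# Example: 3,000 input + 1,000 output tokens on an 8,192-token model,
# generated in 20 s at $0.50 / $1.50 per 1k tokens
m = conversation_metrics(3000, 1000, 8192, 20.0, 0.5, 1.5)
# m["context_pct"] → 48.8
```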
Additional Instructions & Variables
Before the first message you can add additional instructions that are concatenated to the System Instructions for that conversation only.
Usage examples:
- Pass temporary context: "This conversation is about order #12345."
- Personalize the conversation: "You are talking to Mariana."
If the agent has variables in its instructions, fill in the values before starting the conversation.
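Conceptually, the effective prompt for a conversation is the System Instructions with variable values substituted in and the additional instructions appended. The helper below is a hypothetical sketch of that assembly, not the platform's actual implementation:

```python
def build_instructions(system_instructions, additional="", variables=None):
    """Hypothetical sketch: substitute variable values into the System
    Instructions, then append the per-conversation additions."""
    text = system_instructions
    for name, value in (variables or {}).items():
        text = text.replace("{{" + name + "}}", value)
    if additional:
        text += "\n\n" + additional
    return text

prompt = build_instructions(
    "You are a support agent for {{company}}.",
    additional="This conversation is about order #12345.",
    variables={"company": "Acme"},
)
```

The `{{name}}` placeholder syntax and the `Acme` value are illustrative; use whatever variable names your agent's instructions actually define.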
Voice Playground
The Voice Playground lets you test real-time voice interactions. It is ideal for validating agents that will be deployed via SIP or other voice channels.
Prerequisites
To use the Voice Playground, your agent must have:
- TTS model configured: Navigate to the Voice tab in the agent settings
- TTS voice selected: Choose the desired voice for responses
Required Configuration
The Connect button remains disabled if the agent does not have a voice (TTS) configuration.
Voice Playground Interface
Transcription Area
Displays in real time:
- Your speech (transcribed automatically)
- Agent responses (text and audio)
Control Bar
| Element | Function |
|---|---|
| Allow Microphone | Requests browser permission (first use) |
| Mute/Unmute | Silences or activates your microphone |
| Audio Visualizer | Shows the waveform of your microphone input |
| State Indicator | Displays the current agent state |
| Connect/Disconnect | Starts or ends the voice session |
Agent States
During a voice conversation, the indicator shows the current state:
| State | Color | Description |
|---|---|---|
| Idle | Gray | Agent is waiting |
| Listening | Green | Agent is listening to your speech |
| Thinking | Purple | Agent is processing a response |
| Speaking | Blue | Agent is speaking |
Usage Flow
Allow Microphone
- Click "Allow Microphone" the first time
- Accept the browser permission prompt
- The audio visualizer appears when the microphone is active
Connect to the Agent
- Click Connect to start the session
- The system connects via LiveKit for real-time audio
- Wait for the connection to be established
Talk by Voice
- Speak normally into your microphone
- Your speech is transcribed automatically
- The agent responds via audio and text
Disconnect
- Click Disconnect to end the session
- The conversation is saved in the conversation list
Session Settings
Click the settings icon to access:
- Additional instructions: Extra context for this session
- Variables: Values for the variables defined in the agent
Note
Changes to session settings take effect only before connecting. If you are already connected, disconnect first, adjust the settings, then reconnect.
Testing Best Practices
General
- Test before production – cover all critical scenarios, exceptions and alternative flows.
- Compare models – some tasks work better on larger models; others require lower latency or cost. Create separate conversations for each model and evaluate.
- Monitor context and cost – check if the Agent exceeds 75% of the model's token limit; simplify instructions when necessary.
- Validate tools – ensure each tool call returns the expected format and that the Agent handles errors (timeouts, empty responses, etc.).
- Document learnings – record failures and applied adjustments to build a history of improvements.
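The 75% context guideline above is easy to automate if you log token usage during tests. A minimal sketch (the function name and threshold default are illustrative):

```python
def context_headroom_ok(tokens_used, model_limit, threshold=0.75):
    """Return True while the conversation stays under the guideline
    threshold (75% of the model's token limit by default)."""
    return tokens_used <= threshold * model_limit

# ~73% of an 8,192-token limit: still within the guideline
assert context_headroom_ok(6000, 8192)
# ~79%: time to simplify instructions or trim context
assert not context_headroom_ok(6500, 8192)
```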
Chat Playground
- Test error scenarios (empty responses, timeouts).
- Validate behavior with different types of input.
- Use additional instructions to simulate specific contexts.
Voice Playground
- Test in a quiet environment for better speech recognition.
- Validate the agent's response latency.
- Test different accents and speaking speeds.
- Confirm that the selected TTS voice is appropriate for the use case.
- Verify behavior when you interrupt the agent mid-response.
Post-Analysis
When closing a conversation (in either the Chat or Voice Playground), the system automatically executes the structured analyses configured in the Agent. The result appears in the Sidebar for consultation.
For more information on structured analyses, see Structured Analysis.
Related Resources
- Agent Configuration – Complete reference for profile, instructions, tools and parameters.
- Structured Analysis – How to create automatic metrics for each conversation.
- Agent Deployment – Connect your Agent to WhatsApp, HTTP API and more.
- Conversation Monitoring – Track conversations in real time.
