Testing the Agent

The Test Chat is your sandbox for checking, step by step, whether an Agent responds as expected before you integrate it into production channels like WhatsApp, SIP, or Web widgets.

When you create an Agent, the platform automatically opens a page to interact with it. You can also reach this page at any time by clicking the Agent's name or the (⋮ → Open chat) button.

Interface Anatomy

| Region | Location | Function |
| --- | --- | --- |
| Conversation List | Left side | Independent histories; useful for testing variations. |
| Message Flow | Center | Where you chat with the Agent in real time. |
| Control Bar | Top section | Manages the conversation (new, refresh, close, reopen, information). |
| Information Sidebar | Right side (visible only with the ℹ️ button active) | Context metrics, costs, status, model, and tools at that moment. |

Control Bar

| Icon | Action | Notes |
| --- | --- | --- |
|  | New conversation | Creates a blank history. |
| 🔄 | Refresh | Reloads the messages. |
|  | Close conversation | Marks the conversation as Closed; it does not accept new messages until reopened. If Post-Analysis is configured, it is executed now. |
| ▶️ | Reopen conversation | Appears only when the status is Closed. Allows continuing the interaction. |
| ℹ️ | Information | Shows/hides the Information Sidebar. |

Conversation List

  • Retention: Free plan users keep conversations for 7 days; then they are removed.

  • Visible status: The badge on the right indicates one of four possible states:

    • Pending — waiting for first external interaction
    • Active — open and waiting for a response
    • Running — the Agent is generating a response
    • Closed — ended by user or system
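
For scripted testing, the four badge states can be modeled as a simple enum. The sketch below is purely illustrative — the enum and the `can_send_message` helper are assumptions for test tooling, not part of any SipPulse API; only the state names and the "Closed rejects messages" rule come from this page.

```python
from enum import Enum

class ConversationStatus(Enum):
    """Badge states shown in the Conversation List (illustrative only)."""
    PENDING = "Pending"   # waiting for first external interaction
    ACTIVE = "Active"     # open and waiting for a response
    RUNNING = "Running"   # the Agent is generating a response
    CLOSED = "Closed"     # ended by user or system

def can_send_message(status: ConversationStatus) -> bool:
    # A Closed conversation does not accept new messages until reopened.
    return status != ConversationStatus.CLOSED

print(can_send_message(ConversationStatus.ACTIVE))   # True
print(can_send_message(ConversationStatus.CLOSED))   # False
```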

Information Sidebar

The Sidebar displays a snapshot of the configurations that the Agent had at the moment when the conversation was created. Later changes to the Agent's configurations (model, tools, parameters, etc.) are not applied retroactively to this conversation — they will only apply to new conversations.

| Field | What it shows |
| --- | --- |
| Triggered by | Origin channel (UI for tests). If the channel is external (WhatsApp, SIP), the input field is disabled and the chat becomes read-only. |
| Status | One of the four states listed above. |
| Model | LLM used in that conversation (does not change even if the Agent is later edited). |
| Context | Tokens used / model limit. Useful for predicting context cuts. |
| Total cost | Sum of input + output tokens converted to currency. |
| Speed | Average tokens/s during generation. |
| Tools | Tool calls available at the moment the conversation was created. |
| Post-Analysis | Structured result, shown if configured and the conversation is closed. |
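
The Context and Total cost fields can be reproduced offline when you know the token counts. The helper below is a hypothetical sketch: the per-1,000-token prices are made-up placeholders, not SipPulse or model-provider rates.

```python
def context_usage(tokens_used: int, model_limit: int) -> float:
    """Fraction of the model's context window consumed (0.0 to 1.0)."""
    return tokens_used / model_limit

def total_cost(input_tokens: int, output_tokens: int,
               input_price: float, output_price: float) -> float:
    """Cost in currency units; prices are per 1,000 tokens (placeholder values)."""
    return (input_tokens / 1000) * input_price + (output_tokens / 1000) * output_price

# Example with made-up numbers: 6,000 of 8,000 tokens used.
print(f"Context: {context_usage(6000, 8000):.0%}")            # Context: 75%
print(f"Cost: {total_cost(5000, 1000, 0.01, 0.03):.4f}")      # Cost: 0.0800
```

A quick check like this is also handy for best practice #3 below: flagging conversations that cross 75% of the model's token limit.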

Additional Instructions & Dynamic Variables

Before the first message, a + Additional instructions option appears. The text you type there is concatenated to the System Instructions for this conversation only.

Usage examples

  • Pass temporary context: "This conversation is about order #12345."
  • Personalize the conversation: "You are talking to Mariana."

If the instructions contain variables, a field to fill in their values appears before the conversation starts. The variables are replaced automatically at the beginning of the conversation.
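
Variable filling behaves like simple template substitution. The sketch below assumes a {{name}}-style placeholder syntax, which is an assumption for illustration, not documented SipPulse behavior.

```python
import re

def fill_variables(instructions: str, values: dict) -> str:
    """Replace {{variable}} placeholders with user-supplied values.
    The {{...}} syntax is assumed for illustration."""
    def replace(match):
        name = match.group(1)
        # Leave unknown variables untouched rather than dropping them.
        return values.get(name, match.group(0))
    return re.sub(r"\{\{(\w+)\}\}", replace, instructions)

template = "This conversation is about order {{order_id}} for {{customer}}."
print(fill_variables(template, {"order_id": "12345", "customer": "Mariana"}))
# This conversation is about order 12345 for Mariana.
```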

Post-Analysis (Optional)

When a conversation is closed, the system automatically executes the structured analyses configured in the Agent. The result appears in the Sidebar for review.

Testing Best Practices

  1. Test before production – cover all critical scenarios, exceptions and alternative flows.
  2. Compare models – some tasks work better on larger models; others require lower latency or cost. Create separate conversations for each model and evaluate.
  3. Monitor context and cost – check if the Agent exceeds 75% of the model's token limit; simplify instructions when necessary.
  4. Validate tools – ensure each tool call returns the expected format and that the Agent handles errors (timeouts, empty responses, etc.).
  5. Document learnings – record failures and applied adjustments to compose a history of improvements.

© 2025 SipPulse AI – All rights reserved