SipPulse AI Intelligent Agents

This document is the central reference guide for understanding, configuring, and operating Agents on the SipPulse AI platform. It details fundamental concepts, features, and best practices to maximize the potential of your Agents.

What is a SipPulse AI Agent?

An Agent in SipPulse AI is a sophisticated software entity that acts as an intelligent orchestrator. Using a Large Language Model (LLM) as its computational "brain," an Agent is designed to:

  1. Conduct Coherent Dialogues: Maintain natural and contextually relevant conversations with users
  2. Manage Memory and Context: Retain information from past interactions to inform future responses, ensuring continuity
  3. Execute Tools (Tool Calls): Interact autonomously with external and internal systems, such as APIs, knowledge bases (RAG), and telephony features (SIP)

This ability to integrate dialogue, memory, and action into a unified flow fundamentally distinguishes an Agent from a "pure" LLM.
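The three capabilities above can be pictured as a single loop: the Agent keeps a history (memory), asks the LLM what to do next (dialogue), and runs tools when the LLM requests them (action). The sketch below is illustrative only; `call_llm`, `TOOLS`, and `agent_turn` are hypothetical names, not SipPulse AI APIs, and the LLM call is stubbed with a canned response:

```python
# Minimal sketch of the dialogue/memory/tool loop an Agent runs internally.
# All names here (call_llm, TOOLS, agent_turn) are illustrative stand-ins.

def call_llm(history):
    """Placeholder for a chat-completion call; a real implementation would
    return either a final reply or a tool request. Stubbed for illustration."""
    return {"type": "final", "content": "ack: " + history[-1]["content"]}

# Registry mapping tool names to callables the Agent may execute.
TOOLS = {"fetchOrderStatus": lambda order_number: {"status": "shipped"}}

def agent_turn(history, user_message):
    history.append({"role": "user", "content": user_message})   # memory
    while True:
        result = call_llm(history)                              # dialogue
        if result["type"] == "tool_call":                       # action
            output = TOOLS[result["name"]](**result["arguments"])
            history.append({"role": "tool", "content": str(output)})
            continue  # feed the tool result back to the LLM
        history.append({"role": "assistant", "content": result["content"]})
        return result["content"]
```

The key point is that the loop lives inside the Agent: the caller supplies a message and receives a reply, with tool execution and history management handled in between.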

LLM vs. SipPulse AI Agent

Understanding the distinction between a "pure" LLM and a SipPulse AI Agent is crucial for architecting effective solutions:

| Feature | "Pure" LLM (Traditional) | SipPulse AI Agent |
| --- | --- | --- |
| Main Purpose | Text generation based on a prompt | Orchestration of tasks, dialogue, and tool execution to achieve objectives |
| Tool Execution | Can suggest a function call (usually in JSON or text), but does not execute it | Actively executes APIs, RAG, SIP, and other configured tools |
| Memory and Context | Generally stateless; conversation history must be explicitly sent with each call | Actively maintains and manages conversation history and context |
| Flow Orchestration | The developer controls the logical flow, deciding when to call the LLM and when to execute external code | The Agent, guided by its instructions, autonomously decides when and how to use each tool |
| Data Access | Limited to its training data and what is provided in the current prompt | Can dynamically connect to internal databases, third-party APIs, calendars, CRM systems, etc. |
| Autonomy | Low; reacts to each input in isolation | High; can perform multiple steps to solve a problem or complete a task |

Strategic Tip

When using a "pure" LLM for tasks that require interaction with the outside world (e.g., fetching order data), your code needs to:

  1. Interpret the user's intent
  2. Ask the LLM to format the parameters for an API
  3. Extract these parameters from the LLM's response
  4. Call the external API
  5. Receive the API response
  6. Inject this response back into the LLM so it can formulate a reply to the user
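The six steps above amount to glue code you must write and maintain yourself. A minimal sketch, in which `llm` and `fetch_order_api` are hypothetical stand-ins (the LLM call is stubbed with canned responses):

```python
# Hand-rolled orchestration around a "pure" LLM.
# llm() and fetch_order_api() are hypothetical stand-ins, not real APIs.
import json

def llm(prompt):
    # Placeholder for a text-completion call; stubbed for illustration.
    if "Extract" in prompt:
        return '{"orderNumber": "12345"}'           # steps 1-2: intent -> params
    return "Your order 12345 has shipped."          # step 6: final reply

def fetch_order_api(order_number):
    # Steps 4-5: call the external API and receive its response.
    return {"orderNumber": order_number, "status": "shipped"}

def answer(user_message):
    # Step 3: extract the parameters from the LLM's response.
    params = json.loads(llm("Extract the order number as JSON: " + user_message))
    api_response = fetch_order_api(params["orderNumber"])
    # Step 6: inject the API response back into the LLM.
    return llm("Reply to the user given this data: " + json.dumps(api_response))
```

Every branch of this plumbing (parsing, error handling, retries) is your responsibility when working with a "pure" LLM.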

With a SipPulse AI Agent, you define the tool schema (e.g., "fetchOrderStatus" with parameter "orderNumber"). The Agent then handles parameter extraction, tool invocation, and incorporation of the result into the conversation flow transparently.
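Such a schema might look roughly like the following, written here in the JSON-Schema style used by most function-calling APIs (the exact format SipPulse AI expects may differ):

```json
{
  "name": "fetchOrderStatus",
  "description": "Returns the current status of an order given its number.",
  "parameters": {
    "type": "object",
    "properties": {
      "orderNumber": {
        "type": "string",
        "description": "The customer's order number, e.g. \"12345\""
      }
    },
    "required": ["orderNumber"]
  }
}
```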

Project Scoping

Agents belong to the active project. When you create an agent, it is associated with the project currently selected in the Project Switcher. The agents list shows only agents within the active project.

Agents Table

The agents table displays key information at a glance:

  • Name: Agent name and description
  • Deployments: Active deployment channels (SIP, WhatsApp, Chat Widget, API) shown as badges
  • Created by: Which team member created the agent
  • Status: Whether the agent is active

Copy to Projects

You can copy an agent to another project using the Copy to Projects action in the agent's context menu. This creates a duplicate of the agent (including instructions, tools, and settings) in the target project. This is useful when you want to reuse an agent configuration across multiple projects.

Templates

When creating a new agent, you can choose from pre-configured templates for common use cases (Customer Service, Sales, Healthcare, etc.). Templates include ready-to-use instructions and settings that you can customize after creation.

Import and Export Agents

The platform allows you to export and import agent configurations in JSON format, making it easy to back up, share, and migrate agents across organizations.

Exporting an Agent

To export an agent's configuration:

  1. Navigate to Agents in the side menu
  2. Locate the desired agent in the list
  3. Click the agent's actions menu
  4. Select Export Agent
  5. A JSON file will be downloaded named after the agent (e.g., Receptionist.json)

What is exported:

  • Agent name and description
  • System instructions
  • LLM model settings
  • Parameters (temperature, top_p, max_tokens)
  • Voice settings (TTS)
  • Configured tools (APIs, RAG, built-in)
  • Post-conversation analysis settings

What is NOT exported:

  • Agent ID
  • Deployment configurations (WhatsApp, SIP, Chat Widget)
  • Conversation history
  • Organization data
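Put together, an exported file might look roughly like this. The field names below are illustrative, not the actual export schema; note the absence of an agent ID and of any deployment configuration:

```json
{
  "name": "Receptionist",
  "description": "Front-desk assistant for inbound calls",
  "instructions": "You are a polite receptionist...",
  "model": "gpt-4o",
  "parameters": { "temperature": 0.7, "top_p": 1.0, "max_tokens": 1024 },
  "voice": { "provider": "default", "voice_id": "en-US-neutral" },
  "tools": [],
  "analysis": { "enabled": false }
}
```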

Importing an Agent

To import an agent from a JSON file:

  1. Navigate to Agents in the side menu
  2. Click the Import button
  3. Select the agent's .json file
  4. The creation form will be pre-filled with the settings from the file
  5. Review the settings and make any necessary adjustments
  6. Click Create to save the new agent

Use cases

Use the import/export feature to:

  • Backup: Keep copies of critical agents for safekeeping
  • Sharing: Share agent configurations with colleagues or other teams
  • Migration: Move agents between organizations or environments
  • Versioning: Save different versions of an agent during development

WARNING

When importing an agent, deployment configurations (WhatsApp, SIP, Chat Widget) are not included. You will need to reconfigure the deployment channels after importing.

Best Practices

Adopting these practices can significantly improve the effectiveness, reliability, and maintainability of your Agents:

Start Simple and Iterate

  • Why: Trying to build an overly complex Agent from the start can lead to debugging difficulties and unpredictable results
  • How: Start with a minimal set of clear instructions and one or two essential tools. Test thoroughly and add complexity gradually

Keep Instructions Focused

  • Why: Overly long instructions or unnecessary information can confuse the LLM, increase latency, and raise costs
  • How: Prioritize using tool calls to fetch dynamic information instead of "pasting" large volumes of data directly into the instructions

Prioritize Clarity in Tool Instructions

  • Why: The Agent relies on tool descriptions to understand when and how to use them. Ambiguous descriptions lead to errors
  • How: Be very clear about the tool's purpose, the parameters it accepts, and what it returns. Provide examples if possible. You don't need to repeat the tool description in the Agent's instructions, as it is already available to the LLM
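As an illustration (the field names are generic, not a SipPulse AI schema), compare a vague description with a clear one:

```json
{
  "vague": {
    "name": "getData",
    "description": "Gets data."
  },
  "clear": {
    "name": "getInvoiceByNumber",
    "description": "Fetch one invoice by its number (e.g. \"INV-2024-001\"). Use this whenever the user asks about billing or payment status. Returns the amount, due date, and payment status."
  }
}
```

The clear version tells the Agent what the tool does, when to call it, what the parameter looks like, and what comes back, which is exactly the information the LLM uses to decide on a tool call.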

Test Rigorously Before Deployment

  • Why: Agents may interact unexpectedly with users and external systems. Undetected issues can lead to poor user experiences or errors in production
  • How: Use the platform to interact with the Agent in a test environment. Simulate a variety of scenarios, including edge cases and unexpected inputs. Verify that the Agent responds correctly and executes tools as expected

Test Different Models

  • Why: Different models have distinct capabilities and limitations. What works well on one model may not work on another
  • How: Test your Agent with various models available on the platform. Compare performance in terms of accuracy, latency, and cost. Choose the model that best meets your needs