Structured Analyses
Structured Analyses allow you to extract insights and metrics from conversations or any text, returning results as JSON according to the fields you define in the visual interface, without having to deal directly with schemas.
Overview
What they are: Think of them as AI assistants you configure to read texts (such as emails, call transcripts, or chats) and extract specific information — like numbers, keywords, or yes/no answers. The result is always an organized and standardized summary (in JSON format), ready to use, without worrying about complex technical data formatting.
When to use: They are ideal for automating tasks such as:
Analyzing the sentiment expressed in customer feedback (positive, negative, neutral).
Identifying the main topics discussed in support or sales conversations.
Collecting data for custom performance indicators, quality, or any other important metric for your business.
Main benefits:
Standardization of reports and integrations: By generating deterministic results in JSON, it facilitates integration with any external system such as CRMs, dashboards, Google Sheets, or BI tools for automatic analysis of calls, conversations, and texts.
Time and resource savings: Automates processes that would be manual and laborious, allowing the AI to perform complex analyses in seconds that would take hours for humans.
Analysis flexibility: Allows you to create different types of analyses to extract specific insights according to your business needs.
Scalability: Processes large volumes of conversations consistently, maintaining analysis quality even with significant data growth.
Creating an Analysis
Click New Analysis in the upper right corner. You will see two options:
- Create new: start an analysis from scratch.
- Use template: choose one of the available templates (Summary, Translation, Text Classification, Sentiment Analysis, Entity Extraction, Structural Analysis, Discourse Analysis, Conversation Analysis, Agent Performance Analysis, etc.) as a starting point.
When selecting Create new, the creation modal opens blank. When selecting Use template, the modal opens with the template fields pre-filled — you can modify, remove, or add fields as needed.
In the creation modal, fill in:
- Model: choose a compatible model.
- Name: unique identifier (e.g., `performance_atendimento`).
- Description: purpose of the analysis (e.g., "Evaluates response time and ticket resolution"). This description is important: the LLM will use it to understand what you expect from the analysis.
Important Points
Model Compatibility: When creating an analysis, the list of models will only show those that support structured data extraction. This means not all language models (LLMs) available on the SipPulse AI platform will be compatible with this feature.
Cost Model: The use of Structured Analyses is charged per execution. The cost is calculated based on the volume of input and output tokens processed by the chosen AI model.
Define the Structure:
- Click + Field to add new results.
- Choose the Type (Number / Text / Boolean / Enum / List of text / List of numbers / etc.).
- Enter the Name of the field in `snake_case` format (e.g., `customer_name`, `total_value`). This pattern uses lowercase letters and separates words with underscores (_), which keeps the JSON results consistent and easy to read and integrate.
- Add a clear Description for the field, explaining what it represents. This helps the AI model understand what to extract and makes the results easier for other users to interpret.
- Mark Required if it is essential for your workflow.
Click Save to create the analysis.
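For illustration, suppose you defined three fields: `customer_name` (Text), `total_value` (Number), and `resolved` (Boolean) — these names and values are hypothetical examples, not required fields. The result of such an analysis would be a JSON object shaped like:

```json
{
  "customer_name": "Maria Silva",
  "total_value": 249.9,
  "resolved": true
}
```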
Tip
Templates are just inspirations and help speed up configuration. You can use, edit, or discard any pre-filled field.
Testing in the Interface
After creating your analysis, you will see it in the analyses table. To test it, follow the steps below:
- In the list of analyses, click ⚗️ Test.
- In the Run Structured Analysis modal, paste your text or transcript.
- Click Run and check the resulting JSON.
Tip
Use tests to prototype and validate the analysis structure before integrating it into production.
Integration with Agents (Post-Analysis)
Integration with agents is a very powerful feature, allowing you to automate the execution of structured analyses when a conversation ends. This is especially useful to ensure that all interactions are analyzed and reports are generated automatically.
- In Agents → Create/Edit → Post-Analysis tab, enable the toggles for the desired analyses.
- Save the agent.
Analyses will be executed whenever the conversation is closed, regardless of the method:
- Voice call: when the agent ends the call.
- Chat: invocation of `end_dialog` in WhatsApp or chat.
- UI: click the End conversation button in the conversation window.
- API: call to the endpoint `POST /threads/{id}/close` (see docs).
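For the API method, a minimal sketch of closing a thread from code (assuming the `POST /threads/{id}/close` endpoint and the `api-key` header used elsewhere in this guide) could look like:

```python
import os

import requests

BASE_URL = "https://api.sippulse.ai"

def close_thread(thread_id: str) -> dict:
    """Close a conversation thread, which triggers any post-analyses enabled on the agent."""
    response = requests.post(
        f"{BASE_URL}/threads/{thread_id}/close",
        headers={"api-key": os.environ["SIPPULSE_API_KEY"]},
    )
    response.raise_for_status()
    return response.json()
```

Once the call returns, the enabled analyses run and their results become available on the thread.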
Getting results
Via interface
If you want to see the analyses in the interface, click the information button ("i" icon, top right corner) of the conversation you are monitoring. If there is any analysis, you will see it listed in the section titled Post-Analysis.
Attention
To view analyses in the interface, make sure that:
- The conversation has been closed.
- The agent associated with the conversation has the desired analyses enabled in the Post-Analysis tab.
Otherwise, the Post-Analysis section will not be visible.
Via API
You can access the conversation via the endpoint `GET /threads/{id}` (see docs).
In the response, you will see a `post_analysis` field with the analysis results. Example response:
{
"id": "thr_0196d90af689765f974e5d4c8631d47c",
"title": "Greeting in Portuguese",
"status": "closed",
"history": [ /* messages */ ],
"post_analysis": [
{
"name": "suspicious_conversation",
"description": "Performs an analysis to determine if a given conversation is suspicious or not.",
"content": {
"score": 0,
"reason": "The conversation is cordial and does not present suspicious content."
}
},
{
"name": "sentiment",
"description": "Determines the sentiment of the conversation",
"content": {
"sentiment": "neutral"
}
}
]
}
Via Webhook
You can also receive analyses via webhook by subscribing to the `thread.closed` event. If you are not familiar with webhooks on the platform, check the Webhooks documentation.
Within the webhook, you will receive a `thread.closed` event with the following payload:
{
"event": "thread.closed",
"data": {
"id": "thr_0196d90af689765f974e5d4c8631d47c",
"title": "Greeting in Portuguese",
"status": "closed",
"history": [ /* messages */ ],
"post_analysis": [ /* analysis results */ ]
}
}
Note: omit sensitive or large fields (`history`, `agent_snapshot`) to focus on the results.
API Integration Examples
You can incorporate structured analyses into any flow in your system by simply calling the execution endpoint.
Important
- Requirements: you need an analysis already created, plus its ID, to execute it. You can find the ID in the interface by clicking the ID icon button.
- Authentication: use the `api-key` header with your API token.
- Format: send the content as JSON with a `content` field containing the text to be analyzed.
- Return: the analysis result is returned in the `content` field of the response JSON.
// Run structured analysis via API
export async function executeAnalysis(
content: string,
analysisId: string
): Promise<any> {
const baseUrl = 'https://api.sippulse.ai';
const apiKey = process.env.SIPPULSE_API_KEY;
const headers = {
'Content-Type': 'application/json',
'api-key': apiKey,
};
const response = await fetch(
`${baseUrl}/structured-analyses/${analysisId}/execute`,
{
method: 'POST',
headers,
body: JSON.stringify({ content }),
}
);
if (!response.ok) {
throw new Error(`Error running analysis: ${response.status}`);
}
return response.json();
}
// Usage example:
(async () => {
const sampleContent = "Hello! I would like to know if this service was satisfactory.";
const analysisId = "YOUR_ANALYSIS_ID";
try {
const result = await executeAnalysis(sampleContent, analysisId);
console.log("Analysis result:", JSON.stringify(result.content ?? result, null, 2));
} catch (err) {
console.error("Execution failed:", err);
}
})();
import requests
import json
import os
API_BASE = 'https://api.sippulse.ai'
SIPPULSE_API_KEY = os.environ.get('SIPPULSE_API_KEY')
def execute_analysis(content: str, analysis_id: str) -> dict:
"""Runs structured analysis via API"""
headers = {
'Content-Type': 'application/json',
        'api-key': SIPPULSE_API_KEY
}
response = requests.post(
        f"{API_BASE}/structured-analyses/{analysis_id}/execute",
headers=headers,
json={'content': content}
)
if not response.ok:
raise Exception(f"Error running analysis: {response.status_code} {response.text}")
return response.json()
# Usage example:
if __name__ == "__main__":
sample_content = "Hello! I would like to know if this service was satisfactory."
analysis_id = "YOUR_ANALYSIS_ID" # Replace with your analysis ID
if not SIPPULSE_API_KEY:
print("Error: The SIPPULSE_API_KEY environment variable is not set.")
else:
try:
result = execute_analysis(sample_content, analysis_id)
print("Analysis result:", json.dumps(result.get('content', result), indent=2))
except Exception as e:
print("Execution failed:", e)
Use Case: Audio Transcription and Analysis
In scenarios involving call transcription, you can automate the entire transcription and analysis flow in two steps:
- Transcribing the call audio.
- Extracting performance reports, sentiment, and quality indicators or any other desired metric.
- Storing the results in a database or sending them to another system.
Why use this flow:
- Automatically leverage call recordings to generate insights without manual intervention.
- Integrate with call center or CRM systems, triggering analyses as soon as the audio is available.
- Monitor service quality and business metrics (response time, satisfaction, etc.).
Recommendation: add `format=diarization` in the transcription request to separate speakers, improving the accuracy of metrics such as speaking time per participant and sentiment analysis.
How to execute:
- Direct call: send the audio via the transcription API and, upon receiving the response, call the structured analysis endpoint in the same flow.
- Via Webhook: configure a transcription webhook in Webhooks → event `asr.transcribe`. Upon receiving the webhook, extract `body.text` and trigger the analysis endpoint automatically.
import FormData from 'form-data';
import fs from 'fs';
const API_BASE = 'https://api.sippulse.ai';
const API_KEY = process.env.SIPPULSE_API_KEY;
async function transcribeAndAnalyze(
fileBuffer: Buffer,
analysisId: string
) {
// 1. Transcribe audio with diarization
const form = new FormData();
form.append('file', fileBuffer, 'audio.wav');
form.append('model', 'pulse-precision');
form.append('format', 'diarization');
const asrRes = await fetch(`${API_BASE}/asr/transcribe`, {
method: 'POST',
body: form,
    headers: {
      // let form-data supply the multipart Content-Type (including the boundary)
      ...form.getHeaders(),
      'api-key': API_KEY,
    }
});
if (!asrRes.ok) throw new Error('Transcription error');
const { text } = await asrRes.json();
// 2. Run structured analysis
const analysisRes = await fetch(
`${API_BASE}/structured-analyses/${analysisId}/execute`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json', 'api-key': API_KEY },
body: JSON.stringify({ content: text }),
}
);
if (!analysisRes.ok) throw new Error('Analysis error');
const { content } = await analysisRes.json();
return content;
}
// Full execution example
(async () => {
const fileBuffer = fs.readFileSync('audio.wav');
const analysisId = 'YOUR_ANALYSIS_ID'; // You can copy your analysis ID in the interface by clicking the ID button
try {
const results = await transcribeAndAnalyze(fileBuffer, analysisId);
console.log('Transcription and analysis results:', JSON.stringify(results, null, 2));
} catch (err) {
console.error('Error:', err);
}
})();
import requests
import json
import os
from io import BytesIO
API_BASE = 'https://api.sippulse.ai'
SIPPULSE_API_KEY = os.environ.get('SIPPULSE_API_KEY')
def transcribe_and_analyze(
    file_bytes: bytes,
    analysis_id: str
) -> dict:
    headers_asr = {
        'api-key': SIPPULSE_API_KEY
    }
# 1. Transcription with diarization
resp_asr = requests.post(
        f"{API_BASE}/asr/transcribe",
files={
'file': ('audio.wav', BytesIO(file_bytes)),
'model': (None, 'pulse-precision'),
'format': (None, 'diarization')
},
        headers=headers_asr
)
resp_asr.raise_for_status()
transcript = resp_asr.json()['text']
headers_analysis = {
'Content-Type': 'application/json',
        'api-key': SIPPULSE_API_KEY
}
# 2. Structured analysis
resp_analysis = requests.post(
        f"{API_BASE}/structured-analyses/{analysis_id}/execute",
        headers=headers_analysis,
json={'content': transcript}
)
resp_analysis.raise_for_status()
    # Return the analysis content, mirroring the TypeScript example
    data = resp_analysis.json()
    return data.get('content', data)
if __name__ == '__main__':
if not SIPPULSE_API_KEY:
print("Error: The SIPPULSE_API_KEY environment variable is not set.")
else:
        # Read an existing audio file; adjust the path as needed
        try:
            with open('audio.wav', 'rb') as f:
                file_bytes = f.read()
analysis_id = 'YOUR_ANALYSIS_ID' # Replace with your analysis ID
results = transcribe_and_analyze(file_bytes, analysis_id)
print('Transcription and analysis results:', json.dumps(results, indent=2))
except FileNotFoundError:
print("Error: The file 'audio.wav' was not found. Create an audio file with this name or adjust the path.")
except Exception as e:
print('Error:', e)
Analyzing Transcription from Webhook
By configuring a webhook for the `stt.transcription` event, you can automate the analysis of transcribed content in real time. This is especially useful for processing call recordings or audios sent for transcription via API.
How the flow works
- The audio is sent for transcription through the SipPulse API
- When the transcription is complete, a webhook is triggered to your endpoint
- Your server processes the webhook, extracts the transcribed text, and sends it for structured analysis
- The results can be stored in your database or forwarded to other systems
Important considerations
- Security validation: Always validate the webhook signature to ensure the request is legitimate
- Asynchronous processing: To avoid timeouts, process complex analyses asynchronously
- Fault tolerance: Implement queues and retries to ensure no transcription is lost
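As a minimal sketch of the queue-and-retry idea (an in-process worker; production systems would more likely use a message broker or dedicated task queue), the webhook handler can enqueue the transcribed text and return immediately, while a background worker retries failed analysis calls. The `process` callable below is a placeholder for your call to the analysis endpoint:

```python
import queue
import threading
import time

def start_worker(work_queue: queue.Queue, process, max_retries: int = 3) -> threading.Thread:
    """Consume jobs from the queue, retrying each failed job with a short linear backoff."""
    def loop():
        while True:
            job = work_queue.get()
            for attempt in range(max_retries):
                try:
                    process(job)  # e.g., POST the text to the structured-analysis endpoint
                    break
                except Exception:
                    time.sleep(0.1 * (attempt + 1))  # back off before retrying
            work_queue.task_done()
    worker = threading.Thread(target=loop, daemon=True)
    worker.start()
    return worker
```

The webhook handler then just calls `work_queue.put(text)` and returns 200 right away, keeping the HTTP response fast regardless of how long the analysis takes.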
If you are not experienced with webhooks, it is recommended to check the Webhooks documentation to understand how to configure and validate requests.
See below practical examples in TypeScript and Python to implement this flow:
import express, { Request, Response } from 'express';
import { createHmac } from 'crypto';
const app = express();
app.use(express.json());
// Shared secret to validate requests
const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET || '';
function isSignatureValid(payload: string, timestamp: string, signature: string): boolean {
const message = `${payload}:${timestamp}`;
const hmac = createHmac('sha256', WEBHOOK_SECRET).update(message).digest('hex');
return hmac === signature;
}
app.post('/webhook/stt', async (req: Request, res: Response) => {
const event = req.headers['x-event'];
const timestamp = req.headers['x-timestamp'] as string;
const signature = req.headers['x-signature'] as string;
const payload = JSON.stringify(req.body);
// Check event
if (event !== 'stt.transcription') {
    // Not the expected event: acknowledge with 200 OK without processing
    return res.status(200).send('OK');
}
// Check timestamp window (±5 minutes)
if (Math.abs(Date.now() - Date.parse(timestamp)) > 5 * 60 * 1000) {
return res.status(400).json({ error: 'Timestamp expired' });
}
// Validate signature
if (!isSignatureValid(payload, timestamp, signature)) {
return res.status(400).json({ error: 'Invalid signature' });
}
// Process webhook
const { text } = req.body.payload;
const analysisId = process.env.ANALYSIS_ID;
const baseUrl = 'https://api.sippulse.ai';
const apiKey = process.env.SIPPULSE_API_KEY;
const headers = {
'Content-Type': 'application/json',
'api-key': apiKey,
};
try {
const response = await fetch(
`${baseUrl}/structured-analyses/${analysisId}/execute`,
{
method: 'POST',
headers,
body: JSON.stringify({ content: text }),
}
);
const result = await response.json();
console.log('Analysis performed', result.content);
res.status(200).send('OK');
} catch (err) {
console.error('Error processing transcription webhook:', err);
res.status(500).send('Internal error');
}
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Express server running on port ${PORT}`));
from fastapi import FastAPI, Request, HTTPException, Header
import hashlib
import hmac
import time
import json
import os
import requests
from datetime import datetime
from typing import Optional
app = FastAPI()
# Shared secret to validate requests
WEBHOOK_SECRET = os.environ.get('WEBHOOK_SECRET', '')
def is_signature_valid(payload: str, timestamp: str, signature: str) -> bool:
message = f"{payload}:{timestamp}"
expected = hmac.new(
WEBHOOK_SECRET.encode(),
message.encode(),
hashlib.sha256
).hexdigest()
return hmac.compare_digest(expected, signature)
@app.post('/webhook/stt')
async def receive_stt_webhook(
request: Request,
x_event: Optional[str] = Header(None, alias="x-event"),
x_timestamp: Optional[str] = Header(None, alias="x-timestamp"),
x_signature: Optional[str] = Header(None, alias="x-signature")
):
# Check event
if x_event != 'stt.transcription':
# If not the expected event, return 200 OK without processing
return {"status": "ok"}
# Get request body as string
body = await request.body()
payload = body.decode('utf-8')
# Check timestamp window (±5 minutes)
timestamp_dt = datetime.fromisoformat(x_timestamp.replace('Z', '+00:00'))
time_diff = abs((datetime.now().timestamp() - timestamp_dt.timestamp()))
if time_diff > 5 * 60:
raise HTTPException(status_code=400, detail="Timestamp expired")
# Validate signature
if not is_signature_valid(payload, x_timestamp, x_signature):
raise HTTPException(status_code=400, detail="Invalid signature")
# Process webhook
data = json.loads(payload)
text = data.get('payload', {}).get('text')
analysis_id = os.environ.get('ANALYSIS_ID')
base_url = 'https://api.sippulse.ai'
api_key = os.environ.get('SIPPULSE_API_KEY')
headers = {
'Content-Type': 'application/json',
'api-key': api_key
}
try:
response = requests.post(
f"{base_url}/structured-analyses/{analysis_id}/execute",
headers=headers,
json={'content': text}
)
response.raise_for_status()
result = response.json()
print('Analysis performed', result.get('content'))
return {"status": "ok"}
except Exception as e:
print('Error processing transcription webhook:', e)
raise HTTPException(status_code=500, detail="Internal error")
if __name__ == "__main__":
import uvicorn
port = int(os.environ.get('PORT', 3000))
uvicorn.run(app, host="0.0.0.0", port=port)
Tip
To learn how to configure secrets, endpoints, and webhook events on the platform, see the Webhooks documentation.
Best Practices
- Test each analysis in the interface before enabling it in agents.
- Use clear descriptions in fields to make the JSON easier to understand.