
Post Analysis

Post Analysis automatically processes conversations after they end, extracting structured insights, summaries, and metrics. This powerful feature turns raw conversations into actionable business data without manual review.


What is Post Analysis?

When a conversation ends, Post Analysis runs a separate AI analysis pass that can:

| Capability | Example Output |
| --- | --- |
| Summarize conversations | "Customer inquired about billing, resolved by explaining payment due date" |
| Extract structured data | `{ "topic": "billing", "resolved": true, "sentiment": "satisfied" }` |
| Score interactions | Customer satisfaction: 4/5, Agent performance: 92% |
| Classify intents | Primary: `"billing_inquiry"`, Secondary: `"plan_upgrade_interest"` |
| Flag follow-ups | "Customer requested callback about enterprise pricing" |

Additional Token Cost

Post Analysis runs a separate LLM call after each conversation, which incurs additional token costs. The cost depends on conversation length and analysis complexity.


How It Works

Conversation Ends

Post Analysis Triggered

┌─────────────────────────────────────┐
│  LLM analyzes full conversation:    │
│  - Applies your analysis prompts    │
│  - Extracts structured data         │
│  - Generates summaries              │
└─────────────────────────────────────┘

Results stored and/or sent via webhook

Trigger timing:

  • Voice calls: When call ends
  • Chat: When thread is marked complete or times out
  • API threads: When explicitly closed
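For API threads, the explicit close call is what triggers analysis. A minimal sketch of building that request, assuming a hypothetical `POST /api/threads/{thread_id}/close` endpoint (check your platform's API reference for the actual path):

```javascript
// Builds the request for explicitly closing an API thread, which
// triggers Post Analysis. The endpoint path below is an assumption
// for illustration, not a documented route.
function buildCloseRequest(baseUrl, apiKey, threadId) {
  return {
    url: `${baseUrl}/api/threads/${encodeURIComponent(threadId)}/close`,
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}
```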

Configuration

Enabling Post Analysis

  1. Navigate to Agent Configuration > Advanced
  2. Find Post Analysis section
  3. Toggle Enable Post Analysis
  4. Configure your analysis schemas

Analysis Schema

Define what data to extract using JSON Schema:

```json
{
  "type": "object",
  "properties": {
    "summary": {
      "type": "string",
      "description": "2-3 sentence summary of the conversation"
    },
    "topic": {
      "type": "string",
      "enum": ["billing", "technical_support", "sales", "general", "complaint"],
      "description": "Primary topic of the conversation"
    },
    "resolved": {
      "type": "boolean",
      "description": "Whether the customer's issue was resolved"
    },
    "sentiment": {
      "type": "string",
      "enum": ["positive", "neutral", "negative"],
      "description": "Customer's overall sentiment"
    },
    "follow_up_needed": {
      "type": "boolean",
      "description": "Whether follow-up action is required"
    },
    "follow_up_notes": {
      "type": "string",
      "description": "Details about required follow-up, if any"
    }
  },
  "required": ["summary", "topic", "resolved", "sentiment"]
}
```
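If you consume extracted results downstream, a lightweight sanity check against the schema's key constraints can be sketched as follows. This is illustrative validation logic mirroring the required fields and enums above, not the platform's own validator:

```javascript
// Verify an extracted analysis object against the schema's required
// fields and enum values before using it downstream.
const REQUIRED = ["summary", "topic", "resolved", "sentiment"];
const TOPICS = ["billing", "technical_support", "sales", "general", "complaint"];
const SENTIMENTS = ["positive", "neutral", "negative"];

function validateAnalysis(a) {
  const errors = [];
  for (const field of REQUIRED) {
    if (!(field in a)) errors.push(`missing required field: ${field}`);
  }
  if ("topic" in a && !TOPICS.includes(a.topic)) {
    errors.push(`invalid topic: ${a.topic}`);
  }
  if ("sentiment" in a && !SENTIMENTS.includes(a.sentiment)) {
    errors.push(`invalid sentiment: ${a.sentiment}`);
  }
  if ("resolved" in a && typeof a.resolved !== "boolean") {
    errors.push("resolved must be boolean");
  }
  return errors; // empty array means the object passed
}
```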

Analysis Prompt

Customize instructions for the analysis:

```markdown
Analyze this customer support conversation and extract the following:

1. **Summary**: Provide a brief, factual summary of what was discussed and the outcome.

2. **Topic Classification**: Categorize into one of these topics:
   - billing: Payment, invoices, charges, refunds
   - technical_support: Product issues, bugs, how-to questions
   - sales: Pricing, plans, upgrades, new purchases
   - general: General inquiries, account management
   - complaint: Dissatisfaction, escalation requests

3. **Resolution**: Was the customer's primary question/issue resolved in this conversation?

4. **Sentiment**: Based on the customer's language and responses, gauge their overall sentiment.

5. **Follow-up**: Note if any action items or callbacks were promised.

Be objective and base your analysis on the actual conversation content.
```

Example Schemas

Customer Support Analysis

```json
{
  "type": "object",
  "properties": {
    "summary": {
      "type": "string",
      "description": "Concise summary of the support interaction"
    },
    "issue_category": {
      "type": "string",
      "enum": ["account_access", "billing", "product_bug", "feature_request", "how_to", "other"]
    },
    "resolution_status": {
      "type": "string",
      "enum": ["resolved", "escalated", "pending", "unresolved"]
    },
    "customer_effort_score": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "description": "Estimated effort customer had to exert (1=easy, 5=difficult)"
    },
    "escalation_reason": {
      "type": "string",
      "description": "If escalated, explain why"
    }
  }
}
```

Sales Qualification Analysis

```json
{
  "type": "object",
  "properties": {
    "lead_quality": {
      "type": "string",
      "enum": ["hot", "warm", "cold", "unqualified"],
      "description": "Lead qualification score"
    },
    "budget_mentioned": {
      "type": "boolean"
    },
    "budget_range": {
      "type": "string",
      "description": "Estimated budget if mentioned"
    },
    "timeline": {
      "type": "string",
      "description": "Purchase timeline if discussed"
    },
    "decision_maker": {
      "type": "boolean",
      "description": "Is the contact a decision maker?"
    },
    "competitor_mentioned": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Any competitors mentioned"
    },
    "next_step": {
      "type": "string",
      "description": "Recommended next action"
    }
  }
}
```

Quality Assurance Analysis

```json
{
  "type": "object",
  "properties": {
    "greeting_proper": {
      "type": "boolean",
      "description": "Did the agent properly greet the customer?"
    },
    "identification_verified": {
      "type": "boolean",
      "description": "Was customer identity verified when required?"
    },
    "accurate_information": {
      "type": "boolean",
      "description": "Was information provided accurate?"
    },
    "professional_tone": {
      "type": "boolean",
      "description": "Did agent maintain professional tone throughout?"
    },
    "closing_proper": {
      "type": "boolean",
      "description": "Did agent properly close the conversation?"
    },
    "compliance_issues": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Any compliance concerns noted"
    },
    "overall_score": {
      "type": "integer",
      "minimum": 0,
      "maximum": 100
    }
  }
}
```

Accessing Analysis Results

In the Platform

  1. Navigate to Conversations or Thread History
  2. Select a completed conversation
  3. View the Analysis tab to see extracted data

Via Webhook

Configure a webhook to receive analysis results automatically:

```json
{
  "event": "conversation.analyzed",
  "conversation_id": "conv_abc123",
  "agent_id": "agt_xyz",
  "timestamp": "2025-01-18T15:30:00Z",
  "analysis": {
    "summary": "Customer called about a billing discrepancy...",
    "topic": "billing",
    "resolved": true,
    "sentiment": "positive",
    "follow_up_needed": false
  }
}
```

See Webhooks for configuration details.

Via API

Retrieve analysis for a specific conversation:

```bash
GET /api/conversations/{conversation_id}/analysis
Authorization: Bearer {api_key}
```
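The same call from Node (18+, which ships a global `fetch`) can be sketched as follows; the base URL is a placeholder for your API host:

```javascript
// Build the documented analysis endpoint URL for a conversation.
function analysisUrl(baseUrl, conversationId) {
  return `${baseUrl}/api/conversations/${encodeURIComponent(conversationId)}/analysis`;
}

// Fetch the stored analysis for one conversation.
async function getAnalysis(baseUrl, apiKey, conversationId) {
  const res = await fetch(analysisUrl(baseUrl, conversationId), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```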

Best Practices

1. Keep Schemas Focused

Don't try to extract everything. Focus on data you'll actually use:

❌ Over-engineered:

```json
{
  "properties": {
    "summary": {},
    "detailed_summary": {},
    "brief_summary": {},
    "executive_summary": {},
    // ... 30 more fields
  }
}
```

✅ Focused:

```json
{
  "properties": {
    "summary": {},
    "topic": {},
    "resolved": {},
    "follow_up": {}
  }
}
```

2. Use Enums for Consistency

Structured values make reporting easier:

```json
{
  "sentiment": {
    "type": "string",
    "enum": ["positive", "neutral", "negative"]
  }
}
```

This ensures you get consistent values rather than free-text variations like "happy", "satisfied", or "good".

3. Write Clear Descriptions

The description field guides the analysis:

```json
{
  "customer_effort_score": {
    "type": "integer",
    "minimum": 1,
    "maximum": 5,
    "description": "Rate how much effort the customer had to exert: 1=issue resolved immediately, 5=customer had to repeat themselves multiple times or got transferred"
  }
}
```

4. Test with Real Conversations

Before deploying, test your schema against actual conversation transcripts to verify:

  • Fields are extractable from typical conversations
  • Categories/enums cover common cases
  • Analysis prompt produces accurate results

5. Consider Cost vs. Value

Post Analysis uses additional tokens. Balance:

  • Analyze all conversations for high-value insights
  • Sample conversations for quality monitoring
  • Skip analysis for very short interactions
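One way to express this trade-off in code, assuming your integration can decide per conversation whether to request analysis (the field names and thresholds are illustrative):

```javascript
// Cost-control policy sketch: skip trivially short interactions,
// always analyze high-value conversations, sample the rest.
function shouldAnalyze(conv, sampleRate = 0.1, rng = Math.random) {
  if (conv.messageCount < 3) return false; // skip very short interactions
  if (conv.highValue) return true;         // always analyze key conversations
  return rng() < sampleRate;               // sample the remainder for monitoring
}
```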

Use Cases

Customer Success Dashboard

Track resolution rates, sentiment trends, and common issues:

| Metric | Source Field |
| --- | --- |
| Resolution Rate | `resolved` (boolean) |
| Sentiment Trend | `sentiment` (enum) |
| Top Issues | `topic` (enum) count |
| Follow-up Queue | `follow_up_needed` filter |
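These metrics can be derived from a batch of analysis results shaped like the example schema earlier on this page, for instance:

```javascript
// Aggregate a batch of analysis results into dashboard metrics:
// resolution rate, counts by topic and sentiment, follow-up queue.
function dashboardMetrics(analyses) {
  const total = analyses.length;
  const resolved = analyses.filter((a) => a.resolved).length;
  const byTopic = {};
  const bySentiment = {};
  for (const a of analyses) {
    byTopic[a.topic] = (byTopic[a.topic] || 0) + 1;
    bySentiment[a.sentiment] = (bySentiment[a.sentiment] || 0) + 1;
  }
  return {
    resolutionRate: total ? resolved / total : 0,
    byTopic,
    bySentiment,
    followUpQueue: analyses.filter((a) => a.follow_up_needed),
  };
}
```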

Sales Pipeline Integration

Push qualified leads to your CRM:

```javascript
// Webhook handler example
if (analysis.lead_quality === 'hot' && analysis.decision_maker) {
  await crm.createOpportunity({
    contact: conversation.customer,
    source: 'AI Agent',
    budget: analysis.budget_range,
    timeline: analysis.timeline,
    notes: analysis.summary
  });
}
```

Quality Monitoring

Automatically flag conversations for review:

```javascript
// Alert on low quality scores or any flagged compliance issues
// (guard against compliance_issues being absent from the result)
if (analysis.overall_score < 70 || (analysis.compliance_issues || []).length > 0) {
  await alertTeam('QA Review Needed', conversation.id);
}
```

Troubleshooting

Analysis Not Running

  • Verify Post Analysis is enabled
  • Check that conversations are being properly closed
  • Review schema for syntax errors

Incorrect Extractions

  • Improve description fields in schema
  • Add examples to your analysis prompt
  • Ensure enum values cover all cases
  • Test with representative conversation samples

High Costs

  • Reduce schema complexity
  • Use shorter analysis prompts
  • Consider sampling instead of analyzing all conversations
  • Use smaller models for analysis if available