When usage drops or engagement stalls, /customer-health-analyst scores every account so you can intervene before churn. — Claude Skill
A Claude Skill for Claude Code by Nick Jensen — run /customer-health-analyst in Claude.
Score account health, flag churn risk, and surface at-risk cohorts.
- Multi-signal health scoring across product usage, support tickets, and NPS
- Cohort-level churn prediction with configurable risk thresholds
- Executive dashboards with drill-down by segment, tier, and CSM
- Automated at-risk account alerts with recommended next actions
- Usage trend analysis with week-over-week and month-over-month deltas
What it does
Run /customer-health-analyst with your usage data export to flag the 15-20% of accounts showing early churn signals before your Monday CS standup.
Use /customer-health-analyst to generate executive dashboards showing GRR trends, cohort retention curves, and logo churn by segment — ready for quarterly board decks.
Feed /customer-health-analyst your product telemetry to identify accounts with 30%+ usage decline, then trigger CSM outreach before renewal conversations.
Run /customer-health-analyst on accounts 30-60 days post-launch to catch those stuck below activation thresholds and route them to onboarding specialists.
How it works
Ingest account data — product usage logs, support ticket history, NPS responses, and billing events — into a unified health model.
Calculate composite health scores using weighted signals: login frequency, feature adoption depth, support sentiment, and expansion velocity.
Segment accounts into health tiers (green / yellow / red) with configurable thresholds tuned to your churn history.
Generate cohort-level trends and individual account cards with specific risk drivers and recommended interventions.
Output executive dashboards, CSM action lists, and alert triggers for integration into your existing workflows.
Example
account_id,mrr,logins_30d,features_used,open_tickets,nps_score,days_since_last_login
ACME-Corp,12000,45,8,1,9,2
Beta-Inc,8500,3,2,4,-1,18
Gamma-Ltd,22000,28,5,0,7,5
Delta-Co,6000,0,1,6,-3,35
ACME-Corp: 92/100 (Green) — Strong adoption, low support load
Gamma-Ltd: 74/100 (Yellow) — Moderate usage, feature adoption below tier average
Beta-Inc: 31/100 (Red) — 3 logins in 30d, 4 open tickets, negative NPS
Delta-Co: 12/100 (Red) — Zero logins in 35 days, 6 open tickets, detractor
Beta-Inc: Schedule executive sponsor call this week. Assign onboarding specialist to re-activate core workflows.
Delta-Co: Escalate to VP CS immediately. Account shows full disengagement pattern — likely evaluating alternatives.
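A minimal sketch of how scores like those above could be produced from the CSV. The weights, normalizers, and tier cutoffs here are illustrative assumptions, not the skill's actual model:

```python
import csv
import io

SAMPLE = """account_id,mrr,logins_30d,features_used,open_tickets,nps_score,days_since_last_login
ACME-Corp,12000,45,8,1,9,2
Beta-Inc,8500,3,2,4,-1,18
Gamma-Ltd,22000,28,5,0,7,5
Delta-Co,6000,0,1,6,-3,35"""

def score(row):
    # Illustrative normalizers and weights -- assumptions for this sketch
    usage = min(int(row["logins_30d"]) / 30, 1.0)         # cap at daily logins
    adoption = min(int(row["features_used"]) / 10, 1.0)   # assume 10 core features
    support = max(1 - int(row["open_tickets"]) / 5, 0.0)  # 5+ open tickets -> 0
    nps = (int(row["nps_score"]) + 10) / 20               # assume a -10..10 scale
    return round(100 * (0.35 * usage + 0.25 * adoption
                        + 0.20 * support + 0.20 * nps))

def tier(s):
    return "Green" if s >= 70 else "Yellow" if s >= 50 else "Red"

for row in csv.DictReader(io.StringIO(SAMPLE)):
    s = score(row)
    print(f"{row['account_id']}: {s}/100 ({tier(s)})")
```

A real deployment would calibrate the weights and thresholds against historical churn, as described under "How it works".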
Customer Health Analyst
Expert guidance for customer health scoring, predictive analytics, and data-driven customer success strategies. Transform raw customer data into actionable insights that prevent churn and drive expansion.
Philosophy
Customer health is not a single metric — it's a predictive system:
- Measure what matters — Health scores should predict outcomes, not just track activity
- Lead, don't lag — Focus on indicators that predict churn before it's too late
- Segment for action — Different customers need different interventions
- Automate detection — Scale health monitoring across your entire customer base
- Close the loop — Analytics without action is just expensive data collection
How This Skill Works
When invoked, apply the guidelines in rules/ organized by:
- health-* — Health score design, weighting, and calibration
- indicators-* — Leading vs lagging indicator analysis
- churn-* — Prediction modeling and early warning systems
- usage-* — Analytics and adoption metrics
- risk-* — Identification, escalation, and intervention
- data-* — Enrichment and customer 360 development
- cohort-* — Analysis and benchmarking
- executive-* — Reporting and dashboards
- segmentation-* — Customer tiers and scoring models
Core Frameworks
The Health Score Hierarchy
┌─────────────────────────────────────────────────────────────────┐
│ COMPOSITE HEALTH SCORE │
│ (0-100) │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ PRODUCT │ │ENGAGEMENT│ │ GROWTH │ │ SUPPORT │ │
│ │ USAGE │ │ │ │ SIGNALS │ │ HEALTH │ │
│ │ (35%) │ │ (25%) │ │ (20%) │ │ (20%) │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
├─────────────────────────────────────────────────────────────────┤
│ COMPONENT METRICS │
│ │
│ Usage: Engagement: Growth: Support: │
│ - DAU/MAU - NPS score - Seat trend - Ticket volume │
│ - Features - CSM meetings - Usage trend - Resolution time │
│ - Depth - Email opens - Expansion - Sentiment │
│ - Breadth - Logins - Contract - Escalations │
│ │
└─────────────────────────────────────────────────────────────────┘
Leading vs Lagging Indicators
| Type | Definition | Examples | Action Window |
|---|---|---|---|
| Leading | Predict future outcomes | Usage decline, engagement drop | 60-90 days |
| Coincident | Move with outcomes | Support sentiment, NPS | 30-60 days |
| Lagging | Confirm after the fact | Churn, revenue loss | Too late |
Customer Health States
┌─────────────────────────────────────────────────────────────────┐
│ │
│ THRIVING ──→ HEALTHY ──→ NEUTRAL ──→ AT-RISK ──→ CRITICAL │
│ (85+) (70-84) (50-69) (30-49) (<30) │
│ │
│ Expand Monitor Engage Intervene Escalate │
│ │
└─────────────────────────────────────────────────────────────────┘
Health Score Components
| Component | Weight | Key Metrics | Why It Matters |
|---|---|---|---|
| Product Usage | 30-40% | DAU/MAU, feature adoption, depth | Usage predicts value realization |
| Engagement | 20-25% | NPS, CSM contact, responsiveness | Relationship strength indicator |
| Growth Signals | 15-20% | Seat expansion, usage trend | Investment signals commitment |
| Support Health | 15-20% | Ticket volume, sentiment, resolution | Frustration predicts churn |
| Financial | 5-10% | Payment history, contract length | Financial commitment level |
Churn Risk Factors
| Factor | Risk Weight | Detection Method |
|---|---|---|
| Champion departure | Critical | Contact tracking, LinkedIn |
| Usage decline >30% | High | Product analytics |
| Negative NPS (0-6) | High | Survey responses |
| Support escalations | High | Ticket analysis |
| Missed renewal meeting | High | CSM activity tracking |
| Contract downgrade | Very High | Billing data |
| Competitor mentions | High | Call transcripts, tickets |
| Budget review mentions | Medium | CSM notes |
The Analytics Stack
| Layer | Purpose | Tools/Methods |
|---|---|---|
| Collection | Gather raw data | Product events, CRM, support |
| Processing | Clean and transform | ETL, data pipelines |
| Calculation | Compute scores | Scoring algorithms |
| Storage | Historical tracking | Data warehouse |
| Visualization | Present insights | Dashboards, reports |
| Action | Trigger interventions | Alerting, automation |
Key Metrics
| Metric | Formula | Target |
|---|---|---|
| Health Score Accuracy | Churn predicted / Actual churn | >70% |
| Leading Indicator Correlation | Correlation to outcomes | >0.6 |
| Score Distribution | % in each health tier | Bell curve |
| Intervention Success Rate | Saved / Intervened | >40% |
| Time to Detection | Days before risk → action | <14 days |
| False Positive Rate | False alerts / Total alerts | <20% |
Executive Dashboard KPIs
| KPI | Definition | Benchmark |
|---|---|---|
| Gross Revenue Retention | Retained ARR / Starting ARR | 85-95% |
| Net Revenue Retention | (Retained + Expansion) / Starting | 100-130% |
| Logo Retention | Retained customers / Starting | 90-95% |
| Health Score Average | Mean across customer base | 65-75 |
| At-Risk Revenue | ARR with health <50 | <15% |
| Expansion Rate | Customers expanded / Total | 15-30% |
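The GRR and NRR formulas from the table translate directly into code; the ARR figures below are invented for illustration:

```python
def retention_kpis(starting_arr, churned_arr, contraction_arr, expansion_arr):
    """Gross and net revenue retention from period ARR movements."""
    retained = starting_arr - churned_arr - contraction_arr
    grr = retained / starting_arr
    nrr = (retained + expansion_arr) / starting_arr
    return grr, nrr

# Invented example: $1M starting ARR, $80k churned, $20k contracted,
# $180k expanded over the period.
grr, nrr = retention_kpis(1_000_000, 80_000, 20_000, 180_000)
print(f"GRR: {grr:.0%}, NRR: {nrr:.0%}")  # → GRR: 90%, NRR: 108%
```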
Cohort Analysis Framework
| Cohort Type | Segments By | Use Case |
|---|---|---|
| Time-based | Sign-up month/quarter | Retention trends |
| Behavioral | Feature usage patterns | Activation success |
| Value-based | ARR tier | Segment economics |
| Industry | Vertical | Product-market fit |
| Acquisition | Channel/source | Marketing efficiency |
Anti-Patterns
- Vanity health scores — Scores that look good but don't predict outcomes
- Over-weighted product usage — Ignoring relationship and sentiment signals
- Lagging indicator focus — Measuring what already happened
- One-size-fits-all thresholds — Same scores mean different things for different segments
- Manual-only health tracking — Can't scale without automation
- Score without action — Calculating risk without intervention playbooks
- Annual calibration only — Health models need continuous refinement
- Ignoring data quality — Garbage in, garbage out
Reference documents
title: Section Organization
1. Health Score Design (health)
Impact: CRITICAL
Description: Health score architecture, component selection, weight assignment, scoring algorithms, threshold calibration, and model validation.
2. Leading vs Lagging Indicators (indicators)
Impact: CRITICAL
Description: Indicator identification, predictive signal development, correlation analysis, signal prioritization, and action trigger design.
3. Churn Prediction (churn)
Impact: CRITICAL
Description: Prediction model development, feature engineering, risk scoring, early warning systems, and intervention timing optimization.
4. Usage Analytics (usage)
Impact: HIGH
Description: Engagement measurement, feature adoption tracking, usage patterns, behavioral analysis, and adoption benchmarking.
5. Risk Identification (risk)
Impact: CRITICAL
Description: Risk signal detection, escalation frameworks, intervention playbooks, stakeholder communication, and save strategies.
6. Data Enrichment (data)
Impact: HIGH
Description: Data source integration, enrichment strategies, data quality management, 360-degree customer view, and data governance.
7. Cohort Analysis (cohort)
Impact: HIGH
Description: Cohort definition, retention curve analysis, comparative benchmarking, segment performance, and trend identification.
8. Executive Reporting (executive)
Impact: HIGH
Description: KPI selection, dashboard design, data storytelling, executive presentations, and board reporting.
9. Segmentation & Scoring (segmentation)
Impact: MEDIUM-HIGH
Description: Customer tier definition, behavioral clustering, value-based segmentation, scoring model design, and segment-specific strategies.
title: Churn Prediction Modeling
impact: CRITICAL
tags: churn-prediction, machine-learning, risk-scoring, early-warning
Churn Prediction Modeling
Impact: CRITICAL
Effective churn prediction gives you 60-90 days of lead time to intervene. A well-calibrated model can reduce churn by 15-30% by enabling proactive outreach to at-risk accounts before they decide to leave.
The Churn Prediction Pipeline
┌──────────────────────────────────────────────────────────────────┐
│ CHURN PREDICTION PIPELINE │
├──────────────────────────────────────────────────────────────────┤
│ │
│ DATA FEATURES MODEL SCORING │
│ COLLECTION → ENGINEERING → TRAINING → & ALERTS │
│ │
│ • Product • Usage decay • Logistic • Daily risk │
│ • CRM • Engagement • Random • Threshold │
│ • Support • Sentiment • XGBoost • Routing │
│ • Financial • Growth • Neural • Actions │
│ │
├──────────────────────────────────────────────────────────────────┤
│ FEEDBACK LOOP │
│ │
│ Actual Outcomes → Model Refinement → Improved Accuracy │
│ │
└──────────────────────────────────────────────────────────────────┘
Feature Categories for Churn Models
| Category | Features | Predictive Value |
|---|---|---|
| Usage Metrics | DAU/MAU, feature adoption, session depth | High |
| Usage Trends | 30/60/90-day slopes, velocity changes | Very High |
| Engagement | NPS, CSM touchpoints, email responsiveness | High |
| Support | Ticket volume, sentiment, escalations | High |
| Financial | Payment issues, contract length, pricing tier | Medium |
| Organizational | Champion status, stakeholder changes | High |
| Firmographics | Company size, industry, growth stage | Medium |
| Temporal | Tenure, contract timing, seasonality | Medium |
Good Feature Engineering
Feature: Usage Velocity (30-Day)
Definition:
velocity_30d = (usage_current - usage_30d_ago) / usage_30d_ago
Why It's Predictive:
- Captures direction AND magnitude of change
- Declining velocity precedes churn by 60-90 days
- More predictive than static usage levels
Implementation:
SELECT
customer_id,
(current_usage - lag_30d_usage) / NULLIF(lag_30d_usage, 0) as velocity_30d
FROM customer_usage
WHERE lag_30d_usage > 0
Feature Distribution:
- Retained customers: mean velocity = +0.05
- Churned customers: mean velocity = -0.28
- Separation is clear and actionable
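The same feature in Python, mirroring the SQL's NULLIF guard (a sketch; a real pipeline would compute the prior value with a window function or a 30-day-lagged join):

```python
def usage_velocity(current, prior):
    """Relative usage change vs. 30 days ago.
    Returns None when the prior value is zero, mirroring NULLIF."""
    if prior == 0:
        return None
    return (current - prior) / prior

print(usage_velocity(70, 100))  # → -0.3 (a 30% decline)
```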
Bad Feature Engineering
Feature: Total Logins (All-Time)
Problems:
✗ Doesn't account for tenure
✗ No directional information
✗ Old customers always score higher
✗ Not predictive of future behavior
Better Alternative:
- Login frequency (logins per week)
- Login trend (this month vs. last month)
- Days since last login
Feature Reality:
- Retained customers: mean = 1,247 logins
- Churned customers: mean = 892 logins
- Overlap is massive, low predictive value
Model Selection Guide
| Model Type | Pros | Cons | Best For |
|---|---|---|---|
| Logistic Regression | Interpretable, fast | Less accurate | Baseline, regulated industries |
| Random Forest | Handles non-linear, robust | Less interpretable | Medium datasets |
| XGBoost | High accuracy, handles imbalance | Complex tuning | Large datasets, accuracy focus |
| Neural Network | Captures complex patterns | Black box, needs lots of data | Very large datasets |
| Survival Analysis | Time-to-event prediction | Specialized | When timing matters |
Model Training Process
Step 1: Data Preparation
├── Define churn (90-day non-renewal? Contract cancellation?)
├── Set observation window (features from T-90 to T-0)
├── Set outcome window (churn in next 90 days)
└── Handle class imbalance (SMOTE, class weights)
Step 2: Feature Selection
├── Calculate feature importance (univariate)
├── Remove correlated features (>0.8 correlation)
├── Engineer interaction features
└── Normalize/standardize as needed
Step 3: Model Training
├── Split: 70% train, 15% validation, 15% test
├── Train multiple model types
├── Tune hyperparameters on validation set
└── Select best model by validation AUC
Step 4: Model Evaluation
├── Test set performance (AUC, precision, recall)
├── Calibration check (predicted vs. actual probabilities)
├── Feature importance review
└── Business metric simulation
Step 5: Deployment
├── Productionize scoring pipeline
├── Set up monitoring and alerts
├── Document model and features
└── Plan retraining schedule
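A minimal end-to-end sketch of Steps 1–4 using scikit-learn on synthetic data. The features, label-generating coefficients, and split seeds are invented; a real model would use the engineered features described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))  # stand-ins for velocity, NPS trend, sentiment, adoption
# Synthetic churn labels: risk rises as the first feature (usage velocity) falls
p_churn = 1 / (1 + np.exp(2.0 * X[:, 0] + 0.5 * X[:, 1]))
y = rng.binomial(1, p_churn)

# 70/15/15 split: carve off the test set first, then validation
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.15, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.1765,
                                                  random_state=0)

model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"validation AUC: {val_auc:.2f}, test AUC: {test_auc:.2f}")
```

Logistic regression serves as the interpretable baseline from the model selection guide; swapping in XGBoost or a random forest changes only the `model =` line.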
Model Performance Metrics
| Metric | Formula | Target | Interpretation |
|---|---|---|---|
| AUC-ROC | Area under ROC curve | >0.75 | Discrimination ability |
| Precision | TP / (TP + FP) | >0.60 | Of predicted churns, % correct |
| Recall | TP / (TP + FN) | >0.70 | Of actual churns, % caught |
| F1 Score | 2 × (P × R) / (P + R) | >0.65 | Balanced accuracy |
| Lift | Model precision / Base rate | >3x | Improvement over random |
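These formulas compute directly from confusion-matrix counts; the counts and base rate below are invented for illustration:

```python
def model_metrics(tp, fp, fn, base_rate):
    """Precision, recall, F1, and lift, per the formulas in the table above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    lift = precision / base_rate
    return precision, recall, f1, lift

# Invented counts: 33 churns caught, 5 false alarms, 14 missed,
# against a 12% base churn rate.
precision, recall, f1, lift = model_metrics(tp=33, fp=5, fn=14, base_rate=0.12)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} lift={lift:.1f}x")
```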
Threshold Selection
Tradeoff: Precision vs. Recall
High Threshold (e.g., >0.7 probability):
✓ High precision (fewer false positives)
✗ Low recall (miss some actual churns)
→ Use when intervention is expensive
Low Threshold (e.g., >0.3 probability):
✓ High recall (catch more actual churns)
✗ Low precision (more false positives)
→ Use when missing churn is expensive
Optimal Threshold:
- Calculate cost of false positive (unnecessary intervention)
- Calculate cost of false negative (missed churn)
- Find threshold that minimizes total expected cost
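The cost-minimizing threshold can be found by brute force over candidate cutoffs; the per-error dollar costs and probabilities below are invented:

```python
def expected_cost(threshold, probs, labels, fp_cost=500, fn_cost=5000):
    """Total cost of alerting at `threshold`: fp_cost per unnecessary
    intervention, fn_cost per missed churn (illustrative dollar values)."""
    cost = 0
    for p, y in zip(probs, labels):
        flagged = p >= threshold
        if flagged and y == 0:
            cost += fp_cost       # false positive: wasted intervention
        elif not flagged and y == 1:
            cost += fn_cost       # false negative: missed churn
    return cost

probs = [0.9, 0.7, 0.65, 0.4, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
best = min((t / 100 for t in range(5, 96)),
           key=lambda t: expected_cost(t, probs, labels))
print(best)  # → 0.41: the cheapest cutoff that separates the two groups
```

Because missing a churn costs 10x a wasted intervention here, the optimum sits low enough to catch every at-risk account.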
Risk Tiering System
| Tier | Probability | % of Customers | Action |
|---|---|---|---|
| Critical | >70% | 5-10% | Immediate executive intervention |
| High | 50-70% | 10-15% | CSM manager involvement |
| Medium | 30-50% | 15-20% | CSM proactive outreach |
| Low | 10-30% | 30-40% | Standard monitoring |
| Minimal | <10% | 20-30% | Expansion focus |
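The tier boundaries above reduce to a simple lookup (a sketch using the table's cutoffs):

```python
def risk_tier(prob):
    """Map a churn probability to the risk tiers in the table above."""
    if prob > 0.70:
        return "Critical"
    if prob > 0.50:
        return "High"
    if prob > 0.30:
        return "Medium"
    if prob > 0.10:
        return "Low"
    return "Minimal"

print(risk_tier(0.82), risk_tier(0.42), risk_tier(0.05))
# → Critical Medium Minimal
```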
Early Warning System Design
┌──────────────────────────────────────────────────────────────────┐
│ EARLY WARNING SYSTEM │
├──────────────────────────────────────────────────────────────────┤
│ │
│ Daily Scoring Pipeline: │
│ ├── Pull latest customer data │
│ ├── Calculate features │
│ ├── Score all customers │
│ └── Update risk tiers │
│ │
│ Alert Triggers: │
│ ├── Risk tier change (e.g., Low → Medium) │
│ ├── Probability increase >20 points │
│ ├── Critical signals detected │
│ └── Combination triggers │
│ │
│ Alert Routing: │
│ ├── Critical → CSM + Manager + VP (Slack + Email) │
│ ├── High → CSM + Manager (Email) │
│ ├── Medium → CSM (Dashboard + Email) │
│ └── Low → Dashboard only │
│ │
└──────────────────────────────────────────────────────────────────┘
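The triggers and routing rules above can be sketched as a routing table plus one predicate; recipient and channel names are placeholders, not a real integration:

```python
ROUTING = {
    "Critical": (["CSM", "Manager", "VP"], ["slack", "email"]),
    "High": (["CSM", "Manager"], ["email"]),
    "Medium": (["CSM"], ["dashboard", "email"]),
    "Low": ([], ["dashboard"]),
}

def fire_alert(prev_tier, new_tier, prev_prob, new_prob):
    """Route an alert when the tier changes or probability jumps >20 points.
    Returns (recipients, channels), or None when no trigger fires."""
    if new_tier != prev_tier or (new_prob - prev_prob) > 0.20:
        return ROUTING.get(new_tier)
    return None

print(fire_alert("Low", "Medium", 0.18, 0.35))
# → (['CSM'], ['dashboard', 'email'])
```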
Intervention Optimization
| Lead Time | Intervention Success Rate | Recommended Actions |
|---|---|---|
| 90+ days | 55-65% | Strategic value review |
| 60-90 days | 45-55% | Executive engagement |
| 30-60 days | 30-40% | Intensive support |
| <30 days | 15-25% | Save offer |
| At cancellation | 5-15% | Exit interview + win-back plan |
Model Monitoring Dashboard
Churn Prediction Model Health
┌─────────────────────────────────────────────────────────────┐
│ MODEL PERFORMANCE (Rolling 90 Days) │
├─────────────────────────────────────────────────────────────┤
│ │
│ AUC-ROC: 0.78 (target: >0.75) ✓ │
│ Precision: 0.62 (target: >0.60) ✓ │
│ Recall: 0.71 (target: >0.70) ✓ │
│ Lift at 10%: 4.2x (target: >3x) ✓ │
│ │
├─────────────────────────────────────────────────────────────┤
│ PREDICTION ACCURACY │
│ │
│ Actual Churns: 47 │
│ Predicted (>50%): 38 │
│ Correctly Predicted: 33 │
│ Surprise Churns: 14 │
│ False Alarms: 5 │
│ │
├─────────────────────────────────────────────────────────────┤
│ FEATURE IMPORTANCE (Top 5) │
│ │
│ 1. Usage velocity (30d) ████████████ 28% │
│ 2. NPS trend ████████ 19% │
│ 3. Support sentiment ███████ 15% │
│ 4. Champion engagement ██████ 13% │
│ 5. Feature adoption trend █████ 11% │
│ │
├─────────────────────────────────────────────────────────────┤
│ ALERTS │
│ │
│ ⚠ Recall dropped 5% vs. prior period │
│ ⚠ Feature drift detected in usage metrics │
│ │
└─────────────────────────────────────────────────────────────┘
Model Maintenance Schedule
| Activity | Frequency | Owner | Deliverable |
|---|---|---|---|
| Accuracy review | Weekly | Data team | Performance report |
| Feature drift check | Weekly | Data team | Drift alerts |
| Threshold review | Monthly | CS + Data | Updated thresholds |
| Full retraining | Quarterly | Data team | New model version |
| Feature review | Quarterly | CS + Data | Feature updates |
| Major overhaul | Annually | Data team | Architecture review |
Churn Model Checklist
□ Data Quality
□ Churn definition is clear and consistent
□ Historical data covers 12+ months
□ Feature data is complete and accurate
□ Class imbalance addressed appropriately
□ Feature Engineering
□ Features are predictive (tested)
□ No data leakage (future info in features)
□ Features are interpretable
□ Trends included, not just levels
□ Model Development
□ Train/validation/test split done properly
□ Cross-validation used for tuning
□ Multiple model types compared
□ Hyperparameters optimized
□ Model Evaluation
□ Performance meets targets
□ Model is calibrated (probabilities accurate)
□ No obvious bias by segment
□ Business simulation validates value
□ Deployment
□ Scoring pipeline automated
□ Monitoring in place
□ Alerts configured
□ Documentation complete
□ Operations
□ Retraining schedule defined
□ Drift monitoring active
□ Feedback loop from CS team
□ Regular accuracy reviews
Anti-Patterns
- Predicting the past — Data leakage giving false accuracy
- One model fits all — Ignoring segment differences
- Set and forget — Models decay without retraining
- Ignoring false positives — Intervention fatigue from bad predictions
- Probability as certainty — Treating 60% risk as definite churn
- No action mapping — Predictions without intervention playbooks
- Over-engineering — Complex models when simple works
- Ignoring surprise churns — Not investigating model failures
title: Cohort Analysis & Benchmarking
impact: HIGH
tags: cohort-analysis, benchmarking, retention-curves, segment-analysis
Cohort Analysis & Benchmarking
Impact: HIGH
Cohort analysis reveals patterns hidden in aggregate data. By grouping customers with shared characteristics and tracking them over time, you can identify which customer segments thrive, which struggle, and what drives the difference. Benchmarking puts your performance in context.
The Cohort Analysis Framework
┌──────────────────────────────────────────────────────────────────┐
│ COHORT ANALYSIS PROCESS │
├──────────────────────────────────────────────────────────────────┤
│ │
│ DEFINE TRACK ANALYZE ACTION │
│ COHORTS → OVER TIME → PATTERNS → INSIGHTS │
│ │
│ • Time-based • Retention • Compare • Why differ? │
│ • Behavioral • Revenue • Identify • What works? │
│ • Value-based • Engagement • Benchmark • Optimize │
│ • Acquisition • Health • Trend • Predict │
│ │
└──────────────────────────────────────────────────────────────────┘
Cohort Definition Types
| Cohort Type | Definition Basis | Use Case |
|---|---|---|
| Time-based | Sign-up month/quarter | Retention trend analysis |
| Acquisition | Channel, campaign, source | Marketing efficiency |
| Behavioral | Feature adoption, activation | Product-market fit |
| Value-based | ARR tier, contract value | Segment economics |
| Industry | Vertical, company type | Product-market fit by segment |
| Size | Employee count, seats | Segment strategy |
| Geography | Region, country | Market expansion |
| Plan | Pricing tier, feature set | Monetization optimization |
Retention Cohort Analysis (Time-Based)
Monthly Retention by Signup Cohort
Cohort Month 0 Month 1 Month 2 Month 3 Month 6 Month 12
────────────────────────────────────────────────────────────────────
Jan 2024 100% 88% 82% 78% 71% 65%
Feb 2024 100% 91% 85% 81% 74% -
Mar 2024 100% 89% 84% 80% 72% -
Apr 2024 100% 92% 87% 83% - -
May 2024 100% 90% 86% - - -
Jun 2024 100% 93% - - - -
Jul 2024 100% - - - - -
Insights:
✓ Month 1 retention improving (88% → 93%)
✓ Month 6 retention stable around 72%
⚠ Q1 cohorts showing lower long-term retention
Action: Investigate Jan cohort for onboarding issues
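A toy version of the computation behind a table like this, assuming `(customer_id, signup_cohort, months_active)` records (data invented):

```python
from collections import defaultdict

records = [
    ("a", "2024-01", 12), ("b", "2024-01", 3), ("c", "2024-01", 12),
    ("d", "2024-02", 6), ("e", "2024-02", 1),
]

def cohort_retention(records, month):
    """Share of each signup cohort still active `month` months after signup."""
    total, kept = defaultdict(int), defaultdict(int)
    for _, cohort, months_active in records:
        total[cohort] += 1
        if months_active >= month:
            kept[cohort] += 1
    return {c: kept[c] / total[c] for c in total}

print(cohort_retention(records, 3))  # → {'2024-01': 1.0, '2024-02': 0.5}
```

Running this for each month offset fills in one column of the retention matrix at a time.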
Revenue Retention Cohort Analysis
Net Revenue Retention by Signup Quarter
Cohort Q0 Q1 Q2 Q3 Q4 Q5 Q6
────────────────────────────────────────────────────────────
Q1 2023 100% 98% 102% 108% 115% 118% 122%
Q2 2023 100% 101% 106% 112% 119% 124% -
Q3 2023 100% 99% 104% 109% 116% - -
Q4 2023 100% 102% 108% 114% - - -
Q1 2024 100% 103% 110% - - - -
Q2 2024 100% 104% - - - - -
Analysis:
├── All cohorts achieve >100% NRR (expansion > churn)
├── Q2 2024 showing strongest early expansion
├── Typical trajectory: 100% → 110% → 120% by Year 2
└── Cohort maturity required for full picture
Good Cohort Visualization
Retention Curve by Customer Segment
100% ┤● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●
│ ╲
90% ┤ ╲ ● ● ● ● ● ● ● ● ● ● ● ● ● Enterprise
│ ╲ ╲
80% ┤ ╲ ● ● ● ● ● ● ● ● ● ● Mid-Market
│ ╲ ╲
70% ┤ ╲ ● ● ● ● ● ● ● SMB
│ ╲
60% ┤ ╲ ● ● ● ● Startup
│
50% ┤
└────┬────┬────┬────┬────┬────┬────┬────┬────
1 2 3 4 5 6 9 12
Months Since Signup
Key Insights:
1. Enterprise: 95% retention at month 12 (target: 90%)
2. Mid-Market: 82% retention at month 12 (on target)
3. SMB: 71% retention at month 12 (below 75% target)
4. Startup: 58% retention at month 12 (investigate)
Bad Cohort Analysis
Customer Retention Report
Total customers: 2,500
Active customers: 2,150
Retention rate: 86%
Problems:
✗ No time dimension
✗ No segmentation
✗ No trend analysis
✗ No benchmark comparison
✗ Point-in-time snapshot only
✗ Blends all cohort maturities
✗ No actionable insights
Behavioral Cohort Analysis
Retention by Activation Behavior (First 30 Days)
Behavior Cohort Month 6 Retention Index
──────────────────────────────────────────────────────────────
Completed core workflow 89% 1.48x
Invited 3+ team members 84% 1.40x
Used 5+ features 81% 1.35x
Attended onboarding webinar 78% 1.30x
Created 10+ [objects] 75% 1.25x
Basic activation only 60% 1.00x
No activation (signed up only) 32% 0.53x
Implications:
1. Core workflow completion is strongest retention predictor
2. Team invitation = social commitment = retention
3. Focus onboarding on these high-impact behaviors
4. Users who don't activate are unlikely to retain
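The index column is each cohort's retention divided by the baseline; e.g. 0.89 / 0.60 gives the 1.48x in the top row:

```python
def retention_index(cohort_retention, baseline_retention=0.60):
    """Retention indexed to the 'basic activation only' baseline (1.00x)."""
    return round(cohort_retention / baseline_retention, 2)

print(retention_index(0.89))  # → 1.48 (completed core workflow)
print(retention_index(0.32))  # → 0.53 (no activation)
```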
Value-Based Cohort Analysis
Retention & NRR by Initial ARR Tier
Tier ARR Range Logo Retention NRR Avg Health
─────────────────────────────────────────────────────────────────
Enterprise >$100K 96% 135% 82
Upper MM $50K-$100K 93% 122% 76
Lower MM $20K-$50K 88% 112% 71
SMB $5K-$20K 78% 98% 64
Startup <$5K 62% 85% 52
Insights:
├── Enterprise segment is profitable (high retention, expansion)
├── SMB requires efficiency focus (lower retention, no expansion)
├── Startup segment may not be viable at scale
├── Health score correlates with retention across tiers
└── Consider minimum viable customer criteria
Benchmarking Framework
| Metric | Your Value | Industry 25th | Industry Median | Industry 75th | Best in Class |
|---|---|---|---|---|---|
| Gross Retention | 88% | 82% | 88% | 93% | 97% |
| Net Retention | 108% | 95% | 105% | 115% | 130% |
| Month 1 Retention | 91% | 85% | 90% | 94% | 97% |
| Year 1 Retention | 78% | 70% | 78% | 85% | 92% |
| Health Score Avg | 68 | 55 | 65 | 72 | 80 |
Industry Benchmark Sources
| Source | Best For | Data Quality | Access |
|---|---|---|---|
| OpenView | SaaS benchmarks | High | Free reports |
| Gainsight | CS metrics | High | Customer only |
| ChartMogul | Revenue metrics | High | Customer only |
| ProfitWell | Pricing, retention | Medium-High | Free + paid |
| SaaS Capital | Financial metrics | High | Free reports |
| Bessemer | Cloud metrics | High | Free reports |
| KBCM | Private SaaS | High | Annual report |
Cohort Comparison Best Practices
Comparing Cohorts Effectively
1. Same Time Window
✓ Compare Jan 2024 at Month 6 to Jan 2023 at Month 6
✗ Compare Jan 2024 at Month 6 to Jan 2023 at Month 12
2. Normalize for Seasonality
✓ Account for holiday slowdowns, fiscal year patterns
✗ Compare Q4 directly to Q1 without adjustment
3. Statistical Significance
✓ Ensure cohort size supports conclusions (n > 30)
✗ Draw conclusions from cohorts of 5 customers
4. Consistent Definitions
✓ Same retention definition across cohorts
✗ Changing what "active" means mid-analysis
5. Account for Mix Shifts
✓ Note if segment composition changed
✗ Compare blended metrics when mix shifted significantly
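Point 3 can be checked with a pooled two-proportion z-test (a sketch; |z| > 1.96 corresponds to 5% significance):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Pooled two-proportion z-statistic for comparing cohort retention rates."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 88% vs 78% retention on cohorts of 100: a 10-point gap that is still
# not significant at the 5% level -- small cohorts mislead.
z = two_proportion_z(88, 100, 78, 100)
print(round(z, 2))  # → 1.88
```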
Cohort Analysis Dashboard
Cohort Analysis Dashboard
┌─────────────────────────────────────────────────────────────────┐
│ RETENTION TRENDS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Month 1 Retention (6-month trend): 91% → 89% → 90% → 93% │
│ Status: ✓ Improving │
│ │
│ Month 6 Retention (6-month trend): 72% → 71% → 73% → 74% │
│ Status: ✓ Stable/Improving │
│ │
│ Month 12 Retention (trailing): 65% │
│ Status: ⚠ Below 70% target │
│ │
├─────────────────────────────────────────────────────────────────┤
│ SEGMENT COMPARISON (Month 6) │
│ │
│ Enterprise: ████████████████████ 94% (↑ vs prior) │
│ Mid-Market: █████████████████░░░ 82% (= vs prior) │
│ SMB: ██████████████░░░░░░ 71% (↓ vs prior) │
│ Startup: ████████████░░░░░░░░ 58% (↓ vs prior) │
│ │
│ ⚠ Alert: SMB retention declining - investigate │
│ │
├─────────────────────────────────────────────────────────────────┤
│ BEHAVIORAL COHORT INSIGHTS │
│ │
│ Highest retention cohort: Multi-user activation (89%) │
│ Lowest retention cohort: Single feature users (52%) │
│ Biggest gap: 37 percentage points │
│ │
│ Recommendation: Focus onboarding on multi-user + multi-feature │
│ │
└─────────────────────────────────────────────────────────────────┘
Cohort Analysis Checklist
□ Cohort Definition
□ Clear criteria for cohort membership
□ Mutually exclusive cohorts (no overlap)
□ Meaningful segment differences
□ Sufficient sample size per cohort
□ Metric Selection
□ Primary metric defined (retention, NRR, etc.)
□ Time windows specified
□ Calculation methodology documented
□ Edge cases handled (partial periods, etc.)
□ Data Preparation
□ Data completeness verified
□ Historical data sufficient for trends
□ Consistent definitions over time
□ Cohort assignment logic validated
□ Analysis Execution
□ Retention curves plotted
□ Segment comparisons completed
□ Trends over time identified
□ Statistical significance checked
□ Benchmarking
□ Internal benchmarks established
□ Industry benchmarks sourced
□ Peer comparisons available
□ Best-in-class targets defined
□ Actionability
□ Key insights documented
□ Root causes investigated
□ Recommendations developed
□ Actions assigned and tracked
Anti-Patterns
- Single cohort obsession — Focusing on one segment without context
- Insufficient sample size — Drawing conclusions from tiny cohorts
- Ignoring seasonality — Comparing Q4 to Q1 without adjustment
- Inconsistent definitions — Changing metrics mid-analysis
- Survivorship bias — Only analyzing retained customers
- No benchmarks — Can't assess "good" without comparison
- Analysis paralysis — Too many cohorts, no action
- Stale analysis — Running cohort analysis once, never updating
title: Customer Data Enrichment & 360 View
impact: HIGH
tags: data-enrichment, customer-360, data-quality, data-integration
Customer Data Enrichment & 360 View
Impact: HIGH
A complete customer view is the foundation of effective health scoring and risk prediction. Data enrichment fills gaps, adds context, and creates a unified picture that enables proactive customer success. Without comprehensive data, even the best health models fail.
The Customer 360 Architecture
┌──────────────────────────────────────────────────────────────────┐
│ CUSTOMER 360 VIEW │
├──────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ COMPANY │ │ CONTACTS │ │ CONTRACT │ │
│ │ PROFILE │ │ & ROLES │ │ DETAILS │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ PRODUCT │ │ SUPPORT │ │ BILLING │ │
│ │ USAGE │ │ HISTORY │ │ HISTORY │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ COMMS │ │ SUCCESS │ │ EXTERNAL │ │
│ │ HISTORY │ │ METRICS │ │ SIGNALS │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────────┐ │
│ │ HEALTH SCORE │ │
│ │ & RISK MODEL │ │
│ └─────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────┘
Data Source Categories
| Category | Data Types | Sources | Refresh Frequency |
|---|---|---|---|
| Firmographics | Company size, industry, location | Clearbit, ZoomInfo, LinkedIn | Monthly |
| Technographics | Tech stack, integrations used | BuiltWith, G2, product data | Monthly |
| Intent Signals | Research activity, content engagement | Bombora, 6sense, website | Weekly |
| Financial | Funding, revenue, growth | Crunchbase, PitchBook | Monthly |
| Social | News, sentiment, job postings | LinkedIn, news APIs | Daily |
| Contact | Email, phone, role, hierarchy | CRM, LinkedIn, email tools | Weekly |
| Behavioral | Product usage, engagement | Product analytics | Real-time |
| Feedback | NPS, CSAT, surveys | Survey tools | Event-driven |
Good Data Enrichment Strategy
Customer Profile: Acme Corp
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
COMPANY INFORMATION (Enriched)
├── Legal Name: Acme Corporation
├── Industry: SaaS / B2B Technology
├── Employee Count: 450 (↑ 15% YoY)
├── Annual Revenue: $45M (estimated)
├── Funding: Series C, $28M raised
├── HQ Location: San Francisco, CA
├── Founded: 2018
└── Growth Stage: Scale-up
TECHNOGRAPHICS
├── CRM: Salesforce
├── Marketing: HubSpot
├── Support: Zendesk
├── Analytics: Mixpanel
├── Integrations Active: Salesforce, Slack
└── Potential Integrations: HubSpot, Zendesk
CONTACT INTELLIGENCE
├── Decision Makers: 3 identified
├── Champion: Sarah Chen (Head of Ops)
├── Executive Sponsor: Michael Torres (VP)
├── Billing Contact: Finance team
├── Power Users: 8 identified
└── Stakeholder Health: Strong
INTENT SIGNALS
├── Competitor Research: None detected
├── Content Engagement: 12 articles last month
├── Webinar Attendance: Attended 2 of 3 offered
└── Community Activity: Active in user group
EXTERNAL SIGNALS
├── Recent News: Announced new product line
├── Job Postings: Hiring 3 ops roles (expansion signal)
├── LinkedIn Activity: Champion posted about our product
└── Sentiment: Positive social mentions
DERIVED INSIGHTS
├── Expansion Potential: High (hiring, growing)
├── Churn Risk Factors: None detected
├── Recommended Actions: Upsell conversation
└── Next Best Action: Schedule expansion QBR
Bad Data Enrichment Strategy
Customer Profile: Acme Corp
Company Name: Acme Corp
Contact: Sarah
Email: [email protected]
Plan: Enterprise
MRR: $10,000
Problems:
✗ Minimal company context
✗ No firmographic enrichment
✗ No contact role or hierarchy
✗ No intent or external signals
✗ No usage data integration
✗ No derived insights
✗ No next best action
✗ Static, not dynamic data
Key Enrichment Fields
| Field | Source | Use in Health Scoring |
|---|---|---|
| Employee count | Clearbit, ZoomInfo | Growth signal, seat potential |
| Industry | Clearbit | Segment benchmarking |
| Funding stage | Crunchbase | Expansion potential |
| Tech stack | BuiltWith | Integration opportunities |
| Job postings | LinkedIn, job boards | Growth/contraction signals |
| News mentions | News APIs | Organizational changes |
| Social sentiment | LinkedIn, Twitter | Brand health |
| Contact changes | CRM, LinkedIn | Champion risk |
| Competitor research | Intent data | Competitive threat |
Contact Enrichment Strategy
Contact Hierarchy Mapping:
Executive Level
├── CEO: John Smith
│ └── Relationship: Met once, annual review
├── CFO: Lisa Wong
│ └── Relationship: Billing escalations only
└── VP Operations: Michael Torres (Exec Sponsor)
└── Relationship: Monthly check-ins ✓
Management Level
├── Head of Ops: Sarah Chen (Champion)
│ └── Relationship: Weekly calls ✓
├── IT Director: David Park
│ └── Relationship: Technical contact
└── Finance Manager: Amy Liu
└── Relationship: Billing contact
User Level
├── Power Users: 8 identified
├── Regular Users: 23 active
└── Dormant Users: 4 inactive
Stakeholder Health Score: 78/100
├── Champion strength: Strong
├── Multi-threading: Good (4 relationships)
├── Executive access: Moderate
└── Risk: Champion single point of failure
Data Quality Framework
| Dimension | Definition | Target | Measurement |
|---|---|---|---|
| Completeness | % of fields populated | >85% | Filled fields / Total fields |
| Accuracy | % of correct data | >90% | Validated / Total records |
| Freshness | Age of data | <30 days | Days since last update |
| Consistency | Data matches across systems | >95% | Matching / Total |
| Uniqueness | No duplicate records | >99% | Unique / Total records |
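The completeness and freshness dimensions above are straightforward to compute. A minimal sketch, assuming illustrative account records (the field names `industry`, `champion`, `last_updated` are hypothetical, not a fixed schema):

```python
from datetime import date

# Hypothetical account records -- field names are illustrative only.
ACCOUNTS = [
    {"name": "Acme Corp", "industry": "SaaS", "employee_count": 450,
     "champion": "Sarah Chen", "last_updated": date(2025, 1, 28)},
    {"name": "Beta Inc", "industry": None, "employee_count": None,
     "champion": None, "last_updated": date(2024, 11, 2)},
]

TRACKED_FIELDS = ["industry", "employee_count", "champion"]

def completeness(accounts, fields):
    """Filled fields / total fields, per the Completeness row above."""
    filled = sum(1 for a in accounts for f in fields if a.get(f) is not None)
    return filled / (len(accounts) * len(fields))

def freshness_buckets(accounts, today):
    """Bucket accounts by days since last update (<7, 8-30, >30 = stale)."""
    buckets = {"<7d": 0, "8-30d": 0, ">30d (stale)": 0}
    for a in accounts:
        age = (today - a["last_updated"]).days
        if age < 7:
            buckets["<7d"] += 1
        elif age <= 30:
            buckets["8-30d"] += 1
        else:
            buckets[">30d (stale)"] += 1
    return buckets
```

Run against a full account export, these two numbers feed the dashboard rows in the quality report below.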
Data Quality Dashboard
Customer Data Quality Report
┌─────────────────────────────────────────────────────────────────┐
│ OVERALL DATA QUALITY SCORE: 81% │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Completeness by Category: │
│ ├── Company Profile: ████████████████████ 92% │
│ ├── Contact Data: █████████████████░░░ 78% │
│ ├── Product Usage: ████████████████████ 96% │
│ ├── Support History: █████████████████░░░ 82% │
│ ├── External Signals: ████████████░░░░░░░░ 58% │
│ └── Financial Data: ██████████████████░░ 89% │
│ │
│ Data Freshness: │
│ ├── Updated <7 days: 65% of accounts │
│ ├── Updated 8-30 days: 28% of accounts │
│ └── Updated >30 days: 7% of accounts (⚠ stale) │
│ │
│ Data Issues: │
│ ├── Missing champion: 23 accounts │
│ ├── Invalid email: 12 contacts │
│ ├── Duplicate contacts: 8 records │
│ └── Stale firmographics: 34 accounts │
│ │
└─────────────────────────────────────────────────────────────────┘
Integration Architecture
| System | Data Flow | Frequency | Key Fields |
|---|---|---|---|
| CRM (Salesforce) | Bi-directional | Real-time | Contacts, opportunities, notes |
| Product | Inbound | Hourly | Usage events, feature adoption |
| Support (Zendesk) | Inbound | Real-time | Tickets, sentiment, resolution |
| Billing (Stripe) | Inbound | Real-time | Payments, invoices, MRR |
| Enrichment (Clearbit) | Inbound | Daily | Firmographics, contacts |
| Intent (Bombora) | Inbound | Weekly | Research signals, topics |
| Health Score | Outbound | Daily | Score, risk tier, signals |
Data Governance Principles
1. Single Source of Truth
- Define master system for each data type
- Health score is calculated, not stored in CRM
- CRM is master for relationships
- Product database is master for usage
2. Ownership
- Each data field has a defined owner
- Owner responsible for quality
- Regular audits by data team
3. Access Control
- Sensitive data (PII) protected
- Role-based access
- Audit logging enabled
4. Privacy Compliance
- GDPR / CCPA compliant enrichment
- Consent management
- Data retention policies
- Right to deletion supported
Enrichment ROI Calculation
| Metric | Before Enrichment | After Enrichment | Improvement |
|---|---|---|---|
| Health score accuracy | 62% | 78% | +16% |
| Churn prediction lead time | 45 days | 72 days | +27 days |
| CSM research time | 25 min/account | 8 min/account | -68% |
| Expansion identification | 35% | 58% | +23% |
| False positive rate | 32% | 18% | -14% |
Data Enrichment Checklist
□ Core Customer Data
□ Company name and legal entity
□ Industry and sub-industry
□ Employee count and trend
□ Location (HQ and offices)
□ Website and social profiles
□ Contact Data
□ Key contacts identified
□ Roles and hierarchy mapped
□ Email and phone validated
□ LinkedIn profiles linked
□ Champion and sponsor flagged
□ Financial Data
□ Contract details accurate
□ MRR/ARR calculated correctly
□ Payment history current
□ Renewal dates tracked
□ Expansion history captured
□ Behavioral Data
□ Product usage integrated
□ Support tickets linked
□ Communication history captured
□ Engagement metrics calculated
□ Feature adoption tracked
□ External Signals
□ Firmographic enrichment active
□ Intent data flowing
□ News monitoring enabled
□ Job posting tracking
□ Social sentiment captured
□ Data Quality
□ Completeness monitored
□ Freshness tracked
□ Duplicates resolved
□ Validation rules in place
□ Regular audits scheduled
Anti-Patterns
- Data silos — product usage, CRM, and support data kept in disconnected systems
- Manual enrichment — Relying on CSMs to research and update
- Stale data — Firmographics from years ago
- Over-collection — Gathering data without clear use case
- No single source of truth — Conflicting data across systems
- Privacy violations — Enriching without consent
- Ignoring data quality — Garbage in, garbage out
- Under-utilization — Rich data not surfaced to users
Executive Reporting & Dashboards
Impact: HIGH
Executive reporting transforms customer health data into strategic business insights. The goal isn't just presenting metrics — it's enabling better decisions about customer investments, product direction, and company strategy. The best reports tell a story that drives action.
The Executive Reporting Hierarchy
┌──────────────────────────────────────────────────────────────────┐
│ REPORTING HIERARCHY │
├──────────────────────────────────────────────────────────────────┤
│ │
│ BOARD LEVEL │
│ └── High-level health, NRR, strategic risks │
│ Frequency: Quarterly │
│ │
│ C-SUITE LEVEL │
│ └── Portfolio health, trends, strategic accounts │
│ Frequency: Monthly │
│ │
│ VP/DIRECTOR LEVEL │
│ └── Team performance, segment health, initiatives │
│ Frequency: Weekly │
│ │
│ MANAGER LEVEL │
│ └── Individual accounts, risk alerts, action items │
│ Frequency: Daily │
│ │
└──────────────────────────────────────────────────────────────────┘
Key Executive KPIs
| KPI | Definition | Target | Frequency |
|---|---|---|---|
| Net Revenue Retention (NRR) | (Starting + Expansion - Contraction - Churn) / Starting | 100-130% | Monthly |
| Gross Revenue Retention (GRR) | Retained ARR / Starting ARR | 85-95% | Monthly |
| Logo Retention | Retained Customers / Starting | 90-95% | Monthly |
| Expansion Rate | Customers with expansion / Total | 15-30% | Monthly |
| Health Score Distribution | % in each health tier | Bell curve | Weekly |
| At-Risk ARR | ARR where health <50 | <15% | Weekly |
| Time to Value | Days to activation | <30 days | Monthly |
| CSM Efficiency | ARR per CSM | $2-5M | Quarterly |
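The NRR and GRR definitions above can be sketched in a few lines. This version subtracts contraction as well as churn, consistent with the contraction line tracked in the dashboard examples in this section (dollar figures in the usage note are illustrative):

```python
def retention_kpis(starting_arr, expansion, contraction, churned):
    """NRR and GRR as percentages, from ARR movements over the period.

    NRR = (starting + expansion - contraction - churned) / starting
    GRR = (starting - contraction - churned) / starting
    """
    nrr = (starting_arr + expansion - contraction - churned) / starting_arr
    grr = (starting_arr - contraction - churned) / starting_arr
    return round(nrr * 100, 1), round(grr * 100, 1)
```

For example, `retention_kpis(1_000_000, 120_000, 20_000, 40_000)` yields NRR 106.0% and GRR 94.0%: expansion lifts net retention above 100% even while gross retention stays below it.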
Good Executive Dashboard
Customer Success Executive Dashboard
Period: January 2025
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PORTFOLIO HEALTH SUMMARY
Total ARR: $24.5M Health Score Avg: 72 (↑ 3)
Customers: 485 At-Risk ARR: $2.1M (8.6%)
NRR (Trailing 12M): 112% Time to Value: 22 days
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
REVENUE METRICS vs. Prior Month
Gross Retention: 92% ↑ +1%
Net Retention: 108% ↑ +2%
Expansion Revenue: $412K ↑ +15%
Churned Revenue: $198K ↓ -22% (improvement)
Contraction: $89K ↓ -8% (improvement)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
HEALTH DISTRIBUTION
Thriving (85+): ████████████░░░░░░░░ 28% ($6.9M)
Healthy (70-84): ██████████████░░░░░░ 38% ($9.3M)
Neutral (50-69): ████████░░░░░░░░░░░░ 22% ($5.4M)
At-Risk (30-49): ████░░░░░░░░░░░░░░░░ 9% ($2.2M)
Critical (<30): █░░░░░░░░░░░░░░░░░░░ 3% ($0.7M)
Trend: Distribution improving (at-risk down from 12%)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATEGIC ACCOUNTS STATUS (Top 20 by ARR)
Green: 14 accounts ($8.2M ARR)
Yellow: 4 accounts ($2.1M ARR)
Red: 2 accounts ($1.4M ARR) ← Executive attention required
Red Accounts:
1. GlobalTech Inc ($850K) - Champion departure, exec engaged
2. MegaCorp ($550K) - Competitive threat, QBR scheduled
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
KEY WINS THIS MONTH
✓ Saved $420K at-risk ARR (TechFlow, DataPro)
✓ Closed $380K expansion (Acme Corp +$150K, 3 others)
✓ NPS improved 8 points (32 → 40)
KEY RISKS TO WATCH
⚠ 3 renewals >$100K in next 60 days at health <60
⚠ Enterprise segment NPS declined 5 points
⚠ Q2 cohort showing early retention weakness
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Bad Executive Dashboard
Customer Success Report - January
Customers: 485
ARR: $24,500,000
Health Score: 72
NPS: 40
Churned: 12 customers
New: 28 customers
Support Tickets: 1,247
Problems:
✗ No context or trends
✗ No targets or benchmarks
✗ No segmentation
✗ No actionable insights
✗ Mixing operational and strategic metrics
✗ No risk visibility
✗ No narrative
✗ No recommendations
Board-Level Reporting
Board Report: Customer Success (Q4 2024)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
EXECUTIVE SUMMARY
Net Revenue Retention of 112% demonstrates strong customer health
and expansion motion. At-risk ARR has decreased 25% since Q3,
indicating improved early intervention effectiveness.
Key achievements:
• Reduced churn rate from 1.8% to 1.2% monthly
• Expanded NRR from 105% to 112%
• Decreased time-to-value from 34 to 22 days
Areas requiring investment:
• Enterprise segment engagement (NPS declining)
• Proactive risk detection (surprise churn rate 18%)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
KEY METRICS
Q4 2024 Q3 2024 YoY Target
Net Revenue Ret. 112% 105% +18% 110% ✓
Gross Revenue Ret. 92% 90% +4% 90% ✓
Logo Retention 94% 93% +2% 92% ✓
NPS 40 32 +12 35 ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STRATEGIC RISKS
1. Enterprise Engagement (Medium Risk)
- NPS declined 5 points in segment
- Two $500K+ accounts in yellow status
- Mitigation: Executive business reviews, product investment
2. Market Competition (Low-Medium Risk)
- Competitor mentions up 15% in support tickets
- No significant losses yet
- Mitigation: Competitive intelligence program launched
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Q1 2025 PRIORITIES
1. Launch enterprise engagement program
2. Reduce surprise churn rate to <10%
3. Achieve 115% NRR target
Dashboard Design Principles
| Principle | Description | Example |
|---|---|---|
| Hierarchy | Most important metrics first | NRR at top, details below |
| Context | Always show comparisons | vs. target, vs. prior period |
| Trend | Show direction, not just level | Arrows, sparklines |
| Actionability | Link to next steps | "2 accounts need attention" |
| Segmentation | Break down aggregates | By segment, tier, CSM |
| Simplicity | 5-7 key metrics max | Remove nice-to-haves |
| Consistency | Same layout each period | Enables quick comparison |
Data Storytelling Framework
The Situation-Complication-Resolution Framework
SITUATION (What's happening)
"Our customer portfolio grew 22% this year to $24.5M ARR
across 485 customers."
COMPLICATION (Why it matters)
"However, our at-risk ARR has increased to $2.1M (8.6%),
driven primarily by declining engagement in the enterprise
segment where NPS dropped 5 points."
RESOLUTION (What we're doing)
"We're launching a dedicated enterprise success program
with executive business reviews, which has shown 40%
improvement in similar situations. Expected impact:
reduce at-risk enterprise ARR by 50% in Q1."
KEY INSIGHT
Lead with the insight, not the data.
Bad: "Health scores averaged 72 this month."
Good: "Customer health improved for the 3rd consecutive month,
driven by our new onboarding program which reduced
time-to-value by 35%."
Reporting Cadence
| Report | Audience | Frequency | Content Focus |
|---|---|---|---|
| Daily Alerts | CSM, Manager | Daily | Critical risks, action items |
| Weekly Ops | CS Team | Weekly | Pipeline, at-risk, wins |
| Monthly Review | VP, C-Suite | Monthly | Metrics, trends, initiatives |
| QBR | Exec Team, Board | Quarterly | Strategy, risks, investments |
| Annual Review | Board | Annually | YoY performance, strategy |
Dashboard Metrics by Audience
| Metric | Board | C-Suite | VP | Manager |
|---|---|---|---|---|
| NRR/GRR | Y | Y | Y | - |
| Health Distribution | Summary | Y | Y | Y |
| At-Risk ARR | $ amount | Y | Y | Account list |
| Churn Analysis | Trends | Details | Details | Accounts |
| CSM Performance | - | Summary | Details | Individual |
| Risk Alerts | - | Critical | All | Assigned |
| Renewal Pipeline | - | Summary | Y | Y |
Report Automation
| Component | Automation Level | Tools |
|---|---|---|
| Data Collection | Fully automated | ETL, data warehouse |
| Metric Calculation | Fully automated | SQL, dbt |
| Dashboard Refresh | Fully automated | Looker, Tableau, Metabase |
| Alert Generation | Fully automated | Workflow tools, Slack |
| Insight Generation | Semi-automated | Templates + human review |
| Narrative Writing | Manual | CS leadership |
| Distribution | Automated | Email, Slack |
Executive Presentation Checklist
□ Pre-Meeting Preparation
□ Data refreshed and validated
□ Key metrics calculated correctly
□ Narrative prepared and reviewed
□ Anticipated questions researched
□ Backup slides ready
□ Content Structure
□ Executive summary on first slide
□ Key metrics with context
□ Trends and comparisons shown
□ Strategic risks highlighted
□ Wins and successes celebrated
□ Clear recommendations included
□ Ask/investment needs specified
□ Visual Design
□ Consistent formatting
□ Clear hierarchy
□ Minimal clutter
□ Actionable insights highlighted
□ Red/yellow/green status clear
□ Delivery
□ Lead with insights, not data
□ Tell a story
□ Acknowledge challenges honestly
□ Provide recommendations
□ Allow time for questions
□ Document action items
Good vs Bad Metrics Presentation
| Approach | Bad | Good |
|---|---|---|
| Format | "NRR was 108%" | "NRR of 108% (↑ 3% vs Q3, on track to 110% target)" |
| Context | "12 customers churned" | "12 customers churned ($198K), down 22% from prior month" |
| Insight | "Health score is 72" | "Health improved 3 points, driven by new onboarding program" |
| Action | "At-risk ARR is $2.1M" | "At-risk ARR of $2.1M — 3 accounts need exec intervention" |
| Trend | "NPS is 40" | "NPS reached 40 (+8 points YTD), highest in company history" |
Anti-Patterns
- Data dump — Too many metrics without narrative
- No benchmarks — Metrics without targets or comparisons
- Vanity focus — Highlighting good metrics, hiding problems
- Stale reporting — Manual processes creating delays
- One-size-fits-all — Same report for board and manager
- No action items — Reporting without recommendations
- Surprise reveals — Board learns about risks first in meeting
- Metric overload — 50 KPIs when 5 would suffice
Health Model Validation & Calibration
Impact: HIGH
A health score model is only valuable if it accurately predicts outcomes. Without regular validation and calibration, models drift, accuracy degrades, and teams lose confidence. Continuous validation ensures your health scores remain actionable and trustworthy.
The Validation Lifecycle
┌──────────────────────────────────────────────────────────────────┐
│ MODEL VALIDATION LIFECYCLE │
├──────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────┐ │
│ │ BUILD │ │
│ └────┬────┘ │
│ │ │
│ ▼ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ DEPLOY │ ───► │ MONITOR │ ───► │ ANALYZE │ │
│ └─────────┘ └────┬────┘ └────┬────┘ │
│ ▲ │ │ │
│ │ ▼ ▼ │
│ │ ┌─────────┐ ┌─────────┐ │
│ └───────────│ REFINE │ ◄─── │ CALIBRATE│ │
│ └─────────┘ └─────────┘ │
│ │
└──────────────────────────────────────────────────────────────────┘
Key Validation Metrics
| Metric | Definition | Target | Red Flag |
|---|---|---|---|
| Churn Prediction Accuracy | Correctly predicted churns / Actual churns | >70% | <50% |
| Surprise Churn Rate | Churns with health >60 / Total churns | <20% | >35% |
| False Positive Rate | False at-risk / Flagged at-risk | <30% | >50% |
| Score-Outcome Correlation | Pearson correlation (score, outcome) | >0.5 | <0.3 |
| Lift at 10% | Top decile churn rate / Overall rate | >3x | <2x |
| Score Distribution | Spread across 0-100 range | Normal | Bimodal/Skewed |
| Calibration Error | Mean absolute gap between predicted and actual risk | <5% | >15% |
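Two of these metrics, surprise churn rate and lift, can be computed directly from a list of accounts with their last health score and churn outcome. A minimal sketch (the `health`/`churned` keys are an assumed record shape):

```python
def surprise_churn_rate(churned_accounts, healthy_cutoff=60):
    """Share of churned accounts whose last health score was above the cutoff."""
    surprises = [a for a in churned_accounts if a["health"] > healthy_cutoff]
    return len(surprises) / len(churned_accounts)

def lift_at_top_decile(accounts):
    """Churn rate in the riskiest 10% (lowest scores) vs. the overall rate."""
    ranked = sorted(accounts, key=lambda a: a["health"])
    decile = ranked[: max(1, len(ranked) // 10)]
    decile_rate = sum(a["churned"] for a in decile) / len(decile)
    overall_rate = sum(a["churned"] for a in accounts) / len(accounts)
    return decile_rate / overall_rate
```

A lift well above 3x, as in the table's target, means the model concentrates real risk in its lowest-scored decile rather than spreading it randomly.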
Good Model Validation Report
Health Score Model Validation Report
Period: Q4 2024
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
EXECUTIVE SUMMARY
Model performance meets targets across key metrics.
Prediction accuracy improved 8% vs. Q3 following
feature updates. One area of concern: enterprise
segment showing higher surprise churn rate.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PREDICTION ACCURACY
Actual Churns: 47
Predicted (Health <50): 38
Correctly Predicted: 33
Surprise Churns (>60): 14
Accuracy Rate: 70% (target: 70%) ✓
Surprise Churn Rate: 30% (target: <20%) ⚠
False Positive Rate: 28% (target: <30%) ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SCORE-OUTCOME CORRELATION
Retention Expansion NPS
Health Score 0.62 0.48 0.55
Correlation Strong Moderate Moderate
Prior Quarter Current
Retention Corr. 0.58 0.62 ↑
Expansion Corr. 0.45 0.48 ↑
NPS Corr. 0.52 0.55 ↑
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
LIFT ANALYSIS
Decile Avg Score Churn Rate Lift
1 (High) 12 42% 5.2x ← Good separation
2 28 31% 3.9x
3 38 22% 2.8x
4 48 15% 1.9x
5 56 11% 1.4x
6 63 9% 1.1x
7 70 7% 0.9x
8 76 5% 0.6x
9 83 3% 0.4x
10 (Low) 91 1% 0.1x
Overall churn rate: 8%
Top decile lift: 5.2x (target: >3x) ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SEGMENT ANALYSIS
Segment Accuracy Surprise Rate Status
Enterprise 62% 38% ⚠ Needs attention
Mid-Market 74% 22% ✓ On track
SMB 72% 28% ✓ On track
Startup 68% 32% ~ Monitor
Enterprise segment investigation:
- 5 of 14 surprise churns were enterprise
- Common pattern: Champion departure not detected
- Recommendation: Add LinkedIn monitoring signal
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
CALIBRATION CHECK
Score Range Predicted Risk Actual Risk Gap
0-20 85% 78% -7%
20-40 60% 55% -5%
40-60 35% 32% -3%
60-80 15% 12% -3%
80-100 5% 4% -1%
Avg Calibration Error: 4% (target: <5%) ✓
Model is slightly overconfident in high-risk scores.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
RECOMMENDATIONS
1. [HIGH] Add champion monitoring to enterprise scoring
2. [MEDIUM] Recalibrate high-risk thresholds
3. [LOW] Review startup segment feature weights
Bad Model Validation Report
Health Score Report
Model accuracy: 70%
Churns predicted: 33/47
Status: Working fine.
Problems:
✗ No trend analysis
✗ No segment breakdown
✗ No calibration check
✗ No feature analysis
✗ No actionable recommendations
✗ No comparison to prior period
✗ No investigation of failures
Surprise Churn Analysis Framework
Surprise Churn Investigation: AccountName
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ACCOUNT PROFILE
├── ARR: $85,000
├── Tenure: 18 months
├── Health Score at Churn: 72 (Healthy)
├── Health Score 30 days prior: 74
└── Health Score 90 days prior: 71
CHURN REASON: Competitor displacement
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SIGNAL ANALYSIS: WHAT WE MISSED
Signal Present? In Model? Why Missed?
─────────────────────────────────────────────────────────────
Competitor research Yes No No intent data
Champion job search Yes No No LinkedIn tracking
Reduced engagement Subtle Yes Below threshold
Support complaints No - No signal
Usage decline Minor Yes Below threshold
ROOT CAUSE: Champion was evaluating alternatives
while maintaining appearance of engagement.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
MODEL IMPROVEMENT OPPORTUNITIES
1. Add intent data signals (competitor research)
2. Add LinkedIn monitoring for key contacts
3. Lower threshold for engagement decline
4. Create composite "quiet leaving" indicator
5. Weight recent trend more heavily
EXPECTED IMPACT: Could have caught this 60 days earlier
Calibration Techniques
| Technique | When to Use | How It Works |
|---|---|---|
| Platt Scaling | Score not well-calibrated | Fit logistic regression on scores |
| Isotonic Regression | Non-monotonic calibration | Non-parametric adjustment |
| Temperature Scaling | Neural network outputs | Single parameter adjustment |
| Threshold Tuning | Business-driven calibration | Adjust based on capacity |
| Segment Adjustment | Different segments behave differently | Segment-specific thresholds |
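Platt scaling, the first technique in the table, fits a logistic curve mapping raw scores to churn probabilities. The sketch below is a minimal stand-in using plain gradient descent on log loss; a production pipeline would more likely reach for scikit-learn's `LogisticRegression` or `CalibratedClassifierCV`:

```python
import math

def platt_scale(scores, churned, lr=1.0, epochs=5000):
    """Fit p(churn) = sigmoid(a*x + b) on normalized 0-100 health scores.

    Returns a function mapping a raw score to a calibrated churn probability.
    """
    a, b = 0.0, 0.0
    xs = [s / 100.0 for s in scores]          # normalize scores to 0-1
    n = len(xs)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for x, y in zip(xs, churned):
            p = 1.0 / (1.0 + math.exp(-(a * x + b)))
            grad_a += (p - y) * x             # d(log loss)/da
            grad_b += (p - y)                 # d(log loss)/db
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return lambda s: 1.0 / (1.0 + math.exp(-(a * s / 100.0 + b)))

# Fit on historical (score, churned-within-90-days) pairs, then use the
# returned function to report probabilities instead of raw scores.
prob = platt_scale([10, 15, 20, 25, 75, 80, 85, 90], [1, 1, 1, 1, 0, 0, 0, 0])
```

The payoff is that "health 35" becomes "roughly 60% churn risk", which is what the calibration check in the validation report above is testing.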
Model Drift Detection
Drift Monitoring Dashboard
┌─────────────────────────────────────────────────────────────────┐
│ FEATURE DRIFT MONITORING │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Feature Baseline Current Drift │
│ ──────────────────────────────────────────────────────────── │
│ Usage velocity (30d) -0.02 -0.08 ⚠ DRIFT │
│ NPS score 42 38 ~ Minor │
│ Support tickets/mo 2.3 2.5 ✓ Stable │
│ Feature adoption 58% 55% ✓ Stable │
│ Champion engagement 0.72 0.68 ~ Minor │
│ │
│ ⚠ Alert: Usage velocity distribution shifted significantly │
│ Recommend: Investigate cause, consider retraining │
│ │
├─────────────────────────────────────────────────────────────────┤
│ OUTCOME DRIFT │
│ │
│ Metric Baseline Current Status │
│ ──────────────────────────────────────────────────────────── │
│ Monthly churn rate 1.5% 1.8% ~ Monitor │
│ Score-churn correlation 0.62 0.58 ~ Monitor │
│ Prediction accuracy 72% 68% ⚠ Watch │
│ │
└─────────────────────────────────────────────────────────────────┘
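One common way to quantify the feature drift flagged above is the Population Stability Index (PSI), which compares the binned distribution of a feature today against its baseline. A minimal pure-Python sketch (bin count and thresholds are conventional, not prescribed by this skill):

```python
import math

def psi(baseline, current, bins=10, eps=1e-6):
    """Population Stability Index between two samples of one feature.

    Rule of thumb: <0.1 stable, 0.1-0.25 minor drift, >0.25 significant drift.
    """
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0           # avoid zero-width bins

    def shares(sample):
        counts = [0] * bins
        for v in sample:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # eps keeps the log defined when a bin is empty on one side
        return [c / len(sample) + eps for c in counts]

    b, c = shares(baseline), shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))
```

Running this weekly per feature, and alerting when PSI crosses 0.25, automates the "⚠ DRIFT" flag in the dashboard above.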
Validation Schedule
| Activity | Frequency | Owner | Output |
|---|---|---|---|
| Accuracy tracking | Weekly | Data team | Dashboard update |
| Surprise churn review | Per event | CS + Data | Investigation report |
| Drift monitoring | Weekly | Data team | Drift alerts |
| Segment analysis | Monthly | Data team | Segment report |
| Full validation | Quarterly | CS + Data | Validation report |
| Model retraining | Quarterly | Data team | New model version |
| Threshold calibration | Quarterly | CS leadership | Updated thresholds |
Threshold Calibration Process
Step 1: Analyze current distribution
├── Plot health scores vs. outcomes
├── Identify natural breakpoints
└── Calculate churn rate by score band
Step 2: Assess operational capacity
├── How many at-risk accounts can CSMs handle?
├── What's the cost of false positives?
└── What's the cost of missed churns?
Step 3: Optimize thresholds
├── Set thresholds to balance precision/recall
├── Consider segment-specific adjustments
└── Align with intervention capacity
Step 4: Validate proposed changes
├── Backtest on historical data
├── Calculate expected false positive/negative rates
└── Estimate resource requirements
Step 5: Implement and monitor
├── Update threshold configuration
├── Communicate to CS team
├── Track performance post-change
└── Adjust if needed
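Step 4's backtest can be sketched as a simple confusion count over historical accounts: for a candidate threshold, how many accounts would have been flagged (CSM workload), how many flags would have been false, and how many churns would still have been missed. The record shape is assumed:

```python
def backtest_threshold(accounts, threshold):
    """Evaluate 'flag at-risk when health < threshold' against known outcomes."""
    tp = fp = fn = tn = 0
    for a in accounts:
        flagged = a["health"] < threshold
        if flagged and a["churned"]:
            tp += 1
        elif flagged:
            fp += 1
        elif a["churned"]:
            fn += 1
        else:
            tn += 1
    return {
        "flagged": tp + fp,                                      # CSM workload
        "false_positive_rate": fp / (tp + fp) if tp + fp else 0.0,
        "surprise_churn_rate": fn / (tp + fn) if tp + fn else 0.0,
    }
```

Sweeping `threshold` over a range and picking the value whose `flagged` count matches intervention capacity is one concrete way to do Step 3's precision/recall balancing.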
Feature Importance Review
| Feature | Current Weight | Q3 Weight | Correlation | Recommendation |
|---|---|---|---|---|
| Usage velocity | 28% | 25% | 0.58 | Maintain |
| NPS trend | 19% | 22% | 0.51 | Maintain |
| Support sentiment | 15% | 14% | 0.42 | Maintain |
| Champion engagement | 13% | 12% | 0.45 | Increase to 15% |
| Feature adoption | 11% | 13% | 0.38 | Reduce to 10% |
| Billing health | 8% | 8% | 0.32 | Maintain |
| Contract signals | 6% | 6% | 0.28 | Maintain |
Model Governance Checklist
□ Validation Process
□ Weekly accuracy tracking automated
□ Surprise churn review process defined
□ Drift alerts configured
□ Quarterly full validation scheduled
□ Documentation
□ Model architecture documented
□ Feature definitions captured
□ Threshold rationale recorded
□ Version history maintained
□ Change Management
□ Change approval process defined
□ A/B testing capability available
□ Rollback plan documented
□ Communication plan for changes
□ Stakeholder Alignment
□ CS leadership reviews validation reports
□ Data team owns model maintenance
□ Feedback loop from CSMs formalized
□ Executive sponsor engaged
□ Continuous Improvement
□ New feature experimentation process
□ Segment-specific tuning allowed
□ Industry benchmark tracking
□ Model improvement backlog maintained
Anti-Patterns
- Set and forget — Never validating after initial launch
- Aggregate-only analysis — Missing segment-specific issues
- No surprise churn investigation — Not learning from failures
- Threshold stagnation — Never adjusting as business changes
- Ignoring drift — Features change meaning over time
- No documentation — Model logic in one person's head
- Validation without action — Reports with no follow-through
- Perfect-seeking — Waiting for 100% accuracy vs. iterating
Health Score Design & Weighting
Impact: CRITICAL
A well-designed health score predicts customer outcomes 60-90 days before they happen. Poor health scores are vanity metrics that provide false confidence while customers silently churn.
The Health Score Equation
Health Score = Σ (Component Score × Weight)
Where:
- Each component is normalized to 0-100
- Weights sum to 100%
- Final score ranges 0-100
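The equation above translates directly to code. A minimal sketch, with illustrative weights (real weights should come from correlation analysis against historical outcomes, as described later in this section) and a tier mapping matching this guide's thresholds:

```python
# Illustrative component weights -- must sum to 1.0 (i.e. 100%).
WEIGHTS = {"usage": 0.35, "engagement": 0.25, "growth": 0.20, "support": 0.20}

def health_score(components, weights=WEIGHTS):
    """Weighted sum of 0-100 component scores -> 0-100 composite score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(components[name] * w for name, w in weights.items())

def health_tier(score):
    """Map a composite score to a status tier."""
    if score >= 85:
        return "Thriving"
    if score >= 70:
        return "Healthy"
    if score >= 50:
        return "Neutral"
    if score >= 30:
        return "At-Risk"
    return "Critical"
```

The assertion on the weight sum is worth keeping in production: silently un-normalized weights are a common way health scores drift out of the 0-100 range.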
Component Selection Criteria
| Criterion | Question | Example |
|---|---|---|
| Predictive | Does this signal future outcomes? | Usage decline predicts churn |
| Measurable | Can we reliably track this? | Login frequency vs. "satisfaction" |
| Actionable | Can we influence this? | Feature adoption (yes) vs. company size (no) |
| Timely | Do we get the signal early enough? | Leading indicators only |
| Available | Do we have access to this data? | CRM data vs. internal discussions |
Standard Health Score Components
| Component | Typical Weight | Sub-Metrics |
|---|---|---|
| Product Usage | 30-40% | DAU/MAU, feature breadth, depth, frequency |
| Engagement | 20-25% | NPS, CSM touchpoints, email responsiveness |
| Growth Signals | 15-20% | Seat expansion, usage trend, contract growth |
| Support Health | 15-20% | Ticket volume, sentiment, resolution satisfaction |
| Financial Health | 5-10% | Payment history, contract terms, billing issues |
Weight Assignment by Business Model
| Business Model | Usage | Engagement | Growth | Support | Financial |
|---|---|---|---|---|---|
| Self-serve SaaS | 45% | 15% | 20% | 15% | 5% |
| Enterprise SaaS | 30% | 30% | 15% | 15% | 10% |
| Usage-based | 50% | 15% | 20% | 10% | 5% |
| High-touch services | 20% | 40% | 15% | 20% | 5% |
Good Health Score Design
Health Score v2.0 - Enterprise Accounts
Component: Product Usage (35%)
├── DAU/MAU ratio (10%)
│ └── 30-day rolling average
├── Feature adoption score (10%)
│ └── % of key features used
├── Usage depth (10%)
│ └── Actions per session
└── Core workflow completion (5%)
└── % completing primary use case
Component: Engagement (25%)
├── Relationship NPS (10%)
│ └── Most recent score
├── CSM touchpoints (8%)
│ └── Meetings held vs. scheduled
└── Communication responsiveness (7%)
└── Email response rate
Component: Growth Signals (20%)
├── Seat expansion trend (8%)
│ └── 90-day user growth rate
├── Usage expansion trend (7%)
│ └── 90-day consumption growth
└── Contract expansion (5%)
└── Any expansion in last year
Component: Support Health (20%)
├── Ticket sentiment (8%)
│ └── AI-analyzed support conversations
├── Resolution satisfaction (7%)
│ └── Post-ticket CSAT
└── Escalation frequency (5%)
└── Escalations per month
Scoring:
- All sub-metrics normalized to 0-100
- Component score = weighted average of sub-metrics
- Final score = weighted sum of components
Bad Health Score Design
Health Score v1.0 (Problems Identified)
Components:
├── Product Usage (70%) ← Over-weighted single category
│ └── Total logins ← Vanity metric
│
├── Support Tickets (15%) ← Direction unclear
│ └── Total tickets opened ← More tickets = lower score?
│
└── Contract Value (15%) ← Not predictive
└── ARR ← Bigger customers ≠ healthier
Problems:
✗ Over-reliance on single category
✗ Logins don't measure value
✗ Tickets could be good (engaged) or bad (frustrated)
✗ ARR doesn't predict retention
✗ No engagement or relationship signals
✗ No leading indicators
Scoring Algorithm Examples
Linear Scoring:
Score = (Actual Value / Target Value) × 100
Cap at 100, floor at 0
Example: DAU/MAU
Target: 40%
Actual: 32%
Score: (32/40) × 100 = 80
Threshold-Based Scoring:
If DAU/MAU >= 50%: Score = 100
If DAU/MAU >= 40%: Score = 80
If DAU/MAU >= 30%: Score = 60
If DAU/MAU >= 20%: Score = 40
If DAU/MAU >= 10%: Score = 20
If DAU/MAU < 10%: Score = 0
Trend-Adjusted Scoring:
Base Score = Current metric score
Trend Factor = (Current - 30 days ago) / 30 days ago
Adjusted Score = Base Score × (1 + Trend Factor × 0.2)
Example:
Base Score: 70
Usage up 15%: 70 × 1.03 = 72.1
Usage down 15%: 70 × 0.97 = 67.9
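The three algorithms above can be sketched as small functions that reproduce the worked examples (the DAU/MAU bands and the 0.2 damping factor are taken directly from the examples):

```python
def linear_score(actual, target):
    """Linear scoring: (actual / target) * 100, capped at 100, floored at 0."""
    return max(0.0, min(100.0, actual / target * 100))

def threshold_score(dau_mau):
    """Threshold-based scoring using the DAU/MAU bands above."""
    for floor, score in ((0.50, 100), (0.40, 80), (0.30, 60),
                         (0.20, 40), (0.10, 20)):
        if dau_mau >= floor:
            return score
    return 0

def trend_adjusted_score(base, trend, damping=0.2):
    """Base score scaled by (1 + trend * damping); trend is a fraction,
    e.g. +0.15 for usage up 15% over 30 days."""
    return base * (1 + trend * damping)
```

As in the examples: `linear_score(32, 40)` gives 80, and `trend_adjusted_score(70, 0.15)` gives 72.1.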
Health Score Thresholds
| Score Range | Status | Color | Action Priority |
|---|---|---|---|
| 85-100 | Thriving | Green | Expansion focus |
| 70-84 | Healthy | Light Green | Monitor, optimize |
| 50-69 | Neutral | Yellow | Proactive engagement |
| 30-49 | At-Risk | Orange | Immediate intervention |
| 0-29 | Critical | Red | Executive escalation |
Threshold Calibration Process
Step 1: Historical Analysis
- Pull 12+ months of health scores
- Tag customers by outcome (churned, retained, expanded)
- Plot score distribution by outcome
Step 2: Threshold Identification
- Find score ranges where outcomes diverge
- Identify clear "danger zones"
- Map to intervention capacity
Step 3: Validation
- Apply thresholds prospectively
- Track prediction accuracy
- Measure false positive/negative rates
Step 4: Refinement
- Adjust thresholds quarterly
- Segment-specific thresholds if needed
- Document rationale for changes
Health Score Validation Metrics
| Metric | Target | Calculation |
|---|---|---|
| Churn Prediction Accuracy | >70% | Correctly predicted churns / Actual churns |
| False Positive Rate | <25% | False at-risk / Flagged at-risk |
| False Negative Rate | <15% | Surprise churns / Total churns |
| Score-Outcome Correlation | >0.5 | Pearson correlation |
| Segment Consistency | Similar | Same score ≈ same outcomes |
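The score-outcome correlation row can be checked with a plain Pearson correlation between health scores and outcomes (e.g. 1 = retained, 0 = churned). A self-contained sketch, in case you are not already using `statistics.correlation` or a stats library:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value above 0.5 between scores and retention outcomes meets the target in the table; near zero means the score is not predicting anything.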
Segment-Specific Scoring Considerations
| Segment | Adjustment |
|---|---|
| Enterprise | Weight relationship higher, usage patterns differ |
| SMB | Weight product usage higher, fewer CSM touchpoints |
| New customers | Separate onboarding score, don't penalize low tenure |
| High-growth | Adjust for rapid seat expansion volatility |
| Seasonal | Normalize for expected usage patterns |
Health Score Implementation Checklist
□ Component Selection
□ Each component has clear predictive value
□ All data sources are reliable and available
□ Metrics are actionable (we can influence them)
□ No duplicate signals across components
□ Weight Assignment
□ Weights based on historical correlation analysis
□ Weights sum to 100%
□ No single component dominates (max 40%)
□ Weights documented with rationale
□ Scoring Logic
□ All sub-metrics normalized consistently (0-100)
□ Handling for missing data defined
□ Edge cases documented (new customers, etc.)
□ Calculation logic peer-reviewed
□ Threshold Definition
□ Thresholds based on outcome analysis
□ Clear actions mapped to each threshold
□ Thresholds validated against historical data
□ Segment-specific adjustments if needed
□ Operational Readiness
□ Score calculation automated
□ Update frequency defined (daily/weekly)
□ Alerting configured for threshold crossings
□ Dashboard visibility for CS team
□ Ongoing Governance
□ Quarterly calibration review scheduled
□ Accuracy metrics tracked
□ Feedback loop from CS team
□ Version history maintained
Anti-Patterns
- Kitchen sink scoring — Including every metric regardless of predictive value
- Equal weighting — All components at 20% without analysis
- Binary signals — Using yes/no when degree matters
- Static thresholds — Never recalibrating as business changes
- Ignoring tenure — New customers scored same as mature ones
- Vanity components — Metrics that feel important but don't predict
- Over-fitting — Optimizing for historical data, failing on new patterns
- No documentation — Scoring logic understood by one person only
title: Leading vs Lagging Indicator Analysis
impact: CRITICAL
tags: leading-indicators, lagging-indicators, predictive-signals, correlation
Leading vs Lagging Indicator Analysis
Impact: CRITICAL
By the time you see lagging indicators (churn, downgrades), it's often too late. Leading indicators give you the 60-90 day window needed to intervene effectively. The best customer success teams obsess over leading indicators.
Indicator Classification Framework
Timeline to Outcome:
────────────────────────────────────────────────────────►
│ │
│ LEADING COINCIDENT LAGGING │
│ (60-90 days) (30-60 days) (0-30 days) │
│ │
│ ✓ Actionable ~ Urgent ✗ Historical │
│ ✓ Predictive ~ Confirmatory ✗ Reactive │
│ ✓ Proactive ~ Responsive ✗ Post-mortem │
│ │
└───────────────────────────────────────────────────────┘
Common Indicator Categories
| Category | Leading (60-90 days) | Coincident (30-60 days) | Lagging (0-30 days) |
|---|---|---|---|
| Usage | Feature adoption declining | Login frequency dropping | Account dormant |
| Engagement | Missed scheduled meetings | Unresponsive to outreach | No contact 60+ days |
| Sentiment | Support ticket tone change | NPS score drop | Cancellation request |
| Financial | Contract questions | Downgrade inquiry | Non-renewal notice |
| Organizational | Champion job-change signals on LinkedIn | New stakeholder introduced | Champion departed |
Leading Indicator Catalog
| Indicator | Signal Type | Detection Method | Action Window |
|---|---|---|---|
| DAU/MAU declining >20% | Usage | Product analytics | 90 days |
| Key feature abandonment | Usage | Event tracking | 75 days |
| Power user disengagement | Usage | User segmentation | 60 days |
| CSM meeting cancellations | Engagement | Calendar tracking | 60 days |
| Exec sponsor unresponsive | Engagement | Communication logs | 75 days |
| Support ticket sentiment shift | Sentiment | NLP analysis | 45 days |
| Renewal meeting not scheduled | Financial | CSM activity | 90 days |
| Budget/cost questions | Financial | Call transcripts | 60 days |
| Champion job change signals | Organizational | LinkedIn tracking | 90 days |
| New stakeholder evaluation | Organizational | CSM notes | 60 days |
Good Leading Indicator Analysis
Indicator: Feature Adoption Decline
Definition:
- Customer using <50% of features used at peak
- Measured over rolling 30-day window
- Compared to their own historical baseline
Why It's Leading:
- Precedes churn by 75 days on average
- Indicates value not being realized
- Actionable through enablement
Detection:
- Automated daily feature usage calculation
- Alert when adoption drops below threshold
- Trend visualization in health dashboard
Correlation Analysis:
- 68% of customers with this signal churned within 120 days
- Only 12% of customers without this signal churned
- Predictive accuracy: 73%
Action Trigger:
When detected → CSM outreach within 48 hours
Goal → Feature re-enablement or use case pivot
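A rough sketch of this detection, assuming feature usage arrives as (date, feature_name) event pairs, an assumed minimal shape for a product-analytics export. The peak calculation steps through non-overlapping windows, a deliberately coarse stand-in for a true rolling peak:

```python
from datetime import date, timedelta

def adoption_decline(feature_events: list, today: date,
                     window_days: int = 30, threshold: float = 0.5) -> bool:
    """Flag when distinct features used in the trailing window fall below
    `threshold` x the customer's own peak 30-day feature breadth."""
    if not feature_events:
        return False
    start = min(d for d, _ in feature_events)
    # Peak breadth across stepped 30-day windows (coarse approximation).
    peak, cursor = 0, start
    while cursor <= today:
        window = {f for d, f in feature_events
                  if cursor <= d < cursor + timedelta(days=window_days)}
        peak = max(peak, len(window))
        cursor += timedelta(days=window_days)
    # Breadth in the current trailing window.
    current = len({f for d, f in feature_events
                   if d > today - timedelta(days=window_days)})
    return peak > 0 and current < threshold * peak
```

In practice this would run daily per account and feed the alert and dashboard steps described above.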
Bad Leading Indicator Analysis
Indicator: Low NPS Score
Problems:
✗ NPS is often coincident or lagging, not leading
✗ By the time NPS drops, issues are entrenched
✗ Quarterly surveys miss the window
✗ NPS alone lacks actionability
Better Approach:
- Track NPS trend (leading signal: declining NPS)
- Combine with other signals (NPS + usage decline)
- Use transactional NPS for faster feedback
- Look at verbatim comments for leading signals
Correlation Reality:
- Static low NPS: 45% correlation to churn
- Declining NPS trend: 72% correlation to churn
- The trend is the leading indicator, not the score
Correlation Analysis Methodology
Step 1: Define Outcomes
- Primary: Churn (Y/N)
- Secondary: Expansion, Contraction, NRR
Step 2: Identify Candidate Signals
- List all measurable customer behaviors
- Include product, engagement, support, financial
Step 3: Time-Shift Analysis
For each signal at each lag period (30, 60, 90, 120 days):
- Calculate correlation to outcome
- Identify optimal prediction window
Step 4: Signal Ranking
- Rank by correlation strength
- Consider actionability
- Assess data availability
Step 5: Combine for Prediction
- Build composite leading indicator score
- Validate on holdout data
- Monitor ongoing accuracy
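Steps 3 and 4 can be sketched with plain Pearson correlation; against a binary churn label this is the point-biserial correlation. `signal_at_lag` is an assumed structure holding each customer's signal value measured N days before the outcome:

```python
from math import sqrt

def pearson(xs: list, ys: list) -> float:
    """Plain Pearson correlation; with a binary outcome in `ys` this is
    the point-biserial correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def best_lag(signal_at_lag: dict, churned: list) -> tuple:
    """Correlate a signal measured at each lag (days before outcome)
    against churn labels; return the most predictive lag."""
    scored = {lag: abs(pearson(vals, churned))
              for lag, vals in signal_at_lag.items()}
    lag = max(scored, key=scored.get)
    return lag, scored[lag]
```

Running `best_lag` over each candidate signal produces the ranked-signal output of Step 4; validation on holdout data (Step 5) is a separate pass.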
Correlation Strength Benchmarks
| Correlation | Interpretation | Action |
|---|---|---|
| >0.7 | Strong predictor | High priority signal |
| 0.5-0.7 | Moderate predictor | Include in model |
| 0.3-0.5 | Weak predictor | Combine with others |
| <0.3 | Not predictive | Exclude or investigate |
Signal Combination Matrix
| If Signal A... | And Signal B... | Risk Level | Action |
|---|---|---|---|
| Usage declining | Engagement stable | Medium | Enablement focus |
| Usage stable | Engagement declining | Medium | Relationship focus |
| Usage declining | Engagement declining | High | Executive intervention |
| Usage declining | Support tickets increasing | Critical | Immediate escalation |
| Champion active | Usage declining | Medium-High | Champion conversation |
| Champion inactive | Usage stable | Medium | Find new champion |
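One way to encode the combination matrix is as ordered rules, most severe first; the function and argument names here are illustrative, not a standard API:

```python
def combined_risk(usage_declining: bool, engagement_declining: bool,
                  tickets_increasing: bool, champion_active: bool) -> str:
    """Evaluate signal combinations in descending severity order."""
    if usage_declining and tickets_increasing:
        return "Critical: immediate escalation"
    if usage_declining and engagement_declining:
        return "High: executive intervention"
    if usage_declining and champion_active:
        return "Medium-High: champion conversation"
    if usage_declining:
        return "Medium: enablement focus"
    if engagement_declining:
        return "Medium: relationship focus"
    if not champion_active:
        return "Medium: find new champion"
    return "Low: monitor"
```

Ordering matters: an account matching several rows should get the most severe applicable response.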
Good Indicator Monitoring Dashboard
Leading Indicator Dashboard
┌─────────────────────────────────────────────────────────┐
│ LEADING INDICATOR ALERTS (Last 7 Days) │
├─────────────────────────────────────────────────────────┤
│ │
│ Critical (Action Required): 12 │
│ ├── Usage Decline >30% 5 │
│ ├── Champion Departure Detected 3 │
│ └── Renewal Meeting Not Scheduled 4 │
│ │
│ Warning (Monitor Closely): 28 │
│ ├── Feature Adoption Declining 12 │
│ ├── Engagement Score Down 9 │
│ └── Support Sentiment Shift 7 │
│ │
├─────────────────────────────────────────────────────────┤
│ INDICATOR TRENDS (90-Day) │
│ │
│ Usage Decline Alerts: ▲ +15% vs prior period │
│ Champion Departures: ▼ -8% vs prior period │
│ Engagement Drops: ─ Flat vs prior period │
│ │
├─────────────────────────────────────────────────────────┤
│ PREDICTION ACCURACY (Last Quarter) │
│ │
│ Churns Predicted: 42/51 (82%) │
│ False Positives: 15/42 (36%) │
│ Avg Lead Time: 67 days │
│ │
└─────────────────────────────────────────────────────────┘
Building Your Leading Indicator Model
| Step | Action | Output |
|---|---|---|
| 1 | Collect 12+ months historical data | Data set |
| 2 | Tag outcomes (churn, retain, expand) | Labeled data |
| 3 | Calculate all signals at various time lags | Signal matrix |
| 4 | Run correlation analysis | Ranked signals |
| 5 | Select top 5-8 leading indicators | Indicator set |
| 6 | Define thresholds for each | Alert rules |
| 7 | Build composite score | Leading indicator score |
| 8 | Validate on holdout data | Accuracy metrics |
| 9 | Implement monitoring | Automated alerts |
| 10 | Refine quarterly | Continuous improvement |
Action Triggers by Signal
| Signal | Threshold | Action | Owner | SLA |
|---|---|---|---|---|
| Usage decline | >25% MoM | CSM outreach | CSM | 48 hrs |
| Feature abandonment | Key feature unused 30+ days | Enablement call | CSM | 1 week |
| Champion departure | LinkedIn change detected | Stakeholder mapping | CSM + Manager | 24 hrs |
| NPS decline | Drop of 3+ points | Root cause analysis | CSM | 1 week |
| Support sentiment | Negative trend detected | Service review | Support Lead | 48 hrs |
| Meeting cancellation | 2+ consecutive | Manager check-in | CSM Manager | 1 week |
| Budget questions | Detected in call | Value realization review | CSM | 48 hrs |
Indicator Validation Checklist
□ Predictive Power
□ Correlation to outcome >0.5
□ Consistent across customer segments
□ Maintains accuracy over time
□ Not just correlating with another signal
□ Actionability
□ Clear intervention available
□ Enough lead time to act (60+ days)
□ Team has capacity to respond
□ Success interventions documented
□ Reliability
□ Data source is consistent
□ Signal can be calculated automatically
□ Missing data handling defined
□ False positive rate acceptable (<30%)
□ Operationalization
□ Real-time or near-real-time detection
□ Alerts configured and routed correctly
□ Playbook exists for each signal
□ Feedback loop to improve model
Anti-Patterns
- Lagging indicator focus — Tracking churn rate instead of churn predictors
- Single indicator reliance — One signal without confirmation
- Ignoring signal combinations — Missing that A + B together is critical
- Static thresholds — Not adjusting for segment or seasonality
- No validation — Using indicators without testing predictive power
- Action-less alerts — Signals without defined responses
- Too many indicators — Alert fatigue from over-monitoring
- Ignoring false positives — Not refining to reduce noise
title: Risk Identification & Escalation
impact: CRITICAL
tags: risk-identification, escalation, intervention, save-strategies
Risk Identification & Escalation
Impact: CRITICAL
Early risk identification and well-defined escalation processes are the difference between saving an at-risk account and conducting a post-mortem. A structured approach ensures no customer falls through the cracks and interventions happen with enough lead time to succeed.
The Risk Escalation Framework
┌──────────────────────────────────────────────────────────────────┐
│ RISK ESCALATION PATH │
├──────────────────────────────────────────────────────────────────┤
│ │
│ LOW RISK MEDIUM RISK HIGH RISK CRITICAL │
│ Health: 70+ Health: 50-69 Health: 30-49 Health: <30│
│ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐│
│ │ Monitor │ ───► │ Engage │ ────► │Intervene│ ───► │Escalate ││
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘│
│ │
│ Owner: CSM Owner: CSM Owner: CSM + Owner: VP + │
│ Manager Executive │
│ │
│ SLA: Weekly SLA: 1 week SLA: 48 hours SLA: 24 hrs │
│ review outreach intervention response │
│ │
└──────────────────────────────────────────────────────────────────┘
Risk Signal Categories
| Category | Signals | Severity | Detection Method |
|---|---|---|---|
| Usage | Declining logins, feature abandonment, dormant users | High | Product analytics |
| Engagement | Missed meetings, unresponsive, no exec access | High | CRM tracking |
| Sentiment | Negative NPS, complaints, support escalations | High | Survey + Support |
| Financial | Payment issues, contract questions, budget concerns | Very High | Billing + CSM notes |
| Organizational | Champion leaving, reorg, M&A | Critical | LinkedIn + news |
| Competitive | Competitor mentions, RFP activity, feature comparisons | Very High | Call transcripts |
| Contractual | Short contract, no auto-renew, upcoming expiration | Medium | Contract data |
Risk Signal Severity Matrix
| Signal | Severity | Time Sensitivity | Required Action |
|---|---|---|---|
| Champion departure | Critical | 24 hours | Executive outreach |
| Cancellation request | Critical | Same day | Save team activation |
| Competitor evaluation | Very High | 48 hours | Executive involvement |
| Usage decline >50% | High | 48 hours | CSM intervention |
| Payment failure | High | 24 hours | Billing + CSM outreach |
| Negative NPS response | High | 72 hours | Closed-loop follow-up |
| Missed QBR | Medium | 1 week | Manager involvement |
| Contract expiring <90 days | Medium | 1 week | Renewal discussion |
| Support escalation | Medium | 48 hours | Service recovery |
Good Risk Identification System
Risk Alert: Acme Corp
Account: Acme Corp
ARR: $125,000
Health Score: 42 (was 68 last month)
CSM: Jane Smith
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
RISK SIGNALS DETECTED:
1. Champion Status Change (CRITICAL)
└── Sarah Johnson updated LinkedIn to new company
└── Detected: 2 hours ago
└── She represented 65% of account activity
2. Usage Decline (HIGH)
└── 34% decrease in DAU over 30 days
└── Key feature "Reports" unused for 14 days
└── Trend accelerating
3. Support Sentiment (MEDIUM)
└── Last 3 tickets rated "Dissatisfied"
└── Average sentiment score: 2.1/5 (was 4.2)
RISK SCORE: 78/100 (High Risk)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
RECOMMENDED ACTIONS:
1. [IMMEDIATE] Contact account to identify new champion
2. [48 HOURS] Schedule executive check-in
3. [1 WEEK] Arrange re-onboarding for new stakeholders
ESCALATION: Manager + VP CS notified
SLA: Response required within 24 hours
Bad Risk Identification System
Alert: Account health decreased
Account: Acme Corp
Health Score: 42
Alert: Health score below threshold
Problems:
✗ No specific signals identified
✗ No context on what changed
✗ No severity classification
✗ No recommended actions
✗ No escalation path
✗ No SLA defined
✗ No ARR/impact context
Escalation Matrix
| Trigger | First Responder | Escalate To | Executive Involvement |
|---|---|---|---|
| Health drops >15 points | CSM | None initially | If no improvement in 2 weeks |
| Health drops >25 points | CSM | CSM Manager | VP if no improvement in 1 week |
| Health score <40 | CSM + Manager | VP CS | CEO for strategic accounts |
| Churn signal detected | CSM | Manager + VP | Based on ARR tier |
| Champion departure | CSM | Manager | VP for accounts >$100K |
| Competitive threat | CSM + Manager | VP CS + Exec | CEO for strategic |
| Cancellation request | Save Team | VP CS | CEO for top 20 accounts |
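A simplified routing sketch of the matrix above. The $100K ARR cutoff and point-drop thresholds are the example values from the table, and real routing would also weigh account tier and strategic status:

```python
def escalation_path(health: float, health_drop: float, arr: float,
                    signal: str = "") -> list:
    """Return the ordered list of responders for an at-risk account."""
    if signal == "cancellation":
        return ["Save Team", "VP CS"]
    path = ["CSM"]
    if signal == "champion_departure":
        path.append("CSM Manager")
        if arr > 100_000:
            path.append("VP CS")
        return path
    if health < 40:
        return ["CSM", "CSM Manager", "VP CS"]
    if health_drop > 25:
        path.append("CSM Manager")
    return path
```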
Intervention Playbooks
Playbook: Champion Departure
Trigger: Key contact leaves company
Severity: Critical
SLA: 24 hour initial response
Day 1:
□ Verify departure (LinkedIn, email bounce, etc.)
□ Identify replacement contact
□ Executive-to-executive outreach to maintain relationship
□ Update CRM with new stakeholder map
Day 2-7:
□ Schedule intro call with new champion
□ Offer re-onboarding/training
□ QBR to re-establish value baseline
□ Document new success criteria
Day 8-30:
□ Accelerate engagement cadence
□ Monthly check-ins (vs. quarterly)
□ Feature adoption review
□ Executive sponsor assignment if needed
Playbook: Usage Decline
Trigger: Usage down >25% over 30 days
Severity: High
SLA: 48 hour initial contact
Day 1-2:
□ Analyze usage data for root cause
□ Identify which users/features affected
□ CSM outreach: "I noticed [specific change], is everything okay?"
□ Offer support call
Day 3-7:
□ Deep-dive call to understand context
□ Create action plan with customer
□ Enablement session if adoption issue
□ Executive involvement if strategic issue
Day 8-30:
□ Weekly check-ins during recovery
□ Monitor usage daily
□ Adjust plan based on progress
□ Escalate if no improvement by day 14
Playbook: Competitive Threat
Trigger: Competitor mention detected
Severity: Very High
SLA: 48 hour executive response
Day 1:
□ Alert CSM, Manager, and VP
□ Gather intelligence (what competitor, why looking)
□ Prepare competitive battle card
□ Schedule executive call
Day 2-3:
□ Executive-to-executive engagement
□ Understand specific evaluation criteria
□ Address gaps or concerns directly
□ Reinforce unique value proposition
Day 4-14:
□ Provide additional proof points (case studies, ROI)
□ Offer executive references
□ Consider strategic concessions if needed
□ Document outcome and learnings
Save Team Structure
| Role | Responsibility | When Engaged |
|---|---|---|
| CSM | First line, relationship management | Always |
| CSM Manager | Strategy, additional resources | Health <50 |
| VP Customer Success | Executive relationships, approvals | Health <35 or $100K+ ARR |
| Executive Sponsor | Peer-level engagement | Strategic accounts |
| Product | Roadmap discussions, custom solutions | Feature gaps |
| Finance | Pricing, contract flexibility | Commercial objections |
Save Offer Guidelines
| Offer Type | When to Use | Approval Required | Success Rate |
|---|---|---|---|
| Extended support | Adoption/enablement issues | CSM | 45% |
| Professional services | Implementation gaps | Manager | 40% |
| Feature access | Missing functionality | Manager | 35% |
| Contract pause | Timing/budget issues | VP | 30% |
| Pricing concession | Cost objections | VP + Finance | 25% |
| Custom development | Critical feature gap | Executive | 20% |
Risk Review Cadence
| Review Type | Frequency | Attendees | Focus |
|---|---|---|---|
| Daily Standup | Daily | CSM Team | Critical alerts |
| Team Review | Weekly | CSM + Manager | At-risk accounts |
| Leadership Review | Weekly | VP + Directors | High-value at-risk |
| Executive Review | Monthly | C-Suite | Strategic accounts |
| Portfolio Review | Quarterly | All CS | Trends, patterns |
Risk Documentation Template
## At-Risk Account Analysis
**Account:** [Name]
**ARR:** [Amount]
**Health Score:** [Current] (was [Previous])
**Risk Level:** [Critical/High/Medium]
**Date Identified:** [Date]
### Risk Signals
| Signal | Severity | Date Detected |
|--------|----------|---------------|
| [Signal 1] | [Level] | [Date] |
| [Signal 2] | [Level] | [Date] |
### Root Cause Analysis
[What's driving the risk]
### Stakeholder Impact
- Champion: [Status]
- Executive Sponsor: [Status]
- End Users: [Status]
### Action Plan
| Action | Owner | Due Date | Status |
|--------|-------|----------|--------|
| [Action 1] | [Name] | [Date] | [Status] |
| [Action 2] | [Name] | [Date] | [Status] |
### Outcome
[ ] Saved
[ ] Churned
[ ] In Progress
### Lessons Learned
[What we'll do differently]
Escalation Checklist
□ Risk Identification
□ Specific signals documented
□ Severity classified correctly
□ Root cause hypothesized
□ ARR impact quantified
□ Initial Response
□ CSM contacted within SLA
□ Customer context gathered
□ Quick win opportunities identified
□ Escalation need assessed
□ Escalation Execution
□ Right people involved
□ Clear ask defined
□ Timeline established
□ Customer expectations set
□ Intervention
□ Action plan created
□ Customer agreement obtained
□ Progress tracking in place
□ Success criteria defined
□ Resolution
□ Outcome documented
□ Lessons captured
□ Process improvements identified
□ Stakeholders informed
Anti-Patterns
- Alert fatigue — Too many low-priority alerts mask real risks
- Single signal reliance — Missing multi-factor risk patterns
- Slow escalation — Waiting too long to involve leadership
- No playbooks — Ad-hoc response to predictable situations
- Discount-first saves — Training customers to threaten churn
- Ignoring small accounts — Risk exists at all ARR levels
- No documentation — Same mistakes repeated
- Hero culture — Depending on individuals vs. process
title: Customer Segmentation & Tier Scoring
impact: MEDIUM-HIGH
tags: segmentation, tier-scoring, customer-tiers, behavioral-clustering
Customer Segmentation & Tier Scoring
Impact: MEDIUM-HIGH
Not all customers are equal — and treating them equally means over-investing in some and under-investing in others. Effective segmentation enables right-sized engagement models, focused resources, and segment-specific success strategies. Tier scoring determines service levels.
The Segmentation Framework
┌──────────────────────────────────────────────────────────────────┐
│ SEGMENTATION DIMENSIONS │
├──────────────────────────────────────────────────────────────────┤
│ │
│ VALUE HEALTH POTENTIAL │
│ (Current Worth) (Current State) (Future Worth) │
│ │
│ • ARR / MRR • Health score • Growth rate │
│ • Lifetime value • Engagement level • Expansion │
│ • Contract length • Risk tier capacity │
│ • Payment history • NPS/Sentiment • Strategic │
│ │
│ ↓ │
│ │
│ CUSTOMER TIER ASSIGNMENT │
│ │
│ Enterprise │ Growth │ Scale │ Tech-touch │
│ │
└──────────────────────────────────────────────────────────────────┘
Value-Based Segmentation
| Tier | ARR Range | % of Customers | % of ARR | Engagement Model |
|---|---|---|---|---|
| Enterprise | >$100K | 5-10% | 40-50% | High-touch, named CSM |
| Mid-Market | $25K-$100K | 15-25% | 25-35% | Pooled CSM, proactive |
| SMB | $5K-$25K | 30-40% | 15-25% | Scaled, digital-first |
| Starter | <$5K | 30-40% | 5-10% | Tech-touch, self-serve |
Tier Scoring Model
Customer Tier Score Calculation
TIER SCORE = (Value Score × 0.4) + (Potential Score × 0.35) + (Strategic Score × 0.25)
Value Score Components (0-100):
├── ARR percentile (50%)
├── Contract length (25%)
└── Payment reliability (25%)
Potential Score Components (0-100):
├── Growth trajectory (40%)
├── Seat expansion capacity (30%)
└── Product fit depth (30%)
Strategic Score Components (0-100):
├── Brand recognition (35%)
├── Reference potential (35%)
└── Market influence (30%)
Tier Assignment:
├── Tier 1 (Enterprise): Score 80-100
├── Tier 2 (Growth): Score 60-79
├── Tier 3 (Scale): Score 40-59
└── Tier 4 (Tech-touch): Score 0-39
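The weighted composite and tier bands translate directly to code; the weights are the 40/35/25 split from the model above, and the example inputs are hypothetical:

```python
def tier_score(value: float, potential: float, strategic: float) -> float:
    """Composite tier score from three 0-100 component scores."""
    return value * 0.40 + potential * 0.35 + strategic * 0.25

def tier(score: float) -> str:
    """Map a composite score to the tier bands above."""
    if score >= 80:
        return "Tier 1 (Enterprise)"
    if score >= 60:
        return "Tier 2 (Growth)"
    if score >= 40:
        return "Tier 3 (Scale)"
    return "Tier 4 (Tech-touch)"

# Hypothetical customer: value 70, potential 80, strategic 60 -> Tier 2
print(tier(tier_score(70, 80, 60)))
```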
Good Tier Assignment
Customer Tier Assessment: TechCorp Inc.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
VALUE SCORE: 79/100
├── ARR: $65,000 (68th percentile) → 68 pts
├── Contract: 2-year agreement → 80 pts
└── Payment: Always on time → 100 pts
Weighted: (68 × 0.50) + (80 × 0.25) + (100 × 0.25) = 79
POTENTIAL SCORE: 83/100
├── Growth: 25% user growth last year → 90 pts
├── Expansion: Using 40% of seats → 75 pts
└── Product fit: 8/10 use cases match → 80 pts
Weighted: (90 × 0.40) + (75 × 0.30) + (80 × 0.30) = 82.5 ≈ 83
STRATEGIC SCORE: 68/100
├── Brand: Known regional player → 60 pts
├── Reference: Willing, used once → 75 pts
└── Influence: 500 LinkedIn followers → 70 pts
Weighted: (60 × 0.35) + (75 × 0.35) + (70 × 0.30) = 68.25 ≈ 68
TOTAL TIER SCORE: (79 × 0.40) + (83 × 0.35) + (68 × 0.25) = 77.7
Tier Assignment: TIER 2 (Growth)
Engagement Model: Pooled CSM with proactive touchpoints
Rationale: Strong potential for expansion, moderate current value
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Bad Tier Assignment
Customer Tier: Enterprise
Reason: They asked for a dedicated CSM.
Problems:
✗ No objective criteria
✗ Based on customer request, not value
✗ No scoring methodology
✗ No potential assessment
✗ No strategic consideration
✗ Will lead to misallocated resources
Behavioral Segmentation
| Segment | Behavior Pattern | Typical Needs | Engagement Focus |
|---|---|---|---|
| Champions | High usage, high NPS, advocates | Expansion, recognition | Advocacy programs |
| Power Users | Heavy usage, feature depth | Advanced training | Feature betas |
| Steady State | Consistent, moderate usage | Efficiency, stability | Check-ins, optimization |
| Light Touch | Minimal engagement, still renews | Self-service, cost focus | Digital nurture |
| Expanding | Growing seats/usage | Onboarding, enablement | Growth support |
| Declining | Usage trending down | Intervention, value proof | Proactive outreach |
| At-Risk | Multiple churn signals | Rescue, retention | Save playbooks |
Segment-Specific Success Strategies
ENTERPRISE SEGMENT ($100K+ ARR)
Engagement Model:
├── Named Strategic CSM (1:10-15 ratio)
├── Dedicated Executive Sponsor
├── Quarterly Business Reviews
├── Annual Strategic Planning
└── Direct access to product leadership
Success Activities:
├── Monthly strategic check-ins
├── Bi-weekly operational reviews
├── Custom success plans
├── Early access to roadmap
└── Executive-level escalation path
Metrics Focus:
├── Value realization / ROI
├── Stakeholder satisfaction
├── Strategic alignment
├── Expansion pipeline
└── Reference/advocacy activity
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TECH-TOUCH SEGMENT (<$5K ARR)
Engagement Model:
├── Automated, digital-first
├── Community-based support
├── Self-service resources
└── Exception-based human touch
Success Activities:
├── Automated onboarding sequences
├── In-app guidance and tutorials
├── Community forum engagement
├── Triggered outreach (risk, expansion)
└── Scaled webinars and office hours
Metrics Focus:
├── Activation rate
├── Feature adoption
├── Support ticket volume
├── Self-service resolution
└── Upgrade conversion rate
Customer Matrix: Value vs Health
HIGH VALUE
│
┌───────────────────┼───────────────────┐
│ │ │
│ AT-RISK │ CHAMPIONS │
│ HIGH VALUE │ HIGH VALUE │
│ │ │
│ Strategy: │ Strategy: │
│ Save & retain │ Expand & grow │
│ Executive focus │ Advocacy focus │
│ │ │
────┼───────────────────┼───────────────────┼────
LOW │ │ │ HIGH
HEALTH │ │ HEALTH
────┼───────────────────┼───────────────────┼────
│ │ │
│ AT-RISK │ HEALTHY │
│ LOW VALUE │ LOW VALUE │
│ │ │
│ Strategy: │ Strategy: │
│ Evaluate ROI │ Self-serve │
│ Tech-touch save │ Upgrade path │
│ │ │
└───────────────────┼───────────────────┘
│
LOW VALUE
Segment Migration Tracking
| From Tier | To Tier | Trigger | Action |
|---|---|---|---|
| SMB → Mid-Market | ARR >$25K | Auto-upgrade | Assign CSM |
| Mid-Market → Enterprise | ARR >$100K | Manual review | Strategic CSM assignment |
| Any → At-Risk | Health <40 | Auto-flag | Escalation playbook |
| At-Risk → Healthy | Health >60 for 60 days | Auto-restore | Return to normal model |
| Declining → Churned | Cancellation | Manual process | Win-back eligibility |
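The automated triggers in this table can be evaluated mechanically. This sketch assumes the tier names and thresholds as given; the manual-review and win-back rows would route to humans rather than return an action:

```python
def migration_check(tier: str, arr: float, health: float,
                    healthy_days: int) -> str:
    """Evaluate tier-migration triggers; returns an action or ''."""
    if tier == "SMB" and arr > 25_000:
        return "Upgrade to Mid-Market: assign CSM"
    if tier == "Mid-Market" and arr > 100_000:
        return "Flag for Enterprise review"
    if health < 40:
        return "Flag At-Risk: run escalation playbook"
    if tier == "At-Risk" and health > 60 and healthy_days >= 60:
        return "Restore to normal engagement model"
    return ""
```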
Resource Allocation by Segment
| Resource | Enterprise | Mid-Market | SMB | Tech-Touch |
|---|---|---|---|---|
| CSM Ratio | 1:10-15 | 1:30-50 | 1:100-200 | 1:1000+ |
| QBR Frequency | Quarterly | Semi-annual | Annual | None |
| Proactive Outreach | Monthly | Bi-monthly | Quarterly | Triggered only |
| Executive Access | Direct | Escalation | None | None |
| Custom Success Plan | Yes | Template | Self-service | None |
| Priority Support | Yes | Enhanced | Standard | Community |
Segmentation Dashboard
Segmentation Overview
┌─────────────────────────────────────────────────────────────────┐
│ TIER DISTRIBUTION │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Tier Customers ARR Health Avg NRR │
│ ───────────────────────────────────────────────────────────── │
│ Enterprise 48 $9.8M 78 125% │
│ Mid-Market 156 $8.2M 72 112% │
│ SMB 187 $4.8M 68 98% │
│ Tech-Touch 412 $2.2M 62 92% │
│ │
│ Total 803 $25.0M 68 108% │
│ │
├─────────────────────────────────────────────────────────────────┤
│ TIER MOVEMENT (Last Quarter) │
│ │
│ ↑ Upgraded: 34 customers (+$1.2M ARR impact) │
│ ↓ Downgraded: 12 customers (-$380K ARR impact) │
│ → Churned: 28 customers (-$520K ARR impact) │
│ ★ New: 67 customers (+$890K ARR impact) │
│ │
├─────────────────────────────────────────────────────────────────┤
│ TIER-SPECIFIC ALERTS │
│ │
│ Enterprise: 2 accounts at-risk (need exec attention) │
│ Mid-Market: 8 accounts approaching Enterprise threshold │
│ SMB: 15 accounts declining, intervention needed │
│ Tech-Touch: Upgrade candidates identified (12 accounts) │
│ │
└─────────────────────────────────────────────────────────────────┘
Segmentation Implementation Checklist
□ Segment Definition
□ Clear criteria for each tier
□ Scoring methodology documented
□ Thresholds validated against data
□ Edge case handling defined
□ Data Requirements
□ Value metrics available
□ Potential indicators tracked
□ Strategic scoring inputs defined
□ Automated calculation possible
□ Engagement Models
□ CSM ratios defined per tier
□ Touchpoint cadence specified
□ Resource allocation approved
□ Escalation paths documented
□ Migration Rules
□ Upgrade triggers defined
□ Downgrade criteria specified
□ Review process for changes
□ Customer communication plan
□ Technology Setup
□ Tier field in CRM
□ Automated tier calculation
□ CSM assignment automation
□ Reporting by segment
□ Team Readiness
□ CSMs understand segment strategies
□ Playbooks exist per segment
□ Training completed
□ Metrics tracked by segment
Anti-Patterns
- ARR-only tiers — Ignoring potential and strategic value
- Manual assignment — Subjective, inconsistent tiering
- Static segmentation — Not updating as customers change
- One-size engagement — Same model regardless of tier
- Segment leakage — Enterprise service for SMB pricing
- Ignoring potential — Only looking at current value
- No migration path — Customers stuck in initial tier
- Resource mismatch — High-touch for low-value, or vice versa
title: Usage Analytics & Adoption Metrics
impact: HIGH
tags: usage-analytics, adoption-metrics, engagement, product-analytics
Usage Analytics & Adoption Metrics
Impact: HIGH
Usage data is the most honest signal of customer health. Customers can tell you they're happy while silently disengaging — usage data tells the real story. Effective usage analytics separate healthy accounts from future churns 60-90 days in advance.
The Usage Analytics Hierarchy
┌──────────────────────────────────────────────────────────────────┐
│ USAGE ANALYTICS HIERARCHY │
├──────────────────────────────────────────────────────────────────┤
│ │
│ Level 1: ACTIVITY │
│ └── Are they logging in? │
│ Metrics: DAU, WAU, MAU, session count │
│ │
│ Level 2: ENGAGEMENT │
│ └── What are they doing? │
│ Metrics: Actions per session, time in app, feature usage │
│ │
│ Level 3: ADOPTION │
│ └── Are they using core features? │
│ Metrics: Feature adoption %, key workflow completion │
│ │
│ Level 4: VALUE │
│ └── Are they achieving outcomes? │
│ Metrics: Goals completed, ROI realized, business impact │
│ │
└──────────────────────────────────────────────────────────────────┘
Key Usage Metrics
| Metric | Definition | Formula | Target |
|---|---|---|---|
| DAU/MAU | Stickiness ratio | Daily active / Monthly active | 25-40% |
| L7/L30 | Weekly engagement | 7-day active / 30-day active | 40-60% |
| Sessions/User/Week | Usage frequency | Weekly sessions / Active users | 3-5+ |
| Actions per Session | Usage depth | Total actions / Sessions | 10-20+ |
| Feature Adoption Rate | Breadth | Features used / Available features | 40-60% |
| Power User % | Top engagement | Users >80th percentile / Total | 15-25% |
| Dormant % | Inactive accounts | No login 30+ days / Total | <10% |
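A sketch of two of these checks (DAU/MAU stickiness and dormant share) computed from raw login dates, assuming a per-user list of login days as the export shape:

```python
from datetime import date, timedelta

def usage_metrics(logins: dict, today: date) -> dict:
    """Compute DAU/MAU and dormant % from {user: [login dates]}."""
    month_ago = today - timedelta(days=30)
    dau = sum(1 for days in logins.values() if today in days)
    mau = sum(1 for days in logins.values()
              if any(d > month_ago for d in days))
    dormant = sum(1 for days in logins.values()
                  if not any(d > month_ago for d in days))
    total = len(logins)
    return {
        "dau_mau": dau / mau if mau else 0.0,        # target: 25-40%
        "dormant_pct": dormant / total if total else 0.0,  # target: <10%
    }
```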
Feature Adoption Framework
Feature Classification:
CORE (Must Use) EXPANSION (Growth) ADVANCED (Power)
├── Essential to ├── Multiplies value ├── Differentiating
│ basic value │ │ capabilities
├── Onboarding focus ├── Growth milestone ├── Power user features
├── 100% adoption ├── 40-60% adoption ├── 15-25% adoption
│ target │ target │ target
│ │ │
Examples: Examples: Examples:
- CRM: Contact mgmt - CRM: Automations - CRM: Custom objects
- Analytics: Dashboards - Analytics: Alerts - Analytics: API access
- Support: Tickets - Support: Self-service - Support: Integrations
Good Usage Dashboard Design
Customer Usage Dashboard: Acme Corp
┌─────────────────────────────────────────────────────────────────┐
│ OVERALL HEALTH Score: 72 │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ACTIVITY METRICS (Last 30 Days) │
│ ├── Active Users: 45 of 60 licensed (75%) │
│ ├── DAU/MAU: 28% (industry avg: 25%) │
│ ├── Sessions/User/Week: 3.2 (↓ from 4.1 last month) │
│ └── Trend: ⚠ Declining (-22% MoM) │
│ │
│ FEATURE ADOPTION │
│ ├── Core Features: ████████████████░░░░ 82% │
│ ├── Expansion Features: ████████████░░░░░░░░ 58% │
│ └── Advanced Features: ████░░░░░░░░░░░░░░░░ 21% │
│ │
│ TOP FEATURES BY USAGE │
│ 1. Dashboard views ████████████████████ 2,340 │
│ 2. Report exports ████████████████ 1,856 │
│ 3. Alert configuration ████████████ 1,247 │
│ 4. Team collaboration ████████ 892 │
│ 5. API calls ██████ 634 │
│ │
│ USER SEGMENTS │
│ ├── Power Users (5+/wk): 12 users (27%) │
│ ├── Regular (2-4/wk): 23 users (51%) │
│ ├── Light (1/wk): 7 users (16%) │
│ └── Dormant (0/wk): 3 users (7%) │
│ │
│ ⚠ ALERT: Usage declining 22% — recommend CSM outreach │
│ │
└─────────────────────────────────────────────────────────────────┘
Bad Usage Dashboard Design
Usage Report: Acme Corp
Total Logins: 12,456
Total Actions: 89,234
Features Available: 47
Features Used: 31
Problems:
✗ All-time totals, not recent activity
✗ No trend information
✗ No context (vs. baseline, vs. peers)
✗ No user-level breakdown
✗ No actionable insights
✗ Missing dormant user identification
✗ No health score integration
Usage Pattern Analysis
| Pattern | Definition | Health Signal | Action |
|---|---|---|---|
| Steady High | Consistent strong usage | Healthy | Expansion |
| Growing | Increasing over time | Very Healthy | Case study |
| Plateau | Stable but not growing | Neutral | Feature adoption push |
| Declining | Decreasing over time | At Risk | Intervention |
| Sporadic | Inconsistent engagement | Warning | Usage training |
| Concentrated | Few power users | Risk | Broaden adoption |
| Dormant | No recent activity | Critical | Re-activation |
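One way to map a recent monthly session series onto the pattern labels above. The thresholds (20% growth/decline bands, spread-vs-mean test for sporadic usage) are illustrative assumptions, not the skill's exact rules:

```python
def classify_pattern(monthly_sessions):
    """Label a recent monthly session series with a usage pattern."""
    if monthly_sessions[-1] == 0:
        return "Dormant"
    first, last = monthly_sessions[0], monthly_sessions[-1]
    mean = sum(monthly_sessions) / len(monthly_sessions)
    spread = max(monthly_sessions) - min(monthly_sessions)
    if spread > mean:                 # swings larger than the average level
        return "Sporadic"
    if last >= first * 1.2:
        return "Growing"
    if last <= first * 0.8:
        return "Declining"
    return "Steady High" if mean >= 12 else "Plateau"

pattern = classify_pattern([15, 10, 6])   # steady quarter-long drop
```

Declining and Sporadic are the actionable buckets: the first routes to intervention, the second to usage training.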
User Segmentation by Usage
| Segment | Definition | % of Users | Strategy |
|---|---|---|---|
| Champions | Daily use, high depth, advocates | 10-15% | Expand, case studies |
| Power Users | Frequent use, feature breadth | 15-25% | Feature adoption |
| Regular Users | Consistent weekly use | 30-40% | Habit formation |
| Casual Users | Monthly, light use | 15-25% | Increase engagement |
| At-Risk | Declining usage | 10-15% | Re-engagement |
| Dormant | No use 30+ days | 5-10% | Reactivation |
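The segment table can be applied per user with a few ordered cutoffs. A sketch where the boundaries loosely mirror the table; exact thresholds and the advocate flag are assumptions:

```python
def segment_user(sessions_per_week, trend_pct, days_since_last,
                 is_advocate=False):
    """Bucket one user into the usage segments above, most severe first."""
    if days_since_last >= 30:
        return "Dormant"
    if trend_pct <= -25:              # material usage decline
        return "At-Risk"
    if sessions_per_week >= 5:
        return "Champion" if is_advocate else "Power User"
    if sessions_per_week >= 2:
        return "Regular User"
    return "Casual User"
```

Checking severity in order matters: a dormant advocate is still dormant, and re-activation precedes any expansion play.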
Adoption Milestone Tracking
Customer Journey: Feature Adoption Milestones
Day 1: First Login ✓
Day 3: Complete profile setup ✓
Day 7: Create first [core object] ✓
Day 14: Invite team member ✓
Day 21: Set up first automation ○ ← Not completed
Day 30: Export first report ○
Day 45: Configure integration ○
Day 60: Build custom dashboard ○
Adoption Score: 50% (4 of 8 milestones)
Status: On track but automation milestone overdue
Recommendation:
- Schedule enablement session for automation setup
- Automation adoption correlates with 2.3x higher retention
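The milestone journey above can be scored mechanically: completed over total, with anything past its target day and still incomplete flagged as overdue. Milestone names here are placeholders:

```python
# (milestone, target day) pairs mirroring the journey above
MILESTONES = [
    ("first_login", 1), ("profile_setup", 3), ("first_core_object", 7),
    ("invite_team", 14), ("first_automation", 21), ("first_report_export", 30),
    ("integration", 45), ("custom_dashboard", 60),
]

def adoption_status(completed, account_age_days):
    """Score = completed / total; overdue = target day passed, not done."""
    done = sum(1 for m, _ in MILESTONES if m in completed)
    overdue = [m for m, day in MILESTONES
               if m not in completed and account_age_days > day]
    return {"score": done / len(MILESTONES), "overdue": overdue}

s = adoption_status({"first_login", "profile_setup", "first_core_object",
                     "invite_team"}, account_age_days=25)
```

For the day-25 account above, only the automation milestone is overdue; the later milestones haven't been missed yet, so the recommendation targets automation enablement specifically.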
Usage Benchmarking
| Metric | Your Average | Industry 25th | Industry 50th | Industry 75th |
|---|---|---|---|---|
| DAU/MAU | 28% | 18% | 25% | 35% |
| Feature Adoption | 52% | 35% | 48% | 62% |
| Sessions/Week | 3.2 | 2.0 | 3.5 | 5.0 |
| Power User % | 22% | 12% | 20% | 30% |
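Placing your average against the quartile columns is a three-boundary lookup. A minimal sketch; the band names are assumptions:

```python
from bisect import bisect_right

def benchmark_band(value, p25, p50, p75):
    """Place a metric relative to the industry quartiles in the table."""
    bands = ["bottom quartile", "below median", "above median", "top quartile"]
    return bands[bisect_right([p25, p50, p75], value)]

# DAU/MAU row from the table: 28% vs. 18 / 25 / 35
band = benchmark_band(28, 18, 25, 35)
```

This makes the Sessions/Week row easy to spot as the outlier: 3.2 against a median of 3.5 lands below median even though every other metric is above.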
Alert Configuration
| Trigger | Threshold | Severity | Action |
|---|---|---|---|
| No login | 14+ days | Warning | Automated re-engagement email |
| No login | 30+ days | High | CSM outreach |
| Usage decline | >25% MoM | High | CSM intervention |
| Usage decline | >50% MoM | Critical | Manager escalation |
| Key user inactive | 7+ days | High | Immediate outreach |
| Feature abandonment | Core feature unused 14+ days | Medium | Usage training |
| Seat utilization | <50% active | Medium | License optimization |
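The trigger table translates to a rule evaluator over per-account fields. A sketch covering the inactivity, decline, and seat-utilization rows; the dict keys are assumed names, and the thresholds come from the table:

```python
def evaluate_alerts(acct):
    """Return (severity, action) alerts for one account dict."""
    alerts = []
    d = acct.get("days_since_login", 0)
    if d >= 30:
        alerts.append(("High", "CSM outreach"))
    elif d >= 14:
        alerts.append(("Warning", "Automated re-engagement email"))
    decline = acct.get("usage_decline_mom", 0.0)   # 0.25 = 25% MoM drop
    if decline > 0.50:
        alerts.append(("Critical", "Manager escalation"))
    elif decline > 0.25:
        alerts.append(("High", "CSM intervention"))
    if acct.get("active_seat_ratio", 1.0) < 0.50:
        alerts.append(("Medium", "License optimization"))
    return alerts

flags = evaluate_alerts({"days_since_login": 18, "usage_decline_mom": 0.6,
                         "active_seat_ratio": 0.4})
```

Note the `elif` tiering: a 30-day-inactive account fires only the High alert, not a redundant Warning underneath it.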
Good Usage Analysis
Usage Deep Dive: Declining Account
Account: TechCorp Inc.
Health Score: 48 (was 72 three months ago)
Usage Trend: -34% over 90 days
Root Cause Analysis:
1. Champion Departure (Primary)
- Sarah Chen (main user, 45% of all activity) left company
- Remaining users haven't increased usage
- No new champion identified
2. Feature Concentration Risk
- 80% of usage was in 2 features
- Those features are now unused
- Other features never adopted
3. Team Turnover
- 3 of 8 licensed users are new (last 60 days)
- New users have not completed onboarding
- No enablement sessions scheduled
Recommendations:
1. Schedule call with new stakeholder to identify champion
2. Arrange onboarding for 3 new users
3. Feature adoption push for underutilized capabilities
4. Consider usage-based pricing adjustment if team shrinks further
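The champion-departure failure mode above is detectable before the champion leaves, by measuring how concentrated activity is in one user. A sketch; the 40% threshold and field shape are assumptions:

```python
def concentration_risk(activity_by_user, threshold=0.40):
    """Flag accounts where one user drives most activity (champion risk)."""
    total = sum(activity_by_user.values())
    if total == 0:
        return {"top_user": None, "share": 0.0, "at_risk": False}
    top_user = max(activity_by_user, key=activity_by_user.get)
    share = activity_by_user[top_user] / total
    return {"top_user": top_user, "share": round(share, 2),
            "at_risk": share >= threshold}

# TechCorp-style account: one user carries 45% of activity
r = concentration_risk({"sarah": 450, "mike": 300, "jen": 250})
```

An account flagged here warrants a second-champion play while the first champion is still around, which is far cheaper than the re-engagement program TechCorp now needs.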
Usage Metrics Collection Checklist
□ Activity Tracking
□ Login events with timestamp
□ Session duration
□ User identification
□ Device/platform tracking
□ Engagement Tracking
□ Feature usage events
□ Actions per session
□ Time spent per feature
□ Navigation patterns
□ Adoption Tracking
□ Feature first-use detection
□ Milestone completion
□ Workflow completion rates
□ Feature breadth score
□ Aggregations
□ Daily/weekly/monthly rollups
□ User-level aggregations
□ Account-level rollups
□ Trend calculations
□ Alerting
□ Inactivity alerts
□ Decline alerts
□ Anomaly detection
□ Threshold breach notifications
□ Visualization
□ Real-time dashboards
□ Historical trends
□ Cohort comparisons
□ Benchmark overlays
Anti-Patterns
- Vanity metrics — Total logins don't predict retention
- All-time totals — Recent activity matters more
- No user segmentation — Average usage hides problems
- Ignoring depth — Login without action isn't engagement
- Missing trends — Snapshots without trajectories
- No benchmarks — Can't assess without comparison
- Feature obsession — Activity without value delivery
- Data silos — Usage disconnected from health scores