
When usage drops or engagement stalls, /customer-health-analyst scores every account so you can intervene before churn.

A Claude Skill for Claude Code by Nick Jensen — run /customer-health-analyst in Claude.

Compatible with ChatGPT · Claude · Gemini · OpenClaw

Score account health, flag churn risk, and surface at-risk cohorts.

  • Multi-signal health scoring across product usage, support tickets, and NPS
  • Cohort-level churn prediction with configurable risk thresholds
  • Executive dashboards with drill-down by segment, tier, and CSM
  • Automated at-risk account alerts with recommended next actions
  • Usage trend analysis with week-over-week and month-over-month deltas

Use cases

Weekly health review

Run /customer-health-analyst with your usage data export to flag the 15-20% of accounts showing early churn signals before your Monday CS standup.

Board-ready retention reporting

Use /customer-health-analyst to generate executive dashboards showing GRR trends, cohort retention curves, and logo churn by segment — ready for quarterly board decks.

Proactive save campaigns

Feed /customer-health-analyst your product telemetry to identify accounts with 30%+ usage decline, then trigger CSM outreach before renewal conversations.

Post-onboarding activation audit

Run /customer-health-analyst on accounts 30-60 days post-launch to catch those stuck below activation thresholds and route them to onboarding specialists.

How it works

1. Ingest account data — product usage logs, support ticket history, NPS responses, and billing events — into a unified health model.

2. Calculate composite health scores using weighted signals: login frequency, feature adoption depth, support sentiment, and expansion velocity.

3. Segment accounts into health tiers (green / yellow / red) with configurable thresholds tuned to your churn history.

4. Generate cohort-level trends and individual account cards with specific risk drivers and recommended interventions.

5. Output executive dashboards, CSM action lists, and alert triggers for integration into your existing workflows.
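The five steps above can be sketched end to end in Python. The column names, signal weights, and per-signal scores here are illustrative assumptions, not the skill's actual model — tune all of them against your own churn history:

```python
# Minimal sketch of the ingest -> score -> tier flow.
# Weights mirror the 35/25/20/20 split described later; purely illustrative.
WEIGHTS = {"usage": 0.35, "engagement": 0.25, "growth": 0.20, "support": 0.20}

def composite_score(signals):
    """Weighted sum of per-signal scores, each already normalized to 0-100."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

def health_tier(score):
    """Map a composite score onto green/yellow/red (thresholds configurable)."""
    if score >= 70:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"

# Hypothetical per-signal scores for two accounts.
accounts = {
    "ACME-Corp": {"usage": 95, "engagement": 90, "growth": 92, "support": 88},
    "Delta-Co":  {"usage": 5,  "engagement": 10, "growth": 15, "support": 20},
}

for name, signals in accounts.items():
    score = composite_score(signals)
    print(f"{name}: {score:.0f}/100 ({health_tier(score)})")
    # ACME-Corp: 92/100 (green)
    # Delta-Co: 11/100 (red)
```

The interesting design decision is the weight vector: it should be fit against historical churn outcomes, not chosen by intuition.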

Example

Account usage data (CSV)
account_id,mrr,logins_30d,features_used,open_tickets,nps_score,days_since_last_login
ACME-Corp,12000,45,8,1,9,2
Beta-Inc,8500,3,2,4,-1,18
Gamma-Ltd,22000,28,5,0,7,5
Delta-Co,6000,0,1,6,-3,35

Health analysis report
Health Scores
ACME-Corp: 92/100 (Green) — Strong adoption, low support load
Gamma-Ltd: 74/100 (Yellow) — Moderate usage, feature adoption below tier average
Beta-Inc: 31/100 (Red) — 3 logins in 30d, 4 open tickets, negative NPS
Delta-Co: 12/100 (Red) — Zero logins in 35 days, 6 open tickets, detractor
Recommended Actions
Beta-Inc: Schedule executive sponsor call this week. Assign onboarding specialist to re-activate core workflows.
Delta-Co: Escalate to VP CS immediately. Account shows full disengagement pattern — likely evaluating alternatives.

Metrics this improves

Activation Rate: +15-25% (Customer Success)
Churn Rate: -25-40% (Customer Success)

Customer Health Analyst

Expert guidance for customer health scoring, predictive analytics, and data-driven customer success strategies. Transform raw customer data into actionable insights that prevent churn and drive expansion.

Philosophy

Customer health is not a single metric — it's a predictive system:

  1. Measure what matters — Health scores should predict outcomes, not just track activity
  2. Lead, don't lag — Focus on indicators that predict churn before it's too late
  3. Segment for action — Different customers need different interventions
  4. Automate detection — Scale health monitoring across your entire customer base
  5. Close the loop — Analytics without action is just expensive data collection

How This Skill Works

When invoked, apply the guidelines in rules/ organized by:

  • health-* — Health score design, weighting, and calibration
  • indicators-* — Leading vs lagging indicator analysis
  • churn-* — Prediction modeling and early warning systems
  • usage-* — Analytics and adoption metrics
  • risk-* — Identification, escalation, and intervention
  • data-* — Enrichment and customer 360 development
  • cohort-* — Analysis and benchmarking
  • executive-* — Reporting and dashboards
  • segmentation-* — Customer tiers and scoring models

Core Frameworks

The Health Score Hierarchy

┌─────────────────────────────────────────────────────────────────┐
│                    COMPOSITE HEALTH SCORE                       │
│                         (0-100)                                 │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐       │
│  │ PRODUCT  │  │ENGAGEMENT│  │ GROWTH   │  │ SUPPORT  │       │
│  │  USAGE   │  │          │  │ SIGNALS  │  │ HEALTH   │       │
│  │  (35%)   │  │  (25%)   │  │  (20%)   │  │  (20%)   │       │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘       │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│                    COMPONENT METRICS                            │
│                                                                 │
│  Usage:        Engagement:    Growth:        Support:          │
│  - DAU/MAU     - NPS score    - Seat trend   - Ticket volume   │
│  - Features    - CSM meetings - Usage trend  - Resolution time │
│  - Depth       - Email opens  - Expansion    - Sentiment       │
│  - Breadth     - Logins       - Contract     - Escalations     │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Leading vs Lagging Indicators

Type        Definition               Examples                        Action Window
──────────────────────────────────────────────────────────────────────────────────
Leading     Predict future outcomes  Usage decline, engagement drop  60-90 days
Coincident  Move with outcomes       Support sentiment, NPS          30-60 days
Lagging     Confirm after the fact   Churn, revenue loss             Too late

Customer Health States

┌─────────────────────────────────────────────────────────────────┐
│                                                                 │
│  THRIVING ──→ HEALTHY ──→ NEUTRAL ──→ AT-RISK ──→ CRITICAL    │
│    (85+)      (70-84)     (50-69)     (30-49)      (<30)       │
│                                                                 │
│  Expand       Monitor     Engage      Intervene    Escalate    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
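A minimal lookup from composite score to state and default action, using the thresholds in the diagram above (the floors are configurable, as elsewhere in this skill):

```python
# Thresholds taken from the health-states diagram; adjust to your calibration.
STATES = [
    (85, "THRIVING", "Expand"),
    (70, "HEALTHY", "Monitor"),
    (50, "NEUTRAL", "Engage"),
    (30, "AT-RISK", "Intervene"),
]

def health_state(score):
    """Return (state, action) for a 0-100 composite health score."""
    for floor, state, action in STATES:
        if score >= floor:
            return state, action
    return "CRITICAL", "Escalate"  # anything below 30
```

For example, `health_state(74)` returns `("HEALTHY", "Monitor")`.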

Health Score Components

Component       Weight  Key Metrics                           Why It Matters
─────────────────────────────────────────────────────────────────────────────────────────
Product Usage   30-40%  DAU/MAU, feature adoption, depth      Usage predicts value realization
Engagement      20-25%  NPS, CSM contact, responsiveness      Relationship strength indicator
Growth Signals  15-20%  Seat expansion, usage trend           Investment signals commitment
Support Health  15-20%  Ticket volume, sentiment, resolution  Frustration predicts churn
Financial       5-10%   Payment history, contract length      Financial commitment level

Churn Risk Factors

Factor                  Risk Weight  Detection Method
───────────────────────────────────────────────────────────
Champion departure      Critical     Contact tracking, LinkedIn
Usage decline >30%      High         Product analytics
Negative NPS (0-6)      High         Survey responses
Support escalations     High         Ticket analysis
Missed renewal meeting  High         CSM activity tracking
Contract downgrade      Very High    Billing data
Competitor mentions     High         Call transcripts, tickets
Budget review mentions  Medium       CSM notes

The Analytics Stack

Layer          Purpose                Tools/Methods
──────────────────────────────────────────────────────────
Collection     Gather raw data        Product events, CRM, support
Processing     Clean and transform    ETL, data pipelines
Calculation    Compute scores         Scoring algorithms
Storage        Historical tracking    Data warehouse
Visualization  Present insights       Dashboards, reports
Action         Trigger interventions  Alerting, automation

Key Metrics

Metric                         Formula                         Target
──────────────────────────────────────────────────────────────────────────
Health Score Accuracy          Churn predicted / Actual churn  >70%
Leading Indicator Correlation  Correlation to outcomes         >0.6
Score Distribution             % in each health tier           Bell curve
Intervention Success Rate      Saved / Intervened              >40%
Time to Detection              Days before risk → action       <14 days
False Positive Rate            False alerts / Total alerts     <20%

Executive Dashboard KPIs

KPI                      Definition                         Benchmark
─────────────────────────────────────────────────────────────────────
Gross Revenue Retention  Retained ARR / Starting ARR        85-95%
Net Revenue Retention    (Retained + Expansion) / Starting  100-130%
Logo Retention           Retained customers / Starting      90-95%
Health Score Average     Mean across customer base          65-75
At-Risk Revenue          ARR with health <50                <15%
Expansion Rate           Customers expanded / Total         15-30%
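GRR and NRR fall directly out of the definitions above. A short sketch with made-up ARR figures (the dollar amounts are illustrative only):

```python
def grr(starting_arr, churned_arr, downgraded_arr):
    """Gross Revenue Retention: retained ARR / starting ARR (expansion excluded)."""
    return (starting_arr - churned_arr - downgraded_arr) / starting_arr

def nrr(starting_arr, churned_arr, downgraded_arr, expansion_arr):
    """Net Revenue Retention: (retained + expansion) ARR / starting ARR."""
    return (starting_arr - churned_arr - downgraded_arr + expansion_arr) / starting_arr

# Illustrative: $1M starting ARR, $60K churned, $20K downgraded, $150K expansion.
print(grr(1_000_000, 60_000, 20_000))           # 0.92 -> inside the 85-95% band
print(nrr(1_000_000, 60_000, 20_000, 150_000))  # 1.07 -> inside the 100-130% band
```

Note that NRR can exceed 100% even while GRR loses revenue, which is why boards usually want both.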

Cohort Analysis Framework

Cohort Type  Segments By             Use Case
──────────────────────────────────────────────────
Time-based   Sign-up month/quarter   Retention trends
Behavioral   Feature usage patterns  Activation success
Value-based  ARR tier                Segment economics
Industry     Vertical                Product-market fit
Acquisition  Channel/source          Marketing efficiency

Anti-Patterns

  • Vanity health scores — Scores that look good but don't predict outcomes
  • Over-weighted product usage — Ignoring relationship and sentiment signals
  • Lagging indicator focus — Measuring what already happened
  • One-size-fits-all thresholds — Same scores mean different things for different segments
  • Manual-only health tracking — Can't scale without automation
  • Score without action — Calculating risk without intervention playbooks
  • Annual calibration only — Health models need continuous refinement
  • Ignoring data quality — Garbage in, garbage out

Reference documents


title: Section Organization

1. Health Score Design (health)

Impact: CRITICAL Description: Health score architecture, component selection, weight assignment, scoring algorithms, threshold calibration, and model validation.

2. Leading vs Lagging Indicators (indicators)

Impact: CRITICAL Description: Indicator identification, predictive signal development, correlation analysis, signal prioritization, and action trigger design.

3. Churn Prediction (churn)

Impact: CRITICAL Description: Prediction model development, feature engineering, risk scoring, early warning systems, and intervention timing optimization.

4. Usage Analytics (usage)

Impact: HIGH Description: Engagement measurement, feature adoption tracking, usage patterns, behavioral analysis, and adoption benchmarking.

5. Risk Identification (risk)

Impact: CRITICAL Description: Risk signal detection, escalation frameworks, intervention playbooks, stakeholder communication, and save strategies.

6. Data Enrichment (data)

Impact: HIGH Description: Data source integration, enrichment strategies, data quality management, 360-degree customer view, and data governance.

7. Cohort Analysis (cohort)

Impact: HIGH Description: Cohort definition, retention curve analysis, comparative benchmarking, segment performance, and trend identification.

8. Executive Reporting (executive)

Impact: HIGH Description: KPI selection, dashboard design, data storytelling, executive presentations, and board reporting.

9. Segmentation & Scoring (segmentation)

Impact: MEDIUM-HIGH Description: Customer tier definition, behavioral clustering, value-based segmentation, scoring model design, and segment-specific strategies.


title: Churn Prediction Modeling
impact: CRITICAL
tags: churn-prediction, machine-learning, risk-scoring, early-warning

Churn Prediction Modeling

Impact: CRITICAL

Effective churn prediction gives you 60-90 days of lead time to intervene. A well-calibrated model can reduce churn by 15-30% by enabling proactive outreach to at-risk accounts before they decide to leave.

The Churn Prediction Pipeline

┌──────────────────────────────────────────────────────────────────┐
│                    CHURN PREDICTION PIPELINE                     │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  DATA           FEATURES         MODEL           SCORING        │
│  COLLECTION  →  ENGINEERING  →   TRAINING   →   & ALERTS       │
│                                                                  │
│  • Product       • Usage decay    • Logistic     • Daily risk   │
│  • CRM           • Engagement     • Random       • Threshold    │
│  • Support       • Sentiment      • XGBoost      • Routing      │
│  • Financial     • Growth         • Neural       • Actions      │
│                                                                  │
├──────────────────────────────────────────────────────────────────┤
│                    FEEDBACK LOOP                                 │
│                                                                  │
│           Actual Outcomes → Model Refinement → Improved Accuracy │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Feature Categories for Churn Models

Category        Features                                       Predictive Value
───────────────────────────────────────────────────────────────────────────────
Usage Metrics   DAU/MAU, feature adoption, session depth       High
Usage Trends    30/60/90-day slopes, velocity changes          Very High
Engagement      NPS, CSM touchpoints, email responsiveness     High
Support         Ticket volume, sentiment, escalations          High
Financial       Payment issues, contract length, pricing tier  Medium
Organizational  Champion status, stakeholder changes           High
Firmographics   Company size, industry, growth stage           Medium
Temporal        Tenure, contract timing, seasonality           Medium

Good Feature Engineering

Feature: Usage Velocity (30-Day)

Definition:
velocity_30d = (usage_current - usage_30d_ago) / usage_30d_ago

Why It's Predictive:
- Captures direction AND magnitude of change
- Declining velocity precedes churn by 60-90 days
- More predictive than static usage levels

Implementation:
SELECT
  customer_id,
  (current_usage - lag_30d_usage) / NULLIF(lag_30d_usage, 0) as velocity_30d
FROM customer_usage
WHERE lag_30d_usage > 0

Feature Distribution:
- Retained customers: mean velocity = +0.05
- Churned customers: mean velocity = -0.28
- Separation is clear and actionable
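The same feature in plain Python, mirroring the SQL above (the zero-guard stands in for NULLIF; the sample inputs are chosen to land on the stated cohort means):

```python
def velocity_30d(current_usage, usage_30d_ago):
    """(current - prior) / prior, as in the SQL above; None when prior is zero."""
    if usage_30d_ago == 0:
        return None  # no baseline -> velocity undefined (NULLIF behavior)
    return (current_usage - usage_30d_ago) / usage_30d_ago

print(velocity_30d(105, 100))  # 0.05  (typical retained account)
print(velocity_30d(36, 50))    # -0.28 (typical churned account)
```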

Bad Feature Engineering

Feature: Total Logins (All-Time)

Problems:
✗ Doesn't account for tenure
✗ No directional information
✗ Old customers always score higher
✗ Not predictive of future behavior

Better Alternative:
- Login frequency (logins per week)
- Login trend (this month vs. last month)
- Days since last login

Feature Reality:
- Retained customers: mean = 1,247 logins
- Churned customers: mean = 892 logins
- Overlap is massive, low predictive value

Model Selection Guide

Model Type           Pros                              Cons                           Best For
──────────────────────────────────────────────────────────────────────────────────────────────────────────────
Logistic Regression  Interpretable, fast               Less accurate                  Baseline, regulated industries
Random Forest        Handles non-linear, robust        Less interpretable             Medium datasets
XGBoost              High accuracy, handles imbalance  Complex tuning                 Large datasets, accuracy focus
Neural Network       Captures complex patterns         Black box, needs lots of data  Very large datasets
Survival Analysis    Time-to-event prediction          Specialized                    When timing matters

Model Training Process

Step 1: Data Preparation
├── Define churn (90-day non-renewal? Contract cancellation?)
├── Set observation window (features from T-90 to T-0)
├── Set outcome window (churn in next 90 days)
└── Handle class imbalance (SMOTE, class weights)

Step 2: Feature Selection
├── Calculate feature importance (univariate)
├── Remove correlated features (>0.8 correlation)
├── Engineer interaction features
└── Normalize/standardize as needed

Step 3: Model Training
├── Split: 70% train, 15% validation, 15% test
├── Train multiple model types
├── Tune hyperparameters on validation set
└── Select best model by validation AUC

Step 4: Model Evaluation
├── Test set performance (AUC, precision, recall)
├── Calibration check (predicted vs. actual probabilities)
├── Feature importance review
└── Business metric simulation

Step 5: Deployment
├── Productionize scoring pipeline
├── Set up monitoring and alerts
├── Document model and features
└── Plan retraining schedule
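The split and class-weighting in Steps 1 and 3 can be made explicit in plain Python. In practice you would likely reach for scikit-learn's `train_test_split` and `class_weight='balanced'`; this sketch just spells out the arithmetic:

```python
import random

def split_70_15_15(rows, seed=42):
    """Deterministic shuffle, then a 70/15/15 train/validation/test split."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train, n_val = int(n * 0.70), int(n * 0.15)
    return rows[:n_train], rows[n_train:n_train + n_val], rows[n_train + n_val:]

def class_weights(labels):
    """Inverse-frequency weights to counter class imbalance (churn is rare).
    Equivalent in spirit to scikit-learn's class_weight='balanced'."""
    n, pos = len(labels), sum(labels)
    return {1: n / (2 * pos), 0: n / (2 * (n - pos))}

train, val, test = split_70_15_15(range(100))
print(len(train), len(val), len(test))      # 70 15 15
print(class_weights([1] * 10 + [0] * 90))   # churners weighted 9x non-churners
```

Fixing the shuffle seed keeps the split reproducible across retraining runs, which matters when comparing model versions.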

Model Performance Metrics

Metric     Formula                      Target  Interpretation
─────────────────────────────────────────────────────────────────────────
AUC-ROC    Area under ROC curve         >0.75   Discrimination ability
Precision  TP / (TP + FP)               >0.60   Of predicted churns, % correct
Recall     TP / (TP + FN)               >0.70   Of actual churns, % caught
F1 Score   2 × (P × R) / (P + R)        >0.65   Balanced accuracy
Lift       Model precision / Base rate  >3x     Improvement over random
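These formulas are one-liners from a confusion matrix; the counts below are hypothetical:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def lift(model_precision, base_rate):
    """How much better the flagged set is than flagging accounts at random."""
    return model_precision / base_rate

p, r = precision_recall(tp=30, fp=20, fn=12)  # hypothetical quarter of predictions
print(round(p, 2), round(r, 2), round(lift(p, base_rate=0.15), 1))
```

With a 15% base churn rate, a model whose flagged accounts churn 60% of the time has a lift of 4x, clearing the >3x target.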

Threshold Selection

Tradeoff: Precision vs. Recall

High Threshold (e.g., >0.7 probability):
✓ High precision (fewer false positives)
✗ Low recall (miss some actual churns)
→ Use when intervention is expensive

Low Threshold (e.g., >0.3 probability):
✓ High recall (catch more actual churns)
✗ Low precision (more false positives)
→ Use when missing churn is expensive

Optimal Threshold:
- Calculate cost of false positive (unnecessary intervention)
- Calculate cost of false negative (missed churn)
- Find threshold that minimizes total expected cost
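The cost-minimizing threshold described above can be found with a simple grid search. The probabilities, outcomes, and costs here are made up for illustration:

```python
def expected_cost(threshold, scored, cost_fp, cost_fn):
    """Total cost of alerting at `threshold` over (probability, churned) pairs:
    acting on a non-churner costs cost_fp; missing a churner costs cost_fn."""
    total = 0.0
    for prob, churned in scored:
        if prob >= threshold and not churned:
            total += cost_fp   # false positive: unnecessary intervention
        elif prob < threshold and churned:
            total += cost_fn   # false negative: missed churn
    return total

def best_threshold(scored, cost_fp, cost_fn):
    """Grid-search the threshold minimizing total expected cost."""
    grid = [i / 100 for i in range(1, 100)]
    return min(grid, key=lambda t: expected_cost(t, scored, cost_fp, cost_fn))
```

Because a missed churn usually costs far more than a wasted outreach call, the optimal threshold typically lands well below 0.5.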

Risk Tiering System

Tier      Probability  % of Customers  Action
───────────────────────────────────────────────────────────────────
Critical  >70%         5-10%           Immediate executive intervention
High      50-70%       10-15%          CSM manager involvement
Medium    30-50%       15-20%          CSM proactive outreach
Low       10-30%       30-40%          Standard monitoring
Minimal   <10%         20-30%          Expansion focus

Early Warning System Design

┌──────────────────────────────────────────────────────────────────┐
│                    EARLY WARNING SYSTEM                          │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Daily Scoring Pipeline:                                         │
│  ├── Pull latest customer data                                   │
│  ├── Calculate features                                          │
│  ├── Score all customers                                         │
│  └── Update risk tiers                                           │
│                                                                  │
│  Alert Triggers:                                                 │
│  ├── Risk tier change (e.g., Low → Medium)                      │
│  ├── Probability increase >20 points                             │
│  ├── Critical signals detected                                   │
│  └── Combination triggers                                        │
│                                                                  │
│  Alert Routing:                                                  │
│  ├── Critical → CSM + Manager + VP (Slack + Email)              │
│  ├── High → CSM + Manager (Email)                               │
│  ├── Medium → CSM (Dashboard + Email)                           │
│  └── Low → Dashboard only                                        │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Intervention Optimization

Lead Time        Intervention Success Rate  Recommended Actions
───────────────────────────────────────────────────────────────────
90+ days         55-65%                     Strategic value review
60-90 days       45-55%                     Executive engagement
30-60 days       30-40%                     Intensive support
<30 days         15-25%                     Save offer
At cancellation  5-15%                      Exit interview + win-back plan

Model Monitoring Dashboard

Churn Prediction Model Health

┌─────────────────────────────────────────────────────────────┐
│  MODEL PERFORMANCE (Rolling 90 Days)                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  AUC-ROC:          0.78 (target: >0.75)        ✓           │
│  Precision:        0.62 (target: >0.60)        ✓           │
│  Recall:           0.71 (target: >0.70)        ✓           │
│  Lift at 10%:      4.2x (target: >3x)          ✓           │
│                                                             │
├─────────────────────────────────────────────────────────────┤
│  PREDICTION ACCURACY                                        │
│                                                             │
│  Actual Churns:         47                                  │
│  Predicted (>50%):      38                                  │
│  Correctly Predicted:   33                                  │
│  Surprise Churns:       14                                  │
│  False Alarms:          5                                   │
│                                                             │
├─────────────────────────────────────────────────────────────┤
│  FEATURE IMPORTANCE (Top 5)                                 │
│                                                             │
│  1. Usage velocity (30d)      ████████████  28%            │
│  2. NPS trend                 ████████      19%            │
│  3. Support sentiment         ███████       15%            │
│  4. Champion engagement       ██████        13%            │
│  5. Feature adoption trend    █████         11%            │
│                                                             │
├─────────────────────────────────────────────────────────────┤
│  ALERTS                                                     │
│                                                             │
│  ⚠ Recall dropped 5% vs. prior period                      │
│  ⚠ Feature drift detected in usage metrics                 │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Model Maintenance Schedule

Activity             Frequency  Owner      Deliverable
───────────────────────────────────────────────────────────
Accuracy review      Weekly     Data team  Performance report
Feature drift check  Weekly     Data team  Drift alerts
Threshold review     Monthly    CS + Data  Updated thresholds
Full retraining      Quarterly  Data team  New model version
Feature review       Quarterly  CS + Data  Feature updates
Major overhaul       Annually   Data team  Architecture review

Churn Model Checklist

□ Data Quality
  □ Churn definition is clear and consistent
  □ Historical data covers 12+ months
  □ Feature data is complete and accurate
  □ Class imbalance addressed appropriately

□ Feature Engineering
  □ Features are predictive (tested)
  □ No data leakage (future info in features)
  □ Features are interpretable
  □ Trends included, not just levels

□ Model Development
  □ Train/validation/test split done properly
  □ Cross-validation used for tuning
  □ Multiple model types compared
  □ Hyperparameters optimized

□ Model Evaluation
  □ Performance meets targets
  □ Model is calibrated (probabilities accurate)
  □ No obvious bias by segment
  □ Business simulation validates value

□ Deployment
  □ Scoring pipeline automated
  □ Monitoring in place
  □ Alerts configured
  □ Documentation complete

□ Operations
  □ Retraining schedule defined
  □ Drift monitoring active
  □ Feedback loop from CS team
  □ Regular accuracy reviews

Anti-Patterns

  • Predicting the past — Data leakage giving false accuracy
  • One model fits all — Ignoring segment differences
  • Set and forget — Models decay without retraining
  • Ignoring false positives — Intervention fatigue from bad predictions
  • Probability as certainty — Treating 60% risk as definite churn
  • No action mapping — Predictions without intervention playbooks
  • Over-engineering — Complex models when simple works
  • Ignoring surprise churns — Not investigating model failures

title: Cohort Analysis & Benchmarking
impact: HIGH
tags: cohort-analysis, benchmarking, retention-curves, segment-analysis

Cohort Analysis & Benchmarking

Impact: HIGH

Cohort analysis reveals patterns hidden in aggregate data. By grouping customers with shared characteristics and tracking them over time, you can identify which customer segments thrive, which struggle, and what drives the difference. Benchmarking puts your performance in context.

The Cohort Analysis Framework

┌──────────────────────────────────────────────────────────────────┐
│                    COHORT ANALYSIS PROCESS                       │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  DEFINE          TRACK           ANALYZE         ACTION         │
│  COHORTS    →    OVER TIME   →   PATTERNS   →   INSIGHTS       │
│                                                                  │
│  • Time-based    • Retention     • Compare       • Why differ?  │
│  • Behavioral    • Revenue       • Identify      • What works?  │
│  • Value-based   • Engagement    • Benchmark     • Optimize     │
│  • Acquisition   • Health        • Trend         • Predict      │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Cohort Definition Types

Cohort Type  Definition Basis              Use Case
───────────────────────────────────────────────────────────────
Time-based   Sign-up month/quarter         Retention trend analysis
Acquisition  Channel, campaign, source     Marketing efficiency
Behavioral   Feature adoption, activation  Product-market fit
Value-based  ARR tier, contract value      Segment economics
Industry     Vertical, company type        Product-market fit by segment
Size         Employee count, seats         Segment strategy
Geography    Region, country               Market expansion
Plan         Pricing tier, feature set     Monetization optimization

Retention Cohort Analysis (Time-Based)

Monthly Retention by Signup Cohort

Cohort    Month 0   Month 1   Month 2   Month 3   Month 6   Month 12
────────────────────────────────────────────────────────────────────
Jan 2024    100%      88%       82%       78%       71%       65%
Feb 2024    100%      91%       85%       81%       74%       -
Mar 2024    100%      89%       84%       80%       72%       -
Apr 2024    100%      92%       87%       83%       -         -
May 2024    100%      90%       86%       -         -         -
Jun 2024    100%      93%       -         -         -         -
Jul 2024    100%      -         -         -         -         -

Insights:
✓ Month 1 retention improving (88% → 93%)
✓ Month 6 retention stable around 72%
⚠ Q1 cohorts showing lower long-term retention
Action: Investigate Jan cohort for onboarding issues
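Each cell in a table like the one above is just actives-in-month divided by cohort size. A minimal sketch over hypothetical account sets:

```python
def retention_curve(cohort, active_by_month):
    """Retention per month: share of the signup cohort still active.

    cohort: set of account ids that signed up in the period.
    active_by_month: list of sets of account ids active in months 1..N.
    """
    n0 = len(cohort)
    curve = [1.0]  # Month 0: the whole cohort is active by definition
    for active in active_by_month:
        curve.append(len(active & cohort) / n0)
    return curve

# Hypothetical four-account cohort tracked over three months.
print(retention_curve({"a", "b", "c", "d"},
                      [{"a", "b", "c", "d"}, {"a", "b", "c"}, {"a", "b"}]))
# [1.0, 1.0, 0.75, 0.5]
```

Intersecting against the original cohort (rather than counting all actives) is what keeps later signups from inflating an older cohort's row.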

Revenue Retention Cohort Analysis

Net Revenue Retention by Signup Quarter

Cohort    Q0      Q1      Q2      Q3      Q4      Q5      Q6
────────────────────────────────────────────────────────────
Q1 2023   100%    98%    102%    108%    115%    118%    122%
Q2 2023   100%   101%    106%    112%    119%    124%     -
Q3 2023   100%    99%    104%    109%    116%     -       -
Q4 2023   100%   102%    108%    114%     -       -       -
Q1 2024   100%   103%    110%     -       -       -       -
Q2 2024   100%   104%     -       -       -       -       -

Analysis:
├── All cohorts achieve >100% NRR (expansion > churn)
├── Q2 2024 showing strongest early expansion
├── Typical trajectory: 100% → 110% → 120% by Year 2
└── Cohort maturity required for full picture

Good Cohort Visualization

Retention Curve by Customer Segment

100% ┤● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●
     │  ╲
 90% ┤   ╲ ● ● ● ● ● ● ● ● ● ● ● ● ● Enterprise
     │     ╲ ╲
 80% ┤        ╲ ● ● ● ● ● ● ● ● ● ● Mid-Market
     │          ╲ ╲
 70% ┤             ╲ ● ● ● ● ● ● ● SMB
     │               ╲
 60% ┤                 ╲ ● ● ● ● Startup
     │
 50% ┤
     └────┬────┬────┬────┬────┬────┬────┬────┬────
          1    2    3    4    5    6    9   12
                    Months Since Signup

Key Insights:
1. Enterprise: 95% retention at month 12 (target: 90%)
2. Mid-Market: 82% retention at month 12 (on target)
3. SMB: 71% retention at month 12 (below 75% target)
4. Startup: 58% retention at month 12 (investigate)

Bad Cohort Analysis

Customer Retention Report

Total customers: 2,500
Active customers: 2,150
Retention rate: 86%

Problems:
✗ No time dimension
✗ No segmentation
✗ No trend analysis
✗ No benchmark comparison
✗ Point-in-time snapshot only
✗ Blends all cohort maturities
✗ No actionable insights

Behavioral Cohort Analysis

Retention by Activation Behavior (First 30 Days)

Behavior Cohort                    Month 6 Retention    Index
──────────────────────────────────────────────────────────────
Completed core workflow             89%                 1.48x
Invited 3+ team members             84%                 1.40x
Used 5+ features                    81%                 1.35x
Attended onboarding webinar         78%                 1.30x
Created 10+ [objects]               75%                 1.25x
Basic activation only               60%                 1.00x
No activation (signed up only)      32%                 0.53x

Implications:
1. Core workflow completion is strongest retention predictor
2. Team invitation = social commitment = retention
3. Focus onboarding on these high-impact behaviors
4. Users who don't activate are unlikely to retain
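The Index column above is each behavioral cohort's retention divided by the baseline ("basic activation only") cohort's retention:

```python
def retention_index(cohort_retention, baseline_retention):
    """Index a behavioral cohort's retention against the baseline cohort."""
    return cohort_retention / baseline_retention

# Reproducing two rows of the table, indexed against basic activation (60%).
print(round(retention_index(0.89, 0.60), 2))  # 1.48 (completed core workflow)
print(round(retention_index(0.32, 0.60), 2))  # 0.53 (signed up only)
```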

Value-Based Cohort Analysis

Retention & NRR by Initial ARR Tier

Tier          ARR Range        Logo Retention  NRR    Avg Health
─────────────────────────────────────────────────────────────────
Enterprise    >$100K           96%             135%   82
Upper MM      $50K-$100K       93%             122%   76
Lower MM      $20K-$50K        88%             112%   71
SMB           $5K-$20K         78%             98%    64
Startup       <$5K             62%             85%    52

Insights:
├── Enterprise segment is profitable (high retention, expansion)
├── SMB requires efficiency focus (lower retention, no expansion)
├── Startup segment may not be viable at scale
├── Health score correlates with retention across tiers
└── Consider minimum viable customer criteria

Benchmarking Framework

Metric             Your Value  Industry 25th  Industry Median  Industry 75th  Best in Class
───────────────────────────────────────────────────────────────────────────────────────────
Gross Retention    88%         82%            88%              93%            97%
Net Retention      108%        95%            105%             115%           130%
Month 1 Retention  91%         85%            90%              94%            97%
Year 1 Retention   78%         70%            78%              85%            92%
Health Score Avg   68          55             65               72             80

Industry Benchmark Sources

Source        Best For            Data Quality  Access
───────────────────────────────────────────────────────────
OpenView      SaaS benchmarks     High          Free reports
Gainsight     CS metrics          High          Customer only
ChartMogul    Revenue metrics     High          Customer only
ProfitWell    Pricing, retention  Medium-High   Free + paid
SaaS Capital  Financial metrics   High          Free reports
Bessemer      Cloud metrics       High          Free reports
KBCM          Private SaaS        High          Annual report

Cohort Comparison Best Practices

Comparing Cohorts Effectively

1. Same Time Window
   ✓ Compare Jan 2024 at Month 6 to Jan 2023 at Month 6
   ✗ Compare Jan 2024 at Month 6 to Jan 2023 at Month 12

2. Normalize for Seasonality
   ✓ Account for holiday slowdowns, fiscal year patterns
   ✗ Compare Q4 directly to Q1 without adjustment

3. Statistical Significance
   ✓ Ensure cohort size supports conclusions (n > 30)
   ✗ Draw conclusions from cohorts of 5 customers

4. Consistent Definitions
   ✓ Same retention definition across cohorts
   ✗ Changing what "active" means mid-analysis

5. Account for Mix Shifts
   ✓ Note if segment composition changed
   ✗ Compare blended metrics when mix shifted significantly

Cohort Analysis Dashboard

Cohort Analysis Dashboard

┌─────────────────────────────────────────────────────────────────┐
│  RETENTION TRENDS                                               │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Month 1 Retention (6-month trend):   91% → 89% → 90% → 93%   │
│  Status: ✓ Improving                                            │
│                                                                 │
│  Month 6 Retention (6-month trend):   72% → 71% → 73% → 74%   │
│  Status: ✓ Stable/Improving                                     │
│                                                                 │
│  Month 12 Retention (trailing):       65%                       │
│  Status: ⚠ Below 70% target                                    │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│  SEGMENT COMPARISON (Month 6)                                   │
│                                                                 │
│  Enterprise:  ████████████████████ 94%  (↑ vs prior)           │
│  Mid-Market:  █████████████████░░░ 82%  (= vs prior)           │
│  SMB:         ██████████████░░░░░░ 71%  (↓ vs prior)           │
│  Startup:     ████████████░░░░░░░░ 58%  (↓ vs prior)           │
│                                                                 │
│  ⚠ Alert: SMB retention declining - investigate                │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│  BEHAVIORAL COHORT INSIGHTS                                     │
│                                                                 │
│  Highest retention cohort: Multi-user activation (89%)         │
│  Lowest retention cohort: Single feature users (52%)           │
│  Biggest gap: 37 percentage points                              │
│                                                                 │
│  Recommendation: Focus onboarding on multi-user + multi-feature │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Cohort Analysis Checklist

□ Cohort Definition
  □ Clear criteria for cohort membership
  □ Mutually exclusive cohorts (no overlap)
  □ Meaningful segment differences
  □ Sufficient sample size per cohort

□ Metric Selection
  □ Primary metric defined (retention, NRR, etc.)
  □ Time windows specified
  □ Calculation methodology documented
  □ Edge cases handled (partial periods, etc.)

□ Data Preparation
  □ Data completeness verified
  □ Historical data sufficient for trends
  □ Consistent definitions over time
  □ Cohort assignment logic validated

□ Analysis Execution
  □ Retention curves plotted
  □ Segment comparisons completed
  □ Trends over time identified
  □ Statistical significance checked

□ Benchmarking
  □ Internal benchmarks established
  □ Industry benchmarks sourced
  □ Peer comparisons available
  □ Best-in-class targets defined

□ Actionability
  □ Key insights documented
  □ Root causes investigated
  □ Recommendations developed
  □ Actions assigned and tracked

Anti-Patterns

  • Single cohort obsession — Focusing on one segment without context
  • Insufficient sample size — Drawing conclusions from tiny cohorts
  • Ignoring seasonality — Comparing Q4 to Q1 without adjustment
  • Inconsistent definitions — Changing metrics mid-analysis
  • Survivorship bias — Only analyzing retained customers
  • No benchmarks — Can't assess "good" without comparison
  • Analysis paralysis — Too many cohorts, no action
  • Stale analysis — Running cohort analysis once, never updating

title: Customer Data Enrichment & 360 View
impact: HIGH
tags: data-enrichment, customer-360, data-quality, data-integration

Customer Data Enrichment & 360 View

Impact: HIGH

A complete customer view is the foundation of effective health scoring and risk prediction. Data enrichment fills gaps, adds context, and creates a unified picture that enables proactive customer success. Without comprehensive data, even the best health models fail.

The Customer 360 Architecture

┌──────────────────────────────────────────────────────────────────┐
│                      CUSTOMER 360 VIEW                           │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   COMPANY   │  │  CONTACTS   │  │  CONTRACT   │              │
│  │   PROFILE   │  │  & ROLES    │  │   DETAILS   │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│                                                                  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   PRODUCT   │  │   SUPPORT   │  │   BILLING   │              │
│  │   USAGE     │  │   HISTORY   │  │   HISTORY   │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│                                                                  │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   COMMS     │  │   SUCCESS   │  │  EXTERNAL   │              │
│  │   HISTORY   │  │   METRICS   │  │   SIGNALS   │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│                                                                  │
│                    ┌─────────────────┐                          │
│                    │   HEALTH SCORE  │                          │
│                    │   & RISK MODEL  │                          │
│                    └─────────────────┘                          │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Data Source Categories

Category        Data Types                              Sources                         Refresh Frequency
Firmographics   Company size, industry, location        Clearbit, ZoomInfo, LinkedIn    Monthly
Technographics  Tech stack, integrations used           BuiltWith, G2, product data     Monthly
Intent Signals  Research activity, content engagement   Bombora, 6sense, website        Weekly
Financial       Funding, revenue, growth                Crunchbase, PitchBook           Monthly
Social          News, sentiment, job postings           LinkedIn, news APIs             Daily
Contact         Email, phone, role, hierarchy           CRM, LinkedIn, email tools      Weekly
Behavioral      Product usage, engagement               Product analytics               Real-time
Feedback        NPS, CSAT, surveys                      Survey tools                    Event-driven
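Because each category has a different acceptable refresh age, staleness checks should be per category rather than global. A small sketch (the SLA values and category keys are illustrative, loosely mirroring the refresh schedule above):

```python
from datetime import date

# Maximum acceptable age per category (illustrative assumption, tune per source)
MAX_AGE_DAYS = {"firmographics": 30, "intent": 7, "contact": 7, "social": 1}

def stale_categories(last_updated, today):
    """last_updated: {category: date of last refresh}.
    Returns categories whose data is older than the category SLA."""
    return sorted(
        cat for cat, updated in last_updated.items()
        if (today - updated).days > MAX_AGE_DAYS.get(cat, 30)  # default 30-day SLA
    )
```

A nightly job over all accounts feeds the "stale firmographics" count on the data quality dashboard.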

Good Data Enrichment Strategy

Customer Profile: Acme Corp

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

COMPANY INFORMATION (Enriched)
├── Legal Name: Acme Corporation
├── Industry: SaaS / B2B Technology
├── Employee Count: 450 (↑ 15% YoY)
├── Annual Revenue: $45M (estimated)
├── Funding: Series C, $28M raised
├── HQ Location: San Francisco, CA
├── Founded: 2018
└── Growth Stage: Scale-up

TECHNOGRAPHICS
├── CRM: Salesforce
├── Marketing: HubSpot
├── Support: Zendesk
├── Analytics: Mixpanel
├── Integrations Active: Salesforce, Slack
└── Potential Integrations: HubSpot, Zendesk

CONTACT INTELLIGENCE
├── Decision Makers: 3 identified
├── Champion: Sarah Chen (Head of Ops)
├── Executive Sponsor: Michael Torres (VP)
├── Billing Contact: Finance team
├── Power Users: 8 identified
└── Stakeholder Health: Strong

INTENT SIGNALS
├── Competitor Research: None detected
├── Content Engagement: 12 articles last month
├── Webinar Attendance: Attended 2 of 3 offered
└── Community Activity: Active in user group

EXTERNAL SIGNALS
├── Recent News: Announced new product line
├── Job Postings: Hiring 3 ops roles (expansion signal)
├── LinkedIn Activity: Champion posted about our product
└── Sentiment: Positive social mentions

DERIVED INSIGHTS
├── Expansion Potential: High (hiring, growing)
├── Churn Risk Factors: None detected
├── Recommended Actions: Upsell conversation
└── Next Best Action: Schedule expansion QBR

Bad Data Enrichment Strategy

Customer Profile: Acme Corp

Company Name: Acme Corp
Contact: Sarah
Email: [email protected]
Plan: Enterprise
MRR: $10,000

Problems:
✗ Minimal company context
✗ No firmographic enrichment
✗ No contact role or hierarchy
✗ No intent or external signals
✗ No usage data integration
✗ No derived insights
✗ No next best action
✗ Static, not dynamic data

Key Enrichment Fields

Field                Source              Use in Health Scoring
Employee count       Clearbit, ZoomInfo  Growth signal, seat potential
Industry             Clearbit            Segment benchmarking
Funding stage        Crunchbase          Expansion potential
Tech stack           BuiltWith           Integration opportunities
Job postings         LinkedIn            Growth/contraction signals
News mentions        News APIs           Organizational changes
Social sentiment     LinkedIn, Twitter   Brand health
Contact changes      LinkedIn            Champion risk
Competitor research  Intent data         Competitive threat

Contact Enrichment Strategy

Contact Hierarchy Mapping:

Executive Level
├── CEO: John Smith
│   └── Relationship: Met once, annual review
├── CFO: Lisa Wong
│   └── Relationship: Billing escalations only
└── VP Operations: Michael Torres (Exec Sponsor)
    └── Relationship: Monthly check-ins ✓

Management Level
├── Head of Ops: Sarah Chen (Champion)
│   └── Relationship: Weekly calls ✓
├── IT Director: David Park
│   └── Relationship: Technical contact
└── Finance Manager: Amy Liu
    └── Relationship: Billing contact

User Level
├── Power Users: 8 identified
├── Regular Users: 23 active
└── Dormant Users: 4 inactive

Stakeholder Health Score: 78/100
├── Champion strength: Strong
├── Multi-threading: Good (4 relationships)
├── Executive access: Moderate
└── Risk: Champion single point of failure

Data Quality Framework

Dimension     Definition                    Target    Measurement
Completeness  % of fields populated         >85%      Filled fields / Total fields
Accuracy      % of correct data             >90%      Validated / Total records
Freshness     Age of data                   <30 days  Days since last update
Consistency   Data matches across systems   >95%      Matching / Total records
Uniqueness    No duplicate records          >99%      Unique / Total records
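Two of these dimensions reduce to one-liners. A sketch of completeness and uniqueness checks (field names are illustrative):

```python
def completeness(record, required_fields):
    """Completeness: share of required fields that are actually populated."""
    filled = sum(1 for f in required_fields if record.get(f) not in (None, "", []))
    return filled / len(required_fields)

def uniqueness(records, key):
    """Uniqueness: share of records whose key value is distinct."""
    keys = [r[key] for r in records]
    return len(set(keys)) / len(keys)
```

Run these per account and aggregate to produce the category bars on the dashboard below.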

Data Quality Dashboard

Customer Data Quality Report

┌─────────────────────────────────────────────────────────────────┐
│  OVERALL DATA QUALITY SCORE: 81%                                │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Completeness by Category:                                      │
│  ├── Company Profile:     ████████████████████ 92%             │
│  ├── Contact Data:        █████████████████░░░ 78%             │
│  ├── Product Usage:       ████████████████████ 96%             │
│  ├── Support History:     █████████████████░░░ 82%             │
│  ├── External Signals:    ████████████░░░░░░░░ 58%             │
│  └── Financial Data:      ██████████████████░░ 89%             │
│                                                                 │
│  Data Freshness:                                                │
│  ├── Updated <7 days:     65% of accounts                      │
│  ├── Updated 8-30 days:   28% of accounts                      │
│  └── Updated >30 days:    7% of accounts (⚠ stale)            │
│                                                                 │
│  Data Issues:                                                   │
│  ├── Missing champion:    23 accounts                          │
│  ├── Invalid email:       12 contacts                          │
│  ├── Duplicate contacts:  8 records                            │
│  └── Stale firmographics: 34 accounts                          │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Integration Architecture

SystemData FlowFrequencyKey Fields
CRM (Salesforce)Bi-directionalReal-timeContacts, opportunities, notes
ProductInboundHourlyUsage events, feature adoption
Support (Zendesk)InboundReal-timeTickets, sentiment, resolution
Billing (Stripe)InboundReal-timePayments, invoices, MRR
Enrichment (Clearbit)InboundDailyFirmographics, contacts
Intent (Bombora)InboundWeeklyResearch signals, topics
Health ScoreOutboundDailyScore, risk tier, signals

Data Governance Principles

1. Single Source of Truth
   - Define master system for each data type
   - Health score is calculated, not stored in CRM
   - CRM is master for relationships
   - Product database is master for usage

2. Ownership
   - Each data field has a defined owner
   - Owner responsible for quality
   - Regular audits by data team

3. Access Control
   - Sensitive data (PII) protected
   - Role-based access
   - Audit logging enabled

4. Privacy Compliance
   - GDPR / CCPA compliant enrichment
   - Consent management
   - Data retention policies
   - Right to deletion supported

Enrichment ROI Calculation

Metric                      Before Enrichment  After Enrichment  Improvement
Health score accuracy       62%                78%               +16 pts
Churn prediction lead time  45 days            72 days           +27 days
CSM research time           25 min/account     8 min/account     -68%
Expansion identification    35%                58%               +23 pts
False positive rate         32%                18%               -14 pts

Data Enrichment Checklist

□ Core Customer Data
  □ Company name and legal entity
  □ Industry and sub-industry
  □ Employee count and trend
  □ Location (HQ and offices)
  □ Website and social profiles

□ Contact Data
  □ Key contacts identified
  □ Roles and hierarchy mapped
  □ Email and phone validated
  □ LinkedIn profiles linked
  □ Champion and sponsor flagged

□ Financial Data
  □ Contract details accurate
  □ MRR/ARR calculated correctly
  □ Payment history current
  □ Renewal dates tracked
  □ Expansion history captured

□ Behavioral Data
  □ Product usage integrated
  □ Support tickets linked
  □ Communication history captured
  □ Engagement metrics calculated
  □ Feature adoption tracked

□ External Signals
  □ Firmographic enrichment active
  □ Intent data flowing
  □ News monitoring enabled
  □ Job posting tracking
  □ Social sentiment captured

□ Data Quality
  □ Completeness monitored
  □ Freshness tracked
  □ Duplicates resolved
  □ Validation rules in place
  □ Regular audits scheduled

Anti-Patterns

  • Data silos — Product data separate from CRM separate from support
  • Manual enrichment — Relying on CSMs to research and update
  • Stale data — Firmographics from years ago
  • Over-collection — Gathering data without clear use case
  • No single source of truth — Conflicting data across systems
  • Privacy violations — Enriching without consent
  • Ignoring data quality — Garbage in, garbage out
  • Under-utilization — Rich data not surfaced to users

title: Executive Reporting & Dashboards
impact: HIGH
tags: executive-reporting, dashboards, kpis, data-storytelling

Executive Reporting & Dashboards

Impact: HIGH

Executive reporting transforms customer health data into strategic business insights. The goal isn't just presenting metrics — it's enabling better decisions about customer investments, product direction, and company strategy. The best reports tell a story that drives action.

The Executive Reporting Hierarchy

┌──────────────────────────────────────────────────────────────────┐
│                  REPORTING HIERARCHY                             │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  BOARD LEVEL                                                     │
│  └── High-level health, NRR, strategic risks                    │
│      Frequency: Quarterly                                        │
│                                                                  │
│  C-SUITE LEVEL                                                   │
│  └── Portfolio health, trends, strategic accounts               │
│      Frequency: Monthly                                          │
│                                                                  │
│  VP/DIRECTOR LEVEL                                               │
│  └── Team performance, segment health, initiatives              │
│      Frequency: Weekly                                           │
│                                                                  │
│  MANAGER LEVEL                                                   │
│  └── Individual accounts, risk alerts, action items             │
│      Frequency: Daily                                            │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Key Executive KPIs

KPI                            Definition                                               Target      Frequency
Net Revenue Retention (NRR)    (Starting + Expansion - Contraction - Churn) / Starting  100-130%    Monthly
Gross Revenue Retention (GRR)  Retained ARR / Starting ARR                              85-95%      Monthly
Logo Retention                 Retained customers / Starting customers                  90-95%      Monthly
Expansion Rate                 Customers with expansion / Total customers               15-30%      Monthly
Health Score Distribution      % in each health tier                                    Bell curve  Weekly
At-Risk ARR                    ARR where health <50                                     <15%        Weekly
Time to Value                  Days to activation                                       <30 days    Monthly
CSM Efficiency                 ARR per CSM                                              $2-5M       Quarterly
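As a sanity check on the retention definitions, NRR and GRR for a period can be computed directly. A sketch (amounts illustrative; churn and contraction passed as positive dollar amounts, both treated as lost revenue):

```python
def revenue_retention(starting_arr, expansion, contraction, churned):
    """Returns (NRR %, GRR %). GRR excludes expansion, so it cannot exceed 100%."""
    nrr = (starting_arr + expansion - contraction - churned) / starting_arr
    grr = (starting_arr - contraction - churned) / starting_arr
    return round(nrr * 100, 1), round(grr * 100, 1)
```

For example, a $1M book with $120K expansion, $20K contraction, and $50K churn yields 105.0% NRR and 93.0% GRR.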

Good Executive Dashboard

Customer Success Executive Dashboard
Period: January 2025

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

PORTFOLIO HEALTH SUMMARY

Total ARR:           $24.5M      Health Score Avg:    72 (↑ 3)
Customers:           485         At-Risk ARR:         $2.1M (8.6%)
NRR (Trailing 12M):  112%        Time to Value:       22 days

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

REVENUE METRICS                           vs. Prior Month

Gross Retention:     92%                  ↑ +1%
Net Retention:       108%                 ↑ +2%
Expansion Revenue:   $412K                ↑ +15%
Churned Revenue:     $198K                ↓ -22% (improvement)
Contraction:         $89K                 ↓ -8% (improvement)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

HEALTH DISTRIBUTION

Thriving (85+):   ████████████░░░░░░░░  28% ($6.9M)
Healthy (70-84):  ██████████████░░░░░░  38% ($9.3M)
Neutral (50-69):  ████████░░░░░░░░░░░░  22% ($5.4M)
At-Risk (30-49):  ████░░░░░░░░░░░░░░░░  9%  ($2.2M)
Critical (<30):   █░░░░░░░░░░░░░░░░░░░  3%  ($0.7M)

Trend: Distribution improving (at-risk down from 12%)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

STRATEGIC ACCOUNTS STATUS (Top 20 by ARR)

Green:  14  accounts ($8.2M ARR)
Yellow: 4   accounts ($2.1M ARR)
Red:    2   accounts ($1.4M ARR)  ← Executive attention required

Red Accounts:
1. GlobalTech Inc ($850K) - Champion departure, exec engaged
2. MegaCorp ($550K) - Competitive threat, QBR scheduled

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

KEY WINS THIS MONTH
✓ Saved $420K at-risk ARR (TechFlow, DataPro)
✓ Closed $380K expansion (Acme Corp +$150K, 3 others)
✓ NPS improved 8 points (32 → 40)

KEY RISKS TO WATCH
⚠ 3 renewals >$100K in next 60 days at health <60
⚠ Enterprise segment NPS declined 5 points
⚠ Q2 cohort showing early retention weakness

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Bad Executive Dashboard

Customer Success Report - January

Customers: 485
ARR: $24,500,000
Health Score: 72
NPS: 40

Churned: 12 customers
New: 28 customers

Support Tickets: 1,247

Problems:
✗ No context or trends
✗ No targets or benchmarks
✗ No segmentation
✗ No actionable insights
✗ Mixing operational and strategic metrics
✗ No risk visibility
✗ No narrative
✗ No recommendations

Board-Level Reporting

Board Report: Customer Success (Q4 2024)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

EXECUTIVE SUMMARY

Net Revenue Retention of 112% demonstrates strong customer health
and expansion motion. At-risk ARR has decreased 25% since Q3,
indicating improved early intervention effectiveness.

Key achievements:
• Reduced churn rate from 1.8% to 1.2% monthly
• Expanded NRR from 105% to 112%
• Decreased time-to-value from 34 to 22 days

Areas requiring investment:
• Enterprise segment engagement (NPS declining)
• Proactive risk detection (surprise churn rate 18%)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

KEY METRICS

                    Q4 2024    Q3 2024    YoY       Target
Net Revenue Ret.    112%       105%       +18%      110%  ✓
Gross Revenue Ret.  92%        90%        +4%       90%   ✓
Logo Retention      94%        93%        +2%       92%   ✓
NPS                 40         32         +12       35    ✓

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

STRATEGIC RISKS

1. Enterprise Engagement (Medium Risk)
   - NPS declined 5 points in segment
   - Two $500K+ accounts in yellow status
   - Mitigation: Executive business reviews, product investment

2. Market Competition (Low-Medium Risk)
   - Competitor mentions up 15% in support tickets
   - No significant losses yet
   - Mitigation: Competitive intelligence program launched

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Q1 2025 PRIORITIES

1. Launch enterprise engagement program
2. Reduce surprise churn rate to <10%
3. Achieve 115% NRR target

Dashboard Design Principles

Principle      Description                     Example
Hierarchy      Most important metrics first    NRR at top, details below
Context        Always show comparisons         vs. target, vs. prior period
Trend          Show direction, not just level  Arrows, sparklines
Actionability  Link to next steps              "2 accounts need attention"
Segmentation   Break down aggregates           By segment, tier, CSM
Simplicity     5-7 key metrics max             Remove nice-to-haves
Consistency    Same layout each period         Enables quick comparison

Data Storytelling Framework

The Situation-Complication-Resolution Framework

SITUATION (What's happening)
"Our customer portfolio grew 22% this year to $24.5M ARR
across 485 customers."

COMPLICATION (Why it matters)
"However, our at-risk ARR has increased to $2.1M (8.6%),
driven primarily by declining engagement in the enterprise
segment where NPS dropped 5 points."

RESOLUTION (What we're doing)
"We're launching a dedicated enterprise success program
with executive business reviews, which has shown 40%
improvement in similar situations. Expected impact:
reduce at-risk enterprise ARR by 50% in Q1."

KEY INSIGHT
Lead with the insight, not the data.

Bad: "Health scores averaged 72 this month."
Good: "Customer health improved for the 3rd consecutive month,
      driven by our new onboarding program which reduced
      time-to-value by 35%."

Reporting Cadence

Report          Audience          Frequency  Content Focus
Daily Alerts    CSM, Manager      Daily      Critical risks, action items
Weekly Ops      CS Team           Weekly     Pipeline, at-risk, wins
Monthly Review  VP, C-Suite       Monthly    Metrics, trends, initiatives
QBR             Exec Team, Board  Quarterly  Strategy, risks, investments
Annual Review   Board             Annually   YoY performance, strategy

Dashboard Metrics by Audience

Metric               Board     C-Suite   VP       Manager
NRR/GRR              Y         Y         Y        -
Health Distribution  Summary   Y         Y        Y
At-Risk ARR          $ amount  Y         Y        Account list
Churn Analysis       Trends    Details   Details  Accounts
CSM Performance      -         Summary   Details  Individual
Risk Alerts          -         Critical  All      Assigned
Renewal Pipeline     -         Summary   Y        Y

Report Automation

Component           Automation Level  Tools
Data Collection     Fully automated   ETL, data warehouse
Metric Calculation  Fully automated   SQL, dbt
Dashboard Refresh   Fully automated   Looker, Tableau, Metabase
Alert Generation    Fully automated   Workflow tools, Slack
Insight Generation  Semi-automated    Templates + human review
Narrative Writing   Manual            CS leadership
Distribution        Automated         Email, Slack

Executive Presentation Checklist

□ Pre-Meeting Preparation
  □ Data refreshed and validated
  □ Key metrics calculated correctly
  □ Narrative prepared and reviewed
  □ Anticipated questions researched
  □ Backup slides ready

□ Content Structure
  □ Executive summary on first slide
  □ Key metrics with context
  □ Trends and comparisons shown
  □ Strategic risks highlighted
  □ Wins and successes celebrated
  □ Clear recommendations included
  □ Ask/investment needs specified

□ Visual Design
  □ Consistent formatting
  □ Clear hierarchy
  □ Minimal clutter
  □ Actionable insights highlighted
  □ Red/yellow/green status clear

□ Delivery
  □ Lead with insights, not data
  □ Tell a story
  □ Acknowledge challenges honestly
  □ Provide recommendations
  □ Allow time for questions
  □ Document action items

Good vs Bad Metrics Presentation

Approach  Bad                     Good
Format    "NRR was 108%"          "NRR of 108% (↑ 3% vs Q3, on track to 110% target)"
Context   "12 customers churned"  "12 customers churned ($198K), down 22% from prior month"
Insight   "Health score is 72"    "Health improved 3 points, driven by new onboarding program"
Action    "At-risk ARR is $2.1M"  "At-risk ARR of $2.1M — 3 accounts need exec intervention"
Trend     "NPS is 40"             "NPS reached 40 (+8 points YTD), highest in company history"

Anti-Patterns

  • Data dump — Too many metrics without narrative
  • No benchmarks — Metrics without targets or comparisons
  • Vanity focus — Highlighting good metrics, hiding problems
  • Stale reporting — Manual processes creating delays
  • One-size-fits-all — Same report for board and manager
  • No action items — Reporting without recommendations
  • Surprise reveals — Board learns about risks first in meeting
  • Metric overload — 50 KPIs when 5 would suffice

title: Health Model Validation & Calibration
impact: HIGH
tags: model-validation, calibration, accuracy, continuous-improvement

Health Model Validation & Calibration

Impact: HIGH

A health score model is only valuable if it accurately predicts outcomes. Without regular validation and calibration, models drift, accuracy degrades, and teams lose confidence. Continuous validation ensures your health scores remain actionable and trustworthy.

The Validation Lifecycle

┌──────────────────────────────────────────────────────────────────┐
│                  MODEL VALIDATION LIFECYCLE                      │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│       ┌─────────┐                                                │
│       │  BUILD  │                                                │
│       └────┬────┘                                                │
│            │                                                     │
│            ▼                                                     │
│       ┌─────────┐      ┌─────────┐      ┌─────────┐            │
│       │ DEPLOY  │ ───► │ MONITOR │ ───► │ ANALYZE │            │
│       └─────────┘      └────┬────┘      └────┬────┘            │
│            ▲                │                │                  │
│            │                ▼                ▼                  │
│            │           ┌─────────┐      ┌─────────┐            │
│            └───────────│ REFINE  │ ◄─── │ CALIBRATE│            │
│                        └─────────┘      └─────────┘            │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Key Validation Metrics

Metric                     Definition                              Target  Red Flag
Churn Prediction Accuracy  Correctly predicted / Actual churns     >70%    <50%
Surprise Churn Rate        Churns with health >60 / Total churns   <20%    >35%
False Positive Rate        False at-risk / Flagged at-risk         <30%    >50%
Score-Outcome Correlation  Pearson correlation (score, outcome)    >0.5    <0.3
Lift at 10%                Top-decile churn rate / Overall rate    >3x     <2x
Score Distribution         Spread across 0-100 range               Normal  Bimodal/Skewed
Calibration Error          Avg(Predicted prob - Actual prob)       <5%     >15%
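The first two metrics and the lift calculation can be sketched from a list of (health score, churned) pairs. Thresholds here follow the definitions used in this document (score below 50 counts as a churn prediction, above 60 at churn counts as a surprise):

```python
def validation_metrics(accounts):
    """accounts: list of (health_score, churned) tuples for a closed period."""
    churned = [a for a in accounts if a[1]]
    predicted = [a for a in accounts if a[0] < 50]   # flagged as likely churn
    correct = [a for a in predicted if a[1]]
    accuracy = len(correct) / len(churned)
    surprise_rate = sum(1 for a in churned if a[0] > 60) / len(churned)
    # Lift at 10%: churn rate in the lowest-scoring decile vs. overall rate
    decile = sorted(accounts, key=lambda a: a[0])[: max(1, len(accounts) // 10)]
    overall = len(churned) / len(accounts)
    lift = (sum(1 for a in decile if a[1]) / len(decile)) / overall
    return accuracy, surprise_rate, lift
```

Run this quarterly on closed outcomes and compare against the targets above before trusting scores for intervention routing.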

Good Model Validation Report

Health Score Model Validation Report
Period: Q4 2024

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

EXECUTIVE SUMMARY

Model performance meets targets across key metrics.
Prediction accuracy improved 8% vs. Q3 following
feature updates. One area of concern: enterprise
segment showing higher surprise churn rate.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

PREDICTION ACCURACY

Actual Churns:               47
Predicted (Health <50):      38
Correctly Predicted:         33
Surprise Churns (>60):       14

Accuracy Rate:               70% (target: 70%) ✓
Surprise Churn Rate:         30% (target: <20%) ⚠
False Positive Rate:         28% (target: <30%) ✓

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

SCORE-OUTCOME CORRELATION

                    Retention    Expansion    NPS
Health Score        0.62         0.48         0.55
Correlation         Strong       Moderate     Moderate

                    Prior Quarter   Current
Retention Corr.     0.58            0.62  ↑
Expansion Corr.     0.45            0.48  ↑
NPS Corr.           0.52            0.55  ↑

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

LIFT ANALYSIS

Decile    Avg Score    Churn Rate    Lift
1 (High)  12           42%           5.2x  ← Good separation
2         28           31%           3.9x
3         38           22%           2.8x
4         48           15%           1.9x
5         56           11%           1.4x
6         63           9%            1.1x
7         70           7%            0.9x
8         76           5%            0.6x
9         83           3%            0.4x
10 (Low)  91           1%            0.1x

Overall churn rate: 8%
Top decile lift: 5.2x (target: >3x) ✓

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

SEGMENT ANALYSIS

Segment         Accuracy    Surprise Rate    Status
Enterprise      62%         38%              ⚠ Needs attention
Mid-Market      74%         22%              ✓ On track
SMB             72%         28%              ✓ On track
Startup         68%         32%              ~ Monitor

Enterprise segment investigation:
- 5 of 14 surprise churns were enterprise
- Common pattern: Champion departure not detected
- Recommendation: Add LinkedIn monitoring signal

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

CALIBRATION CHECK

Score Range    Predicted Risk    Actual Risk    Gap
0-20           85%               78%            -7%
20-40          60%               55%            -5%
40-60          35%               32%            -3%
60-80          15%               12%            -3%
80-100         5%                4%             -1%

Avg Calibration Error: 4% (target: <5%) ✓
Model is slightly overconfident in high-risk scores.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

RECOMMENDATIONS

1. [HIGH] Add champion monitoring to enterprise scoring
2. [MEDIUM] Recalibrate high-risk thresholds
3. [LOW] Review startup segment feature weights

Bad Model Validation Report

Health Score Report

Model accuracy: 70%
Churns predicted: 33/47

Status: Working fine.

Problems:
✗ No trend analysis
✗ No segment breakdown
✗ No calibration check
✗ No feature analysis
✗ No actionable recommendations
✗ No comparison to prior period
✗ No investigation of failures

Surprise Churn Analysis Framework

Surprise Churn Investigation: AccountName

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ACCOUNT PROFILE
├── ARR: $85,000
├── Tenure: 18 months
├── Health Score at Churn: 72 (Healthy)
├── Health Score 30 days prior: 74
└── Health Score 90 days prior: 71

CHURN REASON: Competitor displacement

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

SIGNAL ANALYSIS: WHAT WE MISSED

Signal                  Present?    In Model?    Why Missed?
─────────────────────────────────────────────────────────────
Competitor research     Yes         No           No intent data
Champion job search     Yes         No           No LinkedIn tracking
Reduced engagement      Subtle      Yes          Below threshold
Support complaints      No          -            No signal
Usage decline           Minor       Yes          Below threshold

ROOT CAUSE: Champion was evaluating alternatives
while maintaining appearance of engagement.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

MODEL IMPROVEMENT OPPORTUNITIES

1. Add intent data signals (competitor research)
2. Add LinkedIn monitoring for key contacts
3. Lower threshold for engagement decline
4. Create composite "quiet leaving" indicator
5. Weight recent trend more heavily

EXPECTED IMPACT: Could have caught this 60 days earlier

Calibration Techniques

Technique            When to Use                  How It Works
Platt Scaling        Scores not well-calibrated   Fit logistic regression on scores
Isotonic Regression  Non-monotonic calibration    Non-parametric adjustment
Temperature Scaling  Neural network outputs       Single-parameter adjustment
Threshold Tuning     Business-driven calibration  Adjust based on capacity
Segment Adjustment   Segments behave differently  Segment-specific thresholds
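A toy version of Platt scaling, assuming health scores normalized to [0, 1] and plain gradient descent instead of a library solver (a sketch to show the mechanics, not production calibration code):

```python
import math

def platt_fit(scores, outcomes, lr=0.5, epochs=5000):
    """Fit p(churn) = sigmoid(a*score + b) by gradient descent on log loss."""
    a = b = 0.0
    n = len(scores)
    for _ in range(epochs):
        ga = gb = 0.0
        for s, y in zip(scores, outcomes):
            p = 1 / (1 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n   # gradient w.r.t. slope
            gb += (p - y) / n       # gradient w.r.t. intercept
        a -= lr * ga
        b -= lr * gb
    return a, b

def calibrated_prob(score, a, b):
    """Map a normalized health score to a calibrated churn probability."""
    return 1 / (1 + math.exp(-(a * score + b)))
```

In practice you would fit this with an off-the-shelf logistic regression on held-out outcome data; the loop above just makes the adjustment visible. Note the slope comes out negative because lower health scores mean higher churn probability.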

Model Drift Detection

Drift Monitoring Dashboard

┌─────────────────────────────────────────────────────────────────┐
│  FEATURE DRIFT MONITORING                                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Feature                   Baseline    Current     Drift       │
│  ────────────────────────────────────────────────────────────  │
│  Usage velocity (30d)      -0.02       -0.08       ⚠ DRIFT    │
│  NPS score                 42          38          ~ Minor     │
│  Support tickets/mo        2.3         2.5         ✓ Stable   │
│  Feature adoption          58%         55%         ✓ Stable   │
│  Champion engagement       0.72        0.68        ~ Minor     │
│                                                                 │
│  ⚠ Alert: Usage velocity distribution shifted significantly   │
│     Recommend: Investigate cause, consider retraining          │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│  OUTCOME DRIFT                                                  │
│                                                                 │
│  Metric                    Baseline    Current     Status      │
│  ────────────────────────────────────────────────────────────  │
│  Monthly churn rate        1.5%        1.8%        ~ Monitor   │
│  Score-churn correlation   0.62        0.58        ~ Monitor   │
│  Prediction accuracy       72%         68%         ⚠ Watch    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
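
The "distribution shifted significantly" alert above can be quantified. One common measure (an addition here, not part of the skill itself) is the Population Stability Index: bucket the baseline values, compare bucket frequencies against the current window, and alert when PSI exceeds the conventional 0.2 threshold.

```python
import math

def psi(baseline, current, buckets=10):
    """Population Stability Index between two samples of one feature.
    Buckets span the baseline range; current values outside that range
    fall into no bucket, which itself inflates the index."""
    lo, hi = min(baseline), max(baseline)
    step = (hi - lo) / buckets or 1.0
    def frac(sample, i):
        hits = sum(1 for v in sample
                   if lo + i * step <= v < lo + (i + 1) * step
                   or (i == buckets - 1 and v == hi))
        return max(hits / len(sample), 1e-4)     # floor avoids log(0)
    return sum((frac(current, i) - frac(baseline, i))
               * math.log(frac(current, i) / frac(baseline, i))
               for i in range(buckets))

# Usage velocity drifting from a mean of -0.02 down to -0.08 (synthetic).
baseline = [-0.02 + 0.01 * ((i * 7) % 11 - 5) for i in range(200)]
current  = [-0.08 + 0.01 * ((i * 7) % 11 - 5) for i in range(200)]
drift = psi(baseline, current)
status = "DRIFT" if drift > 0.2 else "Stable"
```

The 0.2 cut-off and 10-bucket layout are conventional defaults, not values prescribed by the skill.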

Validation Schedule

| Activity | Frequency | Owner | Output |
|---|---|---|---|
| Accuracy tracking | Weekly | Data team | Dashboard update |
| Surprise churn review | Per event | CS + Data | Investigation report |
| Drift monitoring | Weekly | Data team | Drift alerts |
| Segment analysis | Monthly | Data team | Segment report |
| Full validation | Quarterly | CS + Data | Validation report |
| Model retraining | Quarterly | Data team | New model version |
| Threshold calibration | Quarterly | CS leadership | Updated thresholds |

Threshold Calibration Process

Step 1: Analyze current distribution
├── Plot health scores vs. outcomes
├── Identify natural breakpoints
└── Calculate churn rate by score band

Step 2: Assess operational capacity
├── How many at-risk accounts can CSMs handle?
├── What's the cost of false positives?
└── What's the cost of missed churns?

Step 3: Optimize thresholds
├── Set thresholds to balance precision/recall
├── Consider segment-specific adjustments
└── Align with intervention capacity

Step 4: Validate proposed changes
├── Backtest on historical data
├── Calculate expected false positive/negative rates
└── Estimate resource requirements

Step 5: Implement and monitor
├── Update threshold configuration
├── Communicate to CS team
├── Track performance post-change
└── Adjust if needed
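
Step 4's backtest can be as simple as replaying historical scores against outcomes for a candidate threshold. A minimal sketch; the data shape and threshold value are illustrative:

```python
def backtest(accounts, at_risk_threshold):
    """accounts: list of (health_score, churned) pairs from history."""
    flagged = [(s, c) for s, c in accounts if s < at_risk_threshold]
    churns  = [(s, c) for s, c in accounts if c]
    false_pos = sum(1 for _, c in flagged if not c)
    surprise  = sum(1 for s, _ in churns if s >= at_risk_threshold)
    return {
        "flagged": len(flagged),
        "false_positive_rate": false_pos / len(flagged) if flagged else 0.0,
        "false_negative_rate": surprise / len(churns) if churns else 0.0,
    }

history = [(25, True), (35, True), (45, False), (48, True),
           (55, False), (62, False), (71, False), (80, False)]
result = backtest(history, at_risk_threshold=50)
# Flags 4 accounts; 1 of them retained (FPR 25%); no surprise churns (FNR 0%)
```

Sweeping `at_risk_threshold` over a range and comparing the resulting flagged counts against CSM capacity is the "align with intervention capacity" step from the process above.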

Feature Importance Review

| Feature | Current Weight | Q3 Weight | Correlation | Recommendation |
|---|---|---|---|---|
| Usage velocity | 28% | 25% | 0.58 | Maintain |
| NPS trend | 19% | 22% | 0.51 | Maintain |
| Support sentiment | 15% | 14% | 0.42 | Maintain |
| Champion engagement | 13% | 12% | 0.45 | Increase to 15% |
| Feature adoption | 11% | 13% | 0.38 | Reduce to 10% |
| Billing health | 8% | 8% | 0.32 | Maintain |
| Contract signals | 6% | 6% | 0.28 | Maintain |

Model Governance Checklist

□ Validation Process
  □ Weekly accuracy tracking automated
  □ Surprise churn review process defined
  □ Drift alerts configured
  □ Quarterly full validation scheduled

□ Documentation
  □ Model architecture documented
  □ Feature definitions captured
  □ Threshold rationale recorded
  □ Version history maintained

□ Change Management
  □ Change approval process defined
  □ A/B testing capability available
  □ Rollback plan documented
  □ Communication plan for changes

□ Stakeholder Alignment
  □ CS leadership reviews validation reports
  □ Data team owns model maintenance
  □ Feedback loop from CSMs formalized
  □ Executive sponsor engaged

□ Continuous Improvement
  □ New feature experimentation process
  □ Segment-specific tuning allowed
  □ Industry benchmark tracking
  □ Model improvement backlog maintained

Anti-Patterns

  • Set and forget — Never validating after initial launch
  • Aggregate-only analysis — Missing segment-specific issues
  • No surprise churn investigation — Not learning from failures
  • Threshold stagnation — Never adjusting as business changes
  • Ignoring drift — Features change meaning over time
  • No documentation — Model logic in one person's head
  • Validation without action — Reports with no follow-through
  • Perfect-seeking — Waiting for 100% accuracy vs. iterating

---
title: Health Score Design & Weighting
impact: CRITICAL
tags: health-score, weighting, methodology, scoring-algorithm
---

Health Score Design & Weighting

Impact: CRITICAL

A well-designed health score predicts customer outcomes 60-90 days before they happen. Poor health scores are vanity metrics that provide false confidence while customers silently churn.

The Health Score Equation

Health Score = Σ (Component Score × Weight)

Where:
- Each component is normalized to 0-100
- Weights sum to 100%
- Final score ranges 0-100
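
In code, the equation is a weighted sum over pre-normalized components. The component names and weights below are illustrative, not the skill's defaults:

```python
def health_score(components, weights):
    """Weighted sum of pre-normalized (0-100) component scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(components[name] * w for name, w in weights.items())

# Illustrative components and weights.
weights    = {"usage": 0.35, "engagement": 0.25, "growth": 0.20, "support": 0.20}
components = {"usage": 72, "engagement": 80, "growth": 55, "support": 90}
score = health_score(components, weights)   # ≈ 74.2, on the same 0-100 scale
```

The assertion enforces the "weights sum to 100%" rule; because every component is already 0-100, the composite is guaranteed to stay in range.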

Component Selection Criteria

| Criterion | Question | Example |
|---|---|---|
| Predictive | Does this signal future outcomes? | Usage decline predicts churn |
| Measurable | Can we reliably track this? | Login frequency vs. "satisfaction" |
| Actionable | Can we influence this? | Feature adoption (yes) vs. company size (no) |
| Timely | Do we get the signal early enough? | Leading indicators only |
| Available | Do we have access to this data? | CRM data vs. internal discussions |

Standard Health Score Components

| Component | Typical Weight | Sub-Metrics |
|---|---|---|
| Product Usage | 30-40% | DAU/MAU, feature breadth, depth, frequency |
| Engagement | 20-25% | NPS, CSM touchpoints, email responsiveness |
| Growth Signals | 15-20% | Seat expansion, usage trend, contract growth |
| Support Health | 15-20% | Ticket volume, sentiment, resolution satisfaction |
| Financial Health | 5-10% | Payment history, contract terms, billing issues |

Weight Assignment by Business Model

| Business Model | Usage | Engagement | Growth | Support | Financial |
|---|---|---|---|---|---|
| Self-serve SaaS | 45% | 15% | 20% | 15% | 5% |
| Enterprise SaaS | 30% | 30% | 15% | 15% | 10% |
| Usage-based | 50% | 15% | 20% | 10% | 5% |
| High-touch services | 20% | 40% | 15% | 20% | 5% |

Good Health Score Design

Health Score v2.0 - Enterprise Accounts

Component: Product Usage (35%)
├── DAU/MAU ratio (10%)
│   └── 30-day rolling average
├── Feature adoption score (10%)
│   └── % of key features used
├── Usage depth (10%)
│   └── Actions per session
└── Core workflow completion (5%)
    └── % completing primary use case

Component: Engagement (25%)
├── Relationship NPS (10%)
│   └── Most recent score
├── CSM touchpoints (8%)
│   └── Meetings held vs. scheduled
└── Communication responsiveness (7%)
    └── Email response rate

Component: Growth Signals (20%)
├── Seat expansion trend (8%)
│   └── 90-day user growth rate
├── Usage expansion trend (7%)
│   └── 90-day consumption growth
└── Contract expansion (5%)
    └── Any expansion in last year

Component: Support Health (20%)
├── Ticket sentiment (8%)
│   └── AI-analyzed support conversations
├── Resolution satisfaction (7%)
│   └── Post-ticket CSAT
└── Escalation frequency (5%)
    └── Escalations per month

Scoring:
- All sub-metrics normalized to 0-100
- Component score = weighted average of sub-metrics
- Final score = weighted sum of components

Bad Health Score Design

Health Score v1.0 (Problems Identified)

Components:
├── Product Usage (70%)          ← Over-weighted single category
│   └── Total logins             ← Vanity metric
│
├── Support Tickets (15%)        ← Direction unclear
│   └── Total tickets opened     ← More tickets = lower score?
│
└── Contract Value (15%)         ← Not predictive
    └── ARR                      ← Bigger customers ≠ healthier

Problems:
✗ Over-reliance on single category
✗ Logins don't measure value
✗ Tickets could be good (engaged) or bad (frustrated)
✗ ARR doesn't predict retention
✗ No engagement or relationship signals
✗ No leading indicators

Scoring Algorithm Examples

Linear Scoring:

Score = (Actual Value / Target Value) × 100
Cap at 100, floor at 0

Example: DAU/MAU
Target: 40%
Actual: 32%
Score: (32/40) × 100 = 80

Threshold-Based Scoring:

If DAU/MAU >= 50%: Score = 100
If DAU/MAU >= 40%: Score = 80
If DAU/MAU >= 30%: Score = 60
If DAU/MAU >= 20%: Score = 40
If DAU/MAU >= 10%: Score = 20
If DAU/MAU < 10%:  Score = 0

Trend-Adjusted Scoring:

Base Score = Current metric score
Trend Factor = (Current - 30 days ago) / 30 days ago
Adjusted Score = Base Score × (1 + Trend Factor × 0.2)

Example:
Base Score: 70
Usage up 15%: 70 × 1.03 = 72.1
Usage down 15%: 70 × 0.97 = 67.9
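
The three algorithms in code, reproducing the worked examples above:

```python
def linear_score(actual, target):
    """Linear: proportion of target, capped at 100 and floored at 0."""
    return max(0.0, min(100.0, 100.0 * actual / target))

def threshold_score(dau_mau):
    """Threshold-based: step function over DAU/MAU bands."""
    for floor, score in ((0.5, 100), (0.4, 80), (0.3, 60), (0.2, 40), (0.1, 20)):
        if dau_mau >= floor:
            return score
    return 0

def trend_adjusted(base_score, current, prior, dampening=0.2):
    """Trend-adjusted: scale the base score by the 30-day trend."""
    trend = (current - prior) / prior
    return base_score * (1 + trend * dampening)

linear_score(32, 40)            # 80.0
threshold_score(0.32)           # 60
trend_adjusted(70, 1.15, 1.0)   # ≈ 72.1 (usage up 15%)
trend_adjusted(70, 0.85, 1.0)   # ≈ 67.9 (usage down 15%)
```

The 0.2 dampening factor is the "× 0.2" from the formula above: it lets trend move a score by at most 20% of the trend magnitude, so a declining metric is penalized without whipsawing the composite.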

Health Score Thresholds

| Score Range | Status | Color | Action Priority |
|---|---|---|---|
| 85-100 | Thriving | Green | Expansion focus |
| 70-84 | Healthy | Light Green | Monitor, optimize |
| 50-69 | Neutral | Yellow | Proactive engagement |
| 30-49 | At-Risk | Orange | Immediate intervention |
| 0-29 | Critical | Red | Executive escalation |

Threshold Calibration Process

Step 1: Historical Analysis
- Pull 12+ months of health scores
- Tag customers by outcome (churned, retained, expanded)
- Plot score distribution by outcome

Step 2: Threshold Identification
- Find score ranges where outcomes diverge
- Identify clear "danger zones"
- Map to intervention capacity

Step 3: Validation
- Apply thresholds prospectively
- Track prediction accuracy
- Measure false positive/negative rates

Step 4: Refinement
- Adjust thresholds quarterly
- Segment-specific thresholds if needed
- Document rationale for changes

Health Score Validation Metrics

| Metric | Target | Calculation |
|---|---|---|
| Churn Prediction Accuracy | >70% | Correctly predicted churns ÷ actual churns |
| False Positive Rate | <25% | Falsely flagged at-risk ÷ total flagged at-risk |
| False Negative Rate | <15% | Surprise churns ÷ total churns |
| Score-Outcome Correlation | >0.5 | Pearson correlation of score vs. outcome |
| Segment Consistency | Similar | Same score ≈ same outcomes across segments |

Segment-Specific Scoring Considerations

| Segment | Adjustment |
|---|---|
| Enterprise | Weight relationships higher; usage patterns differ |
| SMB | Weight product usage higher; fewer CSM touchpoints |
| New customers | Separate onboarding score; don't penalize low tenure |
| High-growth | Adjust for rapid seat-expansion volatility |
| Seasonal | Normalize for expected usage patterns |

Health Score Implementation Checklist

□ Component Selection
  □ Each component has clear predictive value
  □ All data sources are reliable and available
  □ Metrics are actionable (we can influence them)
  □ No duplicate signals across components

□ Weight Assignment
  □ Weights based on historical correlation analysis
  □ Weights sum to 100%
  □ No single component dominates (max 40%)
  □ Weights documented with rationale

□ Scoring Logic
  □ All sub-metrics normalized consistently (0-100)
  □ Handling for missing data defined
  □ Edge cases documented (new customers, etc.)
  □ Calculation logic peer-reviewed

□ Threshold Definition
  □ Thresholds based on outcome analysis
  □ Clear actions mapped to each threshold
  □ Thresholds validated against historical data
  □ Segment-specific adjustments if needed

□ Operational Readiness
  □ Score calculation automated
  □ Update frequency defined (daily/weekly)
  □ Alerting configured for threshold crossings
  □ Dashboard visibility for CS team

□ Ongoing Governance
  □ Quarterly calibration review scheduled
  □ Accuracy metrics tracked
  □ Feedback loop from CS team
  □ Version history maintained

Anti-Patterns

  • Kitchen sink scoring — Including every metric regardless of predictive value
  • Equal weighting — All components at 20% without analysis
  • Binary signals — Using yes/no when degree matters
  • Static thresholds — Never recalibrating as business changes
  • Ignoring tenure — New customers scored same as mature ones
  • Vanity components — Metrics that feel important but don't predict
  • Over-fitting — Optimizing for historical data, failing on new patterns
  • No documentation — Scoring logic understood by one person only

---
title: Leading vs Lagging Indicator Analysis
impact: CRITICAL
tags: leading-indicators, lagging-indicators, predictive-signals, correlation
---

Leading vs Lagging Indicator Analysis

Impact: CRITICAL

By the time you see lagging indicators (churn, downgrades), it's often too late. Leading indicators give you the 60-90 day window needed to intervene effectively. The best customer success teams obsess over leading indicators.

Indicator Classification Framework

Timeline to Outcome:
────────────────────────────────────────────────────────►
│                                                       │
│  LEADING           COINCIDENT         LAGGING        │
│  (60-90 days)      (30-60 days)       (0-30 days)   │
│                                                       │
│  ✓ Actionable      ~ Urgent           ✗ Historical  │
│  ✓ Predictive      ~ Confirmatory     ✗ Reactive    │
│  ✓ Proactive       ~ Responsive       ✗ Post-mortem │
│                                                       │
└───────────────────────────────────────────────────────┘

Common Indicator Categories

| Category | Leading (60-90 days) | Coincident (30-60 days) | Lagging (0-30 days) |
|---|---|---|---|
| Usage | Feature adoption declining | Login frequency dropping | Account dormant |
| Engagement | Missed scheduled meetings | Unresponsive to outreach | No contact 60+ days |
| Sentiment | Support ticket tone change | NPS score drop | Cancellation request |
| Financial | Contract questions | Downgrade inquiry | Non-renewal notice |
| Organizational | Champion activity on LinkedIn | New stakeholder introduced | Champion departed |

Leading Indicator Catalog

| Indicator | Signal Type | Detection Method | Action Window |
|---|---|---|---|
| DAU/MAU declining >20% | Usage | Product analytics | 90 days |
| Key feature abandonment | Usage | Event tracking | 75 days |
| Power user disengagement | Usage | User segmentation | 60 days |
| CSM meeting cancellations | Engagement | Calendar tracking | 60 days |
| Exec sponsor unresponsive | Engagement | Communication logs | 75 days |
| Support ticket sentiment shift | Sentiment | NLP analysis | 45 days |
| Renewal meeting not scheduled | Financial | CSM activity | 90 days |
| Budget/cost questions | Financial | Call transcripts | 60 days |
| Champion job change signals | Organizational | LinkedIn tracking | 90 days |
| New stakeholder evaluation | Organizational | CSM notes | 60 days |

Good Leading Indicator Analysis

Indicator: Feature Adoption Decline

Definition:
- Customer using <50% of features used at peak
- Measured over rolling 30-day window
- Compared to their own historical baseline

Why It's Leading:
- Precedes churn by 75 days on average
- Indicates value not being realized
- Actionable through enablement

Detection:
- Automated daily feature usage calculation
- Alert when adoption drops below threshold
- Trend visualization in health dashboard

Correlation Analysis:
- 68% of customers with this signal churned within 120 days
- Only 12% of customers without this signal churned
- Predictive accuracy: 73%

Action Trigger:
When detected → CSM outreach within 48 hours
Goal → Feature re-enablement or use case pivot

Bad Leading Indicator Analysis

Indicator: Low NPS Score

Problems:
✗ NPS is often coincident or lagging, not leading
✗ By the time NPS drops, issues are entrenched
✗ Quarterly surveys miss the window
✗ NPS alone lacks actionability

Better Approach:
- Track NPS trend (leading signal: declining NPS)
- Combine with other signals (NPS + usage decline)
- Use transactional NPS for faster feedback
- Look at verbatim comments for leading signals

Correlation Reality:
- Static low NPS: 45% correlation to churn
- Declining NPS trend: 72% correlation to churn
- The trend is the leading indicator, not the score

Correlation Analysis Methodology

Step 1: Define Outcomes
- Primary: Churn (Y/N)
- Secondary: Expansion, Contraction, NRR

Step 2: Identify Candidate Signals
- List all measurable customer behaviors
- Include product, engagement, support, financial

Step 3: Time-Shift Analysis
For each signal at each lag period (30, 60, 90, 120 days):
- Calculate correlation to outcome
- Identify optimal prediction window

Step 4: Signal Ranking
- Rank by correlation strength
- Consider actionability
- Assess data availability

Step 5: Combine for Prediction
- Build composite leading indicator score
- Validate on holdout data
- Monitor ongoing accuracy
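
Step 3 in code: correlate a signal captured at each lag against churn labels and pick the most predictive window. The per-account data layout (signal snapshots keyed by lag days) is an assumption made for this sketch:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation; with 0/1 labels this is point-biserial."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_lag(accounts, lags=(30, 60, 90, 120)):
    """accounts: [{'churned': 0/1, 'signal': {lag_days: value}}, ...]"""
    results = {lag: pearson([a["signal"][lag] for a in accounts],
                            [a["churned"] for a in accounts])
               for lag in lags}
    return max(results, key=lambda lag: abs(results[lag])), results

# Synthetic example: the 90-day snapshot separates churners cleanly.
accounts = [
    {"churned": 1, "signal": {30: 0.2, 60: 0.5, 90: 0.9, 120: 0.4}},
    {"churned": 1, "signal": {30: 0.8, 60: 0.6, 90: 0.8, 120: 0.5}},
    {"churned": 0, "signal": {30: 0.7, 60: 0.4, 90: 0.1, 120: 0.6}},
    {"churned": 0, "signal": {30: 0.3, 60: 0.5, 90: 0.2, 120: 0.3}},
]
lag, corrs = best_lag(accounts)   # lag == 90: strongest correlation window
```

Run this per candidate signal, then rank signals by the correlation at their best lag (Step 4).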

Correlation Strength Benchmarks

| Correlation | Interpretation | Action |
|---|---|---|
| >0.7 | Strong predictor | High priority signal |
| 0.5-0.7 | Moderate predictor | Include in model |
| 0.3-0.5 | Weak predictor | Combine with others |
| <0.3 | Not predictive | Exclude or investigate |

Signal Combination Matrix

| If Signal A... | And Signal B... | Risk Level | Action |
|---|---|---|---|
| Usage declining | Engagement stable | Medium | Enablement focus |
| Usage stable | Engagement declining | Medium | Relationship focus |
| Usage declining | Engagement declining | High | Executive intervention |
| Usage declining | Support tickets increasing | Critical | Immediate escalation |
| Champion active | Usage declining | Medium-High | Champion conversation |
| Champion inactive | Usage stable | Medium | Find new champion |
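
The combination matrix translates directly into a lookup table. The snake_case signal names here are hypothetical identifiers, not the skill's actual vocabulary:

```python
# Each matrix row becomes one entry: (signal_a, signal_b) -> (risk, action).
RISK_MATRIX = {
    ("usage_declining",   "engagement_stable"):    ("Medium",      "Enablement focus"),
    ("usage_stable",      "engagement_declining"): ("Medium",      "Relationship focus"),
    ("usage_declining",   "engagement_declining"): ("High",        "Executive intervention"),
    ("usage_declining",   "support_increasing"):   ("Critical",    "Immediate escalation"),
    ("champion_active",   "usage_declining"):      ("Medium-High", "Champion conversation"),
    ("champion_inactive", "usage_stable"):         ("Medium",      "Find new champion"),
}

def combined_risk(signal_a, signal_b):
    """Unlisted combinations default to manual review rather than silence."""
    return RISK_MATRIX.get((signal_a, signal_b), ("Unknown", "Manual review"))

level, action = combined_risk("usage_declining", "support_increasing")
# level == 'Critical', action == 'Immediate escalation'
```

The explicit "Unknown" fallback matters: a combination the matrix doesn't cover should surface for review, not be silently scored as low risk.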

Good Indicator Monitoring Dashboard

Leading Indicator Dashboard

┌─────────────────────────────────────────────────────────┐
│  LEADING INDICATOR ALERTS (Last 7 Days)                 │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  Critical (Action Required):                   12       │
│  ├── Usage Decline >30%                        5       │
│  ├── Champion Departure Detected               3       │
│  └── Renewal Meeting Not Scheduled             4       │
│                                                         │
│  Warning (Monitor Closely):                    28       │
│  ├── Feature Adoption Declining                12      │
│  ├── Engagement Score Down                     9       │
│  └── Support Sentiment Shift                   7       │
│                                                         │
├─────────────────────────────────────────────────────────┤
│  INDICATOR TRENDS (90-Day)                              │
│                                                         │
│  Usage Decline Alerts:     ▲ +15% vs prior period      │
│  Champion Departures:      ▼ -8% vs prior period       │
│  Engagement Drops:         ─ Flat vs prior period      │
│                                                         │
├─────────────────────────────────────────────────────────┤
│  PREDICTION ACCURACY (Last Quarter)                     │
│                                                         │
│  Churns Predicted:         42/51 (82%)                 │
│  False Positives:          15/42 (36%)                 │
│  Avg Lead Time:            67 days                     │
│                                                         │
└─────────────────────────────────────────────────────────┘

Building Your Leading Indicator Model

| Step | Action | Output |
|---|---|---|
| 1 | Collect 12+ months historical data | Data set |
| 2 | Tag outcomes (churn, retain, expand) | Labeled data |
| 3 | Calculate all signals at various time lags | Signal matrix |
| 4 | Run correlation analysis | Ranked signals |
| 5 | Select top 5-8 leading indicators | Indicator set |
| 6 | Define thresholds for each | Alert rules |
| 7 | Build composite score | Leading indicator score |
| 8 | Validate on holdout data | Accuracy metrics |
| 9 | Implement monitoring | Automated alerts |
| 10 | Refine quarterly | Continuous improvement |
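
Steps 5-7 can be sketched as a weighted composite over binary alert rules. The indicator names, thresholds, and weights below are placeholders, not the skill's actual configuration:

```python
# Each indicator: (name, firing rule over a metrics dict, weight).
INDICATORS = [
    ("dau_mau_decline",  lambda m: m["dau_mau_change"] <= -0.20,        0.30),
    ("feature_abandon",  lambda m: m["days_since_key_feature"] >= 30,   0.25),
    ("meeting_cancels",  lambda m: m["consecutive_cancellations"] >= 2, 0.25),
    ("budget_questions", lambda m: m["budget_mentions"] > 0,            0.20),
]

def leading_indicator_score(metrics):
    """0-100 composite; higher means more churn-risk signals firing."""
    return 100 * sum(w for _, fires, w in INDICATORS if fires(metrics))

metrics = {"dau_mau_change": -0.25, "days_since_key_feature": 12,
           "consecutive_cancellations": 2, "budget_mentions": 0}
risk = leading_indicator_score(metrics)   # ≈ 55: decline + cancellations fire
```

Weights would come from Step 4's correlation ranking; Step 8 then validates the composite against held-out churn outcomes before it drives alerts.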

Action Triggers by Signal

| Signal | Threshold | Action | Owner | SLA |
|---|---|---|---|---|
| Usage decline | >25% MoM | CSM outreach | CSM | 48 hrs |
| Feature abandonment | Key feature unused 30+ days | Enablement call | CSM | 1 week |
| Champion departure | LinkedIn change detected | Stakeholder mapping | CSM + Manager | 24 hrs |
| NPS decline | Drop of 3+ points | Root cause analysis | CSM | 1 week |
| Support sentiment | Negative trend detected | Service review | Support Lead | 48 hrs |
| Meeting cancellation | 2+ consecutive | Manager check-in | CSM Manager | 1 week |
| Budget questions | Detected in call | Value realization review | CSM | 48 hrs |

Indicator Validation Checklist

□ Predictive Power
  □ Correlation to outcome >0.5
  □ Consistent across customer segments
  □ Maintains accuracy over time
  □ Not just correlating with another signal

□ Actionability
  □ Clear intervention available
  □ Enough lead time to act (60+ days)
  □ Team has capacity to respond
  □ Success interventions documented

□ Reliability
  □ Data source is consistent
  □ Signal can be calculated automatically
  □ Missing data handling defined
  □ False positive rate acceptable (<30%)

□ Operationalization
  □ Real-time or near-real-time detection
  □ Alerts configured and routed correctly
  □ Playbook exists for each signal
  □ Feedback loop to improve model

Anti-Patterns

  • Lagging indicator focus — Tracking churn rate instead of churn predictors
  • Single indicator reliance — One signal without confirmation
  • Ignoring signal combinations — Missing that A + B together is critical
  • Static thresholds — Not adjusting for segment or seasonality
  • No validation — Using indicators without testing predictive power
  • Action-less alerts — Signals without defined responses
  • Too many indicators — Alert fatigue from over-monitoring
  • Ignoring false positives — Not refining to reduce noise

---
title: Risk Identification & Escalation
impact: CRITICAL
tags: risk-identification, escalation, intervention, save-strategies
---

Risk Identification & Escalation

Impact: CRITICAL

Early risk identification and well-defined escalation processes are the difference between saving an at-risk account and conducting a post-mortem. A structured approach ensures no customer falls through the cracks and interventions happen with enough lead time to succeed.

The Risk Escalation Framework

┌──────────────────────────────────────────────────────────────────┐
│                     RISK ESCALATION PATH                         │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  LOW RISK          MEDIUM RISK        HIGH RISK        CRITICAL │
│  Health: 70+       Health: 50-69      Health: 30-49    Health: <30│
│                                                                  │
│  ┌─────────┐      ┌─────────┐       ┌─────────┐      ┌─────────┐│
│  │ Monitor │ ───► │ Engage  │ ────► │Intervene│ ───► │Escalate ││
│  └─────────┘      └─────────┘       └─────────┘      └─────────┘│
│                                                                  │
│  Owner: CSM       Owner: CSM        Owner: CSM +     Owner: VP + │
│                                     Manager          Executive   │
│                                                                  │
│  SLA: Weekly      SLA: 1 week       SLA: 48 hours   SLA: 24 hrs │
│  review           outreach          intervention    response     │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Risk Signal Categories

| Category | Signals | Severity | Detection Method |
|---|---|---|---|
| Usage | Declining logins, feature abandonment, dormant users | High | Product analytics |
| Engagement | Missed meetings, unresponsive, no exec access | High | CRM tracking |
| Sentiment | Negative NPS, complaints, support escalations | High | Survey + Support |
| Financial | Payment issues, contract questions, budget concerns | Very High | Billing + CSM notes |
| Organizational | Champion leaving, reorg, M&A | Critical | LinkedIn + news |
| Competitive | Competitor mentions, RFP activity, feature comparisons | Very High | Call transcripts |
| Contractual | Short contract, no auto-renew, upcoming expiration | Medium | Contract data |

Risk Signal Severity Matrix

| Signal | Severity | Time Sensitivity | Required Action |
|---|---|---|---|
| Champion departure | Critical | 24 hours | Executive outreach |
| Cancellation request | Critical | Same day | Save team activation |
| Competitor evaluation | Very High | 48 hours | Executive involvement |
| Usage decline >50% | High | 48 hours | CSM intervention |
| Payment failure | High | 24 hours | Billing + CSM outreach |
| Negative NPS response | High | 72 hours | Closed-loop follow-up |
| Missed QBR | Medium | 1 week | Manager involvement |
| Contract expiring <90 days | Medium | 1 week | Renewal discussion |
| Support escalation | Medium | 48 hours | Service recovery |

Good Risk Identification System

Risk Alert: Acme Corp

Account: Acme Corp
ARR: $125,000
Health Score: 42 (was 68 last month)
CSM: Jane Smith

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

RISK SIGNALS DETECTED:

1. Champion Status Change (CRITICAL)
   └── Sarah Johnson updated LinkedIn to new company
   └── Detected: 2 hours ago
   └── She represented 65% of account activity

2. Usage Decline (HIGH)
   └── 34% decrease in DAU over 30 days
   └── Key feature "Reports" unused for 14 days
   └── Trend accelerating

3. Support Sentiment (MEDIUM)
   └── Last 3 tickets rated "Dissatisfied"
   └── Average sentiment score: 2.1/5 (was 4.2)

RISK SCORE: 78/100 (High Risk)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

RECOMMENDED ACTIONS:

1. [IMMEDIATE] Contact account to identify new champion
2. [48 HOURS] Schedule executive check-in
3. [1 WEEK] Arrange re-onboarding for new stakeholders

ESCALATION: Manager + VP CS notified
SLA: Response required within 24 hours

Bad Risk Identification System

Alert: Account health decreased

Account: Acme Corp
Health Score: 42
Alert: Health score below threshold

Problems:
✗ No specific signals identified
✗ No context on what changed
✗ No severity classification
✗ No recommended actions
✗ No escalation path
✗ No SLA defined
✗ No ARR/impact context

Escalation Matrix

| Trigger | First Responder | Escalate To | Executive Involvement |
|---|---|---|---|
| Health drops >15 points | CSM | None initially | If no improvement in 2 weeks |
| Health drops >25 points | CSM | CSM Manager | VP if no improvement in 1 week |
| Health score <40 | CSM + Manager | VP CS | CEO for strategic accounts |
| Churn signal detected | CSM | Manager + VP | Based on ARR tier |
| Champion departure | CSM | Manager | VP for accounts >$100K |
| Competitive threat | CSM + Manager | VP CS + Exec | CEO for strategic |
| Cancellation request | Save Team | VP CS | CEO for top 20 accounts |
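
A routing function makes the escalation matrix enforceable in code rather than tribal knowledge. This is a sketch under the assumption that triggers arrive as snake_case event names; the cut-offs mirror the matrix above:

```python
def escalation_path(trigger, arr=0, strategic=False):
    """Return the ordered list of responders for a risk trigger."""
    if trigger == "cancellation_request":
        return ["Save Team", "VP CS"] + (["CEO"] if strategic else [])
    if trigger == "competitive_threat":
        return ["CSM", "Manager", "VP CS + Exec"] + (["CEO"] if strategic else [])
    if trigger == "champion_departure":
        return ["CSM", "Manager"] + (["VP CS"] if arr > 100_000 else [])
    if trigger == "health_below_40":
        return ["CSM", "CSM Manager", "VP CS"] + (["CEO"] if strategic else [])
    return ["CSM"]   # default: CSM owns the first response

escalation_path("champion_departure", arr=125_000)
# ['CSM', 'Manager', 'VP CS']
```

Encoding the matrix this way also gives alerting a single place to attach SLAs and notification targets per trigger.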

Intervention Playbooks

Playbook: Champion Departure

Trigger: Key contact leaves company
Severity: Critical
SLA: 24 hour initial response

Day 1:
□ Verify departure (LinkedIn, email bounce, etc.)
□ Identify replacement contact
□ Executive-to-executive outreach to maintain relationship
□ Update CRM with new stakeholder map

Day 2-7:
□ Schedule intro call with new champion
□ Offer re-onboarding/training
□ QBR to re-establish value baseline
□ Document new success criteria

Day 8-30:
□ Accelerate engagement cadence
□ Monthly check-ins (vs. quarterly)
□ Feature adoption review
□ Executive sponsor assignment if needed

Playbook: Usage Decline

Trigger: Usage down >25% over 30 days
Severity: High
SLA: 48 hour initial contact

Day 1-2:
□ Analyze usage data for root cause
□ Identify which users/features affected
□ CSM outreach: "I noticed [specific change], is everything okay?"
□ Offer support call

Day 3-7:
□ Deep-dive call to understand context
□ Create action plan with customer
□ Enablement session if adoption issue
□ Executive involvement if strategic issue

Day 8-30:
□ Weekly check-ins during recovery
□ Monitor usage daily
□ Adjust plan based on progress
□ Escalate if no improvement by day 14

Playbook: Competitive Threat

Trigger: Competitor mention detected
Severity: Very High
SLA: 48 hour executive response

Day 1:
□ Alert CSM, Manager, and VP
□ Gather intelligence (what competitor, why looking)
□ Prepare competitive battle card
□ Schedule executive call

Day 2-3:
□ Executive-to-executive engagement
□ Understand specific evaluation criteria
□ Address gaps or concerns directly
□ Reinforce unique value proposition

Day 4-14:
□ Provide additional proof points (case studies, ROI)
□ Offer executive references
□ Consider strategic concessions if needed
□ Document outcome and learnings

Save Team Structure

| Role | Responsibility | When Engaged |
|---|---|---|
| CSM | First line, relationship management | Always |
| CSM Manager | Strategy, additional resources | Health <50 |
| VP Customer Success | Executive relationships, approvals | Health <35 or $100K+ ARR |
| Executive Sponsor | Peer-level engagement | Strategic accounts |
| Product | Roadmap discussions, custom solutions | Feature gaps |
| Finance | Pricing, contract flexibility | Commercial objections |

Save Offer Guidelines

| Offer Type | When to Use | Approval Required | Success Rate |
|---|---|---|---|
| Extended support | Adoption/enablement issues | CSM | 45% |
| Professional services | Implementation gaps | Manager | 40% |
| Feature access | Missing functionality | Manager | 35% |
| Contract pause | Timing/budget issues | VP | 30% |
| Pricing concession | Cost objections | VP + Finance | 25% |
| Custom development | Critical feature gap | Executive | 20% |

Risk Review Cadence

| Review Type | Frequency | Attendees | Focus |
|---|---|---|---|
| Daily Standup | Daily | CSM Team | Critical alerts |
| Team Review | Weekly | CSM + Manager | At-risk accounts |
| Leadership Review | Weekly | VP + Directors | High-value at-risk |
| Executive Review | Monthly | C-Suite | Strategic accounts |
| Portfolio Review | Quarterly | All CS | Trends, patterns |

Risk Documentation Template

## At-Risk Account Analysis

**Account:** [Name]
**ARR:** [Amount]
**Health Score:** [Current] (was [Previous])
**Risk Level:** [Critical/High/Medium]
**Date Identified:** [Date]

### Risk Signals
| Signal | Severity | Date Detected |
|--------|----------|---------------|
| [Signal 1] | [Level] | [Date] |
| [Signal 2] | [Level] | [Date] |

### Root Cause Analysis
[What's driving the risk]

### Stakeholder Impact
- Champion: [Status]
- Executive Sponsor: [Status]
- End Users: [Status]

### Action Plan
| Action | Owner | Due Date | Status |
|--------|-------|----------|--------|
| [Action 1] | [Name] | [Date] | [Status] |
| [Action 2] | [Name] | [Date] | [Status] |

### Outcome
[ ] Saved
[ ] Churned
[ ] In Progress

### Lessons Learned
[What we'll do differently]

Escalation Checklist

□ Risk Identification
  □ Specific signals documented
  □ Severity classified correctly
  □ Root cause hypothesized
  □ ARR impact quantified

□ Initial Response
  □ CSM contacted within SLA
  □ Customer context gathered
  □ Quick win opportunities identified
  □ Escalation need assessed

□ Escalation Execution
  □ Right people involved
  □ Clear ask defined
  □ Timeline established
  □ Customer expectations set

□ Intervention
  □ Action plan created
  □ Customer agreement obtained
  □ Progress tracking in place
  □ Success criteria defined

□ Resolution
  □ Outcome documented
  □ Lessons captured
  □ Process improvements identified
  □ Stakeholders informed

Anti-Patterns

  • Alert fatigue — Too many low-priority alerts mask real risks
  • Single signal reliance — Missing multi-factor risk patterns
  • Slow escalation — Waiting too long to involve leadership
  • No playbooks — Ad-hoc response to predictable situations
  • Discount-first saves — Training customers to threaten churn
  • Ignoring small accounts — Risk exists at all ARR levels
  • No documentation — Same mistakes repeated
  • Hero culture — Depending on individuals vs. process

---
title: Customer Segmentation & Tier Scoring
impact: MEDIUM-HIGH
tags: segmentation, tier-scoring, customer-tiers, behavioral-clustering
---

Customer Segmentation & Tier Scoring

Impact: MEDIUM-HIGH

Not all customers are equal — and treating them equally means over-investing in some and under-investing in others. Effective segmentation enables right-sized engagement models, focused resources, and segment-specific success strategies. Tier scoring determines service levels.

The Segmentation Framework

┌──────────────────────────────────────────────────────────────────┐
│                   SEGMENTATION DIMENSIONS                        │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  VALUE                    HEALTH                  POTENTIAL      │
│  (Current Worth)          (Current State)         (Future Worth) │
│                                                                  │
│  • ARR / MRR              • Health score          • Growth rate  │
│  • Lifetime value         • Engagement level      • Expansion    │
│  • Contract length        • Risk tier               capacity     │
│  • Payment history        • NPS/Sentiment         • Strategic    │
│                                                                  │
│                          ↓                                       │
│                                                                  │
│                 CUSTOMER TIER ASSIGNMENT                         │
│                                                                  │
│                 Enterprise │ Growth │ Scale │ Tech-touch        │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Value-Based Segmentation

| Tier | ARR Range | % of Customers | % of ARR | Engagement Model |
|---|---|---|---|---|
| Enterprise | >$100K | 5-10% | 40-50% | High-touch, named CSM |
| Mid-Market | $25K-$100K | 15-25% | 25-35% | Pooled CSM, proactive |
| SMB | $5K-$25K | 30-40% | 15-25% | Scaled, digital-first |
| Starter | <$5K | 30-40% | 5-10% | Tech-touch, self-serve |

Tier Scoring Model

Customer Tier Score Calculation

TIER SCORE = (Value Score × 0.4) + (Potential Score × 0.35) + (Strategic Score × 0.25)

Value Score Components (0-100):
├── ARR percentile (50%)
├── Contract length (25%)
└── Payment reliability (25%)

Potential Score Components (0-100):
├── Growth trajectory (40%)
├── Seat expansion capacity (30%)
└── Product fit depth (30%)

Strategic Score Components (0-100):
├── Brand recognition (35%)
├── Reference potential (35%)
└── Market influence (30%)

Tier Assignment:
├── Tier 1 (Enterprise): Score 80-100
├── Tier 2 (Growth):     Score 60-79
├── Tier 3 (Scale):      Score 40-59
└── Tier 4 (Tech-touch): Score 0-39
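
The formula and tier bands above, as a function. Sub-scores are assumed to be pre-computed on the 0-100 scale; the sample inputs are illustrative:

```python
TIER_BANDS = [(80, "Tier 1 (Enterprise)"), (60, "Tier 2 (Growth)"),
              (40, "Tier 3 (Scale)"), (0, "Tier 4 (Tech-touch)")]

def tier_score(value, potential, strategic):
    """TIER SCORE = Value × 0.40 + Potential × 0.35 + Strategic × 0.25."""
    return value * 0.40 + potential * 0.35 + strategic * 0.25

def assign_tier(score):
    for floor, name in TIER_BANDS:
        if score >= floor:
            return name
    return TIER_BANDS[-1][1]

score = tier_score(value=70, potential=80, strategic=60)   # ≈ 71
assign_tier(score)   # 'Tier 2 (Growth)'
```

Because the weights sum to 1.0 and each sub-score is 0-100, the tier score is guaranteed to land inside one of the four bands.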

Good Tier Assignment

Customer Tier Assessment: TechCorp Inc.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

VALUE SCORE: 79/100
├── ARR: $65,000 (68th percentile)        → 68 pts
├── Contract: 2-year agreement            → 80 pts
└── Payment: Always on time               → 100 pts
Weighted: (68 × 0.50) + (80 × 0.25) + (100 × 0.25) = 79

POTENTIAL SCORE: 82.5/100
├── Growth: 25% user growth last year     → 90 pts
├── Expansion: Using 40% of seats         → 75 pts
└── Product fit: 8/10 use cases match     → 80 pts
Weighted: (90 × 0.40) + (75 × 0.30) + (80 × 0.30) = 82.5

STRATEGIC SCORE: 68.25/100
├── Brand: Known regional player          → 60 pts
├── Reference: Willing, used once         → 75 pts
└── Influence: 500 LinkedIn followers     → 70 pts
Weighted: (60 × 0.35) + (75 × 0.35) + (70 × 0.30) = 68.25

TOTAL TIER SCORE: (79 × 0.40) + (82.5 × 0.35) + (68.25 × 0.25) = 77.5

Tier Assignment: TIER 2 (Growth)
Engagement Model: Pooled CSM with proactive touchpoints
Rationale: Strong potential for expansion, moderate current value

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Bad Tier Assignment

Customer Tier: Enterprise

Reason: They asked for a dedicated CSM.

Problems:
✗ No objective criteria
✗ Based on customer request, not value
✗ No scoring methodology
✗ No potential assessment
✗ No strategic consideration
✗ Will lead to misallocated resources

Behavioral Segmentation

| Segment | Behavior Pattern | Typical Needs | Engagement Focus |
| --- | --- | --- | --- |
| Champions | High usage, high NPS, advocates | Expansion, recognition | Advocacy programs |
| Power Users | Heavy usage, feature depth | Advanced training | Feature betas |
| Steady State | Consistent, moderate usage | Efficiency, stability | Check-ins, optimization |
| Light Touch | Minimal engagement, still renews | Self-service, cost focus | Digital nurture |
| Expanding | Growing seats/usage | Onboarding, enablement | Growth support |
| Declining | Usage trending down | Intervention, value proof | Proactive outreach |
| At-Risk | Multiple churn signals | Rescue, retention | Save playbooks |

Segment-Specific Success Strategies

ENTERPRISE SEGMENT ($100K+ ARR)

Engagement Model:
├── Named Strategic CSM (1:10-15 ratio)
├── Dedicated Executive Sponsor
├── Quarterly Business Reviews
├── Annual Strategic Planning
└── Direct access to product leadership

Success Activities:
├── Monthly strategic check-ins
├── Bi-weekly operational reviews
├── Custom success plans
├── Early access to roadmap
└── Executive-level escalation path

Metrics Focus:
├── Value realization / ROI
├── Stakeholder satisfaction
├── Strategic alignment
├── Expansion pipeline
└── Reference/advocacy activity

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

TECH-TOUCH SEGMENT (<$5K ARR)

Engagement Model:
├── Automated, digital-first
├── Community-based support
├── Self-service resources
└── Exception-based human touch

Success Activities:
├── Automated onboarding sequences
├── In-app guidance and tutorials
├── Community forum engagement
├── Triggered outreach (risk, expansion)
└── Scaled webinars and office hours

Metrics Focus:
├── Activation rate
├── Feature adoption
├── Support ticket volume
├── Self-service resolution
└── Upgrade conversion rate

Customer Matrix: Value vs Health

                    HIGH VALUE
                        │
    ┌───────────────────┼───────────────────┐
    │                   │                   │
    │   AT-RISK         │   CHAMPIONS       │
    │   HIGH VALUE      │   HIGH VALUE      │
    │                   │                   │
    │   Strategy:       │   Strategy:       │
    │   Save & retain   │   Expand & grow   │
    │   Executive focus │   Advocacy focus  │
    │                   │                   │
────┼───────────────────┼───────────────────┼────
LOW │                   │                   │ HIGH
HEALTH                  │                   │ HEALTH
────┼───────────────────┼───────────────────┼────
    │                   │                   │
    │   AT-RISK         │   HEALTHY         │
    │   LOW VALUE       │   LOW VALUE       │
    │                   │                   │
    │   Strategy:       │   Strategy:       │
    │   Evaluate ROI    │   Self-serve      │
    │   Tech-touch save │   Upgrade path    │
    │                   │                   │
    └───────────────────┼───────────────────┘
                        │
                   LOW VALUE

Segment Migration Tracking

| From Tier | To Tier | Trigger | Process | Action |
| --- | --- | --- | --- | --- |
| SMB | Mid-Market | ARR >$25K | Auto-upgrade | Assign CSM |
| Mid-Market | Enterprise | ARR >$100K | Manual review | Strategic CSM assignment |
| Any | At-Risk | Health <40 | Auto-flag | Escalation playbook |
| At-Risk | Healthy | Health >60 for 60 days | Auto-restore | Return to normal model |
| Declining | Churned | Cancellation | Manual process | Win-back eligibility |
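The migration triggers above lend themselves to automation. A hedged sketch — the account dict shape and field names are illustrative, not from any specific CRM:

```python
def migration_actions(account):
    """Evaluate tier-migration triggers for one account.

    `account` is an illustrative dict with keys: tier, arr, health,
    healthy_streak_days, flagged_at_risk.
    """
    actions = []
    if account["tier"] == "SMB" and account["arr"] > 25_000:
        actions.append(("Mid-Market", "Assign CSM"))                # auto-upgrade
    if account["tier"] == "Mid-Market" and account["arr"] > 100_000:
        actions.append(("Enterprise", "Strategic CSM assignment"))  # manual review
    if account["health"] < 40 and not account["flagged_at_risk"]:
        actions.append(("At-Risk", "Escalation playbook"))          # auto-flag
    if (account["flagged_at_risk"] and account["health"] > 60
            and account["healthy_streak_days"] >= 60):
        actions.append(("Healthy", "Return to normal model"))       # auto-restore
    return actions

acct = {"tier": "SMB", "arr": 31_000, "health": 72,
        "healthy_streak_days": 0, "flagged_at_risk": False}
print(migration_actions(acct))  # [('Mid-Market', 'Assign CSM')]
```

Running this nightly against the full account list produces the upgrade/downgrade movement the dashboard below reports.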

Resource Allocation by Segment

| Resource | Enterprise | Mid-Market | SMB | Tech-Touch |
| --- | --- | --- | --- | --- |
| CSM Ratio | 1:10-15 | 1:30-50 | 1:100-200 | 1:1000+ |
| QBR Frequency | Quarterly | Semi-annual | Annual | None |
| Proactive Outreach | Monthly | Bi-monthly | Quarterly | Triggered only |
| Executive Access | Direct | Escalation | None | None |
| Custom Success Plan | Yes | Template | Self-service | None |
| Priority Support | Yes | Enhanced | Standard | Community |

Segmentation Dashboard

Segmentation Overview

┌─────────────────────────────────────────────────────────────────┐
│  TIER DISTRIBUTION                                              │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Tier        Customers    ARR         Health Avg   NRR         │
│  ─────────────────────────────────────────────────────────────  │
│  Enterprise      48       $9.8M       78           125%        │
│  Mid-Market     156       $8.2M       72           112%        │
│  SMB            187       $4.8M       68           98%         │
│  Tech-Touch     412       $2.2M       62           92%         │
│                                                                 │
│  Total          803       $25.0M      68           108%        │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│  TIER MOVEMENT (Last Quarter)                                   │
│                                                                 │
│  ↑ Upgraded:    34 customers (+$1.2M ARR impact)               │
│  ↓ Downgraded:  12 customers (-$380K ARR impact)               │
│  → Churned:     28 customers (-$520K ARR impact)               │
│  ★ New:         67 customers (+$890K ARR impact)               │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│  TIER-SPECIFIC ALERTS                                           │
│                                                                 │
│  Enterprise: 2 accounts at-risk (need exec attention)          │
│  Mid-Market: 8 accounts approaching Enterprise threshold       │
│  SMB: 15 accounts declining, intervention needed               │
│  Tech-Touch: Upgrade candidates identified (12 accounts)       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Segmentation Implementation Checklist

□ Segment Definition
  □ Clear criteria for each tier
  □ Scoring methodology documented
  □ Thresholds validated against data
  □ Edge case handling defined

□ Data Requirements
  □ Value metrics available
  □ Potential indicators tracked
  □ Strategic scoring inputs defined
  □ Automated calculation possible

□ Engagement Models
  □ CSM ratios defined per tier
  □ Touchpoint cadence specified
  □ Resource allocation approved
  □ Escalation paths documented

□ Migration Rules
  □ Upgrade triggers defined
  □ Downgrade criteria specified
  □ Review process for changes
  □ Customer communication plan

□ Technology Setup
  □ Tier field in CRM
  □ Automated tier calculation
  □ CSM assignment automation
  □ Reporting by segment

□ Team Readiness
  □ CSMs understand segment strategies
  □ Playbooks exist per segment
  □ Training completed
  □ Metrics tracked by segment

Anti-Patterns

  • ARR-only tiers — Ignoring potential and strategic value
  • Manual assignment — Subjective, inconsistent tiering
  • Static segmentation — Not updating as customers change
  • One-size engagement — Same model regardless of tier
  • Segment leakage — Enterprise service for SMB pricing
  • Ignoring potential — Only looking at current value
  • No migration path — Customers stuck in initial tier
  • Resource mismatch — High-touch for low-value, or vice versa

---
title: Usage Analytics & Adoption Metrics
impact: HIGH
tags: usage-analytics, adoption-metrics, engagement, product-analytics
---

Usage Analytics & Adoption Metrics

Impact: HIGH

Usage data is the most honest signal of customer health. Customers can tell you they're happy while silently disengaging — usage data tells the real story. Effective usage analytics separate healthy accounts from future churn 60-90 days in advance.

The Usage Analytics Hierarchy

┌──────────────────────────────────────────────────────────────────┐
│                    USAGE ANALYTICS HIERARCHY                     │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Level 1: ACTIVITY                                               │
│  └── Are they logging in?                                        │
│      Metrics: DAU, WAU, MAU, session count                       │
│                                                                  │
│  Level 2: ENGAGEMENT                                             │
│  └── What are they doing?                                        │
│      Metrics: Actions per session, time in app, feature usage    │
│                                                                  │
│  Level 3: ADOPTION                                               │
│  └── Are they using core features?                               │
│      Metrics: Feature adoption %, key workflow completion        │
│                                                                  │
│  Level 4: VALUE                                                  │
│  └── Are they achieving outcomes?                                │
│      Metrics: Goals completed, ROI realized, business impact     │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Key Usage Metrics

| Metric | Definition | Formula | Target |
| --- | --- | --- | --- |
| DAU/MAU | Stickiness ratio | Daily active / Monthly active | 25-40% |
| L7/L30 | Weekly engagement | 7-day active / 30-day active | 40-60% |
| Sessions/User/Week | Usage frequency | Weekly sessions / Active users | 3-5+ |
| Actions per Session | Usage depth | Total actions / Sessions | 10-20+ |
| Feature Adoption Rate | Breadth | Features used / Available features | 40-60% |
| Power User % | Top engagement | Users >80th percentile / Total | 15-25% |
| Dormant % | Inactive accounts | No login 30+ days / Total | <10% |
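The first two ratios fall directly out of a daily login log. A self-contained sketch — the `daily_active` mapping of date to set of user IDs is an illustrative input shape:

```python
from datetime import date, timedelta

def active_in_window(daily_active, as_of, days):
    """Distinct users active in the `days`-day window ending at `as_of`."""
    users = set()
    for back in range(days):
        users |= daily_active.get(as_of - timedelta(days=back), set())
    return users

def usage_ratios(daily_active, as_of):
    dau = len(daily_active.get(as_of, set()))
    wau = len(active_in_window(daily_active, as_of, 7))
    mau = len(active_in_window(daily_active, as_of, 30))
    return {
        "dau_mau": dau / mau if mau else 0.0,  # stickiness, target 25-40%
        "l7_l30": wau / mau if mau else 0.0,   # weekly engagement, target 40-60%
    }

today = date(2024, 6, 30)
logins = {
    today: {"alice", "bob"},
    today - timedelta(days=3): {"alice", "carol"},
    today - timedelta(days=20): {"dan"},
}
print(usage_ratios(logins, today))  # {'dau_mau': 0.5, 'l7_l30': 0.75}
```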

Feature Adoption Framework

Feature Classification:

CORE (Must Use)         EXPANSION (Growth)       ADVANCED (Power)
├── Essential to        ├── Multiplies value     ├── Differentiating
│   basic value         │                        │   capabilities
├── Onboarding focus    ├── Growth milestone     ├── Power user features
├── 100% adoption       ├── 40-60% adoption      ├── 15-25% adoption
│   target              │   target               │   target
│                       │                        │
Examples:               Examples:                Examples:
- CRM: Contact mgmt     - CRM: Automations       - CRM: Custom objects
- Analytics: Dashboards - Analytics: Alerts      - Analytics: API access
- Support: Tickets      - Support: Self-service  - Support: Integrations
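Adoption against the per-class targets above (100% core, 40-60% expansion, 15-25% advanced) reduces to a set intersection per class. A small sketch — the catalog and feature names are illustrative:

```python
def adoption_by_class(used, catalog):
    """% of features used per class; `catalog` maps class -> feature list."""
    return {cls: round(100 * len(set(feats) & used) / len(feats))
            for cls, feats in catalog.items()}

# Illustrative CRM-style catalog and one account's observed feature usage
catalog = {
    "core": ["contact_mgmt", "pipelines"],
    "expansion": ["automations", "alerts"],
    "advanced": ["custom_objects", "api_access"],
}
used = {"contact_mgmt", "pipelines", "automations"}
print(adoption_by_class(used, catalog))
# {'core': 100, 'expansion': 50, 'advanced': 0}
```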

Good Usage Dashboard Design

Customer Usage Dashboard: Acme Corp

┌─────────────────────────────────────────────────────────────────┐
│  OVERALL HEALTH                                    Score: 72    │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ACTIVITY METRICS (Last 30 Days)                               │
│  ├── Active Users:        45 of 60 licensed (75%)             │
│  ├── DAU/MAU:             28% (industry avg: 25%)             │
│  ├── Sessions/User/Week:  3.2 (↓ from 4.1 last month)         │
│  └── Trend:               ⚠ Declining (-22% MoM)              │
│                                                                 │
│  FEATURE ADOPTION                                               │
│  ├── Core Features:       ████████████████░░░░ 82%             │
│  ├── Expansion Features:  ████████████░░░░░░░░ 58%             │
│  └── Advanced Features:   ████░░░░░░░░░░░░░░░░ 21%             │
│                                                                 │
│  TOP FEATURES BY USAGE                                          │
│  1. Dashboard views       ████████████████████ 2,340           │
│  2. Report exports        ████████████████     1,856           │
│  3. Alert configuration   ████████████         1,247           │
│  4. Team collaboration    ████████             892             │
│  5. API calls             ██████               634             │
│                                                                 │
│  USER SEGMENTS                                                  │
│  ├── Power Users (5+/wk):    12 users (27%)                    │
│  ├── Regular (2-4/wk):       23 users (51%)                    │
│  ├── Light (1/wk):           7 users (16%)                     │
│  └── Dormant (0/wk):         3 users (7%)                      │
│                                                                 │
│  ⚠ ALERT: Usage declining 22% — recommend CSM outreach         │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Bad Usage Dashboard Design

Usage Report: Acme Corp

Total Logins: 12,456
Total Actions: 89,234
Features Available: 47
Features Used: 31

Problems:
✗ All-time totals, not recent activity
✗ No trend information
✗ No context (vs. baseline, vs. peers)
✗ No user-level breakdown
✗ No actionable insights
✗ Missing dormant user identification
✗ No health score integration

Usage Pattern Analysis

| Pattern | Definition | Health Signal | Action |
| --- | --- | --- | --- |
| Steady High | Consistent strong usage | Healthy | Expansion |
| Growing | Increasing over time | Very Healthy | Case study |
| Plateau | Stable but not growing | Neutral | Feature adoption push |
| Declining | Decreasing over time | At Risk | Intervention |
| Sporadic | Inconsistent engagement | Warning | Usage training |
| Concentrated | Few power users | Risk | Broaden adoption |
| Dormant | No recent activity | Critical | Re-activation |
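Most of these patterns can be approximated from a weekly-session series. A rough heuristic sketch — the slope and volatility cutoffs are illustrative, not calibrated against real churn data:

```python
from statistics import mean, pstdev

def classify_pattern(weekly_sessions):
    """Label a recent weekly-session series with a usage pattern."""
    if not weekly_sessions or mean(weekly_sessions) == 0:
        return "Dormant"
    avg = mean(weekly_sessions)
    if pstdev(weekly_sessions) / avg > 0.75:   # high relative variance
        return "Sporadic"
    half = max(1, len(weekly_sessions) // 2)
    early = mean(weekly_sessions[:half])
    late = mean(weekly_sessions[half:]) if weekly_sessions[half:] else early
    change = (late - early) / early if early else 1.0
    if change > 0.15:
        return "Growing"
    if change < -0.15:
        return "Declining"
    return "Steady High" if avg >= 4 else "Plateau"

print(classify_pattern([5, 5, 4, 3, 2, 1]))  # Declining
```

The "Concentrated" pattern needs per-user data rather than an account-level series, so it is out of scope for this sketch.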

User Segmentation by Usage

| Segment | Definition | % of Users | Strategy |
| --- | --- | --- | --- |
| Champions | Daily use, high depth, advocates | 10-15% | Expand, case studies |
| Power Users | Frequent use, feature breadth | 15-25% | Feature adoption |
| Regular Users | Consistent weekly use | 30-40% | Habit formation |
| Casual Users | Monthly, light use | 15-25% | Increase engagement |
| At-Risk | Declining usage | 10-15% | Re-engagement |
| Dormant | No use 30+ days | 5-10% | Reactivation |
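One way to implement these buckets from per-user recency and frequency counts — the cutoffs are illustrative, and `is_advocate` stands in for the NPS/advocacy signal the Champions row assumes:

```python
def segment_user(sessions_last_30d, last_active_days_ago, is_advocate=False):
    """Bucket a user by recency and frequency."""
    if last_active_days_ago >= 30:
        return "Dormant"
    weekly = sessions_last_30d / 4.3   # ~4.3 weeks per 30 days
    if weekly >= 5:
        return "Champion" if is_advocate else "Power User"
    if weekly >= 2:
        return "Regular User"
    if weekly >= 0.5:
        return "Casual User"
    return "At-Risk"                   # recently active, but barely using

print(segment_user(30, 2, is_advocate=True))  # Champion
```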

Adoption Milestone Tracking

Customer Journey: Feature Adoption Milestones

Day 1:  First Login                                    ✓
Day 3:  Complete profile setup                         ✓
Day 7:  Create first [core object]                     ✓
Day 14: Invite team member                             ✓
Day 21: Set up first automation                        ○ ← Not completed
Day 30: Export first report                            ○
Day 45: Configure integration                          ○
Day 60: Build custom dashboard                         ○

Adoption Score: 50% (4 of 8 milestones)
Status: On track but automation milestone overdue

Recommendation:
- Schedule enablement session for automation setup
- Automation adoption correlates with 2.3x higher retention
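A sketch of the milestone scoring — the milestone names and target days come from the journey above, while the completion-tracking shape is illustrative:

```python
MILESTONES = [  # (target day, milestone)
    (1, "First login"), (3, "Complete profile setup"),
    (7, "Create first core object"), (14, "Invite team member"),
    (21, "Set up first automation"), (30, "Export first report"),
    (45, "Configure integration"), (60, "Build custom dashboard"),
]

def adoption_status(completed, days_since_launch):
    """Score completed milestones and flag any past their target day."""
    done = sum(1 for _, name in MILESTONES if name in completed)
    overdue = [name for day, name in MILESTONES
               if name not in completed and days_since_launch > day]
    return {"score": round(100 * done / len(MILESTONES)),
            "overdue": overdue}

completed = {"First login", "Complete profile setup",
             "Create first core object", "Invite team member"}
print(adoption_status(completed, days_since_launch=25))
# {'score': 50, 'overdue': ['Set up first automation']}
```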

Usage Benchmarking

| Metric | Your Average | Industry 25th | Industry 50th | Industry 75th |
| --- | --- | --- | --- | --- |
| DAU/MAU | 28% | 18% | 25% | 35% |
| Feature Adoption | 52% | 35% | 48% | 62% |
| Sessions/Week | 3.2 | 2.0 | 3.5 | 5.0 |
| Power User % | 22% | 12% | 20% | 30% |
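Placing your average against the industry marks is a simple comparison. A minimal sketch — the quartile values come from the table above, the function name is illustrative:

```python
def benchmark_position(value, quartiles):
    """Locate a metric relative to industry 25th/50th/75th percentile marks."""
    p25, p50, p75 = quartiles
    if value < p25:
        return "below 25th percentile"
    if value < p50:
        return "25th-50th percentile"
    if value < p75:
        return "50th-75th percentile"
    return "at or above 75th percentile"

# DAU/MAU of 28% against the 18% / 25% / 35% marks above
print(benchmark_position(0.28, (0.18, 0.25, 0.35)))  # 50th-75th percentile
```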

Alert Configuration

| Trigger | Threshold | Severity | Action |
| --- | --- | --- | --- |
| No login | 14+ days | Warning | Automated re-engagement email |
| No login | 30+ days | High | CSM outreach |
| Usage decline | >25% MoM | High | CSM intervention |
| Usage decline | >50% MoM | Critical | Manager escalation |
| Key user inactive | 7+ days | High | Immediate outreach |
| Feature abandonment | Core feature unused 14+ days | Medium | Usage training |
| Seat utilization | <50% active | Medium | License optimization |
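The trigger table maps directly onto a rule evaluator. A hedged sketch covering most of the rows — the account snapshot shape and field names are illustrative (feature-abandonment detection would need per-feature event data, so it is omitted here):

```python
def usage_alerts(acct):
    """Evaluate usage-alert triggers for one account snapshot.

    `acct` keys (illustrative): days_since_login, usage_decline_mom (0-1),
    key_user_inactive_days, seat_utilization (0-1).
    """
    alerts = []
    if acct["days_since_login"] >= 30:
        alerts.append(("HIGH", "No login 30+ days: CSM outreach"))
    elif acct["days_since_login"] >= 14:
        alerts.append(("WARNING", "No login 14+ days: automated re-engagement email"))
    if acct["usage_decline_mom"] > 0.50:
        alerts.append(("CRITICAL", "Usage down >50% MoM: manager escalation"))
    elif acct["usage_decline_mom"] > 0.25:
        alerts.append(("HIGH", "Usage down >25% MoM: CSM intervention"))
    if acct["key_user_inactive_days"] >= 7:
        alerts.append(("HIGH", "Key user inactive 7+ days: immediate outreach"))
    if acct["seat_utilization"] < 0.50:
        alerts.append(("MEDIUM", "Seat utilization <50%: license optimization"))
    return alerts

snapshot = {"days_since_login": 16, "usage_decline_mom": 0.34,
            "key_user_inactive_days": 2, "seat_utilization": 0.80}
print(usage_alerts(snapshot))
```

Note that the login and decline rules use elif chains so only the most severe threshold in each family fires.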

Good Usage Analysis

Usage Deep Dive: Declining Account

Account: TechCorp Inc.
Health Score: 48 (was 72 three months ago)
Usage Trend: -34% over 90 days

Root Cause Analysis:

1. Champion Departure (Primary)
   - Sarah Chen (main user, 45% of all activity) left company
   - Remaining users haven't increased usage
   - No new champion identified

2. Feature Concentration Risk
   - 80% of usage was in 2 features
   - Those features are now unused
   - Other features never adopted

3. Team Turnover
   - 3 of 8 licensed users are new (last 60 days)
   - New users have not completed onboarding
   - No enablement sessions scheduled

Recommendations:
1. Schedule call with new stakeholder to identify champion
2. Arrange onboarding for 3 new users
3. Feature adoption push for underutilized capabilities
4. Consider usage-based pricing adjustment if team shrinks further

Usage Metrics Collection Checklist

□ Activity Tracking
  □ Login events with timestamp
  □ Session duration
  □ User identification
  □ Device/platform tracking

□ Engagement Tracking
  □ Feature usage events
  □ Actions per session
  □ Time spent per feature
  □ Navigation patterns

□ Adoption Tracking
  □ Feature first-use detection
  □ Milestone completion
  □ Workflow completion rates
  □ Feature breadth score

□ Aggregations
  □ Daily/weekly/monthly rollups
  □ User-level aggregations
  □ Account-level rollups
  □ Trend calculations

□ Alerting
  □ Inactivity alerts
  □ Decline alerts
  □ Anomaly detection
  □ Threshold breach notifications

□ Visualization
  □ Real-time dashboards
  □ Historical trends
  □ Cohort comparisons
  □ Benchmark overlays

Anti-Patterns

  • Vanity metrics — Total logins don't predict retention
  • All-time totals — Recent activity matters more
  • No user segmentation — Average usage hides problems
  • Ignoring depth — Login without action isn't engagement
  • Missing trends — Snapshots without trajectories
  • No benchmarks — Can't assess without comparison
  • Feature obsession — Activity without value delivery
  • Data silos — Usage disconnected from health scores