When activation metrics stall, /growth-product-manager designs loop-driven experiments, so you can compound user growth. — Claude Skill
A Claude Skill for Claude Code by Nick Jensen — run /growth-product-manager in Claude
Design growth loops, activation flows, and PLG retention strategies
- Map viral, content, and paid growth loops with feedback mechanics
- Score activation milestones and define aha-moment criteria
- Build retention cohort models with leading indicators
- Design PLG upgrade triggers tied to usage thresholds
- Structure north-star metric trees for growth teams
What it does
Run /growth-product-manager to map a referral loop end-to-end — input channel, action, output, and reinvestment step — producing a loop diagram with 4-6 nodes and estimated cycle time.
Use /growth-product-manager to define a 5-step activation funnel with conversion benchmarks: signup 100% > profile 60% > first-value 40% > habit 25% > upgrade 8%.
Run /growth-product-manager to set up weekly and monthly cohort retention tables with churn flags, producing a framework that tracks 12-week curves across 3 segments.
Use /growth-product-manager to design a freemium-to-paid experiment with control/variant splits, measuring upgrade rate across 2 paywall placements over 4 weeks.
How it works
Describe your product, current growth model, and the metric you want to move — DAU, activation rate, or expansion revenue.
The skill audits your existing loops, identifies friction points, and benchmarks against SaaS median ranges for your stage.
It produces a prioritized experiment backlog with hypotheses, expected lift, and sample-size requirements for each test.
You get a ready-to-execute growth brief with loop diagrams, metric definitions, and weekly tracking templates.
Example
B2B SaaS project management tool, 5k MAU, 12% W1 retention, freemium model. We need to improve activation — most users sign up but never create a project.
Current drop-off: signup-to-first-project is 31% (benchmark: 55%). Root cause: onboarding asks 6 questions before value. Recommendation: skip profile, go straight to template picker.
Template-sharing loop: User creates project from template > invites collaborator > collaborator signs up > discovers templates > creates own project > shares. Estimated cycle: 9 days, k-factor: 0.3.
1. Skip-to-template onboarding (est. +15pp activation, n=800, 2 weeks)
2. Collaborator invite prompt at project-complete (est. k-factor +0.1, n=500, 3 weeks)
3. Weekly digest email with project stats (est. +4pp W4 retention, n=1200, 4 weeks)
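A loop's k-factor compounds geometrically: with k below 1 it amplifies acquisition rather than sustaining growth on its own. A quick sketch of that arithmetic in plain Python (numbers are hypothetical):

```python
def viral_contribution(seed_users: int, k_factor: float, cycles: int) -> float:
    """Extra signups generated by a viral loop over `cycles` iterations.

    Each cycle, the newest cohort recruits k_factor new users each.
    With k < 1 the series converges to seed_users * k / (1 - k).
    """
    total = 0.0
    cohort = float(seed_users)
    for _ in range(cycles):
        cohort *= k_factor  # new users recruited this cycle
        total += cohort
    return total

# k = 0.3 amplifies each seed user into roughly k / (1 - k) ≈ 0.43 extra signups.
print(round(viral_contribution(1000, 0.3, cycles=10)))  # 429
```

At k = 0.3 the loop is a meaningful acquisition multiplier but not self-sustaining, which is why it pairs with the activation experiments above.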
Growth Product Manager
Strategic growth product management expertise for SaaS companies — from growth loops and activation to retention, monetization, and PLG strategies.
Philosophy
Growth isn't about hacks. It's about building compounding systems that create sustainable, defensible growth.
The best growth product strategies:
- Systems over tactics — Growth loops compound; growth hacks don't
- Activation is everything — If users don't activate, nothing else matters
- Retention is growth — Churn kills; retained users compound
- Measure what matters — One north star metric, ruthlessly tracked
How This Skill Works
When invoked, apply the guidelines in rules/ organized by:
- loops-* — Growth loops, flywheels, viral mechanics
- activation-* — First-time user experience, onboarding, time-to-value
- retention-* — Engagement, habit formation, churn prevention
- monetization-* — Pricing, upgrades, expansion revenue
- experimentation-* — Growth experiments, A/B testing, metrics
- plg-* — Product-led growth strategies and patterns
Core Frameworks
Growth Loop Types
| Loop Type | Mechanism | Example | Key Metric |
|---|---|---|---|
| Viral | Users invite users | Dropbox, Calendly | K-factor |
| Content | Users create discoverable content | Notion templates, Figma Community | Indexed pages |
| Paid | Revenue funds acquisition | Any SaaS with paid ads | CAC payback |
| Sales | Revenue funds sales team | Enterprise SaaS | ACV / CAC |
| SEO | Content ranks, drives traffic | HubSpot, Zapier | Organic traffic |
The Growth Equation
Growth = Acquisition × Activation × Retention × Monetization × Referral
Each multiplier matters:
- 10% improvement across 5 areas = 61% total improvement
- 50% drop in one area = 50% total drop
The AARRR Funnel (Pirate Metrics)
┌─────────────────────────────────────────────┐
│ ACQUISITION │
│ (How do users find us?) │
├─────────────────────────────────────────────┤
│ ACTIVATION │
│ (Do users have a great first │
│ experience?) │
├─────────────────────────────────────────────┤
│ RETENTION │
│ (Do users come back?) │
├─────────────────────────────────────────────┤
│ REVENUE │
│ (Do users pay us money?) │
├─────────────────────────────────────────────┤
│ REFERRAL │
│ (Do users tell others about us?) │
└─────────────────────────────────────────────┘
PLG Motion Types
| Motion | Best For | Key Lever |
|---|---|---|
| Free Trial | Complex products, considered purchases | Trial conversion rate |
| Freemium | Simple products, network effects | Free → paid conversion |
| Open Source | Developer tools, infrastructure | Community adoption |
| Reverse Trial | High-value products, sticky usage | Premium feature discovery |
| Usage-Based | Variable consumption, API products | Usage expansion |
North Star Metric Framework
North Star Metric
│
├── Measures value delivered to customers
│
├── Leading indicator of revenue
│
├── Reflects product strategy
│
└── Actionable by product team
Examples:
- Slack: Daily Active Users sending messages
- Airbnb: Nights booked
- Amplitude: Weekly Learning Users
- Figma: Weekly Active Editors
Growth Model Overview
| Stage | Focus | Metrics | Experiments |
|---|---|---|---|
| Early (0-$1M ARR) | Activation, retention | Activation rate, D7 retention | 5-10/quarter |
| Growth ($1M-$10M) | Loops, monetization | Growth rate, payback period | 20-50/quarter |
| Scale ($10M+) | Efficiency, expansion | Net revenue retention, LTV/CAC | 50-100/quarter |
Anti-Patterns
- Optimizing acquisition before activation — Filling a leaky bucket
- Vanity metrics — MAU without engagement is meaningless
- Copy-paste growth tactics — What worked for Dropbox won't work for you
- Growth team in a silo — Growth is everyone's job
- Experimentation theater — Running tests without statistical rigor
- Ignoring retention — New users are 5-25x more expensive than retained ones
- Feature bloat over activation — Building more vs ensuring adoption
Reference documents
title: Section Organization
1. Growth Loops & Flywheels (loops)
Impact: CRITICAL. Sustainable growth systems that compound over time. The foundation of scalable growth.
2. Activation & Onboarding (activation)
Impact: CRITICAL. Getting users to their first "aha moment." If activation fails, nothing else matters.
3. Retention & Engagement (retention)
Impact: CRITICAL. Keeping users engaged and coming back. Retention is the foundation of all growth.
4. Viral & Referral Mechanics (viral)
Impact: HIGH. Engineering shareability and word-of-mouth into your product.
5. Monetization & Expansion (monetization)
Impact: HIGH. Converting users to revenue and expanding within accounts.
6. Growth Experimentation (experimentation)
Impact: HIGH. Running rigorous experiments to find growth levers.
7. North Star & Metrics (metrics)
Impact: MEDIUM-HIGH. Defining and tracking the metrics that matter.
8. PLG Strategies (plg)
Impact: CRITICAL. Product-led growth patterns and implementation strategies.
title: Activation & Onboarding Optimization
impact: CRITICAL
tags: activation, onboarding, first-run, time-to-value, aha-moment
Activation & Onboarding Optimization
Impact: CRITICAL
Activation is the single most important growth lever. If users don't experience value quickly, nothing else matters — they won't retain, refer, or pay.
What Is Activation?
Activation = User completes the critical action(s) that predict long-term retention
It's NOT:
✗ Completing signup
✗ Verifying email
✗ Finishing onboarding flow
It IS:
✓ Experiencing the core value
✓ Having the "aha moment"
✓ Taking the action that predicts retention
The Activation Equation
Activation Rate = Users who complete activation event / Total signups
Example:
- 1,000 signups
- 230 complete activation event
- Activation rate = 23%
Benchmark: 20-40% is typical, 40%+ is excellent
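The rate and its benchmark bands can be sketched directly (thresholds from the benchmark line above):

```python
def activation_rate(activated: int, signups: int) -> float:
    """Share of signups that completed the activation event."""
    return activated / signups if signups else 0.0

def benchmark(rate: float) -> str:
    # 20-40% typical, 40%+ excellent, per the benchmark above.
    if rate >= 0.40:
        return "excellent"
    if rate >= 0.20:
        return "typical"
    return "below benchmark"

rate = activation_rate(230, 1000)
print(f"{rate:.0%} -> {benchmark(rate)}")  # 23% -> typical
```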
Defining Your Activation Event
Step 1: Find the "Aha Moment"
Ask: "What single action best predicts a user will still be active in 30 days?"
| Company | Activation Event | Why It Matters |
|---|---|---|
| Slack | Send 2,000 messages (team) | Indicates real team adoption |
| Dropbox | Upload 1 file to 1 folder | Indicates understanding value |
| Twitter | Follow 30 accounts | Indicates engaging feed |
| Zoom | Host 1 meeting | Indicates core value received |
| Notion | Create 1 page with content | Indicates investment in tool |
Step 2: Validate with Data
Cohort Analysis:
┌──────────────────────────────────────────────────────────┐
│ Users who did action X in first 7 days │
│ → 65% still active at Day 30 │
│ │
│ Users who did NOT do action X in first 7 days │
│ → 12% still active at Day 30 │
│ │
│ → Action X is your activation event │
└──────────────────────────────────────────────────────────┘
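The cohort comparison above is just grouped D30 retention. A sketch using synthetic records that mirror the 65%/12% split (the record layout is an assumption):

```python
def retention_by_action(records):
    """records: iterable of (did_action_week1, active_day30) pairs.
    Returns D30 retention for the did / didn't groups."""
    counts = {True: [0, 0], False: [0, 0]}  # did_action -> [retained, total]
    for did_action, retained in records:
        counts[did_action][0] += bool(retained)
        counts[did_action][1] += 1
    return tuple(
        counts[flag][0] / counts[flag][1] if counts[flag][1] else 0.0
        for flag in (True, False)
    )

# Synthetic cohort mirroring the box above: 65% vs 12% D30 retention.
records = [(True, i < 65) for i in range(100)] + [(False, i < 12) for i in range(100)]
with_x, without_x = retention_by_action(records)
print(f"did X: {with_x:.0%}, didn't: {without_x:.0%}")  # did X: 65%, didn't: 12%
```

A large gap between the two groups is the signal that action X is a good activation-event candidate.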
Time to Value (TTV)
The faster users reach value, the higher activation:
TTV Benchmarks:
┌─────────────────┬────────────────┬─────────────────────┐
│ Product Type │ Target TTV │ Example │
├─────────────────┼────────────────┼─────────────────────┤
│ Consumer app │ < 30 seconds │ TikTok: see video │
│ Productivity │ < 5 minutes │ Notion: create page │
│ Developer tool │ < 30 minutes │ Vercel: deploy app │
│ B2B SaaS │ < 1 hour │ Intercom: install │
│ Enterprise │ < 1 day │ Salesforce: import │
└─────────────────┴────────────────┴─────────────────────┘
Onboarding Flow Design
The Setup → Aha → Habit Framework:
┌─────────────────────────────────────────────────────────────┐
│ ONBOARDING STAGES │
├─────────────────────────────────────────────────────────────┤
│ │
│ SETUP (Minimize) AHA (Maximize) HABIT │
│ ───────────────── ───────────── ───────── │
│ • Account creation • First success • Triggers │
│ • Essential config • Core value seen • Routines │
│ • Permissions • "Wow" moment • Engagement │
│ │
│ Goal: < 2 min Goal: < 10 min Goal: Day 7 │
│ │
└─────────────────────────────────────────────────────────────┘
Onboarding Patterns That Work
1. Progressive Disclosure
Don't show everything. Reveal complexity as users need it.
Bad: Sign up → 15 settings → 10 features → empty dashboard
Good: Sign up → 1 action → success → next action → expand
2. Sample Data / Templates
Don't start users with blank slate.
Bad: "Create your first project" (blank screen)
Good: "Start with this template" (pre-populated)
Examples:
- Notion: Template gallery
- Figma: Starter files
- Airtable: Pre-built bases
3. Inline Guidance
Guide within the product, not with modals.
Bad: 5-step tutorial modal before using product
Good: Tooltips that appear as users explore
Empty states with clear CTAs
Checklists that track progress
4. Success Celebration
Acknowledge progress to build momentum.
✓ "You created your first project!"
✓ Progress bar showing completion
✓ Confetti / celebration animation (use sparingly)
✓ "You're ahead of 80% of new users"
The Onboarding Checklist Pattern
┌────────────────────────────────────────────────────────┐
│ Get started with [Product] 3/5 ✓ │
├────────────────────────────────────────────────────────┤
│ ✓ Create your account │
│ ✓ Install the browser extension │
│ ✓ Connect your first integration │
│ ○ Invite a team member │
│ ○ Complete your first [core action] │
│ │
│ [Continue →] │
└────────────────────────────────────────────────────────┘
Why it works:
- Clear progress visualization
- Completion psychology (Zeigarnik effect)
- Guides to activation event
- Can be dismissed but persistent
Activation Rate by Segment
Different users need different paths:
| Segment | Activation Challenge | Solution |
|---|---|---|
| Power users | Want to skip basics | "Skip to advanced" option |
| Beginners | Need hand-holding | Guided walkthrough |
| Teams | Need others to join | Invite flow emphasis |
| Solo users | Need quick wins | Personal value path |
| Mobile | Limited attention | Minimal steps |
Measuring Activation
Primary Metrics:
| Metric | Formula | Target |
|---|---|---|
| Activation Rate | Activated users / Signups | 25-40% |
| Time to Activate | Median time signup → activation | Minimize |
| Setup Completion | Users completing setup / Signups | 70%+ |
| D1 Activation | Users activated within 24h | 50%+ of eventual |
Activation Funnel:
Signup 100% ████████████████████
│
Email verify 85% █████████████████
│
Onboarding 70% ██████████████
│
Setup complete 55% ███████████
│
Core action 35% ███████
│
ACTIVATED 25% █████
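Reading a funnel like this, the actionable number is each step's conversion from the prior step; the lowest one marks the first experiment target. A sketch using counts implied by the percentages above (assuming 1,000 signups):

```python
FUNNEL = [
    ("Signup", 1000),
    ("Email verify", 850),
    ("Onboarding", 700),
    ("Setup complete", 550),
    ("Core action", 350),
    ("Activated", 250),
]

def worst_step(steps):
    """Return (name, conversion) for the weakest step-to-step transition."""
    prev = steps[0][1]
    worst = (steps[0][0], 1.0)
    for name, count in steps[1:]:
        conv = count / prev
        if conv < worst[1]:
            worst = (name, conv)
        prev = count
    return worst

prev = FUNNEL[0][1]
for name, count in FUNNEL:
    print(f"{name:<15} {count / FUNNEL[0][1]:>4.0%} overall, {count / prev:>4.0%} of prior step")
    prev = count

name, conv = worst_step(FUNNEL)
print(f"Biggest drop: {name} ({conv:.0%} of prior step)")
```

Here the setup-complete-to-core-action transition is the weakest link, so experiments would start there rather than at signup.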
Activation Experiments to Run
| Experiment | Hypothesis | Metric |
|---|---|---|
| Remove signup fields | Fewer fields = more completions | Signup → setup rate |
| Add templates | Pre-built content = faster aha | Time to activation |
| Checklist gamification | Progress visibility = completion | Activation rate |
| Personalized onboarding | Relevant path = better activation | Activation by segment |
| Sample data | Not blank = less intimidating | D1 activation |
| Invite during onboarding | Teams activate better | Team activation rate |
Good vs. Bad Onboarding
Good: Linear's Onboarding
Why it works:
✓ Minimal signup (Google SSO)
✓ Asks role to personalize
✓ Pre-populates sample issues
✓ Keyboard shortcuts shown inline
✓ Empty states guide next action
✓ Can be productive in < 5 minutes
Bad: Enterprise Software Onboarding
Why it fails:
✗ 10+ field signup form
✗ Email verification gate
✗ 30-minute setup wizard
✗ Requires IT involvement
✗ Empty dashboard on first login
✗ Value not seen for days/weeks
Anti-Patterns
- Signup friction — Requiring credit card, company info, phone verification for free trials
- Tutorial overload — 10-step walkthrough before seeing the product
- Feature tour — Showing every feature vs. the one that matters
- Empty states — Blank screens with "Create your first X"
- Delayed activation — Requiring invites/setup before seeing value
- One-size-fits-all — Same onboarding for different user types
- Premature asks — Asking for reviews/referrals before activation
- Passive onboarding — Just emails, no in-product guidance
title: Engagement Tactics & Habit Formation
impact: HIGH
tags: engagement, habit, stickiness, triggers, rewards
Engagement Tactics & Habit Formation
Impact: HIGH
Engagement is the bridge between activation and retention. Users who engage deeply form habits, and habits drive long-term retention and monetization.
The Engagement Equation
Engagement = Frequency × Depth × Breadth
Frequency: How often users return
Depth: How much time/actions per session
Breadth: How many features they use
Engagement Metrics
| Metric | Definition | Why It Matters |
|---|---|---|
| DAU/MAU | Daily active / Monthly active | Stickiness ratio |
| Sessions/User | Average sessions per user | Return frequency |
| Session Duration | Time per session | Depth of engagement |
| Actions/Session | Core actions per session | Usage intensity |
| Feature Adoption | % using key features | Breadth of usage |
| L7/L30 | Days active in last 7 / last 30 | Habit strength |
DAU/MAU Benchmarks
DAU/MAU Ratio (Stickiness):
50%+ : Daily habit product (messaging, productivity)
Example: Slack, WhatsApp
25-50% : Frequent use product (work tools)
Example: Figma, Linear
10-25% : Weekly use product (planning, reporting)
Example: Analytics tools, project management
<10% : Occasional use (utilities, specific workflows)
Example: Tax software, travel booking
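The bands translate directly into a classifier (thresholds from the list above; the example DAU/MAU figures are hypothetical):

```python
def stickiness(dau: int, mau: int) -> float:
    """DAU/MAU ratio, the standard stickiness measure."""
    return dau / mau if mau else 0.0

def classify(ratio: float) -> str:
    # Bands mirror the benchmark list above.
    if ratio >= 0.50:
        return "daily habit"
    if ratio >= 0.25:
        return "frequent use"
    if ratio >= 0.10:
        return "weekly use"
    return "occasional use"

ratio = stickiness(dau=1800, mau=5000)
print(f"{ratio:.0%} -> {classify(ratio)}")  # 36% -> frequent use
```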
Building Habits: The Hook Model
┌─────────────────────────────────────────────────────────────┐
│ THE HOOK MODEL │
├─────────────────────────────────────────────────────────────┤
│ │
│ TRIGGER │
│ (What prompts the user?) │
│ │ │
│ ↓ │
│ ACTION │
│ (What's the simplest behavior?) │
│ │ │
│ ↓ │
│ VARIABLE REWARD │
│ (What satisfies but leaves wanting more?) │
│ │ │
│ ↓ │
│ INVESTMENT │
│ (What work does user put in?) │
│ │ │
│ └─────────→ (Increases value, creates trigger) │
│ │
└─────────────────────────────────────────────────────────────┘
Trigger Design
External Triggers (You Control):
| Trigger Type | Example | Best For |
|---|---|---|
| "Sarah commented on your doc" | Async updates | |
| Push | "Your report is ready" | Time-sensitive |
| SMS | "Verification code: 123456" | Critical actions |
| In-app | "Try our new feature" | Active users |
| Badge | Notification count | Curiosity |
Internal Triggers (User Creates):
| Trigger | Emotion | Example |
|---|---|---|
| Boredom | "I'm bored" | Open Twitter, TikTok |
| Uncertainty | "I wonder..." | Open Slack, email |
| Loneliness | "I need connection" | Open messaging app |
| Anxiety | "Did I miss something?" | Check notifications |
| Accomplishment | "I want to make progress" | Open productivity app |
Goal: Associate your product with an internal trigger.
Variable Reward Types
TRIBE (Social Rewards):
- Likes, comments, followers
- Recognition from peers
- Social validation
Example: LinkedIn endorsements
HUNT (Information Rewards):
- New content to discover
- Answers to questions
- Relevant information
Example: Twitter feed
SELF (Achievement Rewards):
- Mastery, completion
- Progress tracking
- Personal growth
Example: Duolingo streaks
Investment Mechanisms
User investment increases switching costs:
| Investment Type | Example | Effect |
|---|---|---|
| Data | Notes, documents | Can't leave data behind |
| Followers | Social graphs | Network locked in |
| Reputation | Reviews, karma | Status non-portable |
| Personalization | Settings, preferences | Tailored experience |
| Skill | Keyboard shortcuts | Expertise in tool |
| Content | Created assets | Portfolio in platform |
Engagement Tactics by Stage
New Users (Week 1):
Goal: Build initial engagement pattern
Tactics:
□ Welcome sequence with daily tips
□ Quick wins to celebrate
□ Checklist progress gamification
□ Personal "aha moment" push
□ Low-friction daily trigger
Developing Users (Week 2-4):
Goal: Establish regular usage pattern
Tactics:
□ Feature discovery prompts
□ Usage streaks/achievements
□ Social features introduction
□ Integration suggestions
□ "Power user" tips
Established Users (Month 2+):
Goal: Deepen engagement and prevent decay
Tactics:
□ Advanced feature unlocks
□ Community involvement
□ Referral program
□ Exclusive content/features
□ Recognition/status
Gamification Elements
Use Sparingly But Effectively:
| Element | Purpose | Example |
|---|---|---|
| Progress bars | Show completion | Profile 80% complete |
| Streaks | Encourage consistency | 7-day streak |
| Points | Quantify activity | 1,000 XP earned |
| Levels | Show advancement | Level 5 User |
| Badges | Recognize achievement | "Power User" badge |
| Leaderboards | Social competition | Top 10 contributors |
Gamification Anti-Patterns:
Bad Gamification:
✗ Points that mean nothing
✗ Badges for trivial actions
✗ Forced social competition
✗ Rewards unrelated to value
✗ Gamification that feels manipulative
Good Gamification:
✓ Celebrates real accomplishments
✓ Guides users to value
✓ Creates positive habits
✓ Feels natural to product
Notification Strategy
The Notification Hierarchy:
Priority 1: User-to-user (highest engagement)
"Sarah mentioned you"
Priority 2: User-triggered events
"Your export is ready"
Priority 3: Personalized insights
"Your weekly summary"
Priority 4: Feature education
"Have you tried X?"
Priority 5: Marketing (lowest engagement)
"Check out our new feature"
Notification Timing:
| Timing | Best For | Example |
|---|---|---|
| Real-time | Urgent, social | Mentions, messages |
| Batched | Non-urgent, volume | Daily digest |
| Smart | Personalized timing | When user usually active |
| Triggered | Specific conditions | Abandoned cart, inactivity |
Re-Engagement Campaigns
Churn Risk → Intervention:
Signal: No login in 3 days
→ Push: "You have 5 unread messages"
Signal: Decreased usage (50% drop)
→ Email: "We noticed you haven't used X"
Signal: Stopped using key feature
→ In-app: "Need help with [feature]?"
Signal: Approaching renewal
→ Email: "Here's what you accomplished"
Win-Back Sequence:
Day 3: "We miss you" + value reminder
Day 7: "Here's what's new" + updates
Day 14: "Your data is waiting" + FOMO
Day 30: "Special offer to return" + incentive
Day 60: "Last chance" + data deletion warning
Measuring Engagement Health
Engagement Scoring:
┌─────────────────────────────────────────────────────────────┐
│ USER ENGAGEMENT SCORE │
├─────────────────────────────────────────────────────────────┤
│ FREQUENCY (0-30) │
│ • Logged in today +10 │
│ • Logged in 5+ days this week +10 │
│ • Logged in 15+ days this month +10 │
│ │
│ DEPTH (0-40) │
│ • Used core feature +15 │
│ • Session > 5 minutes +10 │
│ • 10+ actions per session +15 │
│ │
│ BREADTH (0-30) │
│ • Used 3+ features +10 │
│ • Connected integration +10 │
│ • Invited teammate +10 │
│ │
│ ENGAGEMENT TIERS: │
│ 80-100: Power User │
│ 50-79: Engaged │
│ 25-49: Casual │
│ 0-24: At Risk │
└─────────────────────────────────────────────────────────────┘
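The rubric maps directly to a scoring function; the field names below are assumptions, while the weights and tier cutoffs come from the box above:

```python
def engagement_score(u: dict) -> int:
    """0-100 engagement score from the rubric above (field names assumed)."""
    score = 0
    # Frequency (0-30)
    score += 10 * bool(u.get("logged_in_today"))
    score += 10 * (u.get("active_days_week", 0) >= 5)
    score += 10 * (u.get("active_days_month", 0) >= 15)
    # Depth (0-40)
    score += 15 * bool(u.get("used_core_feature"))
    score += 10 * (u.get("session_minutes", 0) > 5)
    score += 15 * (u.get("actions_per_session", 0) >= 10)
    # Breadth (0-30)
    score += 10 * (u.get("features_used", 0) >= 3)
    score += 10 * bool(u.get("has_integration"))
    score += 10 * bool(u.get("invited_teammate"))
    return score

def tier(score: int) -> str:
    if score >= 80:
        return "Power User"
    if score >= 50:
        return "Engaged"
    if score >= 25:
        return "Casual"
    return "At Risk"

user = {"logged_in_today": True, "active_days_week": 6,
        "used_core_feature": True, "session_minutes": 12, "features_used": 3}
print(engagement_score(user), tier(engagement_score(user)))  # 55 Engaged
```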
Anti-Patterns
- Notification spam — More notifications ≠ more engagement
- Dark patterns — Tricks that hurt trust (fake urgency, hidden unsubscribe)
- Engagement at any cost — Metrics up, but users unhappy
- Ignoring user preferences — One-size-fits-all communication
- Gamification overload — Points and badges everywhere
- No value, only dopamine — Engagement without outcome
- Measuring vanity metrics — Sessions without actions
- Abandoning churned users — They can come back
title: Growth Experimentation Process
impact: HIGH
tags: experimentation, ab-testing, growth, process, iteration
Growth Experimentation Process
Impact: HIGH
Growth is a discipline of systematic experimentation. The best growth teams run 10-20x more experiments than average teams — and learn 10-20x faster.
The Growth Experimentation Mindset
Two types of product work:
BUILD MODE: GROWTH MODE:
───────────────── ──────────────────
Big bets Small experiments
Months of work Days to weeks
High conviction High velocity
Ship and iterate Test and learn
"We believe..." "We'll test..."
The Growth Experiment Lifecycle
┌──────────────────────────────────────────────────────────────┐
│ EXPERIMENT LIFECYCLE │
├──────────────────────────────────────────────────────────────┤
│ │
│  IDEATE → PRIORITIZE → DESIGN → BUILD → RUN → ANALYZE → LEARN│
│ │ │ │
│ └───────────────── LEARN & ITERATE ←───────────────┘ │
│ │
└──────────────────────────────────────────────────────────────┘
Experiment Prioritization: ICE Framework
ICE Score = Impact × Confidence × Ease
Impact (1-10): How much will this move the metric?
Confidence (1-10): How sure are we of the impact?
Ease (1-10): How easy is this to implement?
Example:
┌─────────────────────────────────────────────────────────────┐
│ Experiment │ Impact │ Conf │ Ease │ ICE Score │
├─────────────────────────────────────────────────────────────┤
│ Simplify signup flow │ 8 │ 7 │ 9 │ 504 │
│ Add social proof │ 5 │ 6 │ 8 │ 240 │
│ Redesign onboarding │ 9 │ 5 │ 3 │ 135 │
│ New referral program │ 7 │ 4 │ 4 │ 112 │
└─────────────────────────────────────────────────────────────┘
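Scoring and ranking the backlog is mechanical; the entries below mirror the table above:

```python
def ice(impact: int, confidence: int, ease: int) -> int:
    """ICE Score = Impact x Confidence x Ease, each rated 1-10."""
    return impact * confidence * ease

backlog = [
    ("Simplify signup flow", 8, 7, 9),
    ("Add social proof", 5, 6, 8),
    ("Redesign onboarding", 9, 5, 3),
    ("New referral program", 7, 4, 4),
]

ranked = sorted(backlog, key=lambda row: ice(*row[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{ice(i, c, e):>4}  {name}")  # 504, 240, 135, 112 — matches the table
```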
The Experiment Document
Every experiment needs:
┌─────────────────────────────────────────────────────────────┐
│ EXPERIMENT BRIEF │
├─────────────────────────────────────────────────────────────┤
│ EXPERIMENT NAME: [Clear, descriptive name] │
│ │
│ HYPOTHESIS: │
│ If we [change], then [metric] will [improve] because │
│ [rationale]. │
│ │
│ METRICS: │
│ - Primary: [The metric this aims to move] │
│ - Secondary: [Related metrics to watch] │
│ - Guardrail: [Metrics that shouldn't degrade] │
│ │
│ AUDIENCE: │
│ - Who: [User segment] │
│ - Sample: [% of traffic] │
│ - Duration: [Expected runtime] │
│ │
│ SUCCESS CRITERIA: │
│ - Minimum detectable effect: [X%] │
│ - Statistical significance: [95%] │
│ │
│ VARIANTS: │
│ - Control: [Current experience] │
│ - Treatment: [New experience] │
└─────────────────────────────────────────────────────────────┘
Writing Good Hypotheses
Bad Hypothesis:
"Let's test a new onboarding flow"
- No expected outcome
- No rationale
- Can't be proven wrong
Good Hypothesis:
"If we show a personalized checklist during onboarding
(instead of a generic welcome screen),
then day-7 activation rate will increase by 15%
because users will have clear next steps tailored to their use case."
Components:
- Specific change
- Measurable outcome
- Rationale/belief
Statistical Rigor
Sample Size Calculation:
Minimum Sample Size per Variant (two-proportion test, 95% confidence, 80% power):
n = 2 × (Z_α/2 + Z_β)² × p × (1-p) / E²
Where:
Z_α/2 = 1.96 (for 95% confidence)
Z_β = 0.84 (for 80% power)
p = baseline conversion rate
E = minimum detectable effect (absolute)
Example:
- Baseline: 5% conversion
- Want to detect: 10% relative lift (5% → 5.5%)
- Need: ~30,000 users per variant
Use: https://www.evanmiller.org/ab-testing/sample-size.html
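The worked example (5% baseline, 10% relative lift, ~30,000 per variant) assumes 80% power as well as 95% confidence. A sketch of that approximation, using the baseline rate for both variance terms (a common simplification):

```python
from math import ceil

def sample_size_per_variant(baseline: float, relative_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-variant n for a two-proportion test.
    Defaults: 95% confidence (z_alpha) and 80% power (z_beta)."""
    effect = baseline * relative_lift  # absolute MDE, e.g. 5% -> 5.5% is 0.005
    n = 2 * (z_alpha + z_beta) ** 2 * baseline * (1 - baseline) / effect ** 2
    return ceil(n)

print(sample_size_per_variant(0.05, 0.10))  # ~30,000 per variant
```

Note how the required n scales with the inverse square of the effect: halving the detectable lift quadruples the sample.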
Running Time:
Never stop early based on results!
Common mistake:
Day 3: Treatment winning by 20%! → Ship it!
Day 14: Treatment actually -5% → Oops.
Why: Early results are noisy. Statistical power requires full sample.
Minimum: 1-2 full business cycles (usually 2+ weeks)
Experiment Types
| Type | When to Use | Example |
|---|---|---|
| A/B Test | Clear change, measurable outcome | Button color, copy |
| Multivariate | Multiple changes, interactions | Page layout + copy + CTA |
| Holdout | Measure cumulative impact | Feature launch impact |
| Sequential | Quick iteration, rolling changes | Onboarding flow steps |
| Fake Door | Validate demand before building | "Coming soon" feature |
| Painted Door | Test interest without building | Click to gauge interest |
The Growth Experiment Cadence
Weekly Growth Sprint:
Monday:           Review last week's results; prioritize this week's experiments
Tuesday–Thursday: Design and build experiments; launch when ready
Friday:           Review early data; plan next week's experiments; document learnings
Experiment Velocity Benchmarks:
| Stage | Experiments/Quarter | Why |
|---|---|---|
| Early Stage | 20-30 | Finding what works |
| Growth Stage | 50-100 | Optimizing loops |
| Scale Stage | 100-200 | Marginal gains |
Analyzing Results
Decision Framework:
Statistical Significance
Yes No
┌─────────────┬─────────────┐
Positive│ SHIP IT │ EXTEND │
Result │ │ TEST │
├─────────────┼─────────────┤
Negative│ LEARN & │ CALL IT │
Result │ ITERATE │ (Neutral) │
└─────────────┴─────────────┘
Analysis Checklist:
□ Did we reach required sample size?
□ Did we run for full business cycles?
□ Is the result statistically significant (p < 0.05)?
□ Is the effect size practically meaningful?
□ Did guardrail metrics hold?
□ Are there segment-level differences?
□ Can we explain the result?
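The significance check on the list can be done with a standard two-proportion z-test using only the standard library (the counts here are hypothetical):

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.
    Returns (z, p_value) using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

z, p = two_proportion_z(conv_a=500, n_a=10_000, conv_b=580, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}, significant={p < 0.05}")
```

Remember that p < 0.05 answers only the significance question; the effect size and guardrail checks on the list still apply.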
Learning Documentation
After every experiment:
┌─────────────────────────────────────────────────────────────┐
│ EXPERIMENT RESULTS │
├─────────────────────────────────────────────────────────────┤
│ RESULT: [Win / Loss / Neutral] │
│ │
│ DATA: │
│ - Control: [X% conversion] │
│ - Treatment: [Y% conversion] │
│ - Lift: [Z%] │
│ - Significance: [p-value] │
│ │
│ DECISION: [Ship / Iterate / Kill] │
│ │
│ LEARNINGS: │
│ - What did we learn about users? │
│ - What hypotheses does this generate? │
│ - What should we test next? │
│ │
│ NEXT STEPS: [Follow-up experiments] │
└─────────────────────────────────────────────────────────────┘
Common Experiment Areas
| Area | Experiments to Try |
|---|---|
| Acquisition | Landing page copy, CTA, social proof, form fields |
| Activation | Onboarding flow, welcome emails, feature discovery |
| Retention | Notification timing, re-engagement emails, feature adoption |
| Monetization | Pricing page, upgrade prompts, trial length |
| Referral | Invite flow, incentives, share mechanics |
Anti-Patterns
- HiPPO decisions — Highest Paid Person's Opinion overrides data
- Peeking — Looking at results before sample size reached
- P-hacking — Running until you get significance
- No guardrails — Improving one metric while breaking another
- Ship and forget — Not monitoring post-ship
- No documentation — Same failed experiments repeated
- Too many variants — Diluting sample size
- Testing tiny changes — Button color when activation is broken
- Experimentation theater — Tests without rigor or learnings
title: Building Growth Flywheels
impact: CRITICAL
tags: flywheel, compound, systems, sustainable, moat
Building Growth Flywheels
Impact: CRITICAL
A flywheel is a self-reinforcing system where each component accelerates the others. Unlike growth loops (user-level mechanics), flywheels operate at the business/ecosystem level and create compounding advantages over time.
Flywheel vs. Growth Loop
GROWTH LOOP (Tactical): FLYWHEEL (Strategic):
─────────────────────── ─────────────────────────
User-level mechanic Business-level system
Single cycle Multiple reinforcing cycles
Weeks to optimize Years to build
Copyable by competitors Creates defensible moat
Example: Invite flow Example: Amazon's ecosystem
The Classic Amazon Flywheel
┌────────────────┐
│ Lower Prices │
└───────┬────────┘
│
┌───────────────┴───────────────┐
↓ │
┌───────────────┐ ┌───────────────┐
│ More Customers│───────────────→│ More Sellers │
└───────┬───────┘ └───────┬───────┘
│ │
↓ ↓
┌───────────────┐ ┌───────────────┐
│ More Revenue │ │ More Selection│
└───────┬───────┘ └───────┬───────┘
│ │
└───────────────┬───────────────┘
↓
┌────────────────┐
│ Lower Costs │
│ (economies of │
│ scale) │
└────────────────┘
│
└────────→ (back to lower prices)
B2B SaaS Flywheel Patterns
Pattern 1: Product-Led Growth Flywheel
┌────────────────────┐
│ Users Experience │
│ Value │
└─────────┬──────────┘
│
┌─────────────────┴─────────────────┐
↓ │
┌───────────────┐ ┌───────────────┐
│ Users Share/ │ │ Product │
│ Invite Others │ │ Improves │
└───────┬───────┘ └───────┬───────┘
│ ↑
↓ │
┌───────────────┐ ┌───────────────┐
│ More Users │ │ More Revenue │
│ Sign Up │ │ to Invest │
└───────┬───────┘ └───────────────┘
│ ↑
└───────────────────────────────────┘
Pattern 2: Content/SEO Flywheel
┌────────────────────┐
│ Create Quality │
│ Content │
└─────────┬──────────┘
│
┌─────────────────┴─────────────────┐
↓ │
┌───────────────┐ ┌───────────────┐
│ Content Ranks │ │ Revenue │
│ on Google │ │ Funds Team │
└───────┬───────┘ └───────┬───────┘
│ ↑
↓ │
┌───────────────┐ ┌───────────────┐
│ Organic │──────────────────→│ Customers │
│ Traffic │ │ Convert │
└───────────────┘ └───────────────┘
Pattern 3: Community Flywheel
┌────────────────────┐
│ Community Members │
│ Join │
└─────────┬──────────┘
│
┌─────────────────┴─────────────────┐
↓ │
┌───────────────┐ ┌───────────────┐
│ Members │ │ Better │
│ Help Others │ │ Product │
└───────┬───────┘ └───────┬───────┘
│ ↑
↓ │
┌───────────────┐ ┌───────────────┐
│ Knowledge │──────────────────→│ Feedback │
│ Base Grows │ │ Loop │
└───────────────┘ └───────────────┘
Designing Your Flywheel
Step 1: Identify Core Value Exchange
Questions:
1. What value do you create for users?
2. How does that value compound?
3. What do users give back that helps you create more value?
4. What advantages accumulate over time?
Step 2: Map the Flywheel Components
Template:
┌─────────────────────────────────────────────────────────────┐
│ YOUR FLYWHEEL │
├─────────────────────────────────────────────────────────────┤
│ │
│ COMPONENT 1: _____________ (What starts the cycle?) │
│ │ │
│ ↓ │
│ COMPONENT 2: _____________ (What does that enable?) │
│ │ │
│ ↓ │
│ COMPONENT 3: _____________ (What does that create?) │
│ │ │
│ ↓ │
│ COMPONENT 4: _____________ (How does that reinforce #1?) │
│ │ │
│ └──────────────────→ (back to Component 1) │
│ │
└─────────────────────────────────────────────────────────────┘
Step 3: Identify Acceleration Points
For each component, ask:
- What makes this component spin faster?
- What slows it down (friction)?
- How can we accelerate it?
- What metric tracks its velocity?
Flywheel Metrics
| Component | Metric Type | Example |
|---|---|---|
| Velocity | Speed of the cycle | Time from signup to referral |
| Friction | What slows it down | Drop-off at each stage |
| Momentum | Accumulated advantage | Content library size, user base |
| Efficiency | Output per input | Revenue per content piece |
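The Velocity row above can be measured directly from event timestamps. A minimal sketch, assuming a hypothetical `events` mapping (the field names are illustrative, not a real analytics schema):

```python
from datetime import datetime
from statistics import median

def cycle_time_days(events):
    """Median days from signup to first referral, across users who referred.

    `events` maps user_id -> {"signup": datetime, "referral": datetime | None}.
    """
    gaps = [
        (e["referral"] - e["signup"]).days
        for e in events.values()
        if e["referral"] is not None
    ]
    return median(gaps) if gaps else None

events = {
    "u1": {"signup": datetime(2024, 1, 1), "referral": datetime(2024, 1, 10)},
    "u2": {"signup": datetime(2024, 1, 3), "referral": datetime(2024, 1, 24)},
    "u3": {"signup": datetime(2024, 1, 5), "referral": None},  # never referred
}
print(cycle_time_days(events))  # median of 9 and 21 days -> 15.0
```

Track the same number month over month; a falling median means the flywheel is spinning faster.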
Real-World Flywheel Examples
Figma:
Designers create in Figma
│
↓
Share designs with stakeholders (view links)
│
↓
Stakeholders see Figma value, request it
│
↓
More designers at company adopt
│
↓
Community creates plugins/resources
│
↓
Figma becomes more valuable
│
└────→ (More designers create in Figma)
HubSpot:
Create educational content
│
↓
Content ranks, drives traffic
│
↓
Visitors convert to free tools
│
↓
Free users upgrade to paid
│
↓
Revenue funds more content + product
│
└────→ (Create more educational content)
Notion:
Users create documents/templates
│
↓
Templates shared publicly
│
↓
Templates indexed by Google
│
↓
Searchers discover, sign up to use
│
↓
New users create their own templates
│
└────→ (More content in ecosystem)
Flywheel Stages
Stage 1: Start (Push)
The flywheel is heavy. Takes significant effort to get moving.
- Manual effort required
- Slow initial progress
- Feels like it's not working
- Temptation to give up
Action: Focus all energy on getting first rotation.
Stage 2: Momentum (Pull)
The flywheel starts helping itself.
- Less effort per rotation
- Components reinforce each other
- Measurable acceleration
- Competitive advantage emerging
Action: Identify and remove friction.
Stage 3: Escape Velocity (Self-Sustaining)
The flywheel is unstoppable.
- Self-reinforcing growth
- Competitors can't catch up
- Moat is established
- Focus shifts to efficiency
Action: Protect the flywheel, optimize efficiency.
Building Flywheel Moats
Types of Flywheel Moats:
| Moat Type | Description | Example |
|---|---|---|
| Data | More users = better product | Google, Netflix |
| Network | More users = more value | LinkedIn, Slack |
| Content | More content = more discovery | YouTube, Notion |
| Ecosystem | More integrations = more lock-in | Salesforce, Zapier |
| Brand | More usage = more trust | HubSpot, Stripe |
Common Flywheel Mistakes
1. Too Many Components
Bad: 10-step flywheel
Good: 3-5 components max
2. No Clear Reinforcement
Bad: Components don't actually help each other
Good: Each component directly accelerates others
3. Ignoring Friction
Bad: Flywheel looks good on paper but doesn't spin
Good: Identify and remove friction at each step
4. Premature Optimization
Bad: Optimizing before flywheel turns
Good: Get it turning, then optimize
5. Single Point of Failure
Bad: If one component breaks, flywheel stops
Good: Redundant reinforcement mechanisms
Measuring Flywheel Health
Flywheel Dashboard:
┌─────────────────────────────────────────────────────────────┐
│ FLYWHEEL HEALTH │
├─────────────────────────────────────────────────────────────┤
│ VELOCITY │
│ Cycle time: 14 days (↓ 2 days from last month) │
│ │
│ COMPONENT HEALTH │
│ 1. User activation: 78% (↑ 3%) ████████░░ Good │
│ 2. Viral sharing: 12% (↓ 1%) ██░░░░░░░░ Needs work │
│ 3. Content creation: 45% (↑ 5%) █████░░░░░ Improving │
│ 4. Revenue growth: 8% MoM ████░░░░░░ On track │
│ │
│ MOMENTUM INDICATORS │
│ • Organic % of signups: 34% (target: 50%) │
│ • Content library: 1,200 pieces (↑ 150 this month) │
│ • Community members: 8,500 (↑ 12% MoM) │
└─────────────────────────────────────────────────────────────┘
Anti-Patterns
- Flywheel fantasy — Drawing a flywheel that doesn't actually exist
- Complexity worship — Making it complicated instead of simple
- Ignoring push phase — Expecting flywheel without initial investment
- Friction blindness — Not seeing what's slowing the flywheel
- Moat complacency — Assuming the flywheel protects itself
- Component obsession — Optimizing one part while ignoring others
- Short-term thinking — Tactics over sustainable systems
- Copy-paste flywheels — Adopting others' flywheels without adaptation
title: Growth Loops & Flywheels
impact: CRITICAL
tags: growth, loops, flywheel, compound, sustainable
Growth Loops & Flywheels
Impact: CRITICAL
Growth loops are self-reinforcing systems where the output of one cycle becomes the input for the next. They compound over time and are the foundation of sustainable growth.
Growth Loop vs. Funnel Thinking
Traditional Funnel (Linear): Growth Loop (Compounding):
Acquisition → Activation → Revenue ┌──────────────────────┐
↓ ↓ ↓ │ New Users │
(lost) (lost) (end) └──────────┬───────────┘
↓
┌──────────────────────┐
│ Experience Value │
└──────────┬───────────┘
↓
┌──────────────────────┐
│ Take Action │
│ (Share/Create/Invite)│
└──────────┬───────────┘
↓
┌──────────────────────┐
│ Generate New │
│ Users │
└──────────┬───────────┘
│
└────────────→ (loops back)
The Five Core Loop Types
| Loop Type | How It Works | Compounds Via | Example |
|---|---|---|---|
| Viral Loop | Users invite other users | Each user brings N more users | Dropbox, Calendly, Slack |
| Content Loop | Users create content → indexed → discovered | SEO + content library | Notion, Figma, Canva |
| Paid Loop | Revenue → paid acquisition → more revenue | Profitable CAC payback | Most B2B SaaS |
| Sales Loop | Revenue → hire sales → more revenue | Sales team scaling | Salesforce, Enterprise SaaS |
| UGC/SEO Loop | User activity creates SEO pages | Indexed pages compound | Yelp, TripAdvisor, G2 |
Designing Your Growth Loop
Step 1: Identify Your Loop
Questions to answer:
1. What action do activated users take?
2. How does that action reach new potential users?
3. What makes those new users sign up?
4. How long is the cycle time?
Step 2: Map the Loop
┌─────────────────────────────────────────────────────────────┐
│ YOUR GROWTH LOOP │
├─────────────────────────────────────────────────────────────┤
│ │
│ INPUT: _____________ (New user / $ / Content piece) │
│ │
│ STEP 1: _____________ (What do they do?) │
│ │
│ STEP 2: _____________ (How does it spread?) │
│ │
│ STEP 3: _____________ (Who sees it?) │
│ │
│ OUTPUT: _____________ (New input for the loop) │
│ │
│ CYCLE TIME: _____________ (How long per loop?) │
│ │
└─────────────────────────────────────────────────────────────┘
Real-World Loop Examples
Calendly's Viral Loop:
1. User creates Calendly account
2. User shares scheduling link
3. Recipient sees "Powered by Calendly"
4. Recipient signs up to send their own links
→ Loop time: ~1 week
→ Each user exposes ~10 potential users
Notion's Content Loop:
1. User creates template
2. Template published to Notion template gallery
3. Google indexes template page
4. Searcher finds template, signs up to use it
5. New user creates their own templates
→ Loop time: ~2-4 weeks
→ Templates compound forever
HubSpot's Content/SEO Loop:
1. HubSpot publishes blog content
2. Content ranks on Google
3. Visitor reads content, sees CTA
4. Visitor signs up for free tool
5. User becomes customer, generates revenue
6. Revenue funds more content
→ Loop time: ~3-6 months
→ 100,000+ indexed pages
Loop Metrics
| Metric | Definition | Target |
|---|---|---|
| Cycle Time | Time for one complete loop | Shorter = faster compounding |
| Loop Conversion | % completing each loop step | Higher = stronger loop |
| Loop Output | New inputs generated per cycle | > 1 for viral growth |
| Loop Efficiency | Cost per loop completion | Lower = more sustainable |
Loop Math
Viral Loop:
K-factor = i × c
i = invites per user
c = conversion rate
K > 1: Exponential growth
K = 1: Stable
K < 1: Declining (need other loops)
Example:
- 5 invites per user, 20% convert
- K = 5 × 0.2 = 1.0 (stable)
- Improve conversion to 25%: K = 1.25 (viral!)
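The K-factor arithmetic above, as a small sketch (function names are illustrative; the projection assumes pure viral growth with no churn):

```python
def k_factor(invites_per_user, invite_conversion):
    """Viral coefficient: new users each existing user generates per cycle."""
    return invites_per_user * invite_conversion

def project_viral_growth(seed_users, k, cycles):
    """Cumulative users after N loop cycles of pure viral growth."""
    total = cohort = seed_users
    for _ in range(cycles):
        cohort *= k      # each cohort spawns the next
        total += cohort
    return total

print(k_factor(5, 0.20))                           # 1.0  (stable)
print(k_factor(5, 0.25))                           # 1.25 (viral)
print(round(project_viral_growth(1000, 1.25, 6)))  # 15073 users after 6 cycles
```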
Content Loop:
Content Velocity = New pages × Rank probability × Traffic per page × Conversion rate
Example:
- 100 new pages/month
- 30% rank on page 1
- 500 visits/month per ranking page
- 2% conversion
= 100 × 0.3 × 500 × 0.02 = 300 new users/month
After 12 months of compounding:
= 3,600+ new users/month from content alone
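The content-loop math above, sketched as a simple model. It assumes every published page keeps its rank probability and traffic forever, which real-world decay will erode:

```python
def content_loop_users(months, pages_per_month, rank_rate, visits_per_page, conversion):
    """New users per month as the content library compounds."""
    library = 0
    monthly_new_users = []
    for _ in range(months):
        library += pages_per_month
        monthly_new_users.append(library * rank_rate * visits_per_page * conversion)
    return monthly_new_users

users = content_loop_users(12, 100, 0.30, 500, 0.02)
print(round(users[0]))   # month 1:  300 new users
print(round(users[-1]))  # month 12: 3600 new users
```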
Multi-Loop Strategy
Best-in-class companies stack loops:
Layer 1: Paid Loop (immediate, controllable)
Revenue → Ads → Signups → Revenue
Layer 2: Viral Loop (medium-term, scalable)
User → Invites → New Users → Invites
Layer 3: Content Loop (long-term, defensible)
Content → SEO → Traffic → Users → Content
Good vs. Bad Loop Design
Good: Slack's Viral Loop
Why it works:
✓ Loop is inherent to product value (collaboration)
✓ Short cycle time (days)
✓ High loop conversion (team adoption)
✓ Each user exposes entire team
✓ Network effects strengthen retention
Bad: Forced Referral Loop
Why it fails:
✗ Incentive not aligned with value (give $20, get $20)
✗ Users game the system
✗ No natural reason to share
✗ One-time action, not repeating
✗ Feels spammy to recipients
Identifying Your Best Loop
| If Your Product... | Best Loop Type | Examples |
|---|---|---|
| Requires collaboration | Viral (invite) | Figma, Miro, Notion |
| Creates shareable outputs | Viral (output) | Canva, Loom |
| Has user-generated content | Content/SEO | Yelp, Stack Overflow |
| High LTV, considered purchase | Paid | Salesforce, HubSpot |
| Developer/technical tool | Community/Open Source | GitLab, Supabase |
Anti-Patterns
- No loop identified — Growing linearly, not exponentially
- Forcing viral into non-viral product — "Invite 5 friends to unlock"
- Ignoring cycle time — A 6-month loop won't compound fast enough
- Single loop dependency — One loop dies, growth dies
- Optimizing acquisition without loop — Paying for users who don't loop
- Loop friction — Adding steps that break the loop
title: North Star Metrics & Growth Measurement
impact: MEDIUM-HIGH
tags: metrics, north-star, measurement, kpis, growth
North Star Metrics & Growth Measurement
Impact: MEDIUM-HIGH
What you measure determines what you optimize. A clear North Star metric aligns the entire company around what matters most — value delivered to customers.
What Is a North Star Metric?
A North Star Metric is the single metric that best captures
the core value your product delivers to customers.
Characteristics:
✓ Measures value delivered (not captured)
✓ Leading indicator of revenue
✓ Actionable by product/growth teams
✓ Easy to understand company-wide
✓ Reflects your product strategy
North Star Metric Examples
| Company | North Star | Why It Works |
|---|---|---|
| Airbnb | Nights booked | Directly measures value exchange |
| Slack | Daily Active Users sending messages | Measures engagement with core value |
| Spotify | Time spent listening | Measures content consumption |
| Amplitude | Weekly Learning Users | Measures users getting insights |
| Figma | Weekly Active Editors | Measures design collaboration |
| Dropbox | Files synced | Measures core utility |
| HubSpot | Weekly Active Teams | Measures business value |
| Notion | Weekly Active Users | Measures regular engagement |
Finding Your North Star
Step 1: Identify Core Value
Questions to answer:
1. What job does our product do for users?
2. When do users say "this is valuable"?
3. What action indicates they got value?
4. What predicts they'll stay and pay?
Step 2: Test Candidate Metrics
For each candidate metric, check:
□ Does it measure VALUE delivered (not just activity)?
□ Does it CORRELATE with revenue?
□ Can teams INFLUENCE it?
□ Is it SIMPLE to explain?
□ Does it ALIGN with strategy?
Score each 1-5. Highest total = best candidate.
Step 3: Validate with Data
Correlation analysis:
- Does higher metric → higher retention?
- Does higher metric → higher revenue?
- Does higher metric → higher referral?
If yes to all three, you have a strong North Star.
The Input Metrics Framework
NORTH STAR METRIC
│
┌──────────────────┼──────────────────┐
│ │ │
INPUT 1 INPUT 2 INPUT 3
(Breadth) (Depth) (Frequency)
│ │ │
Sub-metrics Sub-metrics Sub-metrics
Example: Spotify
Time Spent Listening
│
┌───────────────┼───────────────┐
│ │ │
Total Users Sessions/User Time/Session
(Breadth) (Frequency) (Depth)
│ │ │
• New users        • Push opens        • Playlist quality
• Reactivated      • Home screen       • Skip rate
                   • Recommendations   • Completion rate
Metric Hierarchy
Level 1: North Star (Company)
Single metric everyone knows
Example: Weekly Active Users
Level 2: Health Metrics (Leadership)
4-6 metrics that feed the North Star
Example: New users, Activation rate, Retention, ARPU
Level 3: Team Metrics (Product Teams)
Specific metrics each team owns
Example: Onboarding completion, Feature adoption, Support tickets
Level 4: Experiment Metrics (Growth Team)
Granular metrics for experiments
Example: CTA click rate, Form completion, Time on page
The AARRR Framework (Pirate Metrics)
Stage │ Definition │ Example Metrics
─────────────┼─────────────────────────┼──────────────────────
ACQUISITION │ User discovers product │ Visitors, signups, CAC
ACTIVATION │ User experiences value │ Activation rate, TTV
RETENTION │ User keeps coming back │ D7/D30 retention, churn
REVENUE │ User pays money │ Conversion, ARPU, LTV
REFERRAL │ User brings others │ K-factor, NPS, referrals
Growth Accounting
Tracking Where Growth Comes From:
New Users This Period =
+ New signups (acquisition)
+ Resurrected users (win-back)
- Churned users (retention)
Growth Accounting Table:
┌──────────────────────────────────────────────────────────┐
│ Month │ Start │ +New │ +Resur │ -Churn │ End │
├──────────────────────────────────────────────────────────┤
│ January │ 1000 │ +200 │ +30 │ -80 │ 1150 │
│ February │ 1150 │ +250 │ +40 │ -90 │ 1350 │
│ March │ 1350 │ +280 │ +50 │ -100 │ 1580 │
└──────────────────────────────────────────────────────────┘
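The accounting above is easy to automate. A sketch reproducing the table (numbers taken from the rows above):

```python
def growth_accounting(start, new, resurrected, churned):
    """End-of-period active users and net growth rate for one period."""
    end = start + new + resurrected - churned
    return end, (end - start) / start

months = {
    "January":  (1000, 200, 30, 80),
    "February": (1150, 250, 40, 90),
    "March":    (1350, 280, 50, 100),
}
for month, row in months.items():
    end, rate = growth_accounting(*row)
    print(f"{month}: {end} ({rate:.1%} net growth)")
# January: 1150 (15.0% net growth)
# February: 1350 (17.4% net growth)
# March: 1580 (17.0% net growth)
```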
Metric Definitions Matter
Be Precise:
"Monthly Active Users" could mean:
- Logged in at least once
- Performed any action
- Performed core action
- Performed core action on X days
- Paid users who logged in
Define exactly what counts, document it, don't change it.
Good Definition Example:
Metric: Weekly Active Users (WAU)
Definition:
"Unique users who completed at least one [core action]
in the trailing 7-day period, excluding:
- Internal/test accounts
- Users in trial who never activated
- Bot/automated accounts"
Why: This measures users receiving value, not just logging in.
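A sketch of that WAU definition as code, so the exclusions live in one reviewable place (the event-tuple shape is an assumption, not a real event-store API):

```python
from datetime import datetime, timedelta

def weekly_active_users(events, as_of, core_actions, excluded_users):
    """Unique users with >= 1 core action in the trailing 7-day period,
    excluding internal/test/bot accounts."""
    window_start = as_of - timedelta(days=7)
    return len({
        user
        for user, action, ts in events
        if action in core_actions
        and window_start <= ts <= as_of
        and user not in excluded_users
    })

events = [
    ("u1", "create_project", datetime(2024, 6, 14)),
    ("u2", "login", datetime(2024, 6, 13)),          # not a core action
    ("u3", "create_project", datetime(2024, 6, 1)),  # outside the window
    ("qa", "create_project", datetime(2024, 6, 14)), # excluded test account
]
print(weekly_active_users(events, datetime(2024, 6, 15),
                          {"create_project"}, {"qa"}))  # 1
```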
Dashboard Design
Growth Dashboard Essentials:
┌─────────────────────────────────────────────────────────────┐
│ GROWTH DASHBOARD │
├─────────────────────────────────────────────────────────────┤
│ NORTH STAR │
│ Weekly Active Users: 12,450 (+8% WoW) │
├─────────────────────────────────────────────────────────────┤
│ ACQUISITION │ ACTIVATION │ RETENTION │
│ Signups: 2,100 │ Rate: 34% │ D7: 42% │
│ CAC: $45 │ TTV: 8 min │ D30: 28% │
├─────────────────────────────────────────────────────────────┤
│ REVENUE │ REFERRAL │ EXPERIMENTS │
│ MRR: $125K │ K-factor: 0.3 │ Active: 4 │
│ Conversion: 5.2% │ NPS: 52 │ Last win: +12% │
└─────────────────────────────────────────────────────────────┘
Metric Reviews
Weekly Growth Review:
Agenda (30-60 min):
1. North Star trend (5 min)
2. Input metric review (10 min)
3. Experiment results (15 min)
4. Anomalies and insights (10 min)
5. Priorities for next week (10 min)
Monthly Deep Dive:
Agenda (2 hours):
1. Month-over-month trends
2. Cohort analysis
3. Channel performance
4. Experiment portfolio review
5. Roadmap alignment check
Avoiding Vanity Metrics
| Vanity Metric | Why It's Vanity | Better Alternative |
|---|---|---|
| Total signups | Includes churned users | Active users |
| Page views | Activity, not value | Time on page, conversions |
| Total downloads | Doesn't mean usage | Activated users |
| Follower count | Doesn't mean engagement | Engagement rate |
| Feature launches | Output, not outcome | Feature adoption rate |
Metric Red Flags
Warning Signs:
⚠ Metric going up but revenue flat
⚠ Metric looks good but customers churning
⚠ Teams gaming the metric
⚠ Metric impossible to move
⚠ Different teams measuring differently
⚠ No one can explain what the metric means
Anti-Patterns
- Multiple North Stars — If everything is the priority, nothing is
- Vanity over value — Measuring activity instead of value delivered
- Changing definitions — Makes trends incomparable
- Dashboard overload — 50 metrics = 0 focus
- Lagging-only metrics — Revenue tells you what happened, not what's coming
- Gaming metrics — Optimizing metric without delivering value
- Ignoring cohorts — Aggregate hides user behavior
- No input metrics — North Star without levers to pull
title: Monetization & Expansion Revenue
impact: HIGH
tags: monetization, pricing, expansion, upsell, revenue
Monetization & Expansion Revenue
Impact: HIGH
Monetization is where growth translates to business outcomes. Great monetization captures a fair share of the value you create — not more, not less.
The Monetization Equation
Revenue = Users × Conversion Rate × ARPU × Retention
Levers:
- More users (acquisition)
- Better conversion (monetization)
- Higher ARPU (pricing/packaging)
- Longer retention (product/success)
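As a back-of-envelope model, the equation and its levers above can be sketched like this (a simplification; a real revenue model needs cohorts and time periods):

```python
def revenue(users, conversion_rate, arpu, retention):
    """Rough period revenue from the four levers above.
    `retention` is the share of paying users kept over the period."""
    return users * conversion_rate * arpu * retention

base   = revenue(10_000, 0.04, 30, 0.90)  # ~ $10,800
lifted = revenue(10_000, 0.05, 30, 0.90)  # one point of conversion -> ~ $13,500
print(round(base), round(lifted))         # a 25% revenue lift from one lever
```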
Pricing Model Types
| Model | Best For | Example | Key Metric |
|---|---|---|---|
| Flat Rate | Simple products, predictable value | Basecamp | Conversion rate |
| Per Seat | Collaboration tools, team products | Slack, Figma | Seats per account |
| Usage-Based | Variable consumption, API products | Twilio, AWS | Usage growth |
| Freemium | Land-and-expand, network effects | Notion, Dropbox | Free → paid % |
| Free Trial | Considered purchase, complex value | Salesforce | Trial → paid % |
| Reverse Trial | Premium value discovery | Ahrefs | Premium retention |
| Hybrid | Complex products, multiple personas | HubSpot | Multiple metrics |
Conversion Rate Benchmarks
| Model | Benchmark | Top Performers |
|---|---|---|
| Free Trial → Paid | 15-25% | 40%+ |
| Freemium → Paid | 2-5% | 10%+ |
| Free → Paid (PLG) | 3-5% | 7%+ |
| Monthly → Annual | 30-40% | 60%+ |
| Trial Request → Trial | 50-70% | 80%+ |
The Freemium Decision Framework
Use Freemium When:
✓ Large addressable market (100K+ potential users)
✓ Low marginal cost to serve free users
✓ Product improves with more users (network effects)
✓ Free users provide value (content, data, virality)
✓ Clear upgrade triggers exist
✓ Self-serve motion works
Don't Use Freemium When:
✗ High cost to serve
✗ Small, defined market
✗ Complex product requiring sales
✗ No natural upgrade trigger
✗ Free users provide no value
Freemium vs. Free Trial Matrix
Product Complexity
Low High
┌──────────────────────────────────┐
Large │ FREEMIUM FREE TRIAL │
Market │ (Slack, Notion) (HubSpot) │
Size │ │
Small │ FREEMIUM + LIMIT DEMO/SALES │
│ (Limited features) (Enterprise) │
└──────────────────────────────────┘
The Upgrade Trigger Framework
Natural Upgrade Triggers:
| Trigger Type | Example | Why It Works |
|---|---|---|
| Limit Hit | Storage full, seats maxed | Pain at expansion |
| Feature Gate | Advanced features locked | Value demonstration |
| Usage Threshold | 1000 API calls/month | Scales with value |
| Time Limit | 14-day trial ending | Urgency |
| Team Growth | 5+ users | Network effects |
| Compliance | SSO, audit logs needed | Enterprise requirement |
Good vs. Bad Upgrade Triggers:
Good Triggers (Value-Aligned):
✓ User hits limit while getting value
✓ Team needs collaboration features
✓ Business requirement (SSO, compliance)
✓ Power user needs advanced features
Bad Triggers (Value-Misaligned):
✗ Arbitrary limits unrelated to value
✗ Hiding basic features behind paywall
✗ Bait-and-switch pricing
✗ Nagging before user sees value
Packaging Strategy
The Good-Better-Best Framework:
┌─────────────────────────────────────────────────────────────┐
│ PRICING TIERS │
├──────────────┬──────────────┬──────────────┬───────────────┤
│ FREE │ BASIC │ PRO │ ENTERPRISE │
├──────────────┼──────────────┼──────────────┼───────────────┤
│ Individual │ Individual │ Teams │ Organization │
│ Limited │ Full access │ + Collab │ + Admin │
│ │ │ + Integrations│ + SSO/SCIM │
│ │ │ │ + Support │
├──────────────┼──────────────┼──────────────┼───────────────┤
│ $0 │ $10/mo │ $20/seat/mo │ Contact us │
├──────────────┼──────────────┼──────────────┼───────────────┤
│ Acquisition │ Monetization │ Expansion │ Enterprise │
│ funnel │ entry point │ driver │ land │
└──────────────┴──────────────┴──────────────┴───────────────┘
Expansion Revenue Strategies
Net Revenue Retention (NRR):
NRR = (Starting ARR + Expansion - Contraction - Churn) / Starting ARR
Example:
- Starting ARR: $1M
- Expansion: $200K (more seats, upgrades)
- Contraction: $50K (downgrades)
- Churn: $100K (cancellations)
- NRR = ($1M + $200K - $50K - $100K) / $1M = 105%
Top SaaS companies: 120-150% NRR
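The NRR formula above as a one-liner, returned in ratio form (1.05 == 105%):

```python
def net_revenue_retention(starting_arr, expansion, contraction, churn):
    """NRR = (Starting ARR + Expansion - Contraction - Churn) / Starting ARR."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

nrr = net_revenue_retention(1_000_000, 200_000, 50_000, 100_000)
print(f"{nrr:.0%}")  # 105%
```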
Expansion Levers:
| Lever | Mechanism | Example |
|---|---|---|
| Seat Expansion | More users added | Slack: team grows |
| Tier Upgrade | Move to higher tier | Basic → Pro |
| Usage Expansion | More consumption | Twilio: more API calls |
| Cross-Sell | New product | HubSpot: CRM → Marketing |
| Add-Ons | Premium features | Priority support |
Pricing Page Optimization
Elements That Convert:
┌─────────────────────────────────────────────────────────────┐
│ PRICING PAGE │
├─────────────────────────────────────────────────────────────┤
│ 1. Clear tier differentiation │
│ → Who is each tier for? │
│ │
│ 2. Recommended tier highlighted │
│ → "Most popular" badge │
│ │
│ 3. Feature comparison table │
│ → Easy to scan differences │
│ │
│ 4. Annual discount visible │
│ → "Save 20% with annual" │
│ │
│ 5. Social proof │
│ → Customer logos, testimonials │
│ │
│ 6. FAQ section │
│ → Handle objections │
│ │
│ 7. Clear CTA │
│ → "Start free trial" > "Get started" │
└─────────────────────────────────────────────────────────────┘
Monetization Experiments
| Experiment | Hypothesis | Metrics |
|---|---|---|
| Trial length (7 vs 14 days) | Shorter trial = more urgency | Conversion rate, time to convert |
| Pricing page layout | Emphasize Pro tier | Tier distribution |
| Limit adjustments | Lower free limit = more upgrades | Conversion rate, activation rate |
| Annual vs monthly default | Annual default = higher LTV | Annual subscription % |
| In-app upgrade prompts | Contextual prompts convert better | Upgrade rate |
| Pricing point testing | Higher price = higher revenue/user | Revenue per user |
The PQL (Product-Qualified Lead) Model
Traditional MQL: PQL:
───────────────── ────────────────
Downloaded ebook vs. Activated in product
Attended webinar Invited teammates
Filled out form Hit usage threshold
Used premium feature
PQLs are 6x more likely to convert than MQLs
PQL Scoring Example:
| Action | Score |
|---|---|
| Activated (completed aha moment) | +30 |
| Invited 2+ teammates | +25 |
| Used integration | +15 |
| Hit 80% of usage limit | +20 |
| Enterprise domain | +10 |
| PQL Threshold | 70+ |
Anti-Patterns
- Monetizing before value — Payment wall before aha moment
- Confusing pricing — Too many tiers, unclear differentiation
- Free too generous — No reason to ever upgrade
- Free too restrictive — Can't experience value
- No expansion path — Ceiling on revenue per customer
- Price vs. value mismatch — Charging more than value delivered
- Ignoring NRR — Focusing only on new revenue
- No PQL definition — Sales chasing cold leads
title: Product-Led Growth (PLG) Strategies
impact: CRITICAL
tags: plg, product-led, self-serve, freemium, trial
Product-Led Growth (PLG) Strategies
Impact: CRITICAL
Product-Led Growth is a go-to-market strategy where the product itself drives acquisition, activation, conversion, and expansion. The product is the primary growth engine.
What Is PLG?
Traditional GTM: Product-Led GTM:
────────────────── ──────────────────────────
Marketing → Lead Product → User
Sales → Demo User → Activation
Sales → Close User → Conversion (self-serve)
CSM → Expand Product → Expansion
PLG companies let users experience value before paying.
PLG Company Characteristics
| Characteristic | PLG Company | Traditional |
|---|---|---|
| Primary driver | Product | Sales |
| Free option | Yes (trial/freemium) | Rare |
| Sales involvement | After self-serve | Before use |
| CAC payback | < 12 months | 18-24 months |
| Time to value | Minutes to hours | Days to weeks |
| Conversion | In-product | Sales call |
PLG Model Types
┌─────────────────────────────────────────────────────────────┐
│ PLG MODEL TYPES │
├─────────────────────────────────────────────────────────────┤
│ │
│ FREE TRIAL │ FREEMIUM │ OPEN SOURCE │
│ ──────────────── │ ──────────────── │ ──────────── │
│ Full access │ Limited free tier │ Core is free │
│ Time-limited │ Upgrade for more │ Premium adds │
│ 14-30 days │ Forever free │ Hosting/ │
│ Convert or lose │ Convert when ready │ support paid │
│ │ │ │
│ Example: │ Example: │ Example: │
│ Salesforce │ Slack, Notion │ GitLab │
│ │
├─────────────────────────────────────────────────────────────┤
│ │
│ REVERSE TRIAL │ USAGE-BASED │ HYBRID │
│ ──────────────── │ ──────────────── │ ──────────── │
│ Premium first │ Pay for what you │ Multiple │
│ Downgrade to free │ use │ models │
│ Experience value │ Scales with value │ combined │
│ │ │ │
│ Example: │ Example: │ Example: │
│ Ahrefs │ Twilio, AWS │ HubSpot │
│ │
└─────────────────────────────────────────────────────────────┘
The PLG Funnel
Traditional Funnel: PLG Funnel:
───────────────── ─────────────────────────
Awareness Awareness
↓ ↓
Interest Signup (self-serve)
↓ ↓
MQL Activation (in-product)
↓ ↓
SQL PQL (product-qualified)
↓ ↓
Demo Conversion (in-product or sales)
↓ ↓
Close Expansion (in-product or sales)
PLG Principles
1. Time to Value is Everything
Users should experience core value in:
- Consumer: < 30 seconds
- Prosumer: < 5 minutes
- SMB SaaS: < 30 minutes
- Mid-market: < 1 day
If it takes a week to see value, it's not PLG.
2. The Product IS the Salesperson
Product must do the work of:
- Explaining value proposition
- Demonstrating features
- Overcoming objections
- Creating urgency
- Facilitating conversion
Every screen is a sales conversation.
3. Upgrade is a Natural Progression
Good: User hits limit while getting value → upgrade prompt
Bad: User must upgrade to see any value → frustration
The free tier should deliver real value.
The paid tier should deliver 10x more.
PLG Metrics
| Metric | Definition | Benchmark |
|---|---|---|
| Signup-to-Activation | % who complete activation | 20-40% |
| Free-to-Paid | % free users who convert | 2-5% (freemium), 15-25% (trial) |
| Time to Activation | Median time to aha moment | < industry standard |
| PQL-to-Close | % PQLs that convert | 20-40% |
| Expansion Revenue | Revenue from existing customers | 20-40% of new ARR |
| Natural Viral | Users acquired via product | 20-50% of signups |
PQL (Product-Qualified Lead) Framework
What is a PQL?
A PQL is a user who has:
1. Completed activation actions
2. Demonstrated buying intent through usage
3. Fits your ideal customer profile
PQLs convert 5-6x better than MQLs.
PQL Scoring Model:
┌─────────────────────────────────────────────────────────────┐
│ PQL SCORING │
├─────────────────────────────────────────────────────────────┤
│ ACTIVATION SCORE (0-40) │
│ • Completed onboarding +10 │
│ • Used core feature +15 │
│ • Created content/data +15 │
│ │
│ ENGAGEMENT SCORE (0-30) │
│ • Daily active last 7 days +5 per day │
│ • Invited teammates +10 │
│ • Connected integration +10 │
│ │
│ FIT SCORE (0-30) │
│ • Company size > 50 +10 │
│ • Business email domain +10 │
│ • Target industry +10 │
│ │
│ PQL THRESHOLD: 60+ │
└─────────────────────────────────────────────────────────────┘
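The scoring box above translates almost directly to code. A sketch, assuming hypothetical field names (not a real schema), with each component capped at its stated maximum:

```python
def pql_score(user):
    """Return (score, is_pql) for a user dict of behavioral flags."""
    activation = (10 * user.get("onboarded", False)
                  + 15 * user.get("used_core_feature", False)
                  + 15 * user.get("created_content", False))
    engagement = (5 * user.get("active_days_last_7", 0)
                  + 10 * user.get("invited_teammates", False)
                  + 10 * user.get("connected_integration", False))
    fit = (10 * user.get("company_size_over_50", False)
           + 10 * user.get("business_email", False)
           + 10 * user.get("target_industry", False))
    score = min(activation, 40) + min(engagement, 30) + min(fit, 30)
    return score, score >= 60

user = {"onboarded": True, "used_core_feature": True, "created_content": True,
        "active_days_last_7": 4, "invited_teammates": True, "business_email": True}
print(pql_score(user))  # (80, True) -> route to sales
```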
Self-Serve Conversion Optimization
In-Product Upgrade Triggers:
| Trigger | Implementation | Example |
|---|---|---|
| Limit hit | Show upgrade when at capacity | "You've used 5/5 projects" |
| Feature gate | CTA on locked features | "Upgrade for analytics" |
| Team growth | Prompt when inviting | "Add unlimited members" |
| Power usage | Recognize heavy users | "Looks like you love X..." |
| Time-based | Trial ending soon | "3 days left in trial" |
Upgrade Page Best Practices:
┌─────────────────────────────────────────────────────────────┐
│ IN-PRODUCT UPGRADE │
├─────────────────────────────────────────────────────────────┤
│ 1. Show current plan usage │
│ "You're on Free: 3/5 projects used" │
│ │
│ 2. Highlight relevant value │
│ "Pro includes: Unlimited projects, analytics, SSO" │
│ │
│ 3. Provide social proof │
│ "12,000 teams upgraded last month" │
│ │
│ 4. Make it easy │
│ Credit card on file? One-click upgrade │
│ │
│ 5. Reduce risk │
│ "14-day money-back guarantee" │
└─────────────────────────────────────────────────────────────┘
PLG + Sales (Product-Led Sales)
When to Add Sales:
Pure PLG (self-serve only):
- ACV < $1,000
- Simple product
- Individual buyers
- High volume
PLG + Sales:
- ACV > $10,000
- Enterprise features needed
- Multiple stakeholders
- Complex security/compliance
Sales Assist Model:
┌─────────────────────────────────────────────────────────────┐
│ PRODUCT-LED SALES MOTION │
├─────────────────────────────────────────────────────────────┤
│ │
│ Self-Serve ──────────────────────────────────────→ Convert │
│ │ │
│ │ [PQL Score > Threshold] │
│ ↓ │
│ Sales Touches ────────→ Enterprise Close │
│ │ │
│ │ • Account research │
│ │ • Personalized outreach │
│ │ • Demo of advanced features │
│ │ • Security/compliance questions │
│ │
└─────────────────────────────────────────────────────────────┘
PLG Company Examples
| Company | PLG Model | Key Tactics |
|---|---|---|
| Slack | Freemium | Inherent virality, team expansion |
| Dropbox | Freemium | Storage limits, referral incentives |
| Zoom | Freemium | Time limits, viral meetings |
| Notion | Freemium | Templates, team collaboration |
| Figma | Freemium | Collaboration, "view only" sharing |
| Calendly | Freemium | "Powered by" virality |
| Loom | Freemium | View page virality |
| Linear | Free trial | Opinionated product, team conversion |
Building PLG Infrastructure
PLG Tech Stack:
| Layer | Purpose | Tools |
|---|---|---|
| Analytics | User behavior tracking | Amplitude, Mixpanel |
| Experimentation | A/B testing | LaunchDarkly, Optimizely |
| In-app messaging | Feature adoption | Appcues, Pendo |
| Billing | Self-serve payments | Stripe, Chargebee |
| PQL scoring | Identify hot leads | Custom + CRM |
| Reverse ETL | Data to tools | Census, Hightouch |
Anti-Patterns
- PLG without activation — Users sign up but never see value
- Too-generous free tier — No reason to ever pay
- Too-restrictive free tier — Can't experience value
- Forcing sales for small deals — Friction kills conversion
- No PQL process — Hot leads go cold
- Ignoring product-qualified accounts — Missing expansion signals
- One-size-fits-all — Same experience for indie dev and enterprise
- No upsell path — Revenue ceiling per customer
- Premium-only features in trial — Users adopt features they'll lose
title: Retention & Engagement Strategies
impact: CRITICAL
tags: retention, engagement, churn, habit, stickiness
Retention & Engagement Strategies
Impact: CRITICAL
Retention is the foundation of sustainable growth. A 5% improvement in retention can increase profits by 25-95%. Without retention, acquisition is just filling a leaky bucket.
The Retention Equation
Retention compounds. Churn kills.
Starting with 1,000 users:
┌──────────────────────────────────────────────────────────────┐
│ Monthly Churn: 5% vs Monthly Churn: 10% │
├──────────────────────────────────────────────────────────────┤
│ Month 1: 950 users 900 users │
│ Month 6: 735 users 531 users │
│ Month 12: 540 users 282 users │
│ Month 24: 292 users 79 users │
└──────────────────────────────────────────────────────────────┘
Same acquisition, wildly different outcomes.
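The table above is just compound decay. A sketch:

```python
def project_retained(users, monthly_churn, months):
    """Users remaining after `months` of constant monthly churn."""
    return users * (1 - monthly_churn) ** months

for churn in (0.05, 0.10):
    print(churn, [round(project_retained(1000, churn, m)) for m in (1, 6, 12, 24)])
# 0.05 [950, 735, 540, 292]
# 0.1 [900, 531, 282, 80]   (79.8; the table above floors this to 79)
```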
Retention Curve Types
RETENTION CURVE PATTERNS
% Active
100% │●
│ ●
80% │ ● Flattening (Good!)
│ ●●●●●●●●●●●●●●●●●●●●
60% │
│ ●
40% │ ● Declining (Bad!)
│ ●●
20% │ ●●●●
│ ●●●●●●●●●●
0% └────────────────────────────────→ Time
D1 D7 D14 D30 D60 D90
Goal: Curve that flattens, indicating retained cohort
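Plotting your own curve starts from cohort data. A minimal sketch that computes day-N retention (the data shapes are assumptions; "retained at day N" here means active on exactly that day):

```python
def retention_curve(signup_day, activity, checkpoints=(1, 7, 14, 30, 60, 90)):
    """% of the cohort active on each day-N checkpoint after signup.

    `signup_day`: user -> signup day index.
    `activity`:   user -> set of active day indices.
    """
    cohort = len(signup_day)
    return {
        f"D{n}": round(100 * sum(
            (d0 + n) in activity.get(user, set())
            for user, d0 in signup_day.items()
        ) / cohort)
        for n in checkpoints
    }

curve = retention_curve(
    {"u1": 0, "u2": 0},
    {"u1": {0, 1, 7, 30}, "u2": {0, 1}},
)
print(curve)  # {'D1': 100, 'D7': 50, 'D14': 0, 'D30': 50, 'D60': 0, 'D90': 0}
```

A flattening curve shows up here as later checkpoints holding steady instead of sliding toward zero.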
Retention Timeframes
| Timeframe | What It Measures | Benchmark (B2B SaaS) |
|---|---|---|
| D1 Retention | First impression | 40-60% |
| D7 Retention | Early engagement | 25-35% |
| D30 Retention | Product-market fit signal | 15-25% |
| D90 Retention | Long-term value | 10-20% |
| Monthly Retention | Ongoing engagement | 85-95% |
| Net Revenue Retention | Expansion vs. churn | 100-130%+ |
The Retention Stack
┌─────────────────────────────────────────────────────────────┐
│ RETENTION STACK │
├─────────────────────────────────────────────────────────────┤
│ │
│ Layer 4: SWITCHING COSTS │
│ Data, integrations, workflows, team adoption │
│ │
│ Layer 3: HABIT FORMATION │
│ Triggers, routines, variable rewards │
│ │
│ Layer 2: VALUE DELIVERY │
│ Core job done well, consistent experience │
│ │
│ Layer 1: ACTIVATION │
│ Users experience value (foundation of retention) │
│ │
└─────────────────────────────────────────────────────────────┘
Hook Model for Habit Formation
Nir Eyal's Hook Model:
┌──────────────┐
│ TRIGGER │ ← External (notification) or Internal (emotion)
└──────┬───────┘
↓
┌──────────────┐
│ ACTION │ ← Simple behavior in anticipation of reward
└──────┬───────┘
↓
┌──────────────┐
│ VARIABLE │ ← Unpredictable reward creates craving
│ REWARD │
└──────┬───────┘
↓
┌──────────────┐
│ INVESTMENT │ ← User puts something in, increases value
└──────┬───────┘
│
└────────→ (loops back to trigger)
Example: Slack
- Trigger: Notification of new message
- Action: Open Slack, read message
- Reward: Social connection, information (variable)
- Investment: Messages sent, channels joined, context built
Engagement Tactics by Stage
Week 1: Activation & Early Engagement
| Tactic | Implementation | Goal |
|---|---|---|
| Welcome sequence | 3-5 emails guiding to activation | Complete setup |
| Quick wins | Celebrate first success | Build confidence |
| Checklist progress | Show completion status | Drive activation |
| Human touch | Personal message from founder | Build relationship |
Week 2-4: Habit Building
| Tactic | Implementation | Goal |
|---|---|---|
| Usage triggers | Notifications on relevant events | Drive return visits |
| Progress tracking | Show streaks, achievements | Build consistency |
| Feature discovery | Introduce new capabilities | Expand value |
| Social proof | "X users did this today" | Normalize usage |
Month 2+: Deepening & Expansion
| Tactic | Implementation | Goal |
|---|---|---|
| Advanced features | Unlock/introduce power features | Increase switching cost |
| Team expansion | Prompts to invite colleagues | Network effect |
| Integrations | Connect other tools | Increase stickiness |
| Use case expansion | Cross-sell, new workflows | Expand value |
Retention Levers
1. Notification Strategy
Good Notifications: Bad Notifications:
✓ Timely (when relevant) ✗ Spam (daily digest no one wants)
✓ Personal (your data/activity) ✗ Generic (new feature!)
✓ Actionable (clear next step) ✗ Dead-end (FYI only)
✓ Valuable (saves time/effort) ✗ Selfish (please come back)
Notification Hierarchy:
1. Activity from people you follow
2. Mentions/responses to your actions
3. Important status changes
4. Educational/onboarding
5. Product updates (sparingly)
2. Email Retention Sequences
Lifecycle Email Strategy:
Day 1: Welcome + quick start
Day 3: Activation push (if not activated)
Day 7: Feature highlight
Day 14: Case study / social proof
Day 21: Re-engagement (if inactive)
Day 30: Value recap + upgrade prompt
Day 45: Win-back (if churned)
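The sequence above is a simple day-offset schedule. A hedged sketch of how a sender might look it up (template names and the `emails_due` helper are hypothetical):

```python
from datetime import date

# Day offset -> template name, mirroring the lifecycle strategy above
LIFECYCLE = {
    1: "welcome", 3: "activation_push", 7: "feature_highlight",
    14: "case_study", 21: "reengagement", 30: "value_recap", 45: "win_back",
}

def emails_due(signup: date, today: date) -> list[str]:
    """Templates whose scheduled day lands on `today` for this signup."""
    days_since_signup = (today - signup).days
    return [name for offset, name in LIFECYCLE.items()
            if offset == days_since_signup]
```

In practice each send would also check conditions like "if not activated" or "if inactive" before firing.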
3. Re-engagement Campaigns
Churn Risk Signals → Intervention
Signal: No login in 7 days
→ Email: "Here's what you missed"
Signal: Decreased usage
→ In-app: "Need help with anything?"
Signal: Not using key feature
→ Email: "Have you tried [feature]?"
Signal: Support ticket unresolved
→ Alert: Priority follow-up
Signal: Downgrade intent
→ Offer: Personalized retention offer
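The signal-to-intervention mapping above is effectively a rule table. A minimal sketch, with all thresholds as illustrative assumptions:

```python
def pick_interventions(days_since_login: int, usage_trend: float,
                       uses_key_feature: bool, open_ticket_days: int) -> list[str]:
    """Map churn-risk signals to interventions (thresholds are assumptions)."""
    actions = []
    if days_since_login >= 7:
        actions.append("email: here's what you missed")
    if usage_trend < -0.25:  # usage down more than 25% week-over-week
        actions.append("in-app: need help with anything?")
    if not uses_key_feature:
        actions.append("email: have you tried the feature?")
    if open_ticket_days >= 3:
        actions.append("alert: priority follow-up")
    return actions
```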
Measuring Retention
Cohort Analysis:
Week After Signup
W1 W2 W3 W4 W5 W6 W7 W8
Jan Cohort 100% 67% 52% 45% 42% 40% 39% 38%
Feb Cohort 100% 72% 58% 51% 47% 44% 42% 41%
Mar Cohort 100% 75% 62% 55% 52% 49% 47% --
↑ Improving cohort retention = product improvements working
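A cohort table like the one above can be built from raw signup and activity events. A minimal sketch, assuming monthly cohorts and weekly buckets:

```python
from collections import defaultdict
from datetime import date

def cohort_retention(signups: dict[str, date],
                     activity: list[tuple[str, date]]) -> dict[str, list[float]]:
    """Fraction of each monthly signup cohort active in weeks 0..N after signup."""
    cohort_users: dict[str, set] = defaultdict(set)
    for user, signed in signups.items():
        cohort_users[signed.strftime("%Y-%m")].add(user)

    active: dict[tuple[str, int], set] = defaultdict(set)
    for user, seen in activity:
        if user in signups:
            week = (seen - signups[user]).days // 7
            active[(signups[user].strftime("%Y-%m"), week)].add(user)

    table = {}
    for cohort, users in cohort_users.items():
        last_week = max((w for c, w in active if c == cohort), default=0)
        table[cohort] = [len(active.get((cohort, w), set())) / len(users)
                         for w in range(last_week + 1)]
    return table
```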
Key Retention Metrics:
| Metric | Formula | What It Tells You |
|---|---|---|
| DAU/MAU | Daily active / Monthly active | Stickiness |
| L7/L30 | Days active in last 7 / last 30 days | Habit strength |
| Resurrection Rate | Returning churned users / Churned users | Win-back success |
| Net Revenue Retention | (Start ARR + Expansion - Churn) / Start ARR | Revenue retention |
| Logo Retention | Customers retained / Starting customers | Customer retention |
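The ratio metrics above are straightforward to compute. A minimal sketch of two of them, with illustrative function names:

```python
def net_revenue_retention(start_arr: float, expansion: float,
                          churned_arr: float) -> float:
    """NRR = (starting ARR + expansion - churned ARR) / starting ARR."""
    return (start_arr + expansion - churned_arr) / start_arr

def dau_mau_stickiness(dau: int, mau: int) -> float:
    """DAU/MAU ratio; ~0.2+ is often considered sticky for B2B SaaS."""
    return dau / mau

# Example: $100k starting ARR, $20k expansion, $8k churn -> 112% NRR
print(net_revenue_retention(100_000, 20_000, 8_000))
```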
Retention by Product Type
| Product Type | Primary Retention Lever | Secondary Lever |
|---|---|---|
| Collaboration (Slack, Figma) | Team network effects | Switching cost |
| Productivity (Notion, Linear) | Habit + data lock-in | Feature depth |
| Analytics (Amplitude, Mixpanel) | Historical data | Integrations |
| Communication (Intercom, Zendesk) | Workflow dependency | Customer data |
| Developer (GitHub, Vercel) | Ecosystem + reputation | Code/deploy history |
Anti-Patterns
- Ignoring early retention — Focusing on month 3 when users churn in week 1
- Notification spam — More notifications ≠ more engagement
- Feature ship vs. feature adoption — Building new vs. ensuring existing is used
- Vanity engagement metrics — Sessions without value delivery
- Reactive churn prevention — Waiting until users want to cancel
- One-size-fits-all retention — Same strategy for all user segments
- Dark patterns — Making it hard to leave vs. valuable to stay
- Ignoring resurrection — Churned users can come back
title: Viral & Referral Mechanics impact: HIGH tags: viral, referral, word-of-mouth, k-factor, sharing
Viral & Referral Mechanics
Impact: HIGH
Virality isn't luck — it's engineering. The best products have referral built into their core value proposition, not bolted on as an afterthought.
Types of Virality
| Type | Mechanism | Strength | Example |
|---|---|---|---|
| Inherent Viral | Product requires sharing to work | Strongest | Slack, Figma, Calendly |
| Collaborative Viral | More valuable with others | Strong | Notion, Miro |
| Word of Mouth | So good people talk about it | Organic | Linear, Superhuman |
| Incentivized Referral | Rewards for sharing | Moderate | Dropbox, Uber |
| Content/Output Viral | Outputs get shared | Variable | Canva, Loom |
| Status Viral | Users want to signal they use it | Niche | Superhuman, Apple |
The Viral Coefficient (K-Factor)
K = i × c
Where:
i = Number of invites/exposures per user
c = Conversion rate of those invites
K > 1: Exponential growth (each user brings >1 user)
K = 1: Stable (each user replaces themselves)
K < 1: Declining (need other acquisition channels)
Example Calculation:
Calendly:
- Each user sends ~20 scheduling links/month
- Each link seen by 1 unique person
- 5% of recipients sign up
- K = 20 × 0.05 = 1.0 (stable viral)
If the product improves:
- Better conversion page → 7% signup
- K = 20 × 0.07 = 1.4 (viral growth!)
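The K-factor formula translates directly to code. A minimal sketch reproducing the Calendly-style numbers above:

```python
def k_factor(invites_per_user: float, invite_conversion: float) -> float:
    """Viral coefficient: new users generated per existing user (K = i × c)."""
    return invites_per_user * invite_conversion

# 20 links/month at 5% signup -> K = 1.0 (stable)
# Same links at 7% signup    -> K = 1.4 (viral growth)
print(k_factor(20, 0.05), k_factor(20, 0.07))
```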
Viral Loop Timing
Viral Cycle Time = Time from signup to generating new signup
Faster cycles = faster compounding
Day 1: 1 user
Day 7: K^1 users (one 7-day cycle)
Day 14: K^2 users
Day 30: ~K^4 users
With K=1.5 and a 7-day cycle:
Day 1: 1 user
Day 30: ~5 users (1.5^4 ≈ 5.1)
Day 60: ~26 users (1.5^8 ≈ 25.6)
Day 90: ~130 users (1.5^12 ≈ 129.7)
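The cycle-time compounding is a simple geometric projection. A hedged sketch, assuming each user multiplies into K users every cycle (ignores saturation and churn):

```python
def viral_projection(seed_users: int, k: float,
                     cycle_days: int, horizon_days: int) -> int:
    """Projected users after horizon_days, multiplying by K each full cycle."""
    cycles = horizon_days // cycle_days
    return round(seed_users * k ** cycles)

# K=1.5, 7-day cycle: 1 user grows to ~5 in 30 days, ~130 in 90 days
for day in (30, 60, 90):
    print(day, viral_projection(1, 1.5, 7, day))
```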
Designing Inherent Virality
Questions to Ask:
1. Does using the product naturally expose it to non-users?
- Calendly: Recipients see your booking page
- Figma: Collaborators see the tool
- Loom: Viewers see the player
2. Does the product become more valuable with more users?
- Slack: More teammates = more valuable
- Notion: Team knowledge base
- Linear: Team issue tracking
3. Can users accomplish their goal only by involving others?
- Figma: Real-time collaboration
- Calendly: Scheduling requires recipient
- DocuSign: Signing requires counterparty
Referral Program Design
The Referral Stack:
┌─────────────────────────────────────────────────────────┐
│ REFERRAL PROGRAM DESIGN │
├─────────────────────────────────────────────────────────┤
│ │
│ INCENTIVE │ MECHANIC │ TIMING │
│ ─────────────────│───────────────────│────────────────│
│ • Credit/money │ • Unique code │ • After aha │
│ • Free months │ • Share link │ • At value │
│ • Feature unlock │ • In-product │ • Natural │
│ • Status/badge │ • Email invite │ moment │
│ • Charitable │ • Social share │ • Not during │
│ │ │ onboarding │
│ │
│ BOTH SIDES WIN: Referrer + Referred get value │
│ │
└─────────────────────────────────────────────────────────┘
Referral Program Examples:
| Company | Referrer Gets | Referred Gets | Why It Works |
|---|---|---|---|
| Dropbox | 500MB space | 500MB space | Both need storage |
| Uber | $10 credit | $10 credit | Both need rides |
| Robinhood | Free stock | Free stock | Both want money |
| Notion | $5 credit | -- | Simple, low friction |
| Superhuman | Priority access | Priority access | Exclusivity |
Viral Mechanics in Product
1. "Powered By" Badges
Placement matters:
✓ Visible but not intrusive
✓ Links to signup page
✓ Contextual (shows what product does)
Examples:
- Typeform: "Create your own form"
- Calendly: "Powered by Calendly"
- Webflow: "Made in Webflow"
2. Shareable Outputs
Make outputs naturally shareable:
Canva:
- Design → Download → Share (with watermark option)
- Or share Canva link
Loom:
- Record → Share link
- Viewer sees "Record your own Loom"
Figma:
- Design → Share → View-only link
- Viewer can sign up to edit
3. Invite Flows
Good Invite Flow:
┌─────────────────────────────────────────────┐
│ 1. User takes action that benefits from │
│ collaboration │
│ │
│ 2. Prompt: "Share with your team" │
│ [Enter emails] │
│ │
│ 3. Personalized invite sent │
│ │
│ 4. Recipient sees context │
│ (why they're invited, what they'll do) │
│ │
│ 5. Frictionless signup │
│ │
│ 6. Recipient lands in shared context │
└─────────────────────────────────────────────┘
Viral Conversion Optimization
The Invite-to-Signup Funnel:
Invite Sent 100% ████████████████████
│
Invite Opened 60% ████████████
│
Clicked CTA 30% ██████
│
Started Signup 20% ████
│
Completed Signup 15% ███
│
Activated 8% ██
Optimize Each Step:
| Step | Optimization |
|---|---|
| Open rate | Better subject line, sender name is referrer |
| Click rate | Clear value prop, social proof |
| Signup start | SSO options, minimal fields |
| Signup complete | Progressive profiling, skip optional |
| Activation | Personalized based on invite context |
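The funnel above can be instrumented as a list of stage counts. A minimal sketch that reports each stage's share of the top of funnel and of the previous stage (function name is illustrative):

```python
def funnel_conversion(stages: list[tuple[str, int]]) -> list[tuple[str, float, float]]:
    """For each stage: (name, fraction of top-of-funnel, fraction of prior stage)."""
    top = stages[0][1]
    prev = top
    rows = []
    for name, count in stages:
        rows.append((name, count / top, count / prev))
        prev = count
    return rows

# The invite-to-signup funnel above, scaled to 1,000 invites
invite_funnel = [("sent", 1000), ("opened", 600), ("clicked", 300),
                 ("started", 200), ("completed", 150), ("activated", 80)]
for row in funnel_conversion(invite_funnel):
    print(row)
```

The step-over-step column is what the optimization table targets: a low ratio at one stage tells you where to spend effort.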
Measuring Referral Success
| Metric | Formula | Benchmark |
|---|---|---|
| K-Factor | Invites × Conversion | > 0.5 is good, > 1 is viral |
| Viral Cycle Time | Avg days signup → referral | Shorter is better |
| Referral Rate | Users who refer / Total users | 10-30% |
| Invite Conversion | Signups / Invites sent | 10-30% |
| Referral LTV | LTV of referred users | Often 16-25% higher |
Word-of-Mouth Engineering
What Makes People Talk:
STEPPS Framework (Jonah Berger):
S - Social Currency : Makes them look good
T - Triggers : Top of mind
E - Emotion : Strong feelings
P - Public : Visible usage
P - Practical Value : Useful to share
S - Stories : Narrative to tell
Product Changes That Drive WoM:
| Change | Why It Works |
|---|---|
| 10x better experience | "You have to try this" |
| Unexpected delight | Story worth telling |
| Status/exclusivity | Social currency |
| Solves common pain | Practical value |
| Visible results | Public proof |
Anti-Patterns
- Forced virality — "Invite 5 friends to continue" kills trust
- Incentive mismatch — Referrer gets value, referred gets spam
- Asking too early — Referral prompt before activation
- Spammy mechanics — Auto-posting, address book import abuse
- Ignoring recipient experience — Optimizing send, ignoring receive
- One-time referral — Program fades after initial burst
- No tracking — Can't measure what you don't track
- Gaming vulnerability — Easy to exploit for rewards