When a feature is ready to ship, /product-launch-manager builds the launch plan so you can coordinate every team from one doc.
A Claude Skill for Claude Code by Nick Jensen — run /product-launch-manager in Claude.
Plan launches, coordinate cross-functional teams, and run post-launch retros.
- Generates tiered launch checklists covering eng, marketing, support, sales, and legal
- Drafts beta program plans with cohort selection, feedback loops, and graduation criteria
- Creates internal and external launch comms with stakeholder-specific messaging
- Builds post-launch monitoring dashboards with alert thresholds
- Produces retrospective templates with timeline reconstruction and action items
What it does
Run /product-launch-manager with your feature name and ship date to get a week-by-week checklist for 5 teams, with owners and dependencies mapped.
Give /product-launch-manager your target beta size and goals to produce a cohort plan, opt-in flow, feedback survey, and 3-stage graduation criteria.
Feed /product-launch-manager your feature summary to generate an internal announcement, customer email, help center article, and social post — each with the right tone for its audience.
Run /product-launch-manager after ship to get a monitoring plan with 8 metrics, alert thresholds, and an escalation matrix for the first 72 hours.
Paste your launch timeline into /product-launch-manager to produce a structured retro with what worked, what didn't, root causes, and 5 concrete action items.
How it works
1. Describe the feature, target launch date, and which teams are involved.
2. The skill determines the launch tier (soft launch, beta, GA) and generates the matching artifacts — checklists, comms, monitoring plans, or retro templates.
3. It cross-references dependencies between teams and flags gaps like missing support docs or untrained sales reps.
4. Review the plan, adjust dates or scope, and re-run to finalize before kicking off execution.
Example
Launching 'Schedule Send' for our email product. GA target: March 15. Teams: engineering, product marketing, customer support, sales enablement. We want a 2-week beta first.
Feb 15-28: Beta cohort (200 users from Enterprise tier)
Mar 1-7: Beta feedback synthesis + bug fixes
Mar 8-12: Support training, help center articles, sales one-pager
Mar 13-14: Final QA, monitoring dashboards live
Mar 15: GA release + customer email + blog post
Cohort: 200 Enterprise users, selected by email volume > 50/week. Opt-in via in-app banner.
Feedback: in-app survey at day 3, day 10.
Graduation: < 2 P1 bugs, > 70% satisfaction score.
1. Feature flag to 100% (eng) — 9:00 AM UTC
2. Help center article published (support) — 9:00 AM UTC
3. Customer email sent (marketing) — 10:00 AM UTC
4. Sales enablement Slack post (sales) — 10:30 AM UTC
5. Social media posts live (marketing) — 11:00 AM UTC
6. Monitor error rates, queue latency for 4 hours (eng)
Product Launch Manager
Strategic product launch expertise for technology companies — from launch planning and tiering to execution, monitoring, and retrospectives.
Philosophy
Great launches aren't about the big bang. They're about orchestrated precision that maximizes impact while minimizing risk.
The best product launches:
- Tier based on impact — Not every feature deserves a keynote
- Coordinate ruthlessly — Cross-functional alignment is non-negotiable
- Validate before announcing — Beta programs de-risk everything
- Plan for failure — Rollback plans aren't pessimism, they're professionalism
- Measure what matters — Success criteria before, not after, launch
How This Skill Works
When invoked, apply the guidelines in `rules/`, organized by prefix:
- `planning-*` — Launch strategy, tiering, timelines, success criteria
- `coordination-*` — Cross-functional alignment, RACI, stakeholder management
- `beta-*` — Early access programs, beta cohorts, feedback loops
- `communication-*` — Internal enablement, external messaging, launch comms
- `execution-*` — Launch day operations, war rooms, monitoring
- `postlaunch-*` — Retrospectives, metrics analysis, iteration
Core Frameworks
Launch Tier Model
| Tier | Criteria | Timeline | Channels | Example |
|---|---|---|---|---|
| Tier 1 | New product, major platform shift | 8-12 weeks | Full press, event, keynote | New product line |
| Tier 2 | Major feature, significant expansion | 4-8 weeks | Blog, email, social, PR | Enterprise feature |
| Tier 3 | Feature enhancement, integration | 2-4 weeks | Blog, changelog, email | New integration |
| Tier 4 | Bug fix, minor improvement | 1-2 weeks | Changelog, in-app | UI improvement |
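The tier table above is effectively a lookup from release scope to timeline and channels. A minimal sketch of that mapping in Python — the `LaunchTier` type, the key names, and the `plan_window` helper are all illustrative, not part of the skill:

```python
# Hypothetical encoding of the Launch Tier Model table as a lookup.
from dataclasses import dataclass

@dataclass(frozen=True)
class LaunchTier:
    tier: int
    timeline_weeks: tuple   # (min, max) weeks of lead time
    channels: tuple

TIERS = {
    "new_product":       LaunchTier(1, (8, 12), ("press", "event", "keynote")),
    "major_feature":     LaunchTier(2, (4, 8),  ("blog", "email", "social", "pr")),
    "enhancement":       LaunchTier(3, (2, 4),  ("blog", "changelog", "email")),
    "minor_improvement": LaunchTier(4, (1, 2),  ("changelog", "in-app")),
}

def plan_window(kind: str) -> str:
    """Planning lead time implied by the tier table (worst case)."""
    t = TIERS[kind]
    return f"Tier {t.tier}: start {t.timeline_weeks[1]} weeks before launch"

print(plan_window("major_feature"))  # Tier 2: start 8 weeks before launch
```

The point of making the mapping explicit is that the tier decision drives everything downstream: the timeline template, the channel plan, and the enablement cadence all key off it.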
Launch Readiness Model
┌─────────────────────────────────────────────────────────┐
│ LAUNCH READINESS │
├─────────────────────────────────────────────────────────┤
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Product │ │Marketing│ │ Sales │ │ Support │ │
│ │ Ready │ │ Ready │ │ Ready │ │ Ready │ │
│ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ │
│ │ │ │ │ │
│ └────────────┴─────┬──────┴────────────┘ │
│ │ │
│ ┌─────▼─────┐ │
│ │ GO/NO-GO │ │
│ │ DECISION │ │
│ └───────────┘ │
└─────────────────────────────────────────────────────────┘
Launch RACI Matrix
| Activity | Product | Marketing | Sales | Support | Eng | Exec |
|---|---|---|---|---|---|---|
| Feature requirements | A | C | C | C | R | I |
| Launch tier decision | R | C | C | I | C | A |
| Launch date | R | C | I | I | C | A |
| External messaging | C | R | C | I | I | A |
| Internal enablement | C | R | R | R | I | I |
| Technical readiness | C | I | I | I | R | A |
| Support documentation | C | I | I | R | C | I |
| Go/no-go decision | R | R | R | R | R | A |
R = Responsible, A = Accountable, C = Consulted, I = Informed
Launch Timeline Template
Week -8: Launch brief, tier decision, stakeholder alignment
Week -6: Beta program begins, messaging draft
Week -4: Sales/support enablement starts, PR outreach
Week -2: Go/no-go checkpoint, final content review
Week -1: War room setup, monitoring configured, runbook complete
Day 0: LAUNCH
Week +1: Post-launch monitoring, quick fixes
Week +2: Launch retrospective, metrics review
Success Criteria Framework
| Category | Metric Type | Example |
|---|---|---|
| Adoption | Usage metrics | DAU, feature adoption rate, activation |
| Quality | Stability metrics | Error rate, P0 incidents, rollback rate |
| Business | Revenue metrics | Conversion, upsell, pipeline influence |
| Sentiment | Feedback metrics | NPS, support tickets, social sentiment |
Communication Templates
Launch Brief Structure
1. Executive Summary
2. Launch Tier & Rationale
3. Target Audience
4. Key Messages (3 max)
5. Success Criteria
6. Timeline & Milestones
7. RACI & Stakeholders
8. Risks & Mitigations
9. Budget (if applicable)
10. Approval Sign-offs
Go/No-Go Checklist
□ Product: Feature complete and tested
□ Product: Performance benchmarks met
□ Engineering: Rollback plan documented
□ Engineering: Monitoring/alerts configured
□ Marketing: All content published/scheduled
□ Marketing: PR embargo lifted
□ Sales: Enablement complete, battlecards ready
□ Support: Documentation live, team trained
□ Legal: Compliance review complete
□ Exec: Final approval received
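The checklist above is an all-or-nothing gate: one unchecked box is a no-go. A minimal sketch of that rule, with illustrative item names mirroring the boxes (the flags and values are made up for the example):

```python
# Hypothetical go/no-go gate: every item must pass; any failure blocks.
checklist = {
    "product_feature_complete":  True,
    "product_benchmarks_met":    True,
    "eng_rollback_documented":   True,
    "eng_monitoring_configured": True,
    "marketing_content_ready":   True,
    "sales_enablement_complete": False,  # e.g. battlecards still in review
    "support_docs_live":         True,
    "legal_review_complete":     True,
    "exec_approval":             True,
}

blockers = [item for item, ok in checklist.items() if not ok]
decision = "GO" if not blockers else "NO-GO"
print(decision, blockers)  # NO-GO ['sales_enablement_complete']
```

Treating the decision as a pure function of the checklist keeps the go/no-go meeting about facts, not persuasion.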
Anti-Patterns
- Launch without tiers — Treating every release like a Tier 1 burns out teams and audiences
- Big bang only — Skipping beta means learning in production
- Engineering complete = launch ready — Code done ≠ market ready
- No rollback plan — Hope is not a strategy
- Post-hoc success criteria — Defining success after launch is rationalization
- Siloed launches — Marketing finds out when customers do
- Launch and leave — No post-launch monitoring or iteration
- Vanity launch metrics — Press mentions ≠ product success
Reference documents
Section Organization
1. Launch Planning (planning)
   Impact: CRITICAL
   Description: Foundational launch decisions — tiering, timelines, success criteria, and strategic alignment. The blueprint for everything that follows.
2. Cross-Functional Coordination (coordination)
   Impact: CRITICAL
   Description: Stakeholder alignment, RACI matrices, and orchestrating multiple teams toward a unified launch. Where most launches fail.
3. Beta & Early Access (beta)
   Impact: HIGH
   Description: Pre-launch validation programs, beta cohort management, and feedback integration. De-risk before you launch.
4. Launch Communications (communication)
   Impact: HIGH
   Description: Internal enablement, external messaging, and multi-channel communication plans. What you say and when you say it.
5. Launch Execution (execution)
   Impact: CRITICAL
   Description: Launch day operations, war rooms, runbooks, and real-time decision making. The moment of truth.
6. Post-Launch Operations (postlaunch)
   Impact: HIGH
   Description: Monitoring, retrospectives, metrics analysis, and iterative improvement. The launch isn't over when it ships.
title: Beta and Early Access Programs
impact: HIGH
tags: beta, early-access, validation, feedback
Beta and Early Access Programs
Impact: HIGH
Beta programs de-risk launches by validating with real users before you announce to the world.
Beta Program Types
| Type | Duration | Size | Purpose | Best For |
|---|---|---|---|---|
| Alpha | 2-4 weeks | 5-15 users | Technical validation | Complex features |
| Closed Beta | 4-8 weeks | 50-200 users | Product-market fit | Major features |
| Open Beta | 2-4 weeks | 500+ users | Scale testing | Platform changes |
| Early Access | Ongoing | Selected customers | Premium positioning | Enterprise |
Beta Cohort Selection
Ideal Beta Participants:
| Criteria | Weight | Rationale |
|---|---|---|
| Power users | High | Will stress-test thoroughly |
| Target segment | High | Validates PMF |
| Responsive to surveys | Medium | Will provide feedback |
| Diverse use cases | Medium | Uncovers edge cases |
| Vocal in community | Low | Organic promotion |
| Enterprise customer | Varies | Validates enterprise readiness |
Good Example: Structured Beta Program
## Beta Program: Workflow Automation
### Program Structure
| Phase | Duration | Cohort Size | Goals |
|-------|----------|-------------|-------|
| Alpha | 2 weeks | 10 users | Core flow validation |
| Closed Beta | 4 weeks | 100 users | Use case coverage |
| Open Beta | 2 weeks | 500 users | Scale + edge cases |
### Cohort Selection Criteria
**Alpha (10 users):**
- Internal power users (5)
- Design partners (3)
- Customer advisory board (2)
**Closed Beta (100 users):**
- Power users from target segment (40)
- Customers who requested feature (30)
- Mix of company sizes (20)
- International users (10)
### Feedback Collection
| Method | Timing | Goal |
|--------|--------|------|
| Kickoff survey | Day 1 | Baseline expectations |
| In-app feedback widget | Continuous | Friction points |
| Weekly survey | End of week | Satisfaction trend |
| User interviews | Week 2, 4 | Deep qualitative |
| Exit survey | Beta end | Overall assessment |
### Success Criteria for GA
- 70% of beta users would be "disappointed" without feature
- < 5% error rate in workflows
- < 3 critical bugs reported
- NPS > 40 from beta cohort
- 3+ validated use cases documented
Bad Example: Unstructured Beta
## Beta Program: New Editor (What Not To Do)
**The Approach:**
- "Let's just ship it to 10% of users"
- No cohort selection criteria
- No feedback mechanism planned
- No success criteria defined
- No timeline or phases
**What Happened:**
- Random users got feature (including churned accounts)
- No feedback received for 2 weeks
- Critical bug discovered by angry customer tweet
- No data to support GA decision
- Launched anyway, 3x support tickets
**Lesson:** Beta without structure isn't beta, it's chaos.
Beta Communication Templates
Invitation Email:
Subject: You're invited: Early access to [Feature]
Hi [Name],
You've been selected for early access to [Feature] -
our new [value prop in one line].
**Why you:**
- You're one of our most engaged [segment] users
- You've [specific behavior that qualified them]
**What you get:**
- Exclusive early access (4 weeks before GA)
- Direct line to product team
- Shape the final product with your feedback
**What we ask:**
- Try the feature within first week
- Complete 2 short surveys (5 min each)
- Report bugs through in-app widget
[Accept Invitation Button]
This beta runs [dates]. Features may change based on feedback.
Weekly Check-in:
Subject: Week [X] Check-in: [Feature] Beta
Hi [Name],
Quick update on the beta:
**This week's improvements:**
- [Improvement based on feedback]
- [Bug fix]
**Your feedback in action:**
- "[Quote from feedback]" → We [action taken]
**This week's ask:**
- Try [specific feature/flow]
- 2-min survey: [link]
Questions? Reply to this email or Slack me directly.
Feedback Collection Framework
| Channel | What to Capture | Frequency |
|---|---|---|
| In-app widget | Friction points, bugs | Continuous |
| NPS survey | Overall satisfaction | Weekly |
| User interviews | Deep understanding | Bi-weekly |
| Usage analytics | Behavioral patterns | Daily analysis |
| Support tickets | Blockers, confusion | Continuous |
Beta Metrics Dashboard
Beta Health Dashboard
═════════════════════
Participation
├── Invited: 100
├── Accepted: 85 (85%)
├── Active (used this week): 62 (73%)
└── Churned from beta: 8 (9%)
Feedback
├── Survey responses: 45 (53%)
├── Bug reports: 23
├── Feature requests: 12
└── Interview volunteers: 18
Quality
├── Critical bugs: 2 (both fixed)
├── Major bugs: 8 (5 fixed)
├── Error rate: 2.3%
└── Performance: 180ms p95
Sentiment
├── NPS: 52
├── "Would be disappointed": 68%
└── Top complaint: "Confusing first setup"
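The participation percentages in the dashboard above are simple ratios off the raw counts; a quick sketch of how they are derived (counts copied from the dashboard, the `pct` helper is illustrative):

```python
# Derive the dashboard's participation percentages from raw counts.
invited, accepted, active, churned = 100, 85, 62, 8

def pct(part: int, whole: int) -> int:
    """Whole-number percentage, rounded to nearest integer."""
    return round(100 * part / whole)

# Accepted is measured against invited; active and churned against accepted.
print(f"Accepted: {accepted} ({pct(accepted, invited)}%)")   # 85%
print(f"Active:   {active} ({pct(active, accepted)}%)")      # 73%
print(f"Churned:  {churned} ({pct(churned, accepted)}%)")    # 9%
```

Note the denominators differ: acceptance is a share of invites, while activity and churn are shares of accepted participants. Mixing these up is a common way beta dashboards quietly overstate engagement.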
Beta Exit Criteria
| Criteria | Threshold | Current | Status |
|---|---|---|---|
| Critical bugs | 0 open | 0 | PASS |
| Major bugs | < 3 open | 2 | PASS |
| Error rate | < 1% | 0.8% | PASS |
| NPS | > 40 | 52 | PASS |
| PMF indicator | > 60% disappointed | 68% | PASS |
| Documentation | 100% coverage | 100% | PASS |
GA Decision: APPROVED
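The exit-criteria table reduces to thresholds checked against current values, with GA approved only if every row passes. A sketch of that check, with thresholds and values copied from the table (the comparison operators are the obvious reading of each row):

```python
# Hypothetical evaluation of the Beta Exit Criteria table.
# Each entry: (name, pass-check, current value).
criteria = [
    ("critical_bugs",          lambda v: v == 0,    0),
    ("major_bugs",             lambda v: v < 3,     2),
    ("error_rate_pct",         lambda v: v < 1.0,   0.8),
    ("nps",                    lambda v: v > 40,    52),
    ("pmf_disappointed_pct",   lambda v: v > 60,    68),
    ("doc_coverage_pct",       lambda v: v == 100,  100),
]

results = {name: check(value) for name, check, value in criteria}
ga_approved = all(results.values())
print("GA Decision:", "APPROVED" if ga_approved else "HOLD")  # APPROVED
```

Because every criterion must pass, a single regression (say, a new critical bug) flips the decision to HOLD without any re-litigation of the thresholds.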
Early Access vs. Beta
| Aspect | Beta | Early Access |
|---|---|---|
| Goal | Validation | Premium positioning |
| Duration | Time-boxed | Ongoing |
| Expectation | Bugs expected | Near-production quality |
| Communication | Direct, frequent | Professional, polished |
| Incentive | Shape product | Get it first |
| Selection | Use case fit | Strategic value |
Anti-Patterns
- Beta as soft launch — No feedback mechanism, just staged rollout
- Too small cohort — 5 users can't find edge cases
- Too large cohort — 1000 users with no feedback structure
- No exit criteria — Beta that never ends
- Ignoring feedback — Collecting data without acting on it
- Random selection — Anyone who raises hand vs. strategic cohort
- No timeline — "We'll see how it goes"
- Beta fatigue — Same users in every beta, burned out
title: Internal Enablement for Launches
impact: HIGH
tags: communication, enablement, sales, support, training
Internal Enablement for Launches
Impact: HIGH
Your internal teams are your first launch audience. If Sales can't explain it and Support can't troubleshoot it, you've already failed.
Enablement Timeline
| Milestone | Sales | Support | CS | Marketing |
|---|---|---|---|---|
| T-4 weeks | Feature preview | Feature preview | Feature preview | Deep dive |
| T-3 weeks | Value prop training | Documentation review | Use case review | Content creation |
| T-2 weeks | Objection handling | Troubleshooting training | Talk track practice | Final content |
| T-1 week | Battlecard delivery | FAQ finalized | Customer prep | Launch prep |
| Launch | Ready to sell | Ready to support | Ready to expand | Go live |
Sales Enablement Package
Required Materials:
| Asset | Purpose | Owner | Timing |
|---|---|---|---|
| One-pager | Quick reference | PMM | T-2 weeks |
| Battlecard | Competitive positioning | PMM | T-2 weeks |
| Talk track | Discovery questions | PMM + Sales | T-2 weeks |
| Demo script | Standardized demo | PMM | T-1 week |
| Objection handling | Common pushback | PMM + Sales | T-1 week |
| Pricing guide | If applicable | Product | T-2 weeks |
| Customer stories | Social proof | PMM | T-1 week |
Good Example: Sales One-Pager
# Enterprise SSO - Sales One-Pager
## What It Is
Single Sign-On integration supporting SAML 2.0, OIDC, and
SCIM provisioning for enterprise identity providers.
## Who It's For
- Enterprise companies (500+ employees)
- Security-conscious organizations
- Companies with existing IdP (Okta, Azure AD, etc.)
## Key Value Props
1. **Security compliance** - Meet SOC 2, HIPAA requirements
2. **IT efficiency** - Automated user provisioning/deprovisioning
3. **User experience** - One-click login, no password fatigue
## Discovery Questions
- "How do you currently manage user access across tools?"
- "What's your offboarding process when employees leave?"
- "Are you facing any compliance requirements around access control?"
## Objection Handling
| Objection | Response |
|-----------|----------|
| "Too expensive" | SSO reduces password reset tickets by 40% |
| "We use [competitor]" | We support same IdPs + faster setup |
| "Not a priority" | Security breaches cost avg $4.2M |
## Pricing
- Included in Enterprise plan ($X/user/month)
- Available as add-on for Business plan (+$Y/user/month)
## Demo Environment
- sandbox-sso.company.com
- Credentials: [demo account] / [in 1Password]
Bad Example: No Enablement
## What Happened: Widget 2.0 Launch
**Sales Team Experience:**
- Found out about launch from customer LinkedIn post
- No training provided
- Had to read blog post to understand feature
- Demo environment not set up
- First customer call: "I don't know, let me get back to you"
**Support Team Experience:**
- Received first ticket before documentation existed
- No troubleshooting guide
- Had to reverse-engineer feature from code
- 2-hour average response time (normally 30 min)
**Result:**
- Lost 3 enterprise deals in first week
- Support CSAT dropped 20 points
- Team morale damaged
- Customer trust eroded
Support Enablement Package
Required Materials:
| Asset | Purpose | Owner | Timing |
|---|---|---|---|
| Feature documentation | Reference | Technical Writer | T-2 weeks |
| FAQ document | Quick answers | Product + Support | T-1 week |
| Troubleshooting guide | Issue resolution | Engineering + Support | T-1 week |
| Known issues list | Transparency | Engineering | Launch day |
| Escalation matrix | When to escalate | Support Lead | T-1 week |
| Canned responses | Efficiency | Support Lead | T-1 week |
Good Example: Support Troubleshooting Guide
# SSO Troubleshooting Guide
## Common Issues & Resolutions
### Issue: SAML assertion error
**Symptoms:** User sees "Invalid SAML response" on login
**Likely Causes:**
1. Clock skew between IdP and our servers
2. Incorrect assertion consumer URL
3. Certificate mismatch
**Resolution Steps:**
1. Check IdP configuration matches our setup guide
2. Verify assertion URL is `https://app.company.com/sso/callback`
3. Compare certificate thumbprint in Admin > SSO settings
4. If persists, collect SAML trace (instructions below)
**Escalation:** If unresolved after 3 attempts, escalate to
Tier 2 with SAML trace attached.
### Issue: SCIM sync failing
**Symptoms:** Users not provisioning/deprovisioning
**Likely Causes:**
1. SCIM token expired
2. Rate limiting
3. User attribute mapping mismatch
**Resolution Steps:**
1. Verify SCIM token in Admin > Integrations
2. Check our status page for rate limit issues
3. Review attribute mapping in IdP
4. Run manual sync from Admin panel
**Escalation:** If sync queue > 1000 users, escalate to
Engineering with org ID.
Customer Success Enablement
| Asset | Purpose | Owner | Timing |
|---|---|---|---|
| Expansion talk track | Upsell opportunity | CS + PMM | T-2 weeks |
| Customer announcement | Inform existing customers | CS | Launch day |
| Implementation guide | Adoption support | CS | T-1 week |
| Health score impact | How feature affects metrics | Product | T-2 weeks |
Enablement Delivery Methods
| Method | Best For | Timing | Duration |
|---|---|---|---|
| Live training | Major features, Q&A | T-2 weeks | 30-60 min |
| Recorded video | Reference, async teams | T-2 weeks | 10-15 min |
| Documentation | Detailed reference | T-2 weeks | Self-paced |
| Slack announcement | Awareness, quick updates | Launch day | 5 min read |
| Office hours | Questions, edge cases | T-1 week | 30 min |
Enablement Certification
For Tier 1/2 launches, consider requiring certification:
Sales Certification Checklist
─────────────────────────────
□ Completed training video (15 min)
□ Passed quiz (80% minimum)
□ Delivered practice demo to manager
□ Reviewed battlecard
□ Shadowed one customer call
Certification valid until: [Date]
Enablement Feedback Loop
Week 1 Post-Launch:
├── "What questions are customers asking?"
├── "What objections are you hearing?"
├── "What's missing from materials?"
└── "What's confusing in documentation?"
Action: Update materials based on feedback
Anti-Patterns
- Launch day enablement — Training the day of launch
- Email-only enablement — No one reads long emails
- No certification — Assuming everyone will self-enable
- One-and-done — No iteration based on feedback
- PMM creates alone — Sales and Support not involved in creation
- Different versions — Sales and Support telling different stories
- No demo environment — "Just describe it to customers"
- Technical-only docs — No business context for Sales
title: External Launch Communications
impact: HIGH
tags: communication, messaging, pr, marketing
External Launch Communications
Impact: HIGH
Your launch message reaches customers once. Make it count with clear, compelling communication across all channels.
Message Hierarchy
┌─────────────────────┐
│ POSITIONING │
│ (Why we exist) │
└──────────┬──────────┘
│
┌──────────▼──────────┐
│ KEY MESSAGE │
│ (What's new) │
└──────────┬──────────┘
│
┌────────────────┼────────────────┐
│ │ │
┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
│ Proof │ │ Proof │ │ Proof │
│ Point 1 │ │ Point 2 │ │ Point 3 │
└───────────┘ └───────────┘ └───────────┘
Channel Strategy by Tier
| Channel | Tier 1 | Tier 2 | Tier 3 | Tier 4 |
|---|---|---|---|---|
| Press release | Yes | Sometimes | No | No |
| Blog post | Long-form | Medium | Short | No |
| Email (all users) | Yes | Segment | Segment | No |
| Email (newsletter) | Featured | Mentioned | Changelog | No |
| Social media | Campaign | Posts | Post | No |
| In-app notification | Yes | Yes | Yes | Sometimes |
| Changelog | Yes | Yes | Yes | Yes |
| Video/demo | Yes | Sometimes | No | No |
| Webinar | Sometimes | No | No | No |
Messaging Framework
The 3-Part Launch Message:
| Component | Purpose | Example |
|---|---|---|
| What | Feature description | "Enterprise SSO with SAML and SCIM" |
| So What | Benefit | "Secure your team with one-click login" |
| Now What | CTA | "Enable SSO in Admin Settings today" |
Good Example: Launch Blog Post
# Introducing Enterprise SSO: Secure Your Team with One-Click Login
Your security team has been asking. Your IT team has been waiting.
Today, we're launching Enterprise SSO - making [Product] as secure
as it is powerful.
## What's New
Enterprise SSO brings:
- **SAML 2.0 & OIDC support** - Works with Okta, Azure AD, OneLogin,
and any standards-compliant identity provider
- **SCIM provisioning** - Automatically sync users from your IdP
- **Just-in-time provisioning** - New users created on first login
## Why It Matters
[Quote from Security Leader at beta customer]
**For IT teams:** No more manual user management. When someone leaves,
they're automatically deprovisioned. No forgotten accounts, no security gaps.
**For employees:** One click to log in. No more password to remember.
No more "forgot password" flows.
**For security teams:** Audit every login. Enforce MFA through your IdP.
Meet compliance requirements for SOC 2, HIPAA, and more.
## How to Get Started
1. Navigate to Admin > Security > SSO
2. Select your identity provider
3. Follow our step-by-step guide (average setup: 15 minutes)
[CTA Button: Enable SSO Now]
## Pricing
Enterprise SSO is included in our Enterprise plan at no additional cost.
Business plan customers can add SSO for $X/user/month.
## Learn More
- [Documentation: SSO Setup Guide]
- [Webinar: SSO Best Practices (register)]
- [Talk to Sales about Enterprise]
Bad Example: Weak Launch Communication
# New Feature: SSO
We added SSO to our product. You can now use SSO to log in.
SSO stands for Single Sign-On. It lets you log in with your
company credentials.
To use it, go to settings and turn it on.
Thanks,
The Team
**Why This Fails:**
- No value proposition
- No differentiation
- No specific capabilities mentioned
- No social proof
- Weak CTA
- Forgettable
Email Campaign Structure
Tier 1 Launch Email Sequence:
| Email | Timing | Purpose | Subject Line |
|---|---|---|---|
| Teaser | T-1 week | Build anticipation | "Something big is coming..." |
| Announcement | Launch day | Main message | "[New] Enterprise SSO is here" |
| How-to | T+2 days | Drive adoption | "Get started with SSO in 15 minutes" |
| Social proof | T+1 week | Reinforce value | "How [Customer] secured their team" |
Social Media Launch Plan
LinkedIn (B2B focus):
Post 1 (Launch day, 8am):
[Announcement + key visual]
"Today we're launching Enterprise SSO..."
Post 2 (Launch day, 2pm):
[Founder perspective]
"Why we built SSO: [personal story]..."
Post 3 (Day 2):
[Customer quote card]
"[Quote from beta customer]..."
Post 4 (Day 3):
[How-to carousel]
"Set up SSO in 4 steps..."
Twitter/X (Tech community):
Thread (Launch day):
1/ Big news: Enterprise SSO is live
Here's what's included:
- SAML 2.0 & OIDC
- SCIM provisioning
- All major IdPs supported
2/ The best part? Setup takes 15 minutes.
We worked with 50 beta customers to make configuration dead simple.
3/ [Screenshot of setup flow]
4/ Available now for Enterprise customers.
Business customers can add it for $X/user/mo.
Link: [launch blog]
Press Release Structure
FOR IMMEDIATE RELEASE
# [Company] Launches Enterprise SSO, Enabling Secure
# One-Click Access for Organizations
[City, Date] - [Company], the leading [category], today
announced the launch of Enterprise Single Sign-On (SSO),
enabling organizations to [key benefit].
## Key capabilities include:
- [Capability 1]
- [Capability 2]
- [Capability 3]
"[Quote from company executive about why this matters]"
- [Name], [Title], [Company]
## Customer validation
"[Quote from beta customer about value]"
- [Name], [Title], [Customer Company]
## Availability
[Pricing and availability details]
## About [Company]
[Boilerplate]
## Media Contact
[Contact info]
Communication Timing
| Channel | Timing | Rationale |
|---|---|---|
| Press (embargo) | T-2 weeks | Build coverage |
| Press (lift embargo) | Launch day 6am | Morning news cycle |
| Blog | Launch day 8am | Primary announcement |
| Email | Launch day 9am | After blog is live |
| Social | Launch day 10am | After email sends |
| In-app | Launch day 10am | Aligned with social |
Anti-Patterns
- Jargon-heavy — "Leveraging synergies" instead of clear benefits
- Feature-first — Leading with what, not why
- No CTA — Announcement without action
- One channel — Blog only, no distribution
- Bad timing — Friday afternoon or holiday launch
- No visuals — Text-only announcements
- Inconsistent message — Different story on each channel
- Premature PR — Press release for Tier 3 feature
title: Cross-Functional Launch Coordination
impact: CRITICAL
tags: coordination, stakeholders, raci, alignment
Cross-Functional Launch Coordination
Impact: CRITICAL
Launches fail at the seams between teams. Relentless coordination is the difference between success and chaos.
The Launch Coordination Triad
┌─────────────┐
│ PRODUCT │
│ (Captain) │
└──────┬──────┘
│
┌───────────┴───────────┐
│ │
┌────▼────┐ ┌────▼────┐
│MARKETING│◄───────────►│ENGINEER │
│ │ │ -ING │
└─────────┘ └─────────┘
Product is the captain, but Marketing and Engineering have veto power on their domains.
RACI Matrix by Launch Phase
Planning Phase (T-8 to T-4 weeks)
| Activity | PM | PMM | Eng | Sales | Support | Exec |
|---|---|---|---|---|---|---|
| Launch brief creation | R | C | C | C | C | A |
| Tier decision | R | C | C | I | I | A |
| Timeline alignment | R | C | C | I | I | A |
| Success criteria | R | C | C | C | C | A |
| Resource allocation | R | C | C | I | I | A |
Preparation Phase (T-4 to T-1 weeks)
| Activity | PM | PMM | Eng | Sales | Support | Exec |
|---|---|---|---|---|---|---|
| Feature development | C | I | R | I | I | I |
| Content creation | C | R | I | C | I | I |
| Sales enablement | C | R | I | R | I | I |
| Support training | C | I | I | I | R | I |
| Beta management | R | C | C | I | C | I |
Execution Phase (T-1 week to Launch)
| Activity | PM | PMM | Eng | Sales | Support | Exec |
|---|---|---|---|---|---|---|
| Go/no-go decision | R | R | R | R | R | A |
| War room leadership | R | C | C | I | I | I |
| Launch deployment | C | I | R | I | I | I |
| External comms | C | R | I | I | I | A |
| Incident response | R | I | R | I | R | I |
R = Responsible, A = Accountable, C = Consulted, I = Informed
Stakeholder Communication Cadence
| Tier | Daily Standup | Weekly Sync | Exec Update |
|---|---|---|---|
| Tier 1 | Yes (all teams) | Yes | Weekly |
| Tier 2 | Core team only | Yes | Bi-weekly |
| Tier 3 | Async updates | As needed | At milestones |
| Tier 4 | None | None | None |
Good Example: Effective Coordination
## Launch Coordination Plan: Enterprise SSO
### Core Team (Weekly Sync - Thursdays 2pm)
- PM: Sarah Chen (Captain)
- PMM: Mike Rodriguez
- Eng Lead: Priya Patel
- Sales Enablement: Tom Walsh
- Support Lead: Lisa Kim
### Extended Team (Informed via Slack #sso-launch)
- DevRel: James Wu
- Legal: Amanda Foster
- Security: David Park
- CS: Regional leads
### Communication Rhythm
| Cadence | Audience | Channel | Owner |
|---------|----------|---------|-------|
| Daily (T-2 weeks) | Core team | Standup | PM |
| Weekly | Extended team | Slack digest | PM |
| Bi-weekly | Exec sponsors | Email + 15min | PM |
| As needed | All-hands | Slack announcement | PM |
### Decision Rights
| Decision | Who Decides | Who Must Approve |
|----------|-------------|------------------|
| Launch date change | PM | Exec sponsor |
| Scope change | PM + Eng Lead | PM |
| Messaging change | PMM | PM |
| Go/no-go | Core team vote | Exec sponsor |
### Escalation Path
1. Core team member → PM (same day)
2. PM → Exec sponsor (within 24 hours)
3. Cross-functional conflict → Exec sponsor mediation
Bad Example: Coordination Failure
## What Went Wrong: Widget 2.0 Launch
### Red Flags Ignored:
- No single owner identified
- Marketing found out about features 1 week before launch
- Sales not trained until launch day
- Support documentation written by engineering
- No regular sync meetings scheduled
- Decisions made in hallway conversations
### Results:
- Support tickets 5x normal (untrained team)
- Sales couldn't answer customer questions
- Marketing blog post had wrong feature descriptions
- Launch delayed 2 weeks after "go" decision
- Exec blindsided by customer complaints
### Root Cause: No coordination framework
Dependency Tracking
Launch Dependencies Map
═══════════════════════
Feature Complete (Eng)
│
├──► Documentation Ready (Support)
│ │
│ └──► Support Training Complete
│
├──► Demo Environment Ready (Eng)
│ │
│ └──► Sales Enablement Complete
│
└──► API Finalized (Eng)
│
└──► Integration Partner Ready
All paths must complete before Go/No-Go
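The dependency map above is a small directed graph, and the go/no-go gate is a readiness check over it. A minimal sketch, with task names taken from the map and the `done` set invented for the example:

```python
# Hypothetical readiness check over the launch dependency map.
# Each task lists its direct prerequisites.
deps = {
    "docs_ready":        ["feature_complete"],
    "support_training":  ["docs_ready"],
    "demo_env_ready":    ["feature_complete"],
    "sales_enablement":  ["demo_env_ready"],
    "api_finalized":     ["feature_complete"],
    "partner_ready":     ["api_finalized"],
}

# Example state: upstream work is finished, the three gates are not yet.
done = {"feature_complete", "docs_ready", "demo_env_ready", "api_finalized"}

def blocked_by(task: str) -> list:
    """Direct prerequisites of `task` that are not yet done."""
    return [d for d in deps.get(task, []) if d not in done]

gates = ["support_training", "sales_enablement", "partner_ready"]
for gate in gates:
    b = blocked_by(gate)
    print(gate, "ready to start" if not b else f"blocked by {b}")
```

Making the graph explicit is what lets the skill flag seam failures early, e.g. support training that cannot start because documentation is still pending.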
Handoff Protocols
| From | To | Handoff Artifact | Timing |
|---|---|---|---|
| Product | Marketing | Launch brief, feature specs | T-6 weeks |
| Product | Sales | Value prop, use cases, competitive | T-4 weeks |
| Product | Support | FAQ, troubleshooting guide | T-3 weeks |
| Engineering | Product | Technical constraints, risks | T-4 weeks |
| Marketing | Product | Content calendar, messaging | T-4 weeks |
| Marketing | Sales | Battlecards, one-pagers | T-2 weeks |
## Conflict Resolution Framework
| Conflict Type | Resolution Path |
|---|---|
| Timeline disagreement | PM arbitrates, exec approves |
| Scope disagreement | PM decides with eng input |
| Messaging disagreement | PMM decides with PM input |
| Resource conflict | Exec sponsor decides |
| Quality vs. speed | Eng lead has veto on quality |
## Coordination Tooling
| Need | Tool | Owner |
|---|---|---|
| Launch tracker | Notion/Asana/Linear | PM |
| Communication | Slack channel | PM |
| Documentation | Confluence/Notion | PM |
| Status updates | Weekly email | PM |
| Decision log | Shared doc | PM |
## Anti-Patterns
- No single owner — Shared ownership = no ownership
- Siloed planning — Teams discover conflicts at launch
- Meeting overload — 5 hours of syncs per day
- Async only — Some decisions need real-time discussion
- No escalation path — Conflicts fester
- Late stakeholder inclusion — "I didn't know about this launch"
- Decision by committee — Endless debate, no action
- Verbal agreements — Undocumented decisions create confusion
---
title: Launch Day Execution
impact: CRITICAL
tags: execution, launch-day, war-room, operations
---
# Launch Day Execution
Impact: CRITICAL
Launch day is the culmination of weeks of preparation. Execute with precision through clear runbooks, war rooms, and real-time coordination.
## Launch Day Timeline Template
LAUNCH DAY SCHEDULE (Tier 1/2)
══════════════════════════════
06:00 War room opens, monitoring begins
06:30 Engineering: Final deployment verification
07:00 PR embargo lifts, press coverage begins
08:00 Blog post goes live
08:30 Marketing: Social media campaign starts
09:00 Email campaign sends
09:30 In-app notifications enabled
10:00 Sales team notified to begin outreach
10:30 First status check (all teams)
12:00 Midday status check
14:00 Afternoon status check
16:00 End-of-day status check
18:00 War room closes (or as needed)
T+1 Morning debrief, overnight monitoring review
## War Room Setup
Virtual War Room (Slack/Teams):
#launch-war-room
├── Purpose: Real-time coordination
├── Who: Core launch team + on-call
├── Rules:
│ ├── All issues posted here immediately
│ ├── Use threads for discussion
│ ├── Pin critical updates
│ └── No side conversations
│
├── Pinned Messages:
│ ├── Launch runbook link
│ ├── Escalation contacts
│ ├── Status page link
│ └── Rollback command
Physical War Room (if applicable):
| Station | Owner | Equipment |
|---|---|---|
| Command | PM | Screens for metrics, Slack |
| Engineering | Eng Lead | Deployment dashboards |
| Support | Support Lead | Ticket queue, live chat |
| Marketing | PMM | Social monitoring |
| Executive | Exec Sponsor | Escalation decisions |
## Launch Runbook Template
# [Feature] Launch Runbook
## Pre-Launch Checklist (T-1 hour)
□ All deployments complete
□ Feature flags ready to enable
□ Monitoring dashboards open
□ On-call engineers confirmed
□ Support team briefed
□ Rollback plan reviewed
## Launch Sequence
| Time | Action | Owner | Verify |
|------|--------|-------|--------|
| 08:00 | Enable feature flag | @eng-lead | [link] |
| 08:05 | Verify metrics baseline | @pm | [link] |
| 08:10 | Publish blog post | @pmm | [link] |
| 08:15 | Verify blog live | @pmm | [link] |
| 08:30 | Send email campaign | @pmm | [link] |
| 08:35 | Verify emails sending | @pmm | [link] |
| 09:00 | Enable in-app notification | @eng | [link] |
| 09:05 | Post to social channels | @pmm | [links] |
| 09:30 | First health check | @all | [checklist] |
## Health Check Protocol
Every 2 hours, verify:
□ Error rate < 0.1%
□ Latency p95 < 200ms
□ Support tickets < threshold
□ Social sentiment neutral/positive
□ Feature adoption tracking
## Escalation Matrix
| Issue Type | First Contact | Escalate To | SLA |
|------------|---------------|-------------|-----|
| P0 (system down) | @eng-oncall | @eng-lead | 5 min |
| P1 (degraded) | @eng-oncall | @eng-lead | 15 min |
| Support spike | @support-lead | @pm | 30 min |
| PR issue | @pmm | @comms-lead | 15 min |
| Exec escalation | @pm | @exec-sponsor | Immediate |
## Rollback Procedure
If rollback needed:
1. PM announces in #launch-war-room
2. Eng lead confirms with "ROLLBACK CONFIRMED"
3. Execute: `./scripts/rollback-feature.sh`
4. Verify rollback complete
5. PM notifies stakeholders
6. Marketing pauses campaigns
7. Support sends holding response
## Emergency Contacts
| Role | Name | Phone | Slack |
|------|------|-------|-------|
| PM | [Name] | [Phone] | @handle |
| Eng Lead | [Name] | [Phone] | @handle |
| Exec Sponsor | [Name] | [Phone] | @handle |
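The runbook's 2-hour health check can be made mechanical rather than eyeballed. A minimal sketch, assuming metrics arrive as a dict from your monitoring system — the metric names and the support-ticket budget are illustrative, not from a real API:

```python
# Compare live metrics against the runbook's health-check thresholds.
# Returns the breached checks so the war room can post them verbatim.

THRESHOLDS = {
    "error_rate": 0.001,      # error rate < 0.1% of requests
    "latency_p95_ms": 200,    # p95 latency < 200 ms
    "support_tickets": 20,    # per-check ticket budget (illustrative)
}

def health_check(metrics: dict) -> list[str]:
    """Return breached thresholds; an empty list means all green."""
    breaches = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            breaches.append(f"{name}: {value} exceeds {limit}")
    return breaches

print(health_check({"error_rate": 0.0002, "latency_p95_ms": 145}))  # → []
```

An empty result maps to a 🟢 status update; any breach is pasted into the escalation template as-is.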
## Good Example: Smooth Launch Execution
## Launch Day Log: Enterprise SSO
**08:00** - Feature flag enabled. All systems green.
**08:05** - Baseline metrics captured. Error rate 0.02%.
**08:10** - Blog published. [link verified]
**08:32** - Email campaign started. 50K recipients.
**09:00** - In-app notification live.
**09:15** - First support ticket: user confused by setup.
FAQ updated. Not escalated.
**09:30** - Health check: All green.
- Error rate: 0.03%
- First 10 SSO setups successful
- Social sentiment: Positive
**10:30** - 47 SSO setups completed (target: 30).
**12:00** - Midday check: All green.
- Error rate: 0.02%
- 89 SSO setups
- 3 support tickets (all resolved)
**14:00** - Press coverage starting to appear.
**16:00** - EOD check:
- 156 SSO setups (target: 100)
- 0 P0/P1 incidents
- 8 support tickets (all resolved)
- NPS from early users: 62
**17:00** - War room closed. Overnight monitoring enabled.
**Status: SUCCESSFUL LAUNCH**
## Bad Example: Chaotic Launch
## Launch Day Disaster: Widget 2.0
**08:00** - Feature supposed to launch. Deployment stuck.
**08:15** - Blog post goes live anyway (premature).
**08:20** - Customers seeing "feature not found" errors.
**08:30** - Twitter complaints start.
**08:45** - Deployment finally completes.
**09:00** - Email sends to customers who already saw errors.
**09:15** - Support queue at 50 tickets, team not briefed.
**09:30** - No health check performed. PM in meetings.
**10:00** - Exec asks for status. No one has update.
**11:00** - Error rate discovered at 5% (limit was 0.1%).
**11:30** - Rollback decision debated for 30 minutes.
**12:00** - Rollback executed. Customers confused.
**12:30** - Blog post still live describing feature.
**14:00** - Press article published about "botched launch."
**Root Causes:**
- No deployment verification step
- No coordination between eng and marketing
- No war room or communication channel
- No health check protocol
- No clear rollback decision authority
## Real-Time Decision Framework
| Signal | Threshold | Action |
|---|---|---|
| Error rate | > 1% | Investigate immediately |
| Error rate | > 5% | Rollback consideration |
| P0 incident | Any | Rollback unless fix < 15 min |
| Support volume | > 5x normal | Pause marketing, assess |
| Negative press | Major outlet | Exec notification |
| Security issue | Any | Immediate rollback |
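One way to keep the table honest under pressure is to encode it as a triage function; the thresholds and returned actions mirror the table above, and this is a sketch to support, not replace, launch-day judgment:

```python
# Map real-time launch signals to the framework's recommended action.
# Order matters: the most severe signals are checked first.

def launch_day_action(error_rate: float = 0.0, p0_incident: bool = False,
                      support_multiple: float = 1.0,
                      security_issue: bool = False,
                      major_negative_press: bool = False) -> str:
    if security_issue:
        return "immediate rollback"
    if p0_incident:
        return "rollback unless fix < 15 min"
    if error_rate > 0.05:
        return "rollback consideration"
    if error_rate > 0.01:
        return "investigate immediately"
    if support_multiple > 5:
        return "pause marketing, assess"
    if major_negative_press:
        return "exec notification"
    return "continue monitoring"

print(launch_day_action(error_rate=0.02))  # → investigate immediately
```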
## Communication Templates
War Room Status Update:
🟢 STATUS CHECK [10:30]
Metrics:
- Error rate: 0.03% (limit: 0.1%)
- Adoption: 47 setups (target: 30)
- Support: 3 tickets (all resolved)
No blockers. Next check: 12:00.
Escalation Message:
🟡 ESCALATION [11:15]
Issue: Error rate at 0.8%, trending up
Impact: ~50 users affected
Cause: Database connection pooling
Action: Eng investigating, ETA 15 min
Decision needed: None yet
Next update: 11:30 or sooner if change
Rollback Announcement:
🔴 ROLLBACK INITIATED [11:45]
Decision: Rollback due to error rate > 2%
Status: In progress
ETA: 15 minutes
Actions:
- @marketing: Pause all campaigns
- @support: Send holding response template
- @eng: Execute rollback
- @pm: Notify exec sponsor
Will confirm completion.
## Post-Launch Day Checklist
End of Day:
□ Final metrics captured
□ All incidents documented
□ Support queue cleared or handed off
□ Marketing campaigns assessed
□ Overnight monitoring confirmed
□ Key wins/issues summarized for exec
□ War room closed or transitioned
Day 2 Morning:
□ Overnight alerts reviewed
□ Metrics trend assessed
□ Support volume checked
□ Social sentiment reviewed
□ Quick wins identified
□ Follow-up actions assigned
## Anti-Patterns
- No runbook — Making it up as you go
- No war room — Communication scattered across channels
- Premature publishing — Marketing goes before engineering confirms
- No health checks — Assuming green until someone complains
- Unclear rollback authority — Debate during crisis
- PM unavailable — Captain leaves the ship
- No escalation path — Issues fester
- Launch and leave — War room closes at 5pm sharp
---
title: Rollback and Contingency Planning
impact: CRITICAL
tags: execution, rollback, contingency, risk
---
# Rollback and Contingency Planning
Impact: CRITICAL
Hope is not a strategy. Every launch needs a clear rollback plan and contingencies for when things go wrong.
## Rollback Decision Matrix
| Scenario | Severity | Rollback? | Decision Maker |
|---|---|---|---|
| Error rate > 5% | Critical | Yes - Immediate | Eng Lead |
| Security vulnerability | Critical | Yes - Immediate | Security Lead |
| Data corruption | Critical | Yes - Immediate | Eng Lead |
| P0 incident unresolved > 30 min | High | Yes | PM + Eng Lead |
| Error rate 1-5%, stable | Medium | Assess | PM |
| Support volume > 10x | Medium | Assess | PM + Support Lead |
| Negative press, feature works | Low | No | Continue monitoring |
| Single customer complaint | Low | No | Continue monitoring |
## Rollback Types
| Type | Speed | Scope | When to Use |
|---|---|---|---|
| Feature flag | Seconds | Single feature | Preferred method |
| Gradual rollback | Minutes | Percentage of users | Minimize disruption |
| Full rollback | Minutes-Hours | Entire deployment | Severe issues |
| Database rollback | Hours | Data + code | Data corruption |
## Rollback Readiness Checklist
## Pre-Launch Rollback Verification
□ Feature flag configured and tested
□ Rollback command documented
□ Rollback tested in staging
□ Database rollback tested (if applicable)
□ Rollback runbook reviewed by on-call
□ Monitoring alerts set for rollback triggers
□ Communication templates ready
□ Decision authority clear
## Rollback Command
Feature flag: `./scripts/toggle-feature.sh sso-launch off`
Verify: Check [dashboard link] for flag status
## Estimated Rollback Time
- Feature flag: < 1 minute
- Full effect: < 5 minutes
- Verification: 10 minutes
## Good Example: Clean Rollback
## Rollback Incident Report: Workflow Automation
**Timeline:**
- 09:15 - Feature launched, all green
- 09:45 - Error rate climbing (0.2% → 0.8%)
- 10:00 - Identified: Edge case in EU data center
- 10:05 - Decision: Attempt hotfix, 15 min timeout
- 10:20 - Hotfix not ready, error rate at 1.5%
- 10:22 - PM: "Initiating rollback"
- 10:23 - Eng Lead: "ROLLBACK CONFIRMED"
- 10:24 - Feature flag disabled
- 10:25 - Marketing campaigns paused
- 10:27 - Support team notified, canned response activated
- 10:30 - Error rate returning to baseline
- 10:35 - Rollback verified complete
- 10:40 - Customer communication sent
**What Went Well:**
- Clear decision criteria (1.5% > 1% threshold)
- Fast rollback execution (< 5 minutes)
- Good communication throughout
- No customer-facing outage
**Root Cause:**
- EU data center timezone handling edge case
- Not covered in beta (no EU beta users)
**Relaunch Plan:**
- Fix deployed to staging: 10/15
- EU-specific beta: 10/16-10/18
- Relaunch: 10/19
## Bad Example: Rollback Chaos
## Rollback Disaster: Report Builder
**What Happened:**
- 09:00 - Launch
- 10:00 - Error rate at 3%, rising
- 10:15 - Support: "Should we roll back?"
- 10:20 - PM in meeting, no response
- 10:30 - Eng: "I could roll back but need approval"
- 10:45 - Error rate at 7%
- 11:00 - Exec: "Why is Twitter angry?"
- 11:10 - PM: "Who can make the rollback decision?"
- 11:20 - Rollback attempted, failed (untested)
- 11:45 - Rollback finally successful
- 12:00 - No customer communication prepared
**Impact:**
- 2.5 hours of degraded experience
- 47 angry support tickets
- 3 enterprise customers escalated to exec
- 2 tweets from tech press
**Root Causes:**
- No clear rollback authority
- Rollback never tested
- No pre-written communication
- PM not available during launch
## Contingency Planning Matrix
| Risk | Likelihood | Impact | Mitigation | Contingency |
|---|---|---|---|---|
| High error rate | Medium | High | Thorough testing | Feature flag rollback |
| Support overwhelm | Medium | Medium | Pre-launch enablement | Temporary staff, queue priority |
| Negative press | Low | High | Embargo, press prep | Crisis comms plan |
| Competitor response | Medium | Low | Battlecards ready | Sales alert, response talking points |
| Infrastructure failure | Low | Critical | Load testing | Rollback + status page |
| Security vulnerability | Low | Critical | Security review | Immediate rollback, incident response |
## Contingency Communication Templates
Internal Rollback Announcement:
🔴 ROLLBACK COMPLETE
The [Feature] launch has been rolled back due to [brief reason].
What happened:
- [1-2 sentence description]
What we're doing:
- Engineering investigating root cause
- ETA for fix assessment: [time]
- Relaunch target: TBD pending fix
What you should do:
- Sales: Pause outreach about [Feature]
- Support: Use canned response #rollback-[feature]
- Marketing: Campaigns paused automatically
Next update: [time]
Customer Communication (Major Issue):
Subject: Update on [Feature] - Temporary Availability
Hi [Name],
Earlier today we launched [Feature]. Some users experienced
[brief issue description], so we've temporarily rolled back
the feature while we make it right.
What this means for you:
- [Feature] is temporarily unavailable
- Your existing data/settings are unaffected
- We expect to relaunch within [timeframe]
We'll notify you when [Feature] is available again.
We apologize for any inconvenience.
[Name]
[Title]
## Gradual Rollback Strategy
If issues detected but not critical:
Phase 1 (immediate):
├── Pause new user exposure
└── Current users unaffected
Phase 2 (if issues persist):
├── Reduce to 50% of users
└── Monitor for 30 minutes
Phase 3 (if no improvement):
├── Reduce to 10% of users
└── Monitor for 30 minutes
Phase 4 (if still problematic):
├── Full rollback
└── Root cause analysis
Benefits:
- Minimizes customer disruption
- Provides debugging opportunity
- Maintains some launch momentum
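The phased reduction above can be expressed as a rollout schedule. In this sketch, `set_rollout` stands in for whatever feature-flag client you use (hypothetical), and the 30-minute waits are the monitoring windows between phases:

```python
import time

# Gradual rollback: step exposure down phase by phase until the issue
# clears or exposure reaches 0% (full rollback + root cause analysis).
PHASES = [100, 50, 10, 0]  # 100 = hold steady, pause new user exposure

def gradual_rollback(set_rollout, issue_persists, wait_seconds=1800):
    """Return the rollout percentage at which the issue stopped persisting."""
    for percent in PHASES:
        set_rollout(percent)
        if percent == 0 or not issue_persists():
            return percent
        time.sleep(wait_seconds)  # monitoring window between phases
```

Wired to a real flag client, this preserves launch momentum when the issue clears at 50% or 10% instead of forcing a full rollback.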
## Relaunch Protocol
## Relaunch Checklist: [Feature]
### Before Scheduling Relaunch
□ Root cause identified and documented
□ Fix implemented and code-reviewed
□ Fix verified in staging environment
□ Additional test cases added for failure scenario
□ Rollback tested again
### Relaunch Communication
□ Internal teams notified of new date
□ Customer communication drafted (if needed)
□ Press/analyst update (if covered)
□ Sales re-enabled with update
### Relaunch Execution
□ Same runbook with additional monitoring
□ Lower thresholds for first 24 hours
□ Extended war room hours
□ Faster escalation paths
## Anti-Patterns
- No rollback plan — "We'll figure it out if we need to"
- Untested rollback — Script that fails when needed
- Unclear authority — Debate during crisis
- No feature flags — Rollback requires full deployment
- No templates — Writing communications during incident
- Optimism bias — "It won't happen to us"
- Rollback shame — Treating rollback as failure vs. professionalism
- No relaunch plan — Rollback becomes permanent
---
title: Launch Tiering Strategy
impact: CRITICAL
tags: planning, tiers, strategy, prioritization
---
# Launch Tiering Strategy
Impact: CRITICAL
Not every launch deserves a keynote. Tiering ensures you invest the right resources for the right impact.
## The Four Launch Tiers
| Tier | Investment | Timeline | Team Involvement | Typical Outputs |
|---|---|---|---|---|
| Tier 1: Flagship | Maximum | 8-12 weeks | All functions, exec sponsors | Event, press, full campaign |
| Tier 2: Major | High | 4-8 weeks | Product, Marketing, Sales, Support | Blog, email, PR, enablement |
| Tier 3: Standard | Medium | 2-4 weeks | Product, Marketing | Blog, changelog, targeted email |
| Tier 4: Maintenance | Minimal | 1-2 weeks | Product, Engineering | Changelog, in-app notification |
## Tier Decision Matrix
Use this matrix to determine the appropriate tier:
| Factor | Tier 1 | Tier 2 | Tier 3 | Tier 4 |
|---|---|---|---|---|
| Revenue Impact | >$1M ARR | $100K-$1M | $10K-$100K | <$10K |
| User Impact | All users | Major segment | Specific cohort | Power users |
| Competitive | Market defining | Market leading | Parity | Hygiene |
| Strategic | New market/product | Expansion | Enhancement | Fix/polish |
| Risk Level | High visibility | Moderate | Low | Minimal |
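To make the matrix mechanical, rate each factor on the 1–4 tier scale and average. The equal weighting and the rounding rule are my assumptions for illustration, not part of the framework:

```python
# Score each factor 1 (Tier 1 signal) .. 4 (Tier 4 signal); the launch tier
# is the rounded average. Equal weights are purely illustrative.

def decide_tier(revenue: int, user_impact: int, competitive: int,
                strategic: int, risk: int) -> int:
    ratings = (revenue, user_impact, competitive, strategic, risk)
    if not all(1 <= r <= 4 for r in ratings):
        raise ValueError("each factor must be rated 1-4")
    return round(sum(ratings) / len(ratings))

# Enterprise SSO example: $500K pipeline (2), major segment (2),
# parity (3), expansion (2), moderate risk (2)
print(decide_tier(2, 2, 3, 2, 2))  # → 2 (Tier 2: Major)
```

A score-based tier is a starting point for the debate, not a substitute for it — the escalation triggers below can still move a launch up or down.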
## Tier 1: Flagship Launch Checklist
□ Executive sponsor assigned
□ Cross-functional launch team formed
□ 12-week timeline established
□ Press/analyst strategy defined
□ Event or major moment identified
□ Full marketing campaign planned
□ Sales enablement program designed
□ Customer advisory board briefed
□ Success metrics with exec alignment
□ Post-launch monitoring plan
## Tier 2: Major Launch Checklist
□ Product marketing owner assigned
□ 6-week timeline established
□ Blog post and supporting content
□ Email campaign to relevant segments
□ Social media campaign
□ Sales one-pager and talking points
□ Support documentation updated
□ In-app announcement planned
□ Success metrics defined
## Good Example: Proper Tiering
## Launch Brief: Enterprise SSO
**Proposed Tier: 2 (Major)**
**Justification:**
- Revenue: Enables $500K pipeline in enterprise segment
- User Impact: Top requested feature from enterprise prospects
- Competitive: Achieves parity with market leaders
- Strategic: Key to enterprise expansion strategy
**Approved Resources:**
- 6-week timeline
- Dedicated PMM owner
- Blog + email + social campaign
- Sales enablement session
- Updated security whitepaper
- Support team training
## Bad Example: Over-Tiered Launch
## Launch Brief: Dashboard Dark Mode
**Proposed Tier: 1 (Flagship)** ← WRONG
**Justification:**
- "Users have been asking for this for years"
- "It looks really cool"
- "We should do a big splash"
**Why This Fails:**
- Dark mode is a Tier 3 or 4 feature
- Minimal revenue impact
- No competitive differentiation
- Wastes resources, burns out team
- Dilutes impact of actual Tier 1 launches
## Good Example: Under-Tiered Recognition
## Launch Review: API v2.0
**Initial Tier: 3 (Standard)**
**Revised Tier: 2 (Major)**
**Why Upgrade:**
- Discovered 40% of enterprise pipeline blocked on API limitations
- Competitive analysis shows this is differentiating capability
- Developer community engagement opportunity
- Platform ecosystem enablement
**Added Resources:**
- Developer-focused blog series
- API documentation revamp
- Developer community event
- Integration partner outreach
## Tiering Escalation Triggers
Upgrade tier when:
- Executive visibility increases
- Competitive landscape shifts
- Customer demand exceeds expectations
- Strategic priority changes
- Partner/press interest emerges
Downgrade tier when:
- Scope reduces significantly
- Timeline compresses
- Resources become constrained
- Market window passes
- Feature becomes table stakes
## Anti-Patterns
- Everything is Tier 1 — Launch fatigue destroys impact and burns teams
- No tiering system — Every launch reinvents the wheel
- Tier by politics — Exec pet projects get over-resourced
- Static tiers — Refusing to adjust as circumstances change
- Tier without criteria — Gut feel instead of structured decision
- Tiering too late — Deciding resources after work begins
---
title: Launch Success Criteria
impact: CRITICAL
tags: planning, metrics, success, measurement
---
# Launch Success Criteria
Impact: CRITICAL
Define success before launch, not after. Post-hoc success criteria are rationalization, not measurement.
## Success Criteria Framework
Every launch needs metrics across four dimensions:
| Dimension | What It Measures | When to Measure | Owner |
|---|---|---|---|
| Adoption | Are users using it? | Day 1, Week 1, Month 1 | Product |
| Quality | Is it working well? | Real-time, Daily | Engineering |
| Business | Is it driving results? | Month 1, Quarter 1 | Product/Sales |
| Sentiment | Do users like it? | Week 1, Month 1 | Product/Support |
## Adoption Metrics by Tier
| Tier | Day 1 | Week 1 | Month 1 |
|---|---|---|---|
| Tier 1 | 5% of users try | 15% activated | 30% retained |
| Tier 2 | 3% of segment tries | 10% activated | 25% retained |
| Tier 3 | 1% of users try | 5% activated | 20% retained |
| Tier 4 | N/A | Usage baseline | No regression |
## Quality Metrics (All Tiers)
Error Rate: < 0.1% of requests
Latency: < 200ms p95 (or no regression)
Availability: 99.9% uptime
Incidents: Zero P0/P1 in first 48 hours
Rollback: Zero rollbacks required
## Good Example: Comprehensive Success Criteria
## Success Criteria: Enterprise SSO Launch
### Adoption (Product owns)
| Metric | Target | Measurement |
|--------|--------|-------------|
| Enterprise accounts enabling SSO | 25% in 30 days | Product analytics |
| Time to SSO setup | < 30 minutes | Instrumentation |
| SSO login rate post-setup | > 90% | Auth logs |
### Quality (Engineering owns)
| Metric | Target | Measurement |
|--------|--------|-------------|
| SSO auth latency | < 500ms p95 | APM |
| Auth error rate | < 0.01% | Error tracking |
| Availability | 99.99% | Uptime monitoring |
### Business (Sales/Product owns)
| Metric | Target | Measurement |
|--------|--------|-------------|
| Pipeline influenced | $500K in 90 days | CRM attribution |
| Enterprise close rate | +5% improvement | Sales analytics |
| Competitive win rate | +10% vs SSO objection | Win/loss analysis |
### Sentiment (Product/Support owns)
| Metric | Target | Measurement |
|--------|--------|-------------|
| Setup NPS | > 50 | Post-setup survey |
| Support tickets | < 10% of enablements | Zendesk |
| Admin feedback | Qualitative positive | Customer calls |
## Bad Example: Vague Success Criteria
## Success Criteria: New Dashboard
- Users should like it
- It should be fast
- We want lots of adoption
- Good feedback from customers
**Why This Fails:**
- No specific numbers
- No timeline for measurement
- No owners assigned
- No measurement method defined
- "Lots" and "good" are not metrics
## The SMART Criteria Test
Every metric should pass SMART:
| Criteria | Question | Example |
|---|---|---|
| Specific | What exactly? | "SSO activation rate" not "adoption" |
| Measurable | How do we count? | "Product analytics event" |
| Achievable | Is it realistic? | Based on comparable launches |
| Relevant | Does it matter? | Tied to business outcome |
| Time-bound | By when? | "30 days post-launch" |
## Baseline Requirements
Before setting targets, establish baselines:
Current State Analysis:
Comparable feature launch (6 months ago):
- Day 1 adoption: 2%
- Week 1 activation: 8%
- Month 1 retention: 18%

Targets for this launch:
- Day 1 adoption: 3% (+50%)
- Week 1 activation: 10% (+25%)
- Month 1 retention: 25% (+38%)
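The baseline-to-target arithmetic above is easy to script so targets stay tied to comparables; a tiny helper (the function name is mine):

```python
# Apply a planned fractional uplift to a baseline metric to get the target.
def target(baseline_pct: float, uplift: float) -> float:
    return round(baseline_pct * (1 + uplift), 1)

# Matches the analysis above: 2% baseline with a +50% uplift → 3% target
print(target(2.0, 0.50))   # → 3.0
print(target(8.0, 0.25))   # → 10.0
```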
## Success Criteria Sign-off
| Stakeholder | Signs Off On | Before |
|---|---|---|
| Product Lead | All metrics & targets | Launch brief approval |
| Engineering Lead | Quality metrics & feasibility | T-4 weeks |
| Marketing Lead | Campaign success metrics | T-4 weeks |
| Sales Lead | Business metrics & attribution | T-4 weeks |
| Executive Sponsor | Overall success bar | Go/no-go |
## Post-Launch Success Review Template
## Launch Success Review: [Feature Name]
**Launch Date:** [Date]
**Review Date:** [Date]
### Adoption Results
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| [Metric] | [Target] | [Actual] | [Met/Missed] |
### Quality Results
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
### Business Results
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
### Sentiment Results
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
### Overall Assessment: [SUCCESS / PARTIAL / MISSED]
### Key Learnings:
1. [Learning]
2. [Learning]
### Follow-up Actions:
1. [Action] - Owner: [Name] - Due: [Date]
## Anti-Patterns
- Post-launch metrics — Deciding what success looks like after you know results
- Vanity metrics only — Page views without activation
- No baseline — Targets with no reference point
- Moving goalposts — Changing targets when you're missing them
- Single dimension — Measuring adoption but not quality
- No owner — Metrics without accountability
- Unmeasurable targets — "Improved customer satisfaction"
---
title: Launch Timeline Planning
impact: HIGH
tags: planning, timeline, milestones, scheduling
---
# Launch Timeline Planning
Impact: HIGH
Great launches work backwards from the date. A well-structured timeline prevents last-minute chaos and ensures every team is ready.
## Timeline Templates by Tier
### Tier 1: Flagship Launch (12 weeks)
Week -12: Kickoff
├── Launch brief drafted
├── Stakeholder alignment meeting
├── Tier confirmed by exec sponsor
└── Core team identified
Week -10: Strategy
├── Messaging framework complete
├── Competitive positioning defined
├── Press/analyst strategy finalized
└── Event/moment identified
Week -8: Beta Launch
├── Alpha complete, closed beta begins
├── Sales enablement content drafted
├── PR outreach starts
└── Success criteria locked
Week -6: Content Development
├── All marketing content in review
├── Support documentation drafted
├── Demo environment ready
└── Beta feedback incorporated
Week -4: Enablement
├── Sales training complete
├── Support training complete
├── CS playbook ready
├── Final content approved
└── Go/no-go checkpoint #1
Week -2: Final Prep
├── All systems tested
├── Monitoring configured
├── Runbook reviewed
├── Communication scheduled
├── War room setup
└── Go/no-go checkpoint #2
Week -1: Hypercare Prep
├── Final deployment to production
├── On-call schedule confirmed
├── Rollback tested
└── Final walkthrough
LAUNCH DAY
Week +1: Hypercare
├── Daily monitoring
├── Rapid response to issues
└── Quick wins identified
Week +2: Retrospective
├── Success criteria review
├── Launch retrospective
└── Learnings documented
### Tier 2: Major Launch (6 weeks)
Week -6: Kickoff + Strategy
├── Launch brief approved
├── Messaging complete
└── Team aligned
Week -4: Content + Beta
├── Beta program active
├── Content development
├── Enablement drafts
└── Go/no-go checkpoint
Week -2: Enablement + Prep
├── All training complete
├── Content approved
├── Systems tested
└── Final go/no-go
Week -1: Final Prep
├── Deployment ready
├── War room setup
└── Runbook reviewed
LAUNCH DAY
Week +1: Monitor + Retro
├── Monitoring
└── Retrospective
### Tier 3: Standard Launch (3 weeks)
Week -3: Planning
├── Launch brief
├── Content assignment
└── Success criteria
Week -2: Development
├── Content complete
├── Documentation ready
└── Go/no-go
Week -1: Prep
├── Final review
└── Scheduled deployment
LAUNCH DAY
Week +1: Monitor
└── Quick check
## Milestone Dependencies
DEPENDENCY MAP
══════════════
Feature Complete
│
├──► Beta Can Begin
│ │
│ └──► Beta Feedback
│ │
│ └──► Final Feature Adjustments
│
├──► Documentation Drafted
│ │
│ └──► Support Training
│
├──► Demo Environment
│ │
│ └──► Sales Enablement
│
└──► API Finalized
│
└──► Partner Integration
Messaging Approved
│
├──► Content Creation
│ │
│ └──► Content Review
│ │
│ └──► Content Published
│
└──► PR Outreach
│
└──► Press Coverage
## Good Example: Well-Planned Timeline
## Launch Timeline: Enterprise SSO
### Phase 1: Foundation (Weeks -8 to -6)
| Date | Milestone | Owner | Dependencies |
|------|-----------|-------|--------------|
| Aug 15 | Launch brief approved | Sarah | None |
| Aug 17 | Tier 2 confirmed | Sarah | Launch brief |
| Aug 19 | Core team kickoff | Sarah | Tier decision |
| Aug 22 | Messaging framework v1 | Mike | Launch brief |
| Aug 26 | Beta cohort selected | Sarah | Feature stable |
| Aug 29 | Success criteria locked | Sarah | Stakeholder input |
### Phase 2: Development (Weeks -5 to -3)
| Date | Milestone | Owner | Dependencies |
|------|-----------|-------|--------------|
| Sep 1 | Closed beta begins | Sarah | Beta cohort, feature |
| Sep 5 | Sales one-pager draft | Mike | Messaging |
| Sep 8 | Support docs draft | Lisa | Feature specs |
| Sep 12 | Demo environment ready | Priya | Feature stable |
| Sep 15 | Messaging approved | Mike | Exec review |
| Sep 19 | Go/no-go #1 | Sarah | Multiple |
### Phase 3: Enablement (Weeks -2 to -1)
| Date | Milestone | Owner | Dependencies |
|------|-----------|-------|--------------|
| Sep 22 | Sales training | Mike | Demo env, materials |
| Sep 24 | Support training | Lisa | Docs, demo env |
| Sep 26 | All content approved | Mike | Reviews complete |
| Sep 28 | CS playbook ready | CS Lead | Training complete |
| Sep 29 | Monitoring configured | Priya | Feature deployed |
| Sep 30 | Go/no-go #2 | Sarah | All prep complete |
### Phase 4: Launch (Week 0)
| Date | Milestone | Owner | Dependencies |
|------|-----------|-------|--------------|
| Oct 1 | War room setup | Sarah | Go/no-go passed |
| Oct 2 | Final deployment | Priya | All clear |
| Oct 3 | LAUNCH DAY | Sarah | Everything ready |
### Phase 5: Post-Launch (Weeks +1 to +2)
| Date | Milestone | Owner | Dependencies |
|------|-----------|-------|--------------|
| Oct 3-6 | Hypercare monitoring | Sarah/Priya | Launch |
| Oct 10 | Stabilization check | Sarah | Hypercare clear |
| Oct 17 | Launch retrospective | Sarah | Metrics available |
## Bad Example: Unrealistic Timeline
## Launch Timeline: API v2 (What Not To Do)
Week -2:
- Decide we're launching
- Start everything at once
Week -1:
- Feature still being built
- Marketing asks "what are we launching?"
- Sales finds out from Slack
- Support: "What documentation?"
Week 0:
- Deploy morning of launch
- Marketing scrambles to publish blog
- Launch "on time"
Week +1:
- Support overwhelmed
- Sales can't answer questions
- 3 critical bugs discovered
- "Why didn't we plan better?"
**What went wrong:**
- 2 weeks for a Tier 2 launch (needed 6)
- No parallel workstreams
- No checkpoints
- No dependency mapping
- Feature not complete before enablement
## Go/No-Go Checkpoints
Checkpoint 1 (T-4 weeks for Tier 2):
□ Feature development on track
□ Beta feedback positive
□ Messaging approved
□ No major blockers
Decision: Continue / Delay / Scope Reduce
Checkpoint 2 (T-1 week):
□ Feature complete and tested
□ All content approved and scheduled
□ All enablement complete
□ Monitoring configured
□ Rollback plan ready
□ On-call scheduled
Decision: Go / No-Go
## Timeline Risk Management
| Risk | Indicator | Mitigation |
|---|---|---|
| Feature delay | Sprint slipping | Scope reduction options ready |
| Content delay | Drafts not submitted | Parallel content creation |
| Enablement gap | Training not scheduled | Mandatory calendar holds |
| Review delays | Approvers unavailable | Clear deadlines + escalation |
| Integration issues | Partner delays | Buffer time built in |
## Buffer Time Guidelines
| Activity | Standard Duration | Buffer |
|---|---|---|
| Feature development | Per engineering | +20% |
| Content creation | 1-2 weeks | +3-5 days |
| Review cycles | 3-5 days each | +2 days |
| Training delivery | 1 week | +2 days |
| Deployment | Per engineering | +1 day |
## Timeline Communication
Weekly Status Update Template:
## Launch Status: [Feature] - Week of [Date]
**Overall Status:** 🟢 On Track / 🟡 At Risk / 🔴 Blocked
**This Week:**
- [Completed milestone]
- [Completed milestone]
**Next Week:**
- [Upcoming milestone]
- [Upcoming milestone]
**Risks/Blockers:**
- [Risk] - Mitigation: [Action]
**Decisions Needed:**
- [Decision] - From: [Person] - By: [Date]
## Anti-Patterns
- No timeline — "We'll launch when it's ready"
- Unrealistic compression — Tier 1 launch in 2 weeks
- Serial planning — Not starting parallel workstreams
- No checkpoints — All-or-nothing at launch
- No buffer — Every day scheduled
- Feature-first only — Marketing starts when code complete
- No dependencies — Enabling sales before demo ready
- Fixed date, variable scope — Date matters, quality doesn't
---
title: Post-Launch Monitoring
impact: HIGH
tags: postlaunch, monitoring, metrics, operations
---
# Post-Launch Monitoring
Impact: HIGH
The launch isn't over when it ships. Systematic monitoring catches issues early and validates success.
## Monitoring Phases
| Phase | Duration | Focus | Cadence |
|---|---|---|---|
| Hypercare | Days 1-3 | Stability, critical issues | Continuous |
| Stabilization | Days 4-14 | Adoption, edge cases | Daily |
| Steady State | Days 15-30 | Trends, optimization | Weekly |
| Long-term | Day 30+ | Business impact | Monthly |
Hypercare Dashboard
POST-LAUNCH HYPERCARE DASHBOARD
═══════════════════════════════
System Health
├── Error Rate: 0.03% [✓] (threshold: 0.1%)
├── Latency p95: 145ms [✓] (threshold: 200ms)
├── Availability: 100% [✓] (threshold: 99.9%)
└── Queue Depth: 12 [✓] (threshold: 1000)
Adoption (T+24 hours)
├── Feature Views: 2,847
├── Feature Usage: 1,203 (42% activation)
├── Unique Users: 891
└── Target: 500 [✓] EXCEEDED
Quality
├── P0 Incidents: 0 [✓]
├── P1 Incidents: 0 [✓]
├── Bug Reports: 7 [!] (investigating 2)
└── Rollbacks: 0 [✓]
Support
├── Tickets (feature): 23
├── Avg Response Time: 12 min [✓]
├── Escalations: 1
└── CSAT: 4.2/5 [✓]
Sentiment
├── Social Mentions: 47
├── Positive: 38 (81%)
├── Neutral: 7 (15%)
└── Negative: 2 (4%)
Alert Configuration
| Metric | Warning | Critical | Action |
|---|---|---|---|
| Error rate | > 0.5% | > 1% | Page on-call |
| Latency p95 | > 300ms | > 500ms | Page on-call |
| Feature errors | > 10/hour | > 50/hour | Page on-call |
| Support volume | > 3x normal | > 5x normal | Alert PM |
| Negative social | > 5 mentions | > 10 mentions | Alert PMM |
Good Example: Effective Monitoring
## Post-Launch Monitoring Report: Enterprise SSO
### Day 3 Summary
**Executive Summary:**
SSO launch performing above expectations. No critical issues.
Adoption 40% ahead of target. Recommend moving to stabilization phase.
**System Health:**
| Metric | Day 1 | Day 2 | Day 3 | Target |
|--------|-------|-------|-------|--------|
| Error rate | 0.05% | 0.03% | 0.02% | < 0.1% |
| Latency | 180ms | 165ms | 155ms | < 200ms |
| Uptime | 100% | 100% | 100% | 99.9% |
**Adoption Metrics:**
| Metric | Day 1 | Day 2 | Day 3 | Target |
|--------|-------|-------|-------|--------|
| SSO setups | 89 | 67 | 54 | 50/day |
| Active SSO users | 2,340 | 4,120 | 5,890 | 3,000 |
| Setup success rate | 94% | 96% | 97% | > 90% |
**Issues Identified:**
1. Minor: Setup wizard confusing for Azure AD (12 tickets)
- Action: UX improvement scheduled for next sprint
- Workaround: Updated documentation
2. Edge case: SCIM sync delay for > 500 user orgs
- Action: Engineering investigating
- Impact: 3 customers, all contacted
**Support Analysis:**
- Total tickets: 47 (expected ~60)
- Most common: "How do I set up?" (18) - Resolved with docs link
- Top issue: Azure AD confusion (12) - Action item above
**Recommendation:** Exit hypercare, enter stabilization phase
Bad Example: No Monitoring
## What Happened: Reporting 2.0 Launch
**Week 1:**
- PM: "Launch went great!"
- No dashboard, no tracking
**Week 2:**
- Support: "We're getting a lot of tickets about reports"
- PM: "How many?"
- Support: "I don't know, a lot"
**Week 3:**
- Exec: "Customer X says reports are broken"
- PM: "First I'm hearing of this"
- Discovered: 15% error rate for 2 weeks
- Discovered: 340 support tickets about feature
- Discovered: 3 enterprise customers considering churn
**Root Cause:**
- No error monitoring configured
- No feature-specific dashboards
- No support ticket categorization
- No regular review cadence
Monitoring Checklist by Role
Engineering:
Daily (Hypercare):
□ Review error rate trends
□ Check latency percentiles
□ Verify no P0/P1 incidents
□ Review bug reports
□ Check infrastructure capacity
Weekly (Stabilization):
□ Performance trend analysis
□ Edge case identification
□ Technical debt assessment
□ Scaling projections
Product:
Daily (Hypercare):
□ Review adoption metrics
□ Check support trends
□ Triage bug reports
□ Customer feedback review
Weekly (Stabilization):
□ Adoption funnel analysis
□ Success criteria assessment
□ Feature iteration planning
□ Stakeholder update
Marketing:
Daily (Hypercare):
□ Social sentiment monitoring
□ Press coverage tracking
□ Campaign performance
Weekly (Stabilization):
□ Content performance analysis
□ Lead attribution review
□ Campaign optimization
Success Criteria Tracking
## Success Criteria Dashboard: SSO Launch
### Adoption (Target: Week 4)
| Metric | Target | Week 1 | Week 2 | Week 3 | Week 4 |
|--------|--------|--------|--------|--------|--------|
| Setups | 200 | 210 ✓ | 285 ✓ | 340 ✓ | 390 ✓ |
| Active | 5,000 | 3,800 | 6,200 ✓ | 8,100 ✓ | 9,500 ✓ |
| Activation % | 30% | 25% | 32% ✓ | 35% ✓ | 38% ✓ |
### Quality (Target: Ongoing)
| Metric | Target | Week 1 | Week 2 | Week 3 | Week 4 |
|--------|--------|--------|--------|--------|--------|
| Error rate | < 0.1% | 0.03% ✓ | 0.02% ✓ | 0.02% ✓ | 0.01% ✓ |
| P0 incidents | 0 | 0 ✓ | 0 ✓ | 0 ✓ | 0 ✓ |
| Setup NPS | > 50 | 52 ✓ | 58 ✓ | 61 ✓ | 64 ✓ |
### Business (Target: Quarter)
| Metric | Target | Month 1 | Month 2 | Month 3 |
|--------|--------|---------|---------|---------|
| Pipeline | $500K | $180K | $380K | $620K ✓ |
| Won deals | 5 | 2 | 4 | 7 ✓ |
**Overall Status: ON TRACK FOR SUCCESS**
Escalation Triggers
| Signal | Trigger | Escalate To | Action |
|---|---|---|---|
| Error rate | > 1% sustained | Eng Lead | Incident response |
| Adoption | < 50% of target at midpoint | PM + PMM | Adoption campaign |
| Support volume | > 5x normal, sustained | Support Lead + PM | Triage + fix |
| Churn signal | Any customer citing feature | CS Lead + PM | Customer outreach |
| Negative press | Major outlet | Comms Lead | Response plan |
Transition Criteria
Hypercare → Stabilization:
□ Zero P0/P1 incidents for 48 hours
□ Error rate < threshold for 48 hours
□ Support volume trending down
□ No unresolved critical bugs
Stabilization → Steady State:
□ No incidents for 2 weeks
□ Support volume at normal levels
□ Adoption tracking to targets
□ All launch bugs resolved or scheduled
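The hypercare exit checklist above can also be expressed as a go/no-go gate, so the transition decision is an evaluation of explicit criteria rather than a feeling that things have calmed down. A sketch with assumed field names:

```python
# Sketch: hypercare -> stabilization transition criteria as a single gate.
# Field names are illustrative, not a real monitoring schema.
from dataclasses import dataclass

@dataclass
class LaunchHealth:
    hours_without_p0_p1: int          # consecutive hours with zero P0/P1 incidents
    hours_under_error_threshold: int  # consecutive hours below error-rate threshold
    support_volume_trending_down: bool
    unresolved_critical_bugs: int

def can_exit_hypercare(h: LaunchHealth) -> bool:
    """All four criteria from the checklist must hold simultaneously."""
    return (h.hours_without_p0_p1 >= 48
            and h.hours_under_error_threshold >= 48
            and h.support_volume_trending_down
            and h.unresolved_critical_bugs == 0)
```

One open critical bug or a fresh incident resets the gate to no-go, which mirrors how the checklist is meant to be read: all boxes, not most boxes.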
Anti-Patterns
- No monitoring — Assuming it works until someone complains
- Vanity dashboards — Tracking page views, not activation
- Alert fatigue — So many alerts, real issues ignored
- No thresholds — Looking at numbers without targets
- PM absence — Engineering monitors alone
- Snapshot only — Checking once, not trending
- No support integration — Tickets not linked to feature
- Hypercare cut short — Ending monitoring after 24 hours
---
title: Launch Retrospectives
impact: HIGH
tags: postlaunch, retrospective, learning, improvement
---
Launch Retrospectives
Impact: HIGH
Every launch is a learning opportunity. Structured retrospectives turn experience into institutional knowledge.
Retrospective Timeline
| Launch Tier | Retro Timing | Duration | Participants |
|---|---|---|---|
| Tier 1 | T+2 weeks | 90 min | Full cross-functional team |
| Tier 2 | T+1 week | 60 min | Core launch team |
| Tier 3 | T+1 week | 30 min | PM + key contributors |
| Tier 4 | Optional | 15 min | PM async doc |
Retrospective Framework
THE 4L RETROSPECTIVE
════════════════════
┌─────────────────┬─────────────────┐
│ LIKED │ LEARNED │
│ │ │
│ What went well │ New insights │
│ that we should │ and knowledge │
│ repeat │ gained │
│ │ │
├─────────────────┼─────────────────┤
│ LACKED │ LONGED FOR │
│ │ │
│ What was │ What we wish │
│ missing or │ we had done │
│ insufficient │ differently │
│ │ │
└─────────────────┴─────────────────┘
Retrospective Agenda Template
## Launch Retrospective: [Feature Name]
**Date:** [Date]
**Facilitator:** [Name]
**Attendees:** [List]
### Agenda (60-90 minutes)
1. **Context Setting** (5 min)
- Launch goals recap
- Success criteria recap
- Key metrics summary
2. **Success Criteria Review** (10 min)
- Did we hit our targets?
- What do the numbers tell us?
3. **Timeline Review** (10 min)
- Walk through major milestones
- Where did we get ahead/behind?
4. **4Ls Exercise** (25 min)
- 5 min silent brainstorming
- 20 min group discussion
5. **Action Items** (15 min)
- What will we do differently?
- Who owns each action?
6. **Kudos** (5 min)
- Recognize contributions
Good Example: Comprehensive Retrospective
## Launch Retrospective: Enterprise SSO
**Date:** October 28, 2024
**Facilitator:** Sarah Chen (PM)
**Attendees:** Mike (PMM), Priya (Eng), Tom (Sales), Lisa (Support)
### Success Criteria Results
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| SSO setups (30d) | 200 | 390 | EXCEEDED |
| Active SSO users | 5,000 | 9,500 | EXCEEDED |
| Error rate | < 0.1% | 0.02% | MET |
| Setup NPS | > 50 | 64 | EXCEEDED |
| Pipeline influenced | $500K | $620K | EXCEEDED |
**Overall: SUCCESSFUL LAUNCH**
### Liked (What went well)
1. **Cross-functional coordination**
- Weekly syncs kept everyone aligned
- Slack channel was invaluable for real-time updates
- "Best coordinated launch I've been part of" - Tom
2. **Beta program**
- Caught the Azure AD issue before GA
- Customer quotes ready for launch day
- Created internal champions
3. **Enablement materials**
- Sales one-pager was "actually useful" (Tom's words)
- Support troubleshooting guide prevented escalations
- 90% of support tickets resolved with existing docs
4. **Launch day execution**
- War room worked smoothly
- No deployment issues
- Marketing timing was perfect
### Learned (New insights)
1. **Beta cohort diversity matters**
- Almost missed Azure AD edge case
- Need to ensure geographic + IdP diversity
2. **Sales enablement timing**
- 2 weeks before launch was right
- Too early = forgotten, too late = unprepared
3. **In-app messaging impact**
- In-app notification drove 40% of setups
- Higher than email (25%) or blog (15%)
4. **Enterprise sales cycle**
- Feature announcement → close takes 45-60 days
- Should set 90-day pipeline targets, not 30-day
### Lacked (What was missing)
1. **International readiness**
- No GDPR-specific documentation at launch
- EU customers asked questions we couldn't answer
2. **Partner enablement**
- Okta partnership team found out from blog post
- Should have briefed integration partners
3. **Video content**
- Multiple requests for setup walkthrough video
- Added friction for visual learners
4. **Customer success playbook**
- CS team wasn't sure how to position SSO in renewals
- Created after launch, should have been ready
### Longed For (What we'd do differently)
1. **Earlier executive alignment**
- Tier decision debated for 2 weeks
- Should lock this at kickoff
2. **Longer beta for enterprise features**
- 4 weeks felt rushed for enterprise IT teams
- 6-8 weeks for enterprise-focused features
3. **Pre-written FAQ expansion**
- First week support was reactive
- Should anticipate more edge case questions
4. **Competitive response plan**
- Competitor launched similar feature 2 weeks later
   - We were caught flat-footed on positioning
### Action Items
| Action | Owner | Due Date | Priority |
|--------|-------|----------|----------|
| Add GDPR docs to launch checklist | Lisa | Nov 5 | High |
| Create partner briefing template | Mike | Nov 10 | High |
| Add video to enablement checklist | Mike | Nov 10 | Medium |
| CS playbook required for Tier 1-2 | Sarah | Nov 15 | High |
| Extend enterprise beta standard to 6 weeks | Sarah | Next launch | Medium |
| Add competitive response to launch brief | Mike | Next launch | Medium |
### Kudos
- **Priya:** Flawless deployment, zero incidents
- **Tom:** Closed first SSO deal in 3 weeks
- **Lisa:** Support NPS highest ever for a launch
- **Mike:** Blog post generated 3 press pickups
- **Everyone:** Best cross-functional collab in 2024
Bad Example: Superficial Retrospective
## Retro: Widget 2.0
**Date:** Friday 5pm (30 min scheduled, 15 min actual)
**Who showed up:** PM, 1 engineer
### What happened:
- We launched
- Some bugs
- Customers complained
### What we learned:
- Test more
- Communicate better
### Action items:
- None assigned
**Why This Failed:**
- Rushed timing (Friday 5pm)
- Missing key stakeholders
- No metrics reviewed
- Vague insights ("communicate better")
- No ownership of actions
- Will repeat same mistakes
Retrospective Facilitation Tips
Before:
□ Schedule 2+ weeks in advance (calendars fill up)
□ Send pre-work: metrics summary, timeline, 4Ls template
□ Book 30 min buffer after for overflow
□ Ensure all key roles can attend
□ Prepare metrics dashboard
During:
□ Start with metrics (ground the conversation in data)
□ Use silent brainstorming (prevents groupthink)
□ Timebox each section
□ Capture everything (assign note-taker)
□ End with action items + owners + dates
□ Always include kudos
After:
□ Send notes within 24 hours
□ Add action items to project tracker
□ Follow up on action items at next team meeting
□ Archive retro for future reference
Retrospective Metrics to Review
| Category | Metrics | Source |
|---|---|---|
| Timeline | Planned vs. actual dates | Project tracker |
| Quality | Error rate, incidents, bugs | Monitoring |
| Adoption | Usage, activation, retention | Analytics |
| Business | Pipeline, revenue, wins | CRM |
| Sentiment | NPS, CSAT, social | Surveys, social |
| Effort | Estimated vs. actual hours | Time tracking |
Institutional Knowledge Capture
## Launch Playbook Update: Learnings from SSO Launch
### Added to Standard Checklist:
□ GDPR documentation (for EU-facing features)
□ Partner briefing (for integration features)
□ Setup video (for complex features)
□ CS playbook (for Tier 1-2 launches)
### Updated Standards:
- Enterprise feature beta: 6-8 weeks (was 4 weeks)
- Partner notification: T-2 weeks (was T-1 week)
- Competitive response plan: Required for Tier 1-2
### Templates Updated:
- Launch brief: Added competitive section
- Enablement checklist: Added video requirement
- Support prep: Expanded FAQ template
Anti-Patterns
- Skipping the retro — "We're too busy with the next thing"
- Blame game — Focusing on who, not what
- No metrics — Opinions without data
- Too soon — Before results are measurable
- Too late — Details forgotten
- No actions — Insights without follow-through
- Same mistakes — Not consulting past retros
- Missing stakeholders — Only PM attends
- Rushed timing — Friday 5pm, everyone checked out