When you need to validate a product bet, /product-discovery runs structured research so you can ship with evidence.

A Claude Skill for Claude Code by Nick Jensen — run /product-discovery in Claude

Compatible with ChatGPT · Claude · Gemini · OpenClaw

Run user research, JTBD interviews, and opportunity sizing from a prompt.

  • Generates JTBD interview scripts with follow-up trees
  • Sizes opportunities using frequency-times-importance scoring
  • Synthesizes qualitative themes across multiple interview transcripts
  • Creates prototype test plans with task scenarios and success metrics
  • Produces usability test reports with severity-ranked findings

What it does

JTBD interview prep

Run /product-discovery with your product area to get a 12-question interview script, screener criteria, and a recruiting brief — ready to send in one pass.

Opportunity scoring

Paste survey data into /product-discovery to get an opportunity-solution tree with frequency × importance scores for the top 8 jobs.

Research synthesis

Feed 5-10 interview transcripts to /product-discovery and receive a themed insight report with verbatim evidence tags and confidence levels.

Usability test design

Give /product-discovery your prototype link and target persona to generate a 6-task test plan with success criteria and a post-test questionnaire.

How it works

1. Describe the product area, target segment, or hypothesis you want to test.

2. The skill selects the right research method — JTBD interviews, opportunity sizing, prototype testing, or synthesis — based on your input.

3. It generates structured artifacts: interview guides, scoring frameworks, test plans, or themed reports.

4. Review the output, adjust emphasis or add constraints, and iterate until the research plan is field-ready.

Example

Research brief
We think freelance designers struggle to price fixed-bid projects. Validate this with JTBD interviews targeting solo designers earning $50K-$150K.
Interview kit
Screener criteria
Solo freelance designer, 2+ years experience, completed 10+ fixed-bid projects, annual revenue $50K-$150K, no agency affiliation.
Interview script (12 questions)
1. Walk me through the last time you quoted a fixed-bid project. What happened?
2. What were you trying to achieve when you set that price?
3. What alternatives did you consider before choosing your approach?
4. Where did you feel most uncertain during the process?
...
Analysis framework
Code responses into job statements: 'When [situation], I want to [motivation], so I can [outcome].' Score each job by frequency (1-5) and satisfaction gap (1-5). Prioritize jobs with gap > 3 and frequency > 3.
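
A minimal Python sketch of that prioritization rule (the job statements and scores below are illustrative, not actual skill output):

```python
# Keep jobs where both frequency and satisfaction gap exceed 3 (1-5 scales).
jobs = [
    # (job statement, frequency 1-5, satisfaction gap 1-5)
    ("Price fixed-bid work confidently", 5, 4),
    ("Estimate scope from a vague brief", 4, 2),
    ("Justify rates to price-sensitive clients", 3, 5),
]

priorities = [job for job, freq, gap in jobs if freq > 3 and gap > 3]
print(priorities)  # ['Price fixed-bid work confidently']
```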

Metrics this improves

  • Conversion Rate: +10-20%
  • ICP Clarity: +2-3x

Works with

Product Discovery

Strategic user research and problem validation expertise — from interview techniques and JTBD to opportunity sizing and insight synthesis.

Philosophy

Great products start with great problems. Discovery is how you find problems worth solving for people who will pay.

The best product discovery:

  1. Talk to users, not stakeholders — Customers know their problems, not solutions
  2. Validate problems before solutions — Build the right thing, then build it right
  3. Quantify and qualify — Numbers tell you what, conversations tell you why
  4. Continuous over batched — Weekly habits beat quarterly projects

How This Skill Works

When invoked, apply the guidelines in rules/ organized by:

  • research-* — User interview techniques, survey design, research ops
  • discovery-* — Problem discovery, JTBD framework, validation
  • analysis-* — Synthesis, segmentation, competitive analysis
  • testing-* — Prototype testing, usability testing

Core Frameworks

Discovery Process

| Phase | Activities | Outputs |
|---|---|---|
| Explore | Interviews, observation, data mining | Problem space map |
| Validate | Problem interviews, surveys, experiments | Validated problems |
| Prioritize | Opportunity scoring, segmentation | Prioritized roadmap |
| Test | Prototype testing, usability studies | Solution validation |

Jobs-to-be-Done Framework

                    ┌─────────────────────┐
                    │    FUNCTIONAL JOB   │
                    │   (What they do)    │
                    └──────────┬──────────┘
                               │
              ┌────────────────┼────────────────┐
              │                │                │
              ▼                ▼                ▼
       ┌──────────┐     ┌──────────┐     ┌──────────┐
       │ EMOTIONAL│     │  SOCIAL  │     │ CONTEXT  │
       │   JOB    │     │   JOB    │     │ (When/   │
       │ (Feel)   │     │ (Appear) │     │  Where)  │
       └──────────┘     └──────────┘     └──────────┘

Opportunity Scoring (OST)

| Factor | Weight | Description |
|---|---|---|
| Importance | 40% | How important is this job to the customer? |
| Satisfaction | 30% | How satisfied are they with current solutions? |
| Frequency | 20% | How often do they encounter this problem? |
| Willingness to Pay | 10% | Will they pay to solve this? |

Opportunity Score = Importance + max(Importance - Satisfaction, 0)
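
A minimal sketch of this formula in Python, assuming the 1-10 importance and satisfaction ratings used in the Opportunity Scoring section later in this document (the weighted factors above are a separate prioritization lens):

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Importance plus the unmet gap; the gap never goes negative."""
    return importance + max(importance - satisfaction, 0)

# An important job (9/10) that current solutions serve poorly (4/10):
print(opportunity_score(9, 4))  # 14 -> strongly underserved
```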

Research Method Selection

┌─────────────────────────────────────────────────────────────┐
│                   GENERATIVE RESEARCH                       │
│              (Discover unknown unknowns)                    │
│  ┌───────────┐  ┌───────────┐  ┌───────────┐               │
│  │Contextual │  │ Discovery │  │ Diary     │               │
│  │ Inquiry   │  │ Interviews│  │ Studies   │               │
│  └───────────┘  └───────────┘  └───────────┘               │
├─────────────────────────────────────────────────────────────┤
│                   EVALUATIVE RESEARCH                       │
│              (Validate known hypotheses)                    │
│  ┌───────────┐  ┌───────────┐  ┌───────────┐               │
│  │ Usability │  │  A/B      │  │ Prototype │               │
│  │ Testing   │  │  Testing  │  │ Testing   │               │
│  └───────────┘  └───────────┘  └───────────┘               │
├─────────────────────────────────────────────────────────────┤
│                   QUANTITATIVE RESEARCH                     │
│              (Measure and prioritize)                       │
│  ┌───────────┐  ┌───────────┐  ┌───────────┐               │
│  │  Surveys  │  │ Analytics │  │ Card      │               │
│  │           │  │  Review   │  │ Sorting   │               │
│  └───────────┘  └───────────┘  └───────────┘               │
└─────────────────────────────────────────────────────────────┘

Customer Segmentation Matrix

| Dimension | Consumer (B2C) | Business (B2B) |
|---|---|---|
| Demographics | Age, income, location | Company size, industry, revenue |
| Behavior | Usage patterns, purchase history | Buying process, tech stack |
| Psychographics | Values, lifestyle, attitudes | Company culture, risk tolerance |
| Needs | Problems, goals, aspirations | Business outcomes, KPIs |

Continuous Discovery Cadence

Weekly:
├── 2-3 customer interviews
├── Review analytics/feedback
└── Update opportunity backlog

Monthly:
├── Synthesis session
├── Prioritization review
└── Stakeholder alignment

Quarterly:
├── Deep-dive research sprint
├── Competitive analysis refresh
└── Segment review

Interview Quick Reference

| Interview Type | When to Use | Key Questions |
|---|---|---|
| Discovery | Exploring problem space | "Tell me about the last time..." |
| Problem | Validating specific pain | "How painful is this 1-10? Why?" |
| Solution | Testing concepts | "Would this solve your problem?" |
| JTBD | Understanding motivation | "What were you trying to accomplish?" |
| Usability | Testing interfaces | "What do you expect to happen?" |

Anti-Patterns

  • Solution-first discovery — Falling in love with solutions before validating problems
  • Leading the witness — Asking questions that suggest desired answers
  • Confirmation bias — Only hearing what supports your hypothesis
  • Sample of one — Making decisions from a single interview
  • Proxy research — Asking salespeople instead of customers
  • Feature requests as research — Users ask for features, not problems
  • Analysis paralysis — Researching forever, never deciding
  • HiPPO-driven — Highest Paid Person's Opinion overriding data

Reference documents


title: Section Organization

1. Research Methods (research)

Impact: CRITICAL
Description: User interview techniques, survey design, and research operations. The foundation of all discovery work.

2. Problem Discovery (discovery)

Impact: CRITICAL
Description: Problem identification, validation, and Jobs-to-be-Done framework. Finding problems worth solving.

3. Analysis & Synthesis (analysis)

Impact: HIGH
Description: Segmentation, competitive analysis, opportunity sizing, and insight synthesis. Making sense of data.

4. Testing & Validation (testing)

Impact: HIGH
Description: Prototype testing, usability testing, and experiment design. Validating solutions before building.


title: Competitive Analysis
impact: HIGH
tags: analysis, competitive, market, positioning

Competitive Analysis

Impact: HIGH

Know your competition deeply — not to copy them, but to differentiate from them. The goal is finding gaps, not matching features.

Competition Landscape

Types of Competitors

| Type | Definition | Example |
|---|---|---|
| Direct | Same solution, same market | Figma vs Sketch |
| Indirect | Different solution, same job | Notion vs Confluence vs Google Docs |
| Alternative | Different approach entirely | Hiring an agency vs using a tool |
| Inertia | Doing nothing | Status quo, manual processes |

Map Your Competitive Landscape

                    HIGH AWARENESS
                          │
        Direct ───────────┼────────── Indirect
        Competitors       │           Competitors
        (Same solution)   │           (Different solution)
                          │
    ──────────────────────┼────────────────────────
                          │
        Alternative ──────┼────────── Inertia
        Approaches        │           (Non-consumption)
                          │
                    LOW AWARENESS

Competitive Research Framework

What to Research

| Category | Questions | Sources |
|---|---|---|
| Product | Features, pricing, UX, integrations | Website, free trials, demos |
| Positioning | Who they target, how they differentiate | Website, ads, content |
| Go-to-Market | Sales model, channels, pricing | Public info, job postings |
| Traction | Revenue, customers, growth | Press, funding, reviews |
| Strategy | Where they're heading | Job posts, leadership talks, roadmap |
| Weaknesses | Where they fall short | Reviews, churned customers, forums |

Competitor Profile Template

┌─────────────────────────────────────────────────────────┐
│ COMPETITOR: [Name]                                      │
│ WEBSITE: [URL]                                          │
│ FOUNDED: [Year]    FUNDING: [Amount]    EMPLOYEES: [#]  │
├─────────────────────────────────────────────────────────┤
│ POSITIONING                                             │
│ Tagline: [Their headline]                               │
│ Target: [Who they serve]                                │
│ Category: [How they describe themselves]                │
├─────────────────────────────────────────────────────────┤
│ PRODUCT                                                 │
│ Core features: [Key capabilities]                       │
│ Pricing: [Model and price points]                       │
│ Integrations: [Key platforms]                           │
├─────────────────────────────────────────────────────────┤
│ STRENGTHS                                               │
│ • [Strength 1]                                          │
│ • [Strength 2]                                          │
├─────────────────────────────────────────────────────────┤
│ WEAKNESSES                                              │
│ • [Weakness 1]                                          │
│ • [Weakness 2]                                          │
├─────────────────────────────────────────────────────────┤
│ SIGNALS                                                 │
│ Recent moves: [Product launches, funding, hires]        │
│ Review sentiment: [What customers say]                  │
└─────────────────────────────────────────────────────────┘

Feature Comparison Matrix

Build a Feature Matrix

| Feature | You | Comp A | Comp B | Comp C |
|---|---|---|---|---|
| Core feature 1 | Yes | Yes | Yes | Partial |
| Core feature 2 | Yes | Yes | No | Yes |
| Differentiator 1 | Yes | No | No | No |
| Their strength | No | Yes | Yes | Yes |
| Integration X | Yes | Yes | No | Yes |
| Self-serve signup | Yes | No | Yes | No |
| Enterprise features | Partial | Yes | No | Yes |

How to Use This:

  • Green cells: Your advantages
  • Red cells: Their advantages
  • White cells: Table stakes

Positioning Analysis

Competitive Positioning Map

                    ENTERPRISE
                        │
                        │
                        │         [Comp C]
    COMPLEX ────────────┼──────────────────── SIMPLE
                        │
              [Comp A]  │
                        │    [You]
                        │
                    STARTUP/SMB

Positioning Questions:

  • What axes matter most to customers?
  • Where are competitors clustered?
  • Where is whitespace?
  • Can you own a unique position?

Win/Loss Analysis

Track Why You Win and Lose

| Opportunity | Result | Primary Reason | Secondary | Competitor |
|---|---|---|---|---|
| Acme Corp | Won | Better UX | Price | Comp A |
| Beta Inc | Lost | Missing feature X | | Comp B |
| Gamma LLC | Lost | Existing relationship | | Comp A |
| Delta Co | Won | Integration with Y | Support | Comp C |

Patterns to Look For:

  • Which competitor do you win against most?
  • Which features matter in won deals?
  • Why do you lose? Is it fixable?
  • Are there deals you shouldn't pursue?

Churned Customer Research

Interview Customers Who Left for Competitors

Questions:
"What prompted you to start looking at alternatives?"
"What options did you consider?"
"What made you choose [competitor]?"
"What does [competitor] do better?"
"What do you miss about us?"
"What would it take for you to come back?"

Churn Analysis Framework

| Reason Category | % of Churn | Actionable? | Priority |
|---|---|---|---|
| Missing feature X | 35% | Yes | High |
| Price | 25% | Partially | Medium |
| Competitor relationship | 15% | No | Low |
| Poor support | 15% | Yes | High |
| Business closed | 10% | No | N/A |

Competitive Intelligence Sources

Public Sources

- Website and product (use free trial)
- G2, Capterra reviews
- LinkedIn (headcount, job posts)
- Crunchbase (funding, investors)
- Press releases, news
- Conference talks, podcasts
- Social media presence
- SEO/content analysis

Research Sources

- Win/loss interviews
- Customer interviews (what else they considered)
- Sales team feedback
- Industry analyst reports
- User testing of competitor products

Competitive Monitoring

Track ongoing:
- Job postings (what they're building)
- Pricing changes
- New feature announcements
- Funding rounds
- Leadership changes
- Customer reviews (new themes)

Battlecard Template

For Sales and Positioning

┌─────────────────────────────────────────────────────────┐
│ BATTLECARD: vs [Competitor]                             │
├─────────────────────────────────────────────────────────┤
│ QUICK TAKE                                              │
│ [One sentence on who they are and key difference]       │
├─────────────────────────────────────────────────────────┤
│ WHY CUSTOMERS CHOOSE THEM                               │
│ • [Reason 1]                                            │
│ • [Reason 2]                                            │
├─────────────────────────────────────────────────────────┤
│ WHY CUSTOMERS CHOOSE US                                 │
│ • [Reason 1]                                            │
│ • [Reason 2]                                            │
├─────────────────────────────────────────────────────────┤
│ COMMON OBJECTIONS AND RESPONSES                         │
│ "They have feature X"                                   │
│ → [Response]                                            │
│                                                         │
│ "They're more established"                              │
│ → [Response]                                            │
├─────────────────────────────────────────────────────────┤
│ KILLER QUESTIONS TO ASK PROSPECT                        │
│ • [Question that exposes their weakness]                │
│ • [Question that highlights our strength]               │
├─────────────────────────────────────────────────────────┤
│ LANDMINES (Their FUD about us)                          │
│ • [What they say about us]                              │
│ → [Truth and response]                                  │
└─────────────────────────────────────────────────────────┘

Competitive Strategy Options

How to Compete

| Strategy | When to Use | Example |
|---|---|---|
| Differentiate | Clear unique value | Figma (collaborative) vs Sketch |
| Niche down | Can't out-general them | "CRM for real estate" vs Salesforce |
| Disrupt on price | Can sustain lower cost | Notion vs Confluence |
| Out-execute | Same market, better product | Linear vs Jira |
| Category creation | Genuinely new approach | Superhuman (premium email) |

Competitive Analysis Mistakes

Common Errors

1. Feature comparison only
   → Features don't capture positioning, experience, trust

2. Ignoring indirect competition
   → Spreadsheets and "do nothing" are competitors too

3. Static analysis
   → Markets change; refresh quarterly

4. Copying winners
   → Following competition means never leading

5. Analysis paralysis
   → Know enough to act, not everything

Anti-Patterns

  • Feature chasing — Adding features because competitors have them
  • Competitor obsession — Spending more time on them than customers
  • Ignoring non-consumption — The biggest competitor is often "do nothing"
  • Static battlecards — Competitive info that's never updated
  • Second-hand intel — Only hearing about competitors through sales
  • Copying positioning — Being a worse version of them instead of different

title: Opportunity Sizing & Prioritization
impact: HIGH
tags: analysis, opportunity, prioritization, roadmap

Opportunity Sizing & Prioritization

Impact: HIGH

Not all opportunities are equal. Sizing and prioritization ensure you work on problems worth solving for markets worth serving.

Market Sizing Fundamentals

TAM, SAM, SOM

┌─────────────────────────────────────────────────────────┐
│                         TAM                             │
│         Total Addressable Market                        │
│    "Everyone who could possibly buy"                    │
│                                                         │
│    ┌───────────────────────────────────────────┐        │
│    │                    SAM                    │        │
│    │      Serviceable Addressable Market       │        │
│    │     "Who we can realistically reach"      │        │
│    │                                           │        │
│    │    ┌───────────────────────────────┐      │        │
│    │    │              SOM              │      │        │
│    │    │ Serviceable Obtainable Market │      │        │
│    │    │     "What we can capture"     │      │        │
│    │    └───────────────────────────────┘      │        │
│    └───────────────────────────────────────────┘        │
└─────────────────────────────────────────────────────────┘

Sizing Approaches

| Approach | Method | Best For |
|---|---|---|
| Top-down | Industry reports × market share | Large markets, investor pitches |
| Bottom-up | Customers × price × penetration | Validation, realistic planning |
| Comparable | Similar company revenue | Benchmarking |

Bottom-Up Sizing (Most Useful)

Formula:

Market Size = # of Target Customers × Average Contract Value × Purchase Frequency

Example:
- 50,000 Series A-C startups in US
- × 30% have the problem (15,000)
- × 50% would consider buying (7,500)
- × $5,000 ACV
- × 1 purchase per year
= $37.5M SOM

Refine with Segments:

| Segment | Count | Problem % | Buy % | ACV | SOM |
|---|---|---|---|---|---|
| Series A (20-50 emp) | 30,000 | 25% | 40% | $3,000 | $9M |
| Series B (50-200 emp) | 15,000 | 40% | 50% | $6,000 | $18M |
| Series C (200-500 emp) | 5,000 | 50% | 60% | $15,000 | $22.5M |
| Total | | | | | $49.5M |
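
The same arithmetic as a short Python sketch, with the segment data copied from the table above (one purchase per year is implied):

```python
# Bottom-up SOM: customers with the problem x willing to buy x ACV, per segment.
segments = [
    # (segment, count, has-problem %, would-buy %, ACV)
    ("Series A (20-50 emp)", 30_000, 0.25, 0.40, 3_000),
    ("Series B (50-200 emp)", 15_000, 0.40, 0.50, 6_000),
    ("Series C (200-500 emp)", 5_000, 0.50, 0.60, 15_000),
]

total = 0.0
for name, count, problem_pct, buy_pct, acv in segments:
    som = count * problem_pct * buy_pct * acv
    total += som
    print(f"{name}: ${som / 1e6:.1f}M")

print(f"Total SOM: ${total / 1e6:.1f}M")  # $49.5M
```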

Opportunity Scoring Framework

RICE Scoring

| Factor | Description | How to Estimate |
|---|---|---|
| Reach | How many customers affected? | Data + interviews |
| Impact | How much will it improve outcomes? | 0.25 (low) to 3 (massive) |
| Confidence | How sure are you? | % based on evidence |
| Effort | Person-months to build | Engineering estimate |

Score = (Reach × Impact × Confidence) / Effort

Example:

| Opportunity | Reach | Impact | Confidence | Effort | Score |
|---|---|---|---|---|---|
| Real-time collab | 5,000 | 2 | 80% | 4 | 2,000 |
| API access | 2,000 | 2 | 90% | 2 | 1,800 |
| Dark mode | 8,000 | 0.5 | 95% | 1 | 3,800 |
| Mobile app | 3,000 | 1 | 70% | 6 | 350 |
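
The scores above can be reproduced in a few lines of Python (confidence expressed as a fraction):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort in person-months."""
    return reach * impact * confidence / effort

print(rice(5_000, 2, 0.80, 4))    # 2000.0 -> real-time collab
print(rice(8_000, 0.5, 0.95, 1))  # 3800.0 -> dark mode wins on low effort
print(rice(3_000, 1, 0.70, 6))    # 350.0  -> mobile app ranks last
```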

Opportunity-Solution Tree

Map Problems to Potential Solutions

OBJECTIVE: Increase activation rate

├── OPPORTUNITY: Users don't understand value
│   ├── Solution: Interactive onboarding
│   ├── Solution: Value-focused empty states
│   └── Solution: Personalized setup flow
│
├── OPPORTUNITY: Setup is too complex
│   ├── Solution: One-click templates
│   ├── Solution: Import from competitors
│   └── Solution: AI-assisted setup
│
└── OPPORTUNITY: Users don't reach "aha moment"
    ├── Solution: Guided first task
    ├── Solution: Sample data to play with
    └── Solution: Success milestone celebrations

Prioritization Matrices

Impact vs Effort (2x2)

HIGH IMPACT
    │
    │   Quick Wins      │    Big Bets
    │   (Do now)        │    (Plan carefully)
    │                   │
────┼───────────────────┼───────────────────
    │                   │
    │   Fill-ins        │    Money Pits
    │   (Do if easy)    │    (Avoid)
    │                   │
    └───────────────────┴──────────── HIGH EFFORT

ICE Scoring (Simplified)

| Factor | Description | Scale |
|---|---|---|
| Impact | Effect on goal | 1-10 |
| Confidence | Evidence strength | 1-10 |
| Ease | Effort to implement | 1-10 |

Score = (Impact + Confidence + Ease) / 3
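
A one-function sketch, assuming integer 1-10 ratings:

```python
def ice(impact: int, confidence: int, ease: int) -> float:
    """ICE: the simple average of three 1-10 ratings."""
    return (impact + confidence + ease) / 3

print(round(ice(impact=8, confidence=6, ease=9), 2))  # 7.67
```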

Customer Value vs Business Value

Balance Both Perspectives

| Factor | Customer Value | Business Value | Combined |
|---|---|---|---|
| Pain severity (1-10) | 8 | - | - |
| Frequency (1-10) | 6 | - | - |
| WTP (1-10) | 7 | - | - |
| Revenue potential | - | 8 | - |
| Retention impact | - | 9 | - |
| Strategic fit | - | 7 | - |
| Total | 21 | 24 | 45 |

Validation Before Prioritization

Evidence Levels

| Level | Evidence Type | Confidence |
|---|---|---|
| 1 | Team opinion | 20% |
| 2 | Sales/support feedback | 40% |
| 3 | Qualitative interviews (5+) | 60% |
| 4 | Survey data (100+ responses) | 75% |
| 5 | Behavioral data (actual usage) | 85% |
| 6 | Experiment results (A/B test) | 95% |

Minimum Evidence by Effort

| Effort | Minimum Evidence Level | Rationale |
|---|---|---|
| < 1 week | Level 2 | Low risk, can learn quickly |
| 1-4 weeks | Level 3 | Need some validation |
| 1-3 months | Level 4 | Significant investment |
| > 3 months | Level 5-6 | High risk, need strong signals |
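
One way to encode this gate in Python; the week boundaries for the effort buckets are an assumption, so adjust them to your planning cadence:

```python
def min_evidence_level(effort_weeks: float) -> int:
    """Minimum evidence level required before committing the effort."""
    if effort_weeks < 1:
        return 2    # low risk, can learn quickly
    if effort_weeks <= 4:
        return 3    # need some validation
    if effort_weeks <= 13:  # roughly 1-3 months
        return 4    # significant investment
    return 5        # > 3 months: level 5-6, need strong signals

print(min_evidence_level(2))   # 3
print(min_evidence_level(26))  # 5
```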

Portfolio Approach

Balance Your Bets

70/20/10 Rule:

70% → Core improvements
      Low risk, known value
      Features customers request
      Optimizations, debt reduction

20% → Adjacent bets
      Medium risk, probable value
      New use cases for existing users
      Expansion features

10% → Transformational bets
      High risk, high potential
      New markets, new products
      Big swings

Saying No

Deprioritization Framework

When to say no:

- Opportunity score too low
- Doesn't serve target segment
- Conflicts with strategy
- Better alternatives exist
- Evidence is weak
- Timing is wrong

How to say no:

1. Acknowledge the request
2. Explain the reasoning
3. Share what you ARE doing
4. Leave door open if things change

Example:
"We've heard this from several customers. Right now, we're
focused on [X] because [evidence/strategy]. We're tracking
this and will revisit in Q3 when we have more data."

Opportunity Tracking

Opportunity Backlog Template

| ID | Opportunity | Segment | Evidence | Score | Status |
|---|---|---|---|---|---|
| O-1 | Speed up report generation | Power Users | 8 interviews, 40% mention | 24 | Validating |
| O-2 | Mobile access | Field teams | 3 requests | 12 | Backlog |
| O-3 | Integration with Slack | All | Analytics: 60% use Slack | 28 | Building |
| O-4 | AI-generated insights | Enterprise | 2 enterprise requests | 8 | Watching |

Anti-Patterns

  • HiPPO prioritization — Highest Paid Person's Opinion wins
  • Squeaky wheel — Loudest customer gets priority
  • Recency bias — Last request heard seems most important
  • Feature factory — Building without validating outcomes
  • Analysis paralysis — Scoring forever, building never
  • Single-metric tyranny — Over-indexing on one factor
  • Ignoring strategic fit — Building things that don't compound
  • Sunk cost fallacy — Continuing because you started

title: Customer Segmentation Research
impact: HIGH
tags: analysis, segmentation, personas, targeting

Customer Segmentation Research

Impact: HIGH

Not all customers are equal. Segmentation reveals which customers to prioritize and how to serve them differently.

Segmentation vs Personas

| Concept | What It Is | Based On | How to Use |
|---|---|---|---|
| Segment | Group of customers with shared characteristics | Data + behavior | Prioritization, positioning |
| Persona | Archetype representing a segment | Research synthesis | Design, messaging |
| ICP | Ideal customer profile | Best customer analysis | Sales, marketing targeting |

Segmentation Approaches

Behavioral Segmentation (Most Valuable)

Segment by what they DO:
- Usage patterns (power user vs casual)
- Purchase behavior (frequency, value)
- Feature adoption (which capabilities)
- Engagement level (active vs dormant)
- Workflow (how they use product)

Needs-Based Segmentation

Segment by what they NEED:
- Primary job-to-be-done
- Must-have vs nice-to-have features
- Success metrics (what does "working" mean?)
- Pain intensity (how urgent)

Firmographic Segmentation (B2B)

Segment by COMPANY characteristics:
- Size (employees, revenue)
- Industry/vertical
- Stage (startup, growth, enterprise)
- Geography
- Tech stack

Demographic Segmentation (B2C)

Segment by PERSON characteristics:
- Age, income, location
- Role, seniority
- Experience level
- Company type (if relevant)

Building Segments from Research

Step 1: Collect Data

Sources:
- User interviews (qualitative)
- Survey responses (quantitative)
- Product analytics (behavioral)
- CRM data (firmographic)
- Support tickets (pain points)

Step 2: Identify Patterns

Look for clusters around:
- Similar problems/jobs
- Similar behaviors
- Similar outcomes
- Similar objections

Interview notes → Affinity mapping → Pattern recognition

Step 3: Define Segments

| Segment | Size | Behavior | Needs | Value |
|---|---|---|---|---|
| Power Users | 15% | Daily use, all features | Advanced capabilities, integrations | High LTV, low support |
| Task Completers | 45% | Weekly use, core features | Simplicity, speed | Medium LTV, medium support |
| Occasional Users | 30% | Monthly, basic features | Minimal friction | Low LTV, low support |
| Struggling Users | 10% | Started, stalled | Hand-holding, guidance | At-risk, high support |

Step 4: Validate Segments

Validation criteria:
- Identifiable: Can you find them?
- Substantial: Big enough to matter?
- Accessible: Can you reach them?
- Differentiable: Distinct from others?
- Actionable: Can you serve them differently?

Segment Prioritization Matrix

Score Each Segment

| Factor | Weight | How to Measure |
|---|---|---|
| Market size | 25% | TAM for this segment |
| Revenue potential | 25% | Willingness to pay × volume |
| Fit with product | 20% | How well you solve their problem |
| Acquisition ease | 15% | How hard to reach and convert |
| Strategic value | 15% | Learning, brand, network effects |

Example Prioritization

| Segment | Size | Revenue | Fit | Acquisition | Strategic | Total |
|---|---|---|---|---|---|---|
| Startups | 7 | 5 | 9 | 8 | 9 | 7.4 |
| Enterprise | 9 | 10 | 6 | 4 | 7 | 7.3 |
| SMB | 8 | 6 | 7 | 7 | 5 | 6.7 |
| Agencies | 5 | 6 | 8 | 6 | 4 | 5.9 |
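
A sketch of the weighted sum behind the Total column, using the weights from the factor table above:

```python
WEIGHTS = {"size": 0.25, "revenue": 0.25, "fit": 0.20,
           "acquisition": 0.15, "strategic": 0.15}

def segment_score(ratings: dict) -> float:
    """Weighted segment score from 1-10 factor ratings."""
    return sum(WEIGHTS[factor] * value for factor, value in ratings.items())

smb = {"size": 8, "revenue": 6, "fit": 7, "acquisition": 7, "strategic": 5}
print(f"{segment_score(smb):.1f}")  # 6.7
```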

Persona Development

From Segments to Personas

Segment: "Startups (20-100 employees)"

Persona: "Scaling Sarah"
- Role: Head of Engineering
- Context: Series A startup, growing fast
- Primary job: Ship features while maintaining quality
- Key pain: Technical debt piling up as team grows
- Success metric: Deploy frequency without incidents
- Quote: "I can't slow down to speed up."

Persona Template

┌─────────────────────────────────────────────────────────┐
│ PERSONA NAME: [Descriptive name]                        │
├─────────────────────────────────────────────────────────┤
│ ROLE: [Job title]                                       │
│ CONTEXT: [Company type, size, stage]                    │
├─────────────────────────────────────────────────────────┤
│ GOALS:                                                  │
│ • [Primary goal]                                        │
│ • [Secondary goal]                                      │
├─────────────────────────────────────────────────────────┤
│ FRUSTRATIONS:                                           │
│ • [Pain point 1]                                        │
│ • [Pain point 2]                                        │
├─────────────────────────────────────────────────────────┤
│ BEHAVIORS:                                              │
│ • [How they work]                                       │
│ • [How they buy]                                        │
│ • [Where they get information]                          │
├─────────────────────────────────────────────────────────┤
│ QUOTE: "[Representative statement from interviews]"     │
└─────────────────────────────────────────────────────────┘

Segment-Specific Research

Research Questions by Segment Stage

New Segment (exploring):
- What jobs are they hiring products for?
- What alternatives do they use today?
- What would make them switch?
- How do they buy? Who's involved?

Existing Segment (deepening):
- What features do they use most/least?
- Where do they get stuck?
- What would make them expand usage?
- What's their actual workflow?

At-Risk Segment (retaining):
- Why did they originally buy?
- What changed?
- What would it take to re-engage?
- What can we learn for other segments?

Multi-Segment Strategy

Beachhead → Expand

Phase 1: Beachhead
Choose ONE segment to dominate first.
Criteria:
- Easiest to reach
- Fastest sales cycle
- Most forgiving of early product
- Best for learning

Phase 2: Adjacent Expansion
Add segments that:
- Share similar jobs
- Can use same core product
- Don't require major pivots
- Build on existing reputation

Phase 3: Market Coverage
Expand to larger segments that:
- Require product maturity
- Need case studies/proof
- Have longer sales cycles

When to Serve Multiple Segments

Yes, if:
- Core product serves both
- Marginal cost is low
- Segments don't conflict
- You have resources

No, if:
- Requires different products
- Segments have opposing needs
- Spreads team too thin
- One segment is clearly better

Segmentation Anti-Patterns

Segment by Demographics Alone

Bad: "Small businesses" (too broad, what do they have in common?)
Good: "Growing e-commerce businesses with 10-50 employees struggling
       with inventory management"

Too Many Segments

Bad: 12 personas with nuanced differences
Good: 3-4 distinct segments with clear differentiation

Static Segments

Bad: "We defined personas 2 years ago"
Good: Quarterly review of segment fit and behavior changes

Segment by Wish, Not Reality

Bad: "Enterprise would be great customers"
      (but you've never sold to them)
Good: Segment based on actual customer data

Segment Research Tools

| Tool | Use Case | Data Type |
|---|---|---|
| Interviews | Deep understanding | Qualitative |
| Surveys | Validate segment size | Quantitative |
| Analytics | Behavioral patterns | Quantitative |
| CRM analysis | Customer attributes | Firmographic |
| Support tickets | Pain points by segment | Qualitative |
| Cohort analysis | Segment performance | Quantitative |

Anti-Patterns

  • Segment by convenience — Using data you have vs data that matters
  • Persona theater — Beautiful documents no one uses
  • Ignoring negative segments — Not explicitly deprioritizing bad-fit customers
  • One-size-fits-all — Same messaging/product for all segments
  • Segment proliferation — Creating new segments instead of validating existing
  • Demographic obsession — Age/size over behavior/needs

title: Research Synthesis & Insight Generation
impact: HIGH
tags: analysis, synthesis, insights, research-ops

Research Synthesis & Insight Generation

Impact: HIGH

Raw research is just data. Synthesis transforms observations into insights that drive decisions.

The Synthesis Process

COLLECT → ORGANIZE → ANALYZE → SYNTHESIZE → COMMUNICATE

Observations → Themes → Patterns → Insights → Recommendations

From Observation to Insight

| Level | What It Is | Example |
|---|---|---|
| Observation | What you saw/heard | "User clicked back button 3 times" |
| Pattern | Recurring observations | "5 users struggled to find settings" |
| Theme | Category of patterns | "Navigation is confusing" |
| Insight | Why it matters | "Users expect settings to live with account info because of mental models from other apps" |
| Recommendation | What to do | "Move settings to account dropdown" |

Affinity Mapping

Step-by-Step Process

1. PREPARE
   - Print/write each observation on a sticky note
   - One observation per note
   - Include participant ID for traceability

2. CLUSTER
   - Group similar observations together
   - Don't force categories — let them emerge
   - Move notes around as you go

3. LABEL
   - Name each cluster with a theme
   - Theme should capture the essence, not just describe

4. ORGANIZE
   - Arrange clusters by relationship
   - Look for meta-themes
   - Note outliers (don't discard)

5. DOCUMENT
   - Photo your wall
   - Transfer to digital (Miro, FigJam)
   - Note the count in each cluster

Example Affinity Map

┌─────────────────────────────────────────────────────────┐
│                    RESEARCH FINDINGS                    │
├─────────────┬──────────────┬─────────────┬─────────────┤
│   TRUST     │  ONBOARDING  │   VALUE     │   WORKFLOW  │
│   ISSUES    │   FRICTION   │   CLARITY   │   GAPS      │
├─────────────┼──────────────┼─────────────┼─────────────┤
│ • "Worried  │ • "Too many  │ • "Not sure │ • "Have to  │
│   about     │   steps"     │   what it   │   switch    │
│   security" │ • "Didn't    │   does"     │   between   │
│ • "Who has  │   know where │ • "Why      │   tools"    │
│   access?"  │   to start"  │   should I  │ • "Manual   │
│ • "Can I    │ • "Overwhel- │   use this?"│   copy/     │
│   delete    │   ming       │ • "Pricing  │   paste"    │
│   my data?" │   options"   │   unclear"  │             │
│             │              │             │             │
│ [5 notes]   │ [8 notes]    │ [6 notes]   │ [4 notes]   │
└─────────────┴──────────────┴─────────────┴─────────────┘

Insight Frameworks

Insight = Observation + Interpretation + Implication

OBSERVATION: What did we see/hear?
"7 of 10 users tried to find settings in the top nav"

INTERPRETATION: What does it mean?
"Users have a mental model where account-level actions
 live in the header, based on other apps they use"

IMPLICATION: Why does it matter?
"Our sidebar-based settings creates friction and makes
 users feel lost, impacting activation"

Insight Quality Checklist

Good insights are:

  • Actionable — You can do something about it
  • Non-obvious — Not already known
  • Specific — Clear and concrete
  • Grounded — Supported by evidence
  • Generative — Inspires solutions

Research Repository

Organize Findings for Reuse

/research
├── /projects
│   ├── /2024-q1-onboarding-study
│   │   ├── research-plan.md
│   │   ├── interview-notes/
│   │   ├── synthesis.md
│   │   └── presentation.pdf
│   └── /2024-q2-pricing-research
│       └── ...
├── /insights
│   ├── navigation.md
│   ├── pricing-perception.md
│   └── trust-barriers.md
├── /personas
│   └── personas.md
└── /templates
    └── interview-guide.md

Insight Documentation Template

# Insight: [Title]

## Summary
[One sentence insight]

## Evidence
- Study: [Link to research project]
- Participants: [Who/how many]
- Key quotes:
  - "[Quote 1]" — P3
  - "[Quote 2]" — P7

## Analysis
[What does this mean for the product?]

## Recommendations
- [Action 1]
- [Action 2]

## Related Insights
- [Link to related insight]

## Status
- Discovered: [Date]
- Last validated: [Date]
- Action taken: [None/In progress/Resolved]

Synthesis Sessions

Running a Team Synthesis

PARTICIPANTS: PM, Designer, Engineer, optional: researcher, stakeholder

PREP (before session):
- All participants review raw notes
- Each person marks 5-10 standout observations
- Prepare wall or digital board

SESSION STRUCTURE (2-3 hours):

1. Share observations (30 min)
   - Each person shares their top observations
   - No debate yet — just capture

2. Cluster and theme (45 min)
   - Group similar observations
   - Name the clusters
   - Discuss disagreements

3. Prioritize themes (30 min)
   - Which themes are most impactful?
   - Which have most evidence?
   - Force rank top 5

4. Generate insights (30 min)
   - For top themes, articulate insights
   - Use Observation → Interpretation → Implication

5. Discuss implications (30 min)
   - What should we do differently?
   - What needs more research?
   - Who else needs to know?

Communicating Findings

Research Readout Structure

1. CONTEXT (2 min)
   - Why we did this research
   - What questions we had
   - Methodology overview

2. KEY INSIGHTS (10-15 min)
   - Top 3-5 insights
   - For each: evidence + implication
   - Include representative quotes/clips

3. DETAILED FINDINGS (optional, 10 min)
   - Additional themes
   - Surprising observations
   - Segment differences

4. RECOMMENDATIONS (5 min)
   - Prioritized actions
   - What to explore further
   - What to stop/avoid

5. DISCUSSION (10 min)
   - Questions
   - Reactions
   - Next steps

One-Page Summary Template

┌─────────────────────────────────────────────────────────┐
│ RESEARCH SUMMARY: [Study Name]                          │
│ Date: [Date]    Participants: [N]    Method: [Type]     │
├─────────────────────────────────────────────────────────┤
│ RESEARCH QUESTIONS                                      │
│ 1. [Question 1]                                         │
│ 2. [Question 2]                                         │
├─────────────────────────────────────────────────────────┤
│ KEY INSIGHTS                                            │
│                                                         │
│ 1. [Insight headline]                                   │
│    Evidence: [Brief support]                            │
│    Implication: [What it means]                         │
│                                                         │
│ 2. [Insight headline]                                   │
│    Evidence: [Brief support]                            │
│    Implication: [What it means]                         │
│                                                         │
│ 3. [Insight headline]                                   │
│    Evidence: [Brief support]                            │
│    Implication: [What it means]                         │
├─────────────────────────────────────────────────────────┤
│ RECOMMENDATIONS                                         │
│ • [Action 1]                                            │
│ • [Action 2]                                            │
│ • [Action 3]                                            │
├─────────────────────────────────────────────────────────┤
│ OPEN QUESTIONS                                          │
│ • [What we still need to learn]                         │
└─────────────────────────────────────────────────────────┘

Avoiding Synthesis Bias

Common Biases

| Bias | Description | Mitigation |
|---|---|---|
| Confirmation | Finding what you expected | Have a skeptic in the session |
| Recency | Over-weighting recent interviews | Review all notes equally |
| Loudest voice | One strong opinion dominates | Individual prep before group |
| False consensus | Assuming agreement | Explicitly surface disagreements |
| Cherry-picking | Selecting supporting evidence | Count evidence systematically |

Bias Mitigation Tactics

- Multiple synthesizers (don't do it alone)
- Count observations (numbers reduce bias)
- Include disconfirming evidence
- Document uncertainty
- Revisit after time passes

Research Operations

Building a Research Practice

Weekly:
- 2-3 customer interviews
- Update insight repository
- Share key learnings

Monthly:
- Synthesis session
- Stakeholder readout
- Prioritize research backlog

Quarterly:
- Research roadmap review
- Repository cleanup
- Methods retrospective

Anti-Patterns

  • Insight hoarding — Research that never gets shared
  • Endless analysis — Synthesizing forever, never concluding
  • Presentation-only — Deck created, never referenced again
  • One researcher — Single person knows everything, can't scale
  • Stale insights — 2-year-old personas treated as current
  • Feature validation only — Only researching what to build, not why
  • Quote abuse — Cherry-picking quotes to support decisions already made

title: Jobs-to-be-Done Framework
impact: CRITICAL
tags: discovery, jtbd, jobs, motivation, customer-needs

Jobs-to-be-Done Framework

Impact: CRITICAL

People don't buy products. They hire them to do a job. Understanding the job changes everything about how you build and market.

The Core Concept

"People don't want a quarter-inch drill.
 They want a quarter-inch hole."
    — Theodore Levitt

"Actually, they want to hang a picture.
 Actually, they want to feel at home."
    — JTBD thinking

Traditional View vs JTBD View

| Traditional | JTBD |
|---|---|
| "Who is our customer?" | "What job are they hiring us to do?" |
| Demographics-focused | Progress-focused |
| Competes with similar products | Competes with any alternative |
| Features drive decisions | Outcomes drive decisions |

Job Anatomy

A Complete Job Statement

┌─────────────────────────────────────────────────────────┐
│                     COMPLETE JOB                        │
├─────────────────────────────────────────────────────────┤
│ WHEN [situation/context]                                │
│ I WANT TO [motivation/action]                           │
│ SO I CAN [expected outcome]                             │
├─────────────────────────────────────────────────────────┤
│ FUNCTIONAL JOB: What they're trying to accomplish       │
│ EMOTIONAL JOB: How they want to feel                    │
│ SOCIAL JOB: How they want to be perceived               │
└─────────────────────────────────────────────────────────┘

Example: Project Management Tool

WHEN I'm leading a complex project with multiple stakeholders
I WANT TO see everyone's progress and blockers in one place
SO I CAN keep the project on track and look competent to leadership

Functional: Track project status across team members
Emotional: Feel in control, reduce anxiety about surprises
Social: Appear organized and competent to executives

The Four Forces of Progress

                    PUSH OF CURRENT SITUATION
                    (Problems with status quo)
                              │
                              ▼
┌─────────────────────────────────────────────────────────┐
│                                                         │
│                      SWITCHING                          │
│                                                         │
└─────────────────────────────────────────────────────────┘
                              ▲
                              │
                    PULL OF NEW SOLUTION
                    (Attraction of new way)

──────────────────────────────────────────────────────────

                   ANXIETY OF NEW SOLUTION
                   (Fear of change, learning)
                              │
                              ▼
┌─────────────────────────────────────────────────────────┐
│                                                         │
│                   NOT SWITCHING                         │
│                                                         │
└─────────────────────────────────────────────────────────┘
                              ▲
                              │
                   HABIT OF PRESENT
                   (Comfort of status quo)

For Switching to Occur: Push + Pull > Anxiety + Habit
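
As a toy model of the inequality (force ratings are illustrative 1-10 values):

```python
def will_switch(push: float, pull: float, anxiety: float, habit: float) -> bool:
    """Switching happens only when push + pull outweigh anxiety + habit."""
    return push + pull > anxiety + habit

print(will_switch(push=7, pull=6, anxiety=4, habit=5))  # True
print(will_switch(push=3, pull=6, anxiety=4, habit=6))  # False: habit wins
```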

JTBD Interview Technique

The Timeline

Map the journey from first thought to purchase:

First Thought ─► Passive Looking ─► Active Looking ─► Decision ─► Purchase ─► Use
     │                │                    │              │           │         │
     ▼                ▼                    ▼              ▼           ▼         ▼
"What triggered "Where did you  "What did you   "Why this   "How did    "Is it
 this?"          look?"          compare?"       one?"       you buy?"   working?"

Timeline Interview Questions

First Thought:
"Take me back to when you first started thinking about [solution]."
"What was happening in your life/work at that moment?"
"What pushed you over the edge to start looking?"

Passive Looking:
"What did you do next?"
"Where did you look for solutions?"
"Who did you talk to about this?"

Active Looking:
"When did it become serious?"
"What options did you consider?"
"How did you evaluate them?"

Decision:
"What made you choose [solution]?"
"What almost stopped you?"
"Who else was involved in the decision?"

Use:
"How's it going now?"
"What's better? What's not?"
"Would you make the same choice again?"

Job Mapping

Break Down the Main Job into Steps

| Job Step | Customer Action | Potential Pain Points |
|---|---|---|
| Define | Determine goals | Unclear requirements |
| Locate | Find inputs needed | Hard to find resources |
| Prepare | Set up for execution | Complex configuration |
| Confirm | Verify readiness | Uncertainty about correctness |
| Execute | Perform main activity | Difficulty, errors |
| Monitor | Track progress | Lack of visibility |
| Modify | Make adjustments | Rigid, hard to change |
| Conclude | Finish the job | Incomplete, cleanup needed |

Example: Job = "Prepare for Client Presentation"

| Step | Action | Pain Points | Opportunity |
|---|---|---|---|
| Define | Understand what client needs | Unclear expectations | Intake templates |
| Locate | Find relevant data | Data in silos | Unified dashboard |
| Prepare | Build presentation | Manual formatting | Auto-formatting |
| Confirm | Review with team | Async, slow feedback | Real-time collab |
| Execute | Present to client | Technical issues | Reliable platform |
| Monitor | Read room reactions | Hard to gauge virtually | Engagement analytics |
| Modify | Adjust on the fly | Rigid slides | Dynamic content |
| Conclude | Send follow-ups | Manual tracking | Automated actions |

Competing Solutions (Real Competition)

Your Real Competitors Are Alternatives, Not Similar Products

Job: "Stay informed about my industry"

Traditional competitors:
- Trade publications
- Industry newsletters

Real competitors (alternatives):
- Twitter/X
- LinkedIn feed
- Podcasts during commute
- Slack communities
- Doing nothing (JOMO)

Competitive Analysis Through JTBD

| Alternative | Functional Fit | Emotional Fit | Trade-offs |
|---|---|---|---|
| Trade publications | High quality | Feel informed | Slow, time-consuming |
| Twitter | Real-time | FOMO, anxiety | Noisy, unreliable |
| Podcasts | Convenient | Passive learning | Can't skim |
| LinkedIn | Professional | Career signaling | Engagement bait |
| Nothing | Zero effort | Relief | Miss important things |

Outcome Statements

Write Outcomes, Not Features

Format: [Direction] + [Metric] + [Object of Control] + [Context]

Good Outcome Statements

"Minimize the time it takes to find relevant information"
"Reduce the likelihood of missing important updates"
"Increase the ability to share insights with colleagues"
"Minimize the effort to stay current on trends"

Bad Outcome Statements

"Easy to use" (vague)
"Has good features" (not outcome)
"Users like it" (not specific)
"Fast" (not measurable)

Opportunity Scoring

Rate Each Outcome by Importance and Satisfaction

Opportunity = Importance + max(Importance - Satisfaction, 0)

| Outcome | Importance (1-10) | Satisfaction (1-10) | Opportunity |
|---|---|---|---|
| Minimize time to find info | 9 | 4 | 9 + 5 = 14 |
| Reduce missed updates | 8 | 6 | 8 + 2 = 10 |
| Increase sharing ability | 5 | 7 | 5 + 0 = 5 |
| Minimize effort staying current | 7 | 3 | 7 + 4 = 11 |

Interpretation:

  • Score > 12: Underserved opportunity (gold mine)
  • Score 10-12: Opportunity worth exploring
  • Score < 10: Adequately served or not important
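
The table and the interpretation bands combine into a short Python sketch (the ratings are the survey averages from the table):

```python
def opportunity(importance: float, satisfaction: float) -> float:
    return importance + max(importance - satisfaction, 0)

outcomes = {
    "Minimize time to find info": (9, 4),
    "Reduce missed updates": (8, 6),
    "Increase sharing ability": (5, 7),
    "Minimize effort staying current": (7, 3),
}

for name, (imp, sat) in outcomes.items():
    score = opportunity(imp, sat)
    band = ("underserved" if score > 12
            else "worth exploring" if score >= 10
            else "adequately served")
    print(f"{name}: {score} ({band})")
```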

JTBD Examples Across Industries

B2B Software

Job: "When I need to onboard a new team member, I want to give them
      access to everything they need, so I can get them productive
      quickly without creating security risks."

Competitors: Manual processes, IT tickets, spreadsheets, existing IAM tools

Consumer App

Job: "When I have 10 minutes of downtime, I want to feel like I'm
      learning something valuable, so I can feel productive and
      have interesting things to share."

Competitors: Podcasts, Twitter, news apps, YouTube, just resting

B2B Service

Job: "When we're planning our annual budget, I want to understand
      what we're actually spending on software, so I can cut waste
      and justify new investments."

Competitors: Finance team spreadsheets, procurement tools, doing it annually

Common JTBD Mistakes

Mistake 1: Jobs Too Broad

Bad: "Communicate with my team"
Good: "When I need quick feedback on a decision, get input from
       relevant people without disrupting deep work"

Mistake 2: Jobs Too Narrow

Bad: "Send a Slack message"
Good: "Get quick, informal answers from colleagues"

Mistake 3: Confusing Jobs with Tasks

Task: "Create a report"
Job: "Demonstrate value to stakeholders to secure continued funding"

Mistake 4: Ignoring Emotional/Social Jobs

Functional only: "Manage my calendar"
Complete: "Feel in control of my time and appear reliable to others"

Anti-Patterns

  • Feature-first thinking — Starting with "we could build X" instead of "what job exists"
  • Demographic segmentation — "Millennials want..." vs "People trying to [job] want..."
  • Ignoring non-consumption — Not considering why people do nothing
  • Jobs too abstract — "Make life better" is not a job
  • Skipping the timeline — Not understanding the journey to purchase
  • Functional-only jobs — Missing emotional and social dimensions

title: Problem Discovery & Validation
impact: CRITICAL
tags: discovery, validation, problems, product-market-fit

Problem Discovery & Validation

Impact: CRITICAL

The most expensive mistake in product is building something no one needs. Problem validation prevents building solutions to non-problems.

Problem vs Solution Fit

┌─────────────────────────────────────────────────────────┐
│                    PRODUCT SUCCESS                      │
│                                                         │
│   Problem-Solution Fit    →    Product-Market Fit       │
│   (Are we solving a            (Can we scale this       │
│    real problem?)               profitably?)            │
│                                                         │
│   Discovery Phase          →    Growth Phase            │
└─────────────────────────────────────────────────────────┘

Most startups fail at Problem-Solution Fit.
They build solutions to problems that:
- Don't exist
- Aren't painful enough
- Affect too few people
- People won't pay to solve

Problem Validation Framework

The 4 U's of Problem Validation

| Criteria | Question | Validation Method |
|---|---|---|
| Unworkable | Is the current situation truly broken? | Interview: "What happens if this isn't solved?" |
| Unavoidable | Can they work around it? | Interview: "How do you handle this today?" |
| Urgent | Is it a priority right now? | Interview: "Where does this rank vs other priorities?" |
| Underserved | Are existing solutions inadequate? | Interview: "What have you tried? What's missing?" |

Scoring:

  • 4/4 U's = Validated problem worth solving
  • 3/4 U's = Promising, investigate further
  • 2/4 U's = Weak problem, likely won't sustain business
  • 1/4 U's = Not a real problem

Problem Interview Script

Setup (2 min)

"Thanks for talking with me. I'm trying to understand how
people handle [problem area]. There are no right or wrong
answers — I'm here to learn from your experience."

Current State (10 min)

"Tell me about your role and what you're responsible for."
"Walk me through how you currently handle [area]."
"What tools or processes do you use?"
"How much time do you spend on this each week?"

Problem Exploration (15 min)

"What's the hardest part about [area]?"
"Tell me about the last time that happened."
"What did you do? How did it turn out?"
"How often does this happen?"
"On a scale of 1-10, how painful is this? Why that number?"

Attempted Solutions (10 min)

"What have you tried to solve this?"
"How well did that work? What's still missing?"
"Have you looked at other solutions? Which ones?"
"Why haven't you solved this yet?"

Prioritization (5 min)

"If you could wave a magic wand, what would change?"
"Where does solving this rank among your priorities?"
"What would happen if this never got solved?"
"Is this something your company would pay to solve?"

Validation Signals

Strong Problem Signals

+ They've already tried to solve it (spent money/time)
+ They describe specific negative consequences
+ They can quantify the cost (time, money, opportunity)
+ Multiple people independently mention same problem
+ They ask when your solution will be ready
+ They offer to pay or be a design partner

Weak Problem Signals

- "That would be nice to have"
- "I could see using that"
- Can't give specific examples
- Haven't tried to solve it themselves
- Problem is theoretical, not experienced
- Only one person mentions it

Danger Signals

! They're being polite, not honest
! They want to help you, not themselves
! "Yes, but..." followed by objections
! Can't describe current workaround
! Would use if free, but wouldn't pay

Problem Stack Ranking

Create a Problem Stack from Interviews

| Problem | Frequency | Severity | Existing Solutions | Score |
|---|---|---|---|---|
| Manual data entry | 8/10 mention | 8/10 pain | Excel, hacky scripts | 24 |
| Report generation | 6/10 mention | 7/10 pain | Manual, basic tools | 19 |
| Data accuracy | 5/10 mention | 9/10 pain | Double-checking | 18 |
| Collaboration | 4/10 mention | 5/10 pain | Email, Slack | 11 |

Scoring Formula:

Problem Score = (Frequency × 3) + (Severity × 2) + (Solution Gap × 1)

Where:
- Frequency = How many interviewees mentioned (1-10)
- Severity = How painful when it occurs (1-10)
- Solution Gap = How inadequate current solutions are (1-10)
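
The formula as code; inputs are the 1-10 ratings from your interview tally, and the example values are illustrative:

```python
def problem_score(frequency: int, severity: int, solution_gap: int) -> int:
    """Problem Score = (Frequency x 3) + (Severity x 2) + (Solution Gap x 1)."""
    return frequency * 3 + severity * 2 + solution_gap

print(problem_score(frequency=8, severity=8, solution_gap=6))  # 46
```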

Problem Statement Templates

Format 1: Situation-Complication-Question

Situation: [Current state]
Complication: [What makes it problematic]
Question: [What needs to be solved]

Example:
Situation: Engineering teams manage secrets across multiple environments.
Complication: Secrets are scattered in .env files, Slack messages, and wikis,
             leading to security risks and onboarding delays.
Question: How might we give teams a secure, centralized way to manage secrets
          that's as easy as copy-paste?

Format 2: User-Problem-Impact

[User segment] struggle with [problem]
which causes [negative impact].

Example:
DevOps engineers at startups struggle with secret sprawl
which causes security vulnerabilities and slows down new hire onboarding by 2+ days.

Format 3: Job-to-be-Done

When [situation], I want to [motivation],
so I can [expected outcome].

Example:
When onboarding a new developer, I want to give them access to all necessary secrets,
so I can get them productive on day one without compromising security.

Validation Experiments

Before Building Anything

| Experiment | Effort | Signal Strength | When to Use |
|---|---|---|---|
| Problem interviews | Low | High | Always first |
| Landing page test | Low | Medium | Validate demand |
| Concierge MVP | Medium | High | Test workflow manually |
| Wizard of Oz | Medium | High | Fake automation, real value |
| Smoke test | Low | Medium | Measure intent to purchase |

Landing Page Test

Create a page describing the problem and solution.
Add an email capture: "Get early access"
Drive traffic (ads, posts, outreach)

Success metrics:
- > 20% email signup rate = strong signal
- 10-20% = promising
- < 10% = weak problem or messaging

Smoke Test

Describe the solution with pricing.
Add "Buy Now" or "Start Trial" button.
Capture clicks (no actual purchase).

Success metrics:
- > 5% click = strong buying intent
- 2-5% = worth exploring
- < 2% = problem or pricing issue

Problem Pivot Signals

When to Pivot Your Problem Focus

- 5+ interviews with no consistent pain points
- Users satisfied with existing solutions
- Problem exists but won't pay to solve
- Market too small (< $10M opportunity)
- Problem is symptom, not root cause

Adjacent Problem Exploration

Original problem: "Reports take too long"
Adjacent problems:
- Data is inaccurate
- Stakeholders don't read reports
- Insights aren't actionable
- Manual data collection

Sometimes the bigger problem is next door.

Anti-Patterns

  • Solution-first thinking — Starting with "what if we built X" instead of "what problem exists"
  • Proxy validation — Asking friends/investors instead of target customers
  • Confirmation bias — Only hearing problems that fit your solution
  • Single-interview decisions — One person ≠ a market
  • Feature requests as problems — "I want X" is not the same as "I struggle with Y"
  • Hypothetical validation — "Would you use X?" vs "Tell me about last time..."
  • Ignoring workarounds — If they've built workarounds, you've found a problem

title: User Interview Techniques
impact: CRITICAL
tags: research, interviews, qualitative, discovery

User Interview Techniques

Impact: CRITICAL

User interviews are the cornerstone of discovery. Great interviews uncover truth; bad interviews confirm bias.

The Interview Spectrum

| Type | Goal | Duration | Sample Size |
|---|---|---|---|
| Discovery | Explore problem space | 45-60 min | 8-12 |
| Problem validation | Confirm specific pain | 30-45 min | 6-10 |
| Solution testing | Evaluate concepts | 30-45 min | 5-8 |
| Usability | Test interface | 30-60 min | 5-8 |
| JTBD | Understand motivation | 45-60 min | 8-15 |

The Mom Test

Every question should pass "The Mom Test" — could someone lie to you to be nice?

Bad Questions (Your Mom Would Lie)

"Would you use an app that does X?"
"Do you think this is a good idea?"
"How much would you pay for this?"
"Would this solve your problem?"

Good Questions (Mom Can't Lie)

"Tell me about the last time you had this problem."
"Walk me through how you currently handle this."
"What have you tried to solve this? What happened?"
"How much time/money do you spend on this today?"

Question Frameworks

Start Wide, Go Deep

Level 1: "Tell me about your role..."
Level 2: "What are your biggest challenges in [area]?"
Level 3: "You mentioned X — tell me more about that."
Level 4: "When did this last happen? Walk me through it."
Level 5: "What did you do next? How did that work out?"

The Five Whys

"I'm frustrated with our reporting."
  → Why? "It takes too long to create reports."
  → Why? "I have to pull data from three different systems."
  → Why? "Our tools don't integrate."
  → Why? "We bought them at different times for different teams."
  → Why? "No one thought about the big picture."

Root cause: Lack of integrated planning, not the reporting tool itself.

Timeline Reconstruction

"Take me back to when you first realized this was a problem..."
"What happened next?"
"Then what did you do?"
"Who else was involved?"
"How did you feel at that point?"
"What ultimately happened?"

Interview Structure

Opening (5 min)

1. Thank them, set expectations
2. Get permission to record
3. Explain confidentiality
4. Start with easy warmup questions

Core Interview (25-45 min)

5. Current state exploration
6. Problem deep-dives (use 5 whys)
7. Past behavior (not future promises)
8. Specific examples and stories

Closing (5-10 min)

9. Summary and reflection
10. Ask who else to talk to
11. Thank and explain next steps

Good vs Bad Interview Practices

Good Practices

- Ask about specific past behavior
  "Tell me about the last time..."

- Let them talk (aim for 80% them, 20% you)
  "Mm-hmm. And then?"

- Follow the energy
  "You seemed frustrated when you mentioned X. Tell me more."

- Get concrete
  "Can you show me how you do that?"

- Embrace silence (count to 5 before jumping in)
  [Wait]

- Ask about money already spent
  "What have you tried? How much did that cost?"

Bad Practices

- Ask about future behavior (unreliable)
  "Would you use..." "Do you think..."

- Lead the witness
  "Don't you find X frustrating?"

- Pitch during interview
  "What if I told you our product does..."

- Multiple questions at once
  "How do you do X and does Y ever happen?"

- Accept vague answers
  "It's hard." → "Hard how? Give me an example."

- Interrupt or fill silences
  [They're still thinking!]

Capturing Insights

During Interview

- Record with permission (and always take notes as backup)
- Note emotional reactions (frustration, excitement)
- Capture exact quotes
- Mark surprising statements with "!!"

Note-Taking Template

Participant: [ID, not name]
Date:
Context: [Role, company type, etc.]

Key Quotes:
- "[Exact quote]" — about [topic]

Behaviors Observed:
- Currently does X
- Tried Y, didn't work because...

Pain Points:
- Problem: [Description]
  Frequency: [How often]
  Severity: [1-10]
  Current solution: [What they do now]

Surprises:
- Didn't expect: [Observation]

Follow-up Questions:
- Need to explore: [Question]

Sample Size Guidelines

| Research Goal | Minimum | Ideal | When to Stop |
|---|---|---|---|
| Exploratory discovery | 5 | 8-12 | New themes stop emerging |
| Problem validation | 5 | 6-10 | 80% confirm the problem |
| Segment comparison | 5 per segment | 8 per segment | Clear differences emerge |
| JTBD mapping | 10 | 12-20 | Jobs and contexts repeat |

Signs You've Done Enough

  • Same themes recurring
  • No new problems emerging
  • You can predict what they'll say
  • Team aligned on insights

Anti-Patterns

  • Confirmation bias — Only hearing what supports your hypothesis
  • Friendly user trap — Only talking to people who like you
  • Leading questions — "Don't you hate when X happens?"
  • Solution pushing — Pitching instead of listening
  • Proxy interviews — Asking sales/support instead of users
  • Premature synthesis — Concluding after 2-3 interviews
  • Note-taking neglect — Trusting memory over documentation

---
title: Survey Design
impact: HIGH
tags: research, surveys, quantitative, validation
---

Survey Design

Impact: HIGH

Surveys quantify what interviews discover. Use them to measure prevalence and priority, not to explore unknowns.

When to Use Surveys

Good Use Cases

- Validate problem prevalence ("How many have this issue?")
- Prioritize features ("Which matters most?")
- Measure satisfaction (NPS, CSAT)
- Segment users by behavior
- Track changes over time

Bad Use Cases

- Discover new problems (use interviews)
- Understand "why" (surveys give what, not why)
- Predict future behavior (unreliable)
- Test new concepts (use prototype testing)

Survey Design Principles

1. Start with Objectives

Before writing questions:
- What decision will this inform?
- What will you do differently based on results?
- How will you analyze the data?

Bad: "Let's see what people think about our product."
Good: "We need to decide which of 3 features to build first."

2. Keep It Short

Survey Length Impact:
- < 5 min: 80%+ completion
- 5-10 min: 60-70% completion
- 10-15 min: 40-50% completion
- > 15 min: < 30% completion

Rule: If you can cut a question, cut it.

3. One Concept Per Question

Bad: "Is our product easy to use and affordable?"
(What if it's easy but expensive?)

Good:
"How easy is our product to use?"
"How affordable is our product?"

Question Types

| Type | When to Use | Example |
|---|---|---|
| Multiple choice | Categorical data | "What's your role?" |
| Rating scale | Measure intensity | "How satisfied are you? 1-5" |
| Likert scale | Agreement/frequency | "Strongly disagree → Strongly agree" |
| Ranking | Prioritization | "Rank these features 1-5" |
| Open-ended | Qualitative depth | "What's your biggest challenge?" |
| Matrix | Multiple items, same scale | "Rate each feature: 1-5" |

Writing Good Questions

Good Questions

Specific and measurable:
"In the past 30 days, how many times did you [action]?"

Clear options:
"Never / 1-2 times / 3-5 times / 6+ times"

Neutral wording:
"How would you rate X?"

Behavioral focus:
"When did you last [do X]?"

Bad Questions

Double-barreled:
"Is our product fast and reliable?"

Leading:
"How much do you love our new feature?"

Vague:
"Do you sometimes have problems?"

Hypothetical:
"Would you use a feature that does X?"

Jargon-heavy:
"How do you feel about our ML-powered NLP capabilities?"

Scale Design

Likert Scales (Agreement)

5-point: Strongly disagree → Strongly agree
7-point: Same, with "Somewhat" options

Best practice:
- Use an odd number of points to allow a neutral midpoint
- Use an even number of points to force a direction
- Be consistent throughout the survey

Rating Scales (Satisfaction/Quality)

1-5: Simple, familiar, sufficient for most cases
1-7: More granularity when needed
1-10: Familiar but inconsistent interpretation

Best practice:
- Always label endpoints
- Consider labeling all points
- 1-5 is usually enough

NPS (Net Promoter Score)

"How likely are you to recommend X to a colleague?"
0-10 scale

Scoring:
- Promoters: 9-10
- Passives: 7-8
- Detractors: 0-6
- NPS = % Promoters - % Detractors

Always follow with: "What's the primary reason for your score?"
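A small helper that applies this scoring to raw 0-10 responses (the sample scores are hypothetical):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 4, 10, 3, 9]))  # hypothetical responses -> 20.0
```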

Survey Structure

Optimal Flow

1. Screening questions (filter unqualified respondents)
2. Easy, engaging questions (build momentum)
3. Core questions (get key data)
4. Sensitive/demographic questions (save for end)
5. Open-ended questions (optional, for depth)

Example Structure

1. Screener: "Do you use [product category]?" (Yes/No)
2. Warmup: "What's your primary role?" (Multiple choice)
3. Core: "How satisfied are you with [X]?" (Scale)
4. Core: "Rank these features by importance" (Ranking)
5. Demographics: "Company size?" (Multiple choice)
6. Open: "What's missing from current solutions?" (Text)

Sample Size Calculator

| Population | 95% Confidence, 5% Margin | 95% Confidence, 3% Margin |
|---|---|---|
| 100 | 80 | 92 |
| 500 | 217 | 341 |
| 1,000 | 278 | 516 |
| 10,000 | 370 | 964 |
| 100,000+ | 384 | 1,067 |

Quick Rule:

  • Small decisions: 50-100 responses
  • Medium decisions: 100-300 responses
  • Major decisions: 300+ responses
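The table above is consistent with Cochran's sample-size formula plus a finite-population correction; this sketch reproduces it (values can differ from the table by one response depending on rounding):

```python
import math

def survey_sample_size(population: int, margin: float,
                       z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's formula with finite-population correction.

    z=1.96 is the 95%-confidence z-score; p=0.5 is the most
    conservative assumption about response proportions.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2         # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for pop in (100, 500, 1_000, 10_000, 100_000):
    print(pop, survey_sample_size(pop, 0.05), survey_sample_size(pop, 0.03))
```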

Survey Distribution

| Channel | Response Rate | Bias | Best For |
|---|---|---|---|
| In-app | 10-30% | Active users | Product feedback |
| Email | 5-15% | Engaged users | Customer research |
| Panel | Varies | Can target | Market research |
| Social | 1-5% | Self-selected | Quick pulse checks |

Increasing Response Rates

- Keep it short (< 5 min)
- Explain why and who benefits
- Personalize invitation
- Offer incentive (carefully)
- Send reminders (2-3 max)
- Mobile-friendly design

Analysis Framework

Quantitative Analysis

1. Response rate and completion rate
2. Descriptive stats (mean, median, distribution)
3. Segment comparisons (by role, company size, etc.)
4. Cross-tabulation (how does X relate to Y?)
5. Statistical significance (for major decisions)
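A sketch of steps 2-4 using pandas on hypothetical responses; the column names are made up for illustration:

```python
import pandas as pd

# Hypothetical survey responses
df = pd.DataFrame({
    "role": ["Engineering", "Product", "Product", "Design", "Engineering"],
    "satisfaction": [4, 2, 3, 5, 4],
})

# Descriptive stats and segment comparison (steps 2-3)
print(df.groupby("role")["satisfaction"].agg(["mean", "count"]))

# Cross-tabulation: how does role relate to satisfaction? (step 4)
print(pd.crosstab(df["role"], df["satisfaction"]))
```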

Open-Ended Analysis

1. Read all responses
2. Code themes (tag with categories)
3. Count theme frequency
4. Pull representative quotes
5. Look for surprises
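A minimal sketch of steps 2-4: tally coded themes and keep a representative quote per theme (the tags and quotes here are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical coded responses: (theme tag, verbatim quote)
coded = [
    ("pricing", "I never know what a project should cost"),
    ("pricing", "Quoting fixed-bid work is guesswork"),
    ("scope-creep", "Clients keep adding 'one more thing'"),
    ("pricing", "I underbid and eat the difference"),
]

frequency = Counter(theme for theme, _ in coded)
quotes = defaultdict(list)
for theme, quote in coded:
    quotes[theme].append(quote)

for theme, count in frequency.most_common():
    print(f'{theme}: {count} mentions, e.g. "{quotes[theme][0]}"')
```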

Survey Template: Feature Prioritization

1. "What's your primary role?"
   [ ] Engineering [ ] Product [ ] Design [ ] Other

2. "How often do you [use case]?"
   [ ] Daily [ ] Weekly [ ] Monthly [ ] Rarely [ ] Never

3. "How important are each of these capabilities to you?"
   [1-5 scale for each feature]

4. "How satisfied are you with your current solution for each?"
   [1-5 scale for each feature]

5. "If you could only improve one thing, what would it be?"
   [Open text]

6. "Anything else we should know?"
   [Open text]
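One way to combine questions 3 and 4 into a ranking, not prescribed by this template but common in discovery work, is an Ulwick-style opportunity score: importance plus the unmet-need gap. A sketch with hypothetical mean ratings on the template's 1-5 scales:

```python
# Hypothetical mean ratings from questions 3 and 4 (1-5 scales)
features = {
    "bulk export": {"importance": 4.6, "satisfaction": 2.1},
    "SSO login":   {"importance": 3.8, "satisfaction": 3.9},
    "audit log":   {"importance": 4.1, "satisfaction": 2.8},
}

def opportunity(importance: float, satisfaction: float) -> float:
    """Opportunity = importance + unmet need (gap floored at zero)."""
    return importance + max(importance - satisfaction, 0)

for name, ratings in sorted(features.items(),
                            key=lambda kv: -opportunity(**kv[1])):
    print(f"{name}: {opportunity(**ratings):.1f}")
```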

Anti-Patterns

  • Survey-first research — Surveying before interviewing (you'll ask wrong questions)
  • Too many questions — Every question costs completions
  • Leading scales — "Poor, Fair, Good, Great, Excellent" (biased toward positive)
  • Required open-ends — People will write garbage to proceed
  • Survey fatigue — Surveying same users too often
  • Ignoring completion rate — a 20% completion rate means your data comes from a self-selected 20% of respondents
  • Over-engineering — Simple surveys beat complex ones

---
title: Prototype Testing
impact: HIGH
tags: testing, prototypes, validation, concepts
---

Prototype Testing

Impact: HIGH

Test concepts before building. A day of prototype testing saves months of building the wrong thing.

Prototype Fidelity Spectrum

| Fidelity | What It Is | When to Use | Effort |
|---|---|---|---|
| Paper sketches | Hand-drawn screens | Early exploration | Minutes |
| Wireframes | Low-fi digital layouts | Flow validation | Hours |
| Clickable prototype | Interactive but no real data | UX testing | Days |
| High-fidelity prototype | Looks real, fake backend | Concept validation | Days-week |
| Functional prototype | Partial real functionality | Technical validation | Weeks |

Choosing Prototype Fidelity

Match Fidelity to Question

QUESTION: "Is this the right problem to solve?"
→ Paper sketches, conversation
→ Don't need any prototype

QUESTION: "Does this workflow make sense?"
→ Low-fi wireframes, paper prototype
→ Test the flow, not the polish

QUESTION: "Can users complete this task?"
→ Clickable prototype
→ Enough to navigate, but not distracting

QUESTION: "Is this desirable/valuable?"
→ High-fidelity prototype
→ Must look real to get honest reaction

QUESTION: "Is this technically feasible?"
→ Functional prototype / spike
→ Real code, limited scope

Paper Prototype Testing

When to Use

  • Very early exploration
  • Testing broad concepts
  • Quick iteration needed
  • Resource constraints

How to Run

SETUP:
1. Sketch key screens on paper/cards
2. Prepare a "starting screen"
3. Have blank cards ready for new screens

FACILITATION:
1. Explain: "This is an early idea, nothing is built yet"
2. Present starting screen
3. Ask: "What would you do to [accomplish task]?"
4. When they "click" — swap to next screen
5. Note confusion, hesitation, unexpected paths

DEBRIEF:
- What made sense?
- What was confusing?
- What's missing?

Concept Testing

Testing Ideas Before Execution

| Method | What to Test | Artifacts Needed |
|---|---|---|
| Fake door | Demand | Button/link that goes nowhere |
| Landing page | Value prop resonance | Single page, email capture |
| Video demo | Complex concept understanding | 60-90 second explainer |
| Wizard of Oz | End-to-end experience | Fake automation, real delivery |
| Concierge | Job-to-be-done | Manual service, learning |

Concept Testing Script

INTRO:
"I'm going to show you an early concept. It's not built yet —
we're trying to learn if it would be valuable."

FIRST IMPRESSION (before explaining):
"What do you think this is?"
"Who do you think it's for?"
"What do you expect it to do?"

WALKTHROUGH:
[Show concept]
"What stands out to you?"
"What questions do you have?"

VALUE ASSESSMENT:
"On a scale of 1-10, how valuable would this be to you?"
"What would make it more valuable?"
"What would you use instead if this didn't exist?"

PURCHASE INTENT:
"Would you pay for this?"
"What would you expect it to cost?"
"What would stop you from using this?"

Clickable Prototype Testing

Building the Prototype

Tools: Figma, Sketch, InVision, Framer

Include:
- Happy path (main flow)
- Key decision points
- Error states (if testing robustness)
- Realistic content (not "Lorem ipsum")

Exclude:
- Every edge case
- Animations (unless testing animation)
- Complete design system

Running the Test

TASK-BASED TESTING:

1. Set context (but not instructions)
   "Imagine you're [scenario]. You want to [goal]."

2. Ask them to think aloud
   "Please tell me what you're thinking as you go."

3. Observe without helping
   Note: hesitations, wrong turns, questions

4. Ask follow-up questions
   "What did you expect to happen there?"
   "What would you do next?"

5. Debrief
   "How would you describe this to a colleague?"
   "What was confusing?"
   "What's missing?"

Prototype Testing Metrics

Measure During Testing

| Metric | How to Capture | What It Tells You |
|---|---|---|
| Task success | Did they complete it? | Flow works/doesn't |
| Time on task | Stopwatch | Efficiency |
| Error rate | Count wrong paths | Clarity |
| Hesitations | Note pauses | Confusion points |
| Help requests | Count "how do I..." | Missing affordances |

Measure After Testing

| Metric | How to Ask | What It Tells You |
|---|---|---|
| Ease score | "How easy? 1-7" | Perceived complexity |
| Value score | "How valuable? 1-10" | Desirability |
| Likelihood to use | "How likely? 1-10" | Intent |
| Confidence | "How confident were you?" | Clarity |
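A sketch of how these during- and after-test measures might be captured and aggregated per task; the Session fields and sample values are hypothetical:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    """One participant's results for a single prototype task."""
    completed: bool      # task success
    seconds: float       # time on task
    errors: int          # wrong paths taken
    value_score: int     # post-test "How valuable? 1-10"

# Hypothetical results from five participants
sessions = [
    Session(True, 95, 1, 8), Session(True, 140, 0, 9),
    Session(False, 300, 4, 4), Session(True, 110, 2, 7),
    Session(True, 80, 0, 8),
]

print(f"success rate: {mean(s.completed for s in sessions):.0%}")
print(f"avg time:     {mean(s.seconds for s in sessions):.0f}s")
print(f"avg errors:   {mean(s.errors for s in sessions):.1f}")
print(f"avg value:    {mean(s.value_score for s in sessions):.1f}")
```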

A/B Prototype Testing

Testing Multiple Concepts

Approach 1: Within-subjects
- Show all concepts to each participant
- Randomize order
- Compare directly
- Good for: Understanding preferences

Approach 2: Between-subjects
- Show one concept per participant
- Different people see different versions
- Compare across groups
- Good for: Unbiased reactions

Comparison Framework

For each concept, rate:
- Clarity (do they understand it?)
- Value (do they want it?)
- Usability (can they use it?)
- Preference (which do they prefer?)

Then ask:
- "Which would you choose? Why?"
- "What would make [less preferred] better?"

Common Prototype Testing Mistakes

Mistake 1: Testing Too Early

Problem: Concept is half-baked, feedback is noise
Fix: Know your key assumptions before testing

Mistake 2: Testing Too Late

Problem: Already committed, testing is theater
Fix: Test when you can still change direction

Mistake 3: Testing with Wrong People

Problem: Friends/colleagues aren't your users
Fix: Recruit actual target users

Mistake 4: Explaining Too Much

Problem: "Here's how it works..." removes realistic context
Fix: Set context, then observe

Mistake 5: Asking Leading Questions

Problem: "Don't you think this is easy?"
Fix: "How would you describe the difficulty?"

Interpreting Results

Strong Positive Signals

+ Completed task without help
+ "When can I use this?"
+ Offered to pay / be design partner
+ Compared favorably to current solution
+ 8+ on value/likelihood scales

Weak/Negative Signals

- Needed explanation to proceed
- "This is interesting" (polite disinterest)
- "I could see using this" (hypothetical)
- Compared unfavorably to competitors
- < 6 on value/likelihood scales

What to Do With Results

| Finding | Action |
|---|---|
| Clear usability issues | Fix before building |
| Value unclear | Revisit positioning/messaging |
| Flow works, details wrong | Iterate design |
| Concept rejected | Revisit problem validation |
| Mixed signals | More testing needed |

Rapid Iteration Cycles

Test-Learn-Iterate Loop

Day 1: Build prototype
Day 2: Test with 3 users
Day 3: Synthesize, identify top 3 issues
Day 4: Update prototype
Day 5: Test with 3 more users
...repeat until diminishing returns

Iteration Decision Tree

Did users complete the task?
├── Yes → Did they do it easily?
│         ├── Yes → Is the value clear?
│         │         ├── Yes → Ship it
│         │         └── No → Fix messaging
│         └── No → Fix usability issues
└── No → Is the concept right?
          ├── Yes → Fix critical blockers
          └── No → Revisit problem/concept

Anti-Patterns

  • Pixel-perfect too early — High-fi when you need to test concept, not polish
  • Testing in a vacuum — No task context, just "what do you think?"
  • Helping too much — Explaining when they struggle (they won't have you in real life)
  • Sample of one — Major decisions from single test
  • Ignoring negative signals — "They just didn't understand" (maybe it's unclear)
  • Feature validation theater — Testing to confirm, not to learn
  • Prototype-production gap — Prototype tests well, production doesn't match

---
title: Usability Testing
impact: HIGH
tags: testing, usability, ux, validation
---

Usability Testing

Impact: HIGH

Usability testing reveals whether real users can accomplish real tasks with your product. It's the fastest way to find friction before it costs you customers.

Usability Testing Fundamentals

What Usability Testing Measures

| Dimension | Question | How to Measure |
|---|---|---|
| Effectiveness | Can they complete the task? | Success rate |
| Efficiency | How long does it take? | Time on task |
| Learnability | How easy to learn? | First vs repeat performance |
| Error tolerance | Can they recover from mistakes? | Error rate, recovery rate |
| Satisfaction | How do they feel about it? | Post-task ratings |

When to Run Usability Tests

Appropriate for Usability Testing

- Evaluating existing product flows
- Testing redesigned features
- Comparing design alternatives
- Finding friction in onboarding
- Validating information architecture
- Testing with specific user segments

Not Appropriate for Usability Testing

- Validating if problem is worth solving (use problem interviews)
- Predicting market adoption (use concept testing)
- Understanding user needs (use discovery interviews)
- Measuring actual usage (use analytics)

Test Types

| Type | Participants | Best For | Trade-offs |
|---|---|---|---|
| Moderated remote | 1-on-1, video call | Deep insights, follow-up | Time-intensive |
| Unmoderated remote | Solo, recorded | Scale, convenience | Less depth |
| In-person | Face-to-face | Complex tasks, observation | Logistics |
| Guerrilla | Intercept in public | Quick feedback | Less control |

Planning the Study

Define Test Scope

1. What are we testing?
   [Specific feature/flow]

2. What questions do we need answered?
   - Can users find X?
   - Do users understand Y?
   - Where do users get stuck?

3. Who are we testing with?
   [Target user criteria]

4. What's the success criteria?
   - X% complete task successfully
   - Average time < Y minutes
   - SUS score > Z

Sample Size Guidelines

| Goal | Sample Size | Why |
|---|---|---|
| Find major issues | 5 users | Catches ~85% of problems |
| Find most issues | 8-10 users | Catches ~95% of problems |
| Compare designs | 10-15 per design | Statistical confidence |
| Segment comparison | 5+ per segment | Meaningful differences |
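The coverage estimates above line up with the Nielsen-Landauer model, which assumes each user independently surfaces a given problem with probability around 0.31; a sketch:

```python
def share_of_problems_found(n_users: int, p_detect: float = 0.31) -> float:
    """Nielsen-Landauer estimate: 1 - (1 - p)^n, where p is the
    probability a single user surfaces a given problem
    (0.31 is Nielsen's published average across studies)."""
    return 1 - (1 - p_detect) ** n_users

for n in (1, 3, 5, 8, 10):
    print(n, f"{share_of_problems_found(n):.0%}")  # 5 -> ~84%, 8 -> ~95%
```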

Writing Tasks

Good Task Characteristics

- Scenario-based (realistic context)
- Goal-focused (what, not how)
- No hints (don't give away the answer)
- Specific enough to measure
- Important to actual usage

Good vs Bad Tasks

| Bad Task | Why It's Bad | Good Task |
|---|---|---|
| "Click on Settings" | Gives away answer | "Change your notification preferences" |
| "Create a report" | Too vague | "Create a report showing last month's sales by region" |
| "Test the dashboard" | Not actionable | "Find out which product sold the most last week" |
| "Use the search feature" | Not goal-based | "Find the order you placed last Tuesday" |

Task Template

SCENARIO:
[Context that makes the task realistic]

TASK:
[Clear goal without instructions]

SUCCESS CRITERIA:
[How you'll know if they succeeded]

Example:
SCENARIO: You just signed up for the service and want to
          add your team members so they can start using it.

TASK: Add a colleague named "Alex Smith" with the email
      [email protected] to your team.

SUCCESS CRITERIA: User lands on team page with new member visible.

Facilitating the Session

Session Structure (60 min)

0:00 - Welcome and setup (5 min)
       - Thank them, explain purpose
       - Get consent for recording
       - Explain think-aloud protocol

0:05 - Background questions (5 min)
       - Relevant experience
       - Current tools/processes
       - Familiarity with product

0:10 - Tasks (40 min)
       - 3-5 tasks
       - Think aloud throughout
       - Post-task questions after each

0:50 - Wrap-up (10 min)
       - Overall impressions
       - SUS or satisfaction scale
       - Any questions from them

Facilitator Do's and Don'ts

| Do | Don't |
|---|---|
| Stay neutral | React positively/negatively |
| Let them struggle (briefly) | Jump in to help |
| Ask "what are you thinking?" | Ask "why did you do that?" (judgmental) |
| Note what you observe | Only note what they say |
| Follow up on interesting moments | Stick rigidly to script |
| Thank them for surfacing confusion | Apologize for issues |

Useful Probes

When they pause:
- "What are you thinking?"
- "What do you expect to happen?"

When they click something:
- "What made you choose that?"
- "What do you expect to see?"

When they seem stuck:
- "What would you do in real life?"
- "Is there anything you'd like to try?"

When they complete a task:
- "How did that go?"
- "How confident are you that it worked?"

Think-Aloud Protocol

Concurrent Think-Aloud

User narrates while doing.
+ Real-time insights
- Can affect behavior
- Some users struggle

Retrospective Think-Aloud

User watches their recording and narrates.
+ More natural behavior during test
- Slower, requires replay
- Some details forgotten

Coaching Think-Aloud

INTRODUCTION:
"As you go through the tasks, please tell me what you're
thinking out loud — what you're looking at, what you're
trying to do, what you expect to happen, any reactions you have.

Pretend you're alone working through this, but talking to yourself.
There are no wrong answers — we're testing the product, not you."

REMINDERS:
"What are you thinking right now?"
"Keep talking..."

Measuring Results

Quantitative Metrics

| Metric | How to Calculate | Target |
|---|---|---|
| Success rate | % who completed task | >80% |
| Time on task | Average completion time | Context-dependent |
| Error rate | Errors per task | <1 per task |
| Clicks to complete | Average click count | Benchmark against optimal |
| Task difficulty | Post-task rating 1-7 | >5 average |

System Usability Scale (SUS)

10 questions, 1-5 scale (Strongly Disagree → Strongly Agree)

1. I would like to use this frequently
2. This is unnecessarily complex
3. This is easy to use
4. I would need support to use this
5. Functions are well integrated
6. Too much inconsistency
7. Most people would learn quickly
8. Very cumbersome to use
9. I felt confident using this
10. Needed to learn a lot before starting

Scoring:
- Odd questions: score - 1
- Even questions: 5 - score
- Sum × 2.5 = SUS score (0-100)

Interpretation:
- >80: Excellent
- 68-80: Good
- 50-67: Needs improvement
- <50: Poor
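A minimal scorer implementing the formula above; the sample answers are hypothetical:

```python
def sus_score(responses: list[int]) -> float:
    """Score one participant's 10 SUS answers (each 1-5).

    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score).
    """
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 2, 4, 1]))  # hypothetical answers -> 90.0
```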

Analyzing Findings

Severity Rating Scale

| Severity | Description | Action |
|---|---|---|
| Critical (4) | Prevents task completion | Must fix before launch |
| Major (3) | Causes significant delay/confusion | Fix soon |
| Minor (2) | Causes slight delay | Fix when possible |
| Cosmetic (1) | Noticed but doesn't affect task | Nice to fix |

Findings Template

FINDING: [Brief description]

SEVERITY: [1-4]

EVIDENCE:
- [X] of [Y] participants experienced this
- Task: [Which task]
- Behavior: [What happened]
- Quotes: "[What they said]"

RECOMMENDATION:
[Specific fix or direction to explore]

Issue Prioritization Matrix

                    HIGH FREQUENCY
                          │
        CRITICAL FIXES    │    TOP PRIORITY
        (Many people,     │    (Many people,
         can work around) │     can't complete)
                          │
 LOW IMPACT ──────────────┼────────────── HIGH IMPACT
                          │
        LOW PRIORITY      │    FIX IF EASY
        (Few affected,    │    (Few people,
         minor impact)    │     big impact)
                          │
                    LOW FREQUENCY
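A small helper that places a finding in the matrix; the 50% frequency cutoff and the severity-3-or-above impact cutoff are assumptions for illustration, not part of the framework above:

```python
def quadrant(affected: int, total: int, severity: int) -> str:
    """Place a finding in the frequency/impact matrix above.

    Assumed thresholds: 'high frequency' = half or more of
    participants affected; 'high impact' = severity 3 (Major)
    or 4 (Critical) on the scale above.
    """
    high_freq = affected / total >= 0.5
    high_impact = severity >= 3
    if high_freq and high_impact:
        return "top priority"
    if high_freq:
        return "critical fixes"
    if high_impact:
        return "fix if easy"
    return "low priority"

print(quadrant(affected=4, total=5, severity=4))  # -> top priority
```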

Remote Unmoderated Testing

Tools: UserTesting, Maze, Lookback, UsabilityHub

When to Use:

  • Need more participants quickly
  • Geographic diversity
  • Limited budget for moderation
  • Quantitative metrics priority

Setup Checklist:

[ ] Clear task instructions
[ ] Working prototype link
[ ] Screener questions
[ ] Recording consent
[ ] Post-task questions
[ ] Completion criteria

Watch Out For:

  • Lower engagement than moderated
  • Can't ask follow-up questions
  • Technical issues with recording
  • Participants may rush

Reporting Results

Usability Report Structure

1. EXECUTIVE SUMMARY
   - Key findings (top 3-5)
   - Overall usability score
   - Recommended priorities

2. METHODOLOGY
   - Who we tested
   - What we tested
   - How we tested

3. FINDINGS BY TASK
   - Success rate
   - Time on task
   - Issues encountered
   - Severity ratings

4. TOP ISSUES
   - Issue description
   - Evidence
   - Severity
   - Recommendation

5. POSITIVE FINDINGS
   - What worked well
   - User quotes

6. RECOMMENDATIONS
   - Prioritized action items
   - Quick wins vs longer-term

Anti-Patterns

  • Testing too late — Product is built, can't change it
  • Testing with colleagues — They're not your users
  • Too many tasks — Participant fatigue, rushed responses
  • Leading participants — "You'd click here, right?"
  • Helping too much — Real users won't have you there
  • Ignoring positive signals — Only focusing on problems
  • Death by metrics — Measuring everything, learning nothing
  • Testing once — Usability is iterative, not one-time