AI Skill · Designing Surveys · Product & Engineering

When you need real signal from a survey, /designing-surveys writes tight questions and picks the right metric, so you ship decisions, not averages. — Claude Skill

A Claude Skill for Claude Code by Refound — run /designing-surveys in Claude

Compatible with ChatGPT · Claude · Gemini · OpenClaw

Write survey questions that force prioritization and give actionable signal

  • CSAT over NPS by default (Judd Antin): better data properties, tighter business correlation, 5-7 item scales
  • Single-variable questions only: catches 'double-barreled' questions Nicole Forsgren flags
  • MaxDiff (Most/Least) for feature prioritization per Madhavan Ramanujam, forces trade-offs instead of 'all 5 stars'
  • Best-customer targeting (Gia Laudi): 3-6 month cohort so the memory of the 'before' state is still fresh
  • Sean Ellis 'very disappointed' PMF survey with 40% threshold built in

What it does

Quarterly satisfaction survey without the NPS trap

You've been running NPS for a year and the number barely moves. /designing-surveys replaces it with a Judd Antin-style CSAT (5-7 scale, mobile-visible) plus open-text 'biggest barrier' follow-up, keyed to a decision your team can actually act on.

Feature prioritization survey for the next quarter

You have 15 candidate features and 200 power users. /designing-surveys builds a Madhavan Ramanujam MaxDiff (Most/Least) survey instead of a rating scale, so you get a true willingness-to-pay ordering rather than 'all of them sound good'.
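The Most/Least responses can be reduced to a simple best-worst counting score: times picked most important, minus times picked least important, divided by times shown. A minimal Python sketch; the feature names and data shape are hypothetical, not part of the skill's output:

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """Best-worst counting: (picked most - picked least) / times shown, per feature."""
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for t in tasks:
        for f in t["shown"]:
            shown[f] += 1
        best[t["most"]] += 1
        worst[t["least"]] += 1
    return {f: (best[f] - worst[f]) / shown[f] for f in shown}

# One respondent, three choice tasks over subsets of the candidate features
tasks = [
    {"shown": ["sso", "api", "audit_log"], "most": "sso", "least": "audit_log"},
    {"shown": ["api", "exports", "sso"], "most": "api", "least": "exports"},
    {"shown": ["audit_log", "exports", "api"], "most": "api", "least": "audit_log"},
]
ranking = sorted(maxdiff_scores(tasks).items(), key=lambda kv: -kv[1])
# → [("api", 0.667), ("sso", 0.5), ("exports", -0.5), ("audit_log", -1.0)]
```

Counting scores are the simplest way to read a MaxDiff; larger studies typically fit a choice model instead, but the relative ordering is the point.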

PMF check for an early product

You have 120 users and need to know if you're close to PMF. /designing-surveys writes the Sean Ellis 'very disappointed' survey with best-customer targeting (3-6 month cohort, per Gia Laudi) so you can segment the 40% signal properly.

Onboarding profiling without killing conversion

You want to ask 4 qualifying questions in signup but fear a drop-off. /designing-surveys drafts a Laura Schaffer-style 'good friction' block: short, relevance-building, tested to lift conversion rather than hurt it.

How it works

  1. Tell the skill what decision the survey should inform
  2. Pick the right instrument: CSAT, MaxDiff, Sean Ellis PMF, or onboarding profiler
  3. Draft questions, checked against Nicole Forsgren's 'one variable per question' rule
  4. Define the right audience (best customers, 3-6 month cohort, or specific segment)
  5. Get a survey with scales, routing logic, and a follow-up analysis plan

Example

Your context
120 users, B2B SaaS, 6 months in market
Goal: do we have PMF in the startup segment?
Constraint: 3 minutes, mobile-friendly
Avoid: NPS (CEO loves it but we've seen no signal)
20 minutes later
Instrument: Sean Ellis PMF + CSAT
Q1 (Sean Ellis): How would you feel if you could no longer use [product]?
  - Very disappointed / Somewhat disappointed / Not disappointed / I no longer use it
Q2 (CSAT):        How satisfied are you overall? (1-7 scale, all options visible)
Q3 (open):        What's the main benefit you get?
Audience (Gia Laudi filter)
Best customers only:
  - Signed up 3-6 months ago
  - Active in last 14 days
  - Paying plan OR weekly active free
Exclude: first-week users, churned, internal test accounts
Estimated n: ~60 of 120
Analysis Plan
→ PMF threshold: 40% 'very disappointed' (Sean Ellis)
→ Segment by: plan, team size, primary use case
→ Follow-up: interview every 'very disappointed' respondent
→ Ignore 'somewhat disappointed' — don't build for lukewarm
Watch-Out
Nicole Forsgren check: Q2 asks only about satisfaction, not speed+satisfaction.
Mobile check: 1-7 scale fits in portrait view, no horizontal scroll.
Don't add a 5th question: past 4, completion drops.
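The analysis plan above fits in a few lines of Python. The response encoding is hypothetical, and excluding "I no longer use it" answers from the base is one common convention, not something the Sean Ellis method mandates:

```python
def pmf_score(responses):
    """Share of answered responses that are 'very disappointed' (Sean Ellis score).

    Respondents who answered 'I no longer use it' are excluded from the base
    (an assumption; base conventions vary).
    """
    answered = [r for r in responses if r["q1"] != "no_longer_use"]
    very = sum(1 for r in answered if r["q1"] == "very")
    return very / len(answered) if answered else 0.0

responses = [
    {"q1": "very", "plan": "startup"},
    {"q1": "very", "plan": "startup"},
    {"q1": "somewhat", "plan": "startup"},
    {"q1": "not", "plan": "team"},
    {"q1": "no_longer_use", "plan": "team"},
]

overall = pmf_score(responses)  # 2 of 4 answered → 0.5
startup = pmf_score([r for r in responses if r["plan"] == "startup"])  # 2 of 3
has_pmf_in_startup = startup >= 0.40  # the 40% threshold from the plan
```

Segmenting before scoring (here by `plan`) is what makes the 40% threshold answerable for "the startup segment" rather than the whole user base.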

Metrics this improves

  • Form Completion Rate: well-designed questions lift survey completion rates
  • ICP Clarity: segmented survey data sharpens ICP definition
  • Data Quality: unbiased question design produces cleaner, decision-grade data

Designing Surveys

Help the user design effective surveys using frameworks from 9 product leaders who have built rigorous research and feedback systems.

How to Help

When the user asks for help with surveys:

  1. Clarify the goal - Determine if they're measuring satisfaction, identifying problems, or prioritizing features
  2. Choose the right metric - Help them select between NPS, CSAT, PMF survey, or custom approaches
  3. Design clean questions - Ensure each question measures one thing precisely
  4. Target the right respondents - Help them reach users with fresh, relevant experience

Core Principles

NPS is scientifically flawed

Judd Antin: "NPS is the best example of the marketing industry marketing itself. The consensus in the survey science community is that NPS makes all the mistakes. Customer satisfaction, a simple CSAT metric, is better. It has better data properties, it is more precise, it is more correlated to business outcomes." Use CSAT with 5-7 item scales instead.
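To make the CSAT recommendation concrete, here is a minimal sketch of top-2-box reporting on a 1-7 scale (share of respondents picking one of the top two scale points). Top-2-box is one common way to summarize CSAT, not the only one; mean score is another:

```python
def csat_top2(ratings, scale_max=7):
    """Top-2-box CSAT: share of respondents choosing the top two scale points."""
    if not ratings:
        return 0.0
    return sum(1 for r in ratings if r >= scale_max - 1) / len(ratings)

csat = csat_top2([7, 6, 5, 7, 3, 6, 4, 7])  # 5 of 8 rated 6 or 7 → 0.625
```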

Force prioritization with constraints

Nicole Forsgren: "Let them pick three, just three. Of those three, how often does this affect you? Is this hourly? Is this daily? Is this weekly?" Limit respondents to their top barriers to keep data clean, then measure frequency to weight impact.
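The pick-three-then-frequency pattern aggregates naturally as a weighted count. A sketch under assumed frequency weights (rough occurrences per quarter; the weights and barrier names are illustrative, not from the source):

```python
# Hypothetical weights: approximate occurrences per quarter
FREQ_WEIGHT = {"hourly": 480, "daily": 60, "weekly": 12, "quarterly": 1}

def barrier_impact(responses):
    """Sum each barrier's picks, weighted by how often it bites, then rank."""
    impact = {}
    for r in responses:
        for barrier, freq in r["top3"]:
            impact[barrier] = impact.get(barrier, 0) + FREQ_WEIGHT[freq]
    return sorted(impact.items(), key=lambda kv: -kv[1])

responses = [
    {"top3": [("flaky_tests", "daily"), ("slow_builds", "hourly"),
              ("env_setup", "weekly")]},
    {"top3": [("slow_builds", "daily"), ("code_review_lag", "daily"),
              ("flaky_tests", "weekly")]},
]
ranked = barrier_impact(responses)
# slow_builds (540) outranks flaky_tests (72) despite equal mention counts
```

The weighting is why frequency matters: two barriers picked equally often can differ by an order of magnitude in felt impact.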

Survey your best customers at the right time

Gia Laudi: "Very importantly, they signed up for your product recently enough that they remember what life was like before. Generally, we say that's in the three to six-month range." Target customers who have been using the product 3-6 months so their memory of the 'before' state is fresh.
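The 3-6 month best-customer cohort is easy to express as a predicate. A Python sketch; the field names and the exact activity and payment criteria are illustrative assumptions layered on Laudi's tenure rule:

```python
from datetime import date

def is_best_customer(user, today):
    """Gia Laudi-style filter: getting value today, with a fresh memory of 'before'."""
    tenure_days = (today - user["signup"]).days
    recently_active = (today - user["last_active"]).days <= 14
    return (90 <= tenure_days <= 180          # signed up 3-6 months ago
            and recently_active               # active in the last two weeks
            and (user["paying"] or user["weekly_active"]))

today = date(2024, 6, 1)
user = {"signup": date(2024, 2, 1), "last_active": date(2024, 5, 25),
        "paying": True, "weekly_active": False}
is_best_customer(user, today)  # tenure 121 days, active 7 days ago → True
```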

Onboarding surveys improve conversion

Laura Schaffer: "We just asked for forgiveness and put these questions into the signup flow. It improved conversion by like 5%, just improved signups." Adding 'good friction' in the form of targeted questions can increase conversion by reassuring users they're in the right place.

Avoid double-barreled questions

Nicole Forsgren: "You're asking four different questions there. If someone answers yes, was it the build? Was it the test? Was it slow or was it flaky?" Ensure each survey question only asks about one specific variable.

Use MaxDiff for feature prioritization

Madhavan Ramanujam: "Identify the most important for you, and the least important. If you do this a few times, you will be able to prioritize the entire feature set in a relative fashion." MaxDiff (Most/Least) surveys are superior to simple ranking for identifying value drivers.

Questions to Help Users

  • "What specific decision will this survey inform?"
  • "Are you asking about one thing per question, or multiple things?"
  • "Who are your 'best' customers and when did they sign up?"
  • "Are all scale options visible on mobile without scrolling?"
  • "How will you force respondents to prioritize rather than rate everything high?"

Common Mistakes to Flag

  • Double-barreled questions - Asking about speed AND complexity in one question
  • Too many options - Allowing respondents to select unlimited items instead of forcing prioritization
  • Wrong timing - Surveying customers who are too new (no experience) or too old (forgot the 'before')
  • NPS worship - Relying on a metric with known scientific flaws over simpler, better alternatives
  • Hidden scale options - Mobile surveys where users can't see all options create response bias

Deep Dive

For all 10 insights from 9 guests, see references/guest-insights.md

Related Skills

  • Writing North Star Metrics
  • Defining Product Vision
  • Prioritizing Roadmap
  • Setting OKRs & Goals

Reference documents

Designing Surveys - All Guest Insights

9 guests, 10 mentions


Chris Hutchins

"run an ad for your podcast... what was my click-through rate on the ad? Which will tell you if someone doesn't click, it's either not a good description or it's not a good set of content... if they don't subscribe... my content probably sucks."

Insight: Paid advertising metrics can serve as a quantitative survey to validate product descriptions and content quality.

Tactical advice:

  • Use click-through rates (CTR) to test the appeal of your product's description or imagery
  • Use conversion rates to validate if the actual product meets the expectations set by the marketing

Timestamp: 01:01:57

Elena Verna 2.0

"Profile your people, know who you're talking to. It's more important than a couple percentage points of drop-off from users who were never going to activate in the first place. No, no, no, dear user, please, because all of us are tired of receiving outbound that is not relevant to us."

Insight: Onboarding surveys are essential for identifying the 'buyer' vs. the 'user' and preventing irrelevant sales outreach.

Tactical advice:

  • Ask about company size, department, seniority, and use case during sign-up
  • Limit onboarding profiling to 3-4 screens to maintain completion rates

Timestamp: 01:10:36

Gia Laudi

"We identified SparkToro's best customers. Now, what I mean by best customers is those that get a ton of value from your product as it exists today, pay, obviously. They're happy. They're low maintenance. And very importantly, they signed up for your product recently enough that they remember what life was like before. So generally, we say that's in the three to six-month range."

Insight: To get accurate data on the customer journey, survey 'best' customers who signed up 3-6 months ago so their memory of the 'before' state is fresh.

Tactical advice:

  • Target survey participants who have been customers for 3-6 months
  • Ask what was going on in their life when they first started seeking a solution
  • Identify the 'trigger moment' that led them to search for a product

Timestamp: 00:22:45

Judd Antin

"NPS is the best example of the marketing industry marketing itself... the consensus in the survey science community is that NPS makes all the mistakes... Customer satisfaction, a simple CSAT metric, is better. It has better data properties, it is more precise, it is more correlated to business outcomes."

Insight: NPS is often a flawed and imprecise metric; Customer Satisfaction (CSAT) is a more scientifically rigorous and business-correlated alternative.

Tactical advice:

  • Replace NPS with CSAT questions (e.g., 'Overall, how satisfied are you with your experience?').
  • Use 5 to 7 item scales for better precision.
  • Ensure all scale options are visible on mobile screens to avoid bias.

Timestamp: 01:03:50

Laura Schaffer

"We just asked for forgiveness and put these questions into the signup flow and ran an A/B test with a small group... I'm not kidding, it improved conversion. There's no personalization, nothing past it, just the questions. It improved conversion by like 5%, just improved signups."

Insight: Adding 'good friction' in the form of targeted questions can increase conversion by reassuring users they are in the right place.

Tactical advice:

  • Ask questions about the user's coding language or specific use case early in the flow
  • Use questions to alleviate the user's fear that the product won't support their needs
  • Test adding questions to the signup flow even if it seems counterintuitive to reducing friction

Timestamp: 00:22:50

Madhavan Ramanujam

"Identify the most important for you, and the least important... If you do this a few times, you will be able to prioritize the entire feature set in a relative fashion, and truly understand what drives willingness to pay."

Insight: MaxDiff (Most/Least) surveys are superior to simple ranking for identifying value drivers.

Tactical advice:

  • Present subsets of features and ask respondents to pick the 'most important' and 'least important'
  • Use purchase probability scales (1-5) to build demand curves, discounting '4s' and '5s' to reflect real-world behavior

Timestamp: 00:30:06

Naomi Ionita

"We would make a list of our features that we had and maybe new things we wanted to build and have people rank them as a must-have, nice to have, or not necessary that help us understand the relative prioritization. You can also get at it with a hundred point question where you give users a hundred points and say, 'Spend them across these different features.'"

Insight: Use forced-ranking or point-allocation surveys to identify which features actually drive conversion and value.

Tactical advice:

  • Use a '100-point question' to force users to prioritize feature value
  • Categorize features as 'must-have', 'nice-to-have', or 'not necessary'

Timestamp: 00:21:23
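The 100-point question aggregates by normalizing each respondent's allocation to shares and averaging. A minimal sketch; the feature names are hypothetical, and the per-respondent normalization is an assumption that guards against allocations not summing to exactly 100:

```python
def point_allocation_ranking(allocations):
    """Average each feature's share of a respondent's points, then rank."""
    totals = {}
    for alloc in allocations:
        spent = sum(alloc.values())  # normalize even if it isn't exactly 100
        for feature, pts in alloc.items():
            totals[feature] = totals.get(feature, 0.0) + pts / spent
    n = len(allocations)
    return sorted(((f, t / n) for f, t in totals.items()), key=lambda kv: -kv[1])

allocations = [
    {"integrations": 50, "reporting": 30, "mobile_app": 20},
    {"integrations": 60, "reporting": 10, "mobile_app": 30},
]
ranked = point_allocation_ranking(allocations)
# → integrations first, with an average share of 0.55
```

Because respondents cannot give everything a high score, the resulting ordering reflects trade-offs, which is the same property that makes MaxDiff useful.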

Nicole Forsgren 2.0

"You can just ask a few questions, 'How satisfied are you? What are the biggest barriers to your productivity, or what are the biggest challenges to getting work done?' and let them pick either from a set of tools or maybe a set of processes and then say... Let them pick three, just three. Of those three, how often does this affect you? Is this hourly? Is this daily? Is this weekly? Is this quarterly?"

Insight: Effective developer surveys should force prioritization and measure the frequency of friction to provide clear signals for improvement.

Tactical advice:

  • Limit respondents to picking their top three barriers to keep data clean.
  • Measure the frequency of issues (e.g., hourly vs. quarterly) to weight their impact.

Timestamp: 00:54:12


"A lot of folks go to write a survey question and they'll say something like, 'Were the build and test system slow or complicated in the last week?' You're asking four different questions there. If someone answers yes, was it the build? Was it the test? Was it slow or was it flaky or complicated or something?"

Insight: Avoid 'double-barreled' questions in surveys to ensure the data collected is specific and actionable.

Tactical advice:

  • Ensure each survey question only asks about one specific variable (e.g., separate 'speed' from 'complexity').
  • Review survey questions with an LLM or expert to identify and fix ambiguous phrasing.

Timestamp: 00:55:17

Nilan Peiris

"We ask customers, is the short answer. So we have an attribution model, as you can imagine, and we've had one from the early days, and it overlays all the referrer data and cookie data you have on visits that come to the website. So you kind of know that. And then you obviously have the survey stuff, and we sample and ask customers a set of questions on this, and then overlay that onto what turns up in your web tracking as direct traffic, to give us a sense of how big that word-of-mouth number is."

Insight: Combine in-product attribution surveys with digital tracking data to accurately measure the impact of word-of-mouth growth.

Tactical advice:

  • Integrate attribution questions directly into the user flow
  • Overlay survey responses with referrer and cookie data to validate direct traffic
  • Sample customers regularly to maintain a clear picture of acquisition sources

Timestamp: 00:06:28