conducting-user-interviews
About
This skill helps developers plan, execute, and analyze user interviews to gather actionable insights. It generates a complete User Interview Pack, including a recruiting plan, discussion guide, and synthesis report. Use it for discovery, concept testing, or understanding user churn and behavior.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/majiayu000/claude-skill-registry
Manual: git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/conducting-user-interviews
Copy and paste the appropriate command into Claude Code to install this skill.
Documentation
Conducting User Interviews
Scope
Covers
- Planning an interview study that supports a specific product decision
- Recruiting the right participants (including early adopters when appropriate)
- Running interviews that capture specific stories and behaviors (not opinions)
- Synthesizing interviews into actionable insights, opportunities, and next steps
- Creating a lightweight “customer panel” habit for fast follow-ups
When to use
- “Create a discussion guide for discovery interviews.”
- “Recruit and run 8 user interviews about onboarding / activation.”
- “We need to understand why users switched (or churned) — run switch interviews.”
- “Help me synthesize interviews into insights + opportunities.”
- “I’m a PM and need to run customer conversations next week.”
When NOT to use
- You primarily need quantitative evidence (survey/experiment/analytics) or statistical confidence
- You’re doing usability testing with task-based evaluation as the main output (different protocol)
- You’re working with high-risk populations or sensitive topics (medical, legal, minors) without appropriate approvals/training
- You have no decision to support (you’ll produce anecdotes without impact)
Inputs
Minimum required
- Product + target user/customer segment (who, context of use)
- The decision the interviews should inform (e.g., positioning, onboarding redesign, roadmap bet)
- Interview type: discovery / JTBD switch / churn / concept test (or “recommend”)
- Target participants (role, behaviors, situation, recency) + “who NOT to interview”
- Constraints: number of interviews, time box, language/region, recording allowed, incentives (if any)
Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md.
- If answers aren’t available, proceed with explicit assumptions and label unknowns.
Outputs (deliverables)
Produce a User Interview Pack in Markdown (in-chat; or as files if requested):
- Context snapshot (goal, decision, hypotheses, constraints)
- Recruiting plan (channels, outreach copy, scheduling logistics) + screener
- Interview guide (script + question bank + probes) + consent/recording plan
- Note-taking template + tagging scheme
- Synthesis report (themes, evidence, opportunities, recommendations, confidence)
- Follow-up plan (thank-you, keep-in-touch, “customer panel” list/cadence)
- Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
Workflow (8 steps)
1) Frame the decision and choose interview type
- Inputs: Context + references/INTAKE.md.
- Actions: Define the decision, what you need to learn (unknowns), and pick the interview type (discovery vs switch vs churn vs concept).
- Outputs: Context snapshot + study intent.
- Checks: You can answer: “What will we do differently after these interviews?”
2) Define participant criteria (who/when/why) and sampling plan
- Inputs: Target segment, product context, constraints.
- Actions: Specify inclusion/exclusion criteria; prioritize recency (recent switch/churn/attempt) when relevant; decide the sample mix (e.g., 6 core + 2 edge cases). A criteria sketch follows this step.
- Outputs: Participant profile + sampling plan.
- Checks: Criteria are behavior/situation-based (not demographic proxies).
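Behavior-based criteria are easier to audit when they are written down as explicit rules. A minimal sketch of criteria-as-code, assuming a hypothetical screener record (the field names and the 90-day window are placeholders for your own study):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Candidate:
    recently_switched: bool  # behavior: actually switched/churned/attempted
    event_date: date         # when that behavior happened
    is_employee: bool        # example exclusion criterion

def is_eligible(c: Candidate, recency_days: int = 90) -> bool:
    """Include only candidates whose story is recent and behavior-based."""
    recent = (date.today() - c.event_date) <= timedelta(days=recency_days)
    return c.recently_switched and recent and not c.is_employee

# Example: a candidate who switched 30 days ago and is not an employee.
c = Candidate(recently_switched=True,
              event_date=date.today() - timedelta(days=30),
              is_employee=False)
assert is_eligible(c)
```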
3) Create recruiting plan + screener + outreach copy
- Inputs: Participant profile; available channels (CRM, support, community, ads, LinkedIn).
- Actions: Draft outreach messages, a screener, and scheduling logistics. Expect high drop-off and plan outreach volume accordingly (a funnel sketch follows this step).
- Outputs: Recruiting plan + screener + outreach copy.
- Checks: The screener selects for the story you need (recency, context, alternatives), not for “interest in our product.”
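Drop-off compounds at every stage of the funnel, so it pays to do the arithmetic up front. A back-of-envelope sketch with purely illustrative conversion rates (substitute data from your own channels):

```python
# Recruiting funnel: outreach -> screener -> qualified -> showed up.
target_interviews = 8
response_rate = 0.15   # outreach -> completed screener (assumption)
qualify_rate = 0.40    # screener -> meets criteria (assumption)
show_rate = 0.70       # scheduled -> actually shows up (assumption)

needed_outreach = target_interviews / (response_rate * qualify_rate * show_rate)
print(f"Contact roughly {needed_outreach:.0f} people")  # ~190 for 8 interviews
```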
4) Draft the interview guide (story-first)
- Inputs: Interview type + hypotheses/unknowns.
- Actions: Build a guide that elicits specific stories (“last time…”) and avoids leading questions. Include probes, pivots, and time boxes. Add consent/recording script.
- Outputs: Interview guide + consent/recording plan.
- Checks: At least 70% of questions ask about past behavior and concrete examples (a rough checker follows this step).
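The 70% check can be approximated mechanically. A rough heuristic sketch; the phrase lists below are illustrative, not a validated classifier:

```python
# Questions anchored on a concrete past episode vs. hypotheticals.
STORY_PHRASES = ("last time", "walk me through", "tell me about a time", "when did you")

def past_behavior_share(questions: list[str]) -> float:
    """Fraction of questions that anchor on a specific past story."""
    story = sum(any(p in q.lower() for p in STORY_PHRASES) for q in questions)
    return story / len(questions) if questions else 0.0

guide = [
    "Walk me through the last time you set up a new tool.",
    "Would you pay for this feature?",
]
assert past_behavior_share(guide) == 0.5  # below the 70% bar: revise the guide
```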
5) Run interviews + capture clean notes (PM/Design present)
- Inputs: Guide, logistics, notes template.
- Actions: Run the session, follow the story, and capture verbatims. If possible, have PM + design observe live (or listen to recordings) to avoid secondhand dilution.
- Outputs: Completed notes per interview + key quotes + immediate highlights.
- Checks: Each interview yields 2–5 “story moments” (trigger → struggle → workaround → outcome).
6) Debrief immediately and normalize evidence
- Inputs: Interview notes, recordings/transcripts (if any).
- Actions: Do a 10–15 min debrief right after each interview: surprises, hypothesis updates, follow-ups. Tag notes consistently (a tagged-note sketch follows this step).
- Outputs: Debrief bullets + tagged notes.
- Checks: Unclear claims are marked as “needs follow-up” instead of treated as facts.
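Consistent tagging is easier to enforce when the vocabulary is explicit rather than implied. One possible tagged-note shape, with a placeholder tag set your team would replace with its own:

```python
# Shared tag vocabulary (placeholder names; agree on these before interviewing).
TAGS = {"trigger", "struggle", "workaround", "outcome", "quote", "needs-follow-up"}

note = {
    "interview_id": "p03",
    "tag": "struggle",
    "text": "Spent two days reconciling the export by hand.",
    "verbatim": True,  # flag exact quotes vs. paraphrase
}
assert note["tag"] in TAGS  # enforce the vocabulary across note-takers
```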
7) Synthesize across interviews into themes and opportunities
- Inputs: Tagged notes across interviews.
- Actions: Cluster by outcomes/struggles; capture contradictions; quantify lightly with counts, without over-claiming (a counting sketch follows this step). Translate insights into opportunities and recommendations with confidence levels.
- Outputs: Synthesis report + opportunity list.
- Checks: Every major insight has at least 2 supporting interviews (or is labeled “single anecdote”).
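The “at least 2 supporting interviews” rule is mechanical enough to script. A light-touch sketch, assuming notes tagged as in the step-6 example (themes here are made up):

```python
from collections import defaultdict

notes = [
    {"interview_id": "p01", "tag": "struggle", "theme": "manual-reconciliation"},
    {"interview_id": "p02", "tag": "struggle", "theme": "manual-reconciliation"},
    {"interview_id": "p03", "tag": "trigger",  "theme": "new-compliance-rule"},
]

# Count distinct interviews supporting each theme, then flag single anecdotes.
support: dict[str, set[str]] = defaultdict(set)
for n in notes:
    support[n["theme"]].add(n["interview_id"])

for theme, ids in support.items():
    label = "insight" if len(ids) >= 2 else "single anecdote"
    print(f"{theme}: {len(ids)} interview(s) -> {label}")
```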
8) Share, decide, follow up, and run the quality gate
- Inputs: Draft pack.
- Actions: Produce a shareable readout, propose next steps, and create a lightweight customer panel habit (5–10 engaged users). Run references/CHECKLISTS.md and score with references/RUBRIC.md.
- Outputs: Final User Interview Pack + Risks/Open questions/Next steps.
- Checks: Stakeholders can restate (a) key learning, (b) decision implication, (c) what happens next.
Quality gate (required)
- Use references/CHECKLISTS.md and references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.
Examples
Example 1 (Discovery): “I’m redesigning onboarding for a B2B product. Create a recruiting plan + discussion guide for 8 discovery interviews with new trial users.”
Expected: participant criteria, outreach + screener, discovery guide, notes template, synthesis plan, and a ready-to-run pack.
Example 2 (Switch/JTBD): “We lose deals to spreadsheets. Run switch interviews to learn what triggers teams to move off spreadsheets and what they try instead.”
Expected: switch interview guide (timeline + forces), recruiting criteria emphasizing recency, and a synthesis structure that outputs ‘push/pull/anxieties/habits’ (sketched below).
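The four-forces structure named above maps naturally onto a simple record. A sketch, with field names following the push/pull/anxieties/habits framing (the example entries are invented):

```python
from dataclasses import dataclass, field

@dataclass
class SwitchForces:
    push: list[str] = field(default_factory=list)       # pains with the old way
    pull: list[str] = field(default_factory=list)       # attraction of the new way
    anxieties: list[str] = field(default_factory=list)  # fears about switching
    habits: list[str] = field(default_factory=list)     # inertia holding them back

p03 = SwitchForces(
    push=["monthly spreadsheet broke at 40 users"],
    anxieties=["migration might lose historical data"],
)
```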
Boundary example: “Ask users what features they want and build whatever they say.”
Response: redirect to story-based interviewing; clarify decision context; avoid feature-request interviews without behavioral grounding.
GitHub Repository: https://github.com/majiayu000/claude-skill-registry
Related Skills
content-collections
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
Algorithmic Art Generation
This skill helps developers create algorithmic art using p5.js, focusing on generative art, computational aesthetics, and interactive visualizations. It automatically activates for topics like "generative art" or "p5.js visualization" and guides you through creating unique algorithms with features like seeded randomness, flow fields, and particle systems. Use it when you need to build reproducible, code-driven artistic patterns.
