user-research-analysis

aj-geddes
Design, data

About

This skill analyzes user research data to uncover insights and identify patterns from both qualitative and quantitative sources. It helps developers synthesize research findings into actionable recommendations that inform design decisions and prioritize user needs. Use it when processing user interviews or surveys, validating design assumptions, or communicating clear insights to stakeholders.

Documentation

User Research Analysis

Overview

Effective research analysis transforms raw data into actionable insights that guide product development and design.

When to Use

  • Synthesis of user interviews and surveys
  • Identifying patterns and themes
  • Validating design assumptions
  • Prioritizing user needs
  • Communicating insights to stakeholders
  • Informing design decisions

Instructions

1. Research Synthesis Methods

# Analyze qualitative and quantitative data

class ResearchAnalysis:
    def synthesize_interviews(self, interviews):
        """Extract themes and insights from interviews"""
        return {
            'interviews_analyzed': len(interviews),
            'methodology': 'Thematic coding and affinity mapping',
            'themes': self.identify_themes(interviews),
            'quotes': self.extract_key_quotes(interviews),
            'pain_points': self.identify_pain_points(interviews),
            'opportunities': self.identify_opportunities(interviews)
        }

    def identify_themes(self, interviews):
        """Find recurring patterns across interviews"""
        theme_frequency = {}

        for interview in interviews:
            for statement in interview['statements']:
                # categorize_statement is a project-specific helper (e.g. keyword
                # matching or manual coding) that maps a statement to a theme
                theme = self.categorize_statement(statement)
                theme_frequency[theme] = theme_frequency.get(theme, 0) + 1

        # Sort themes by frequency, most common first
        return sorted(theme_frequency.items(), key=lambda x: x[1], reverse=True)

    def analyze_survey_data(self, survey_responses):
        """Quantify and analyze survey results"""
        return {
            'response_rate': self.calculate_response_rate(survey_responses),
            'sentiment': self.analyze_sentiment(survey_responses),
            'key_findings': self.find_key_findings(survey_responses),
            'segment_analysis': self.segment_responses(survey_responses),
            'statistical_significance': self.calculate_significance(survey_responses)
        }

    def triangulate_findings(self, interviews, surveys, analytics):
        """Cross-check findings across sources"""
        return {
            'confirmed_insights': self.compare_sources([interviews, surveys, analytics]),
            'conflicting_data': self.identify_conflicts([interviews, surveys, analytics]),
            'confidence_level': self.assess_confidence(),
            'recommendations': self.generate_recommendations()
        }
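
The helpers referenced above (categorize_statement, extract_key_quotes, and so on) are left to each project. A minimal usage sketch, assuming a simple keyword-based categorizer and interview records shaped as {'statements': [...]}; the SimpleResearchAnalysis subclass and its keyword lists are illustrative, not part of the skill:

# Illustrative usage sketch: keyword matching stands in for real thematic coding

class SimpleResearchAnalysis(ResearchAnalysis):
    KEYWORDS = {
        'onboarding': ['tutorial', 'start', 'setup', 'onboarding'],
        'performance': ['slow', 'loading', 'lag'],
    }

    def categorize_statement(self, statement):
        # Assign the first theme whose keywords appear in the statement
        text = statement.lower()
        for theme, words in self.KEYWORDS.items():
            if any(word in text for word in words):
                return theme
        return 'uncategorized'

interviews = [
    {'statements': ["I didn't know where to start", "The app is slow"]},
    {'statements': ["Loading times are unacceptable"]},
]

print(SimpleResearchAnalysis().identify_themes(interviews))
# [('performance', 2), ('onboarding', 1)]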

2. Affinity Mapping

Affinity Mapping Process:

Step 1: Data Preparation
  - Print or write user quotes on cards (one per card)
  - Include source (interview name, survey #)
  - Include relevant demographic info

Step 2: Grouping
  - Place cards on wall or digital board
  - Group related insights together
  - Allow overlapping if relevant
  - Move cards as relationships become clear

Step 3: Theme Identification
  - Name each grouping with a theme
  - Move up one level of abstraction
  - Create meta-themes that group related clusters

Step 4: Synthesis
  - Describe each theme in 1-2 sentences
  - Capture key insight
  - Note supporting evidence

Example Output:

Theme: Discovery & Onboarding
  Sub-themes:
    - Learning curve too steep
    - Documentation unclear
    - Need guided onboarding
  Quote: "I didn't know where to start, wish there was a tutorial"
  Frequency: 8 of 12 users mentioned

Theme: Performance Issues
  Sub-themes:
    - App is slow
    - Loading times unacceptable
    - Mobile particularly bad
  Quote: "I just switched to competitor, too slow"
  Frequency: 6 of 12 users mentioned
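
Once cards have been tagged with a theme during the grouping step, rolling them up into a summary like the one above can be automated. A minimal sketch, assuming each card is a dict with 'quote', 'source', and a manually assigned 'theme':

# Roll tagged cards up into theme summaries (illustrative sketch;
# theme tags come from the manual grouping step above)
from collections import defaultdict

def summarize_themes(cards, total_participants):
    grouped = defaultdict(list)
    for card in cards:
        grouped[card['theme']].append(card)

    summaries = []
    for theme, theme_cards in grouped.items():
        participants = {card['source'] for card in theme_cards}
        summaries.append({
            'theme': theme,
            'representative_quote': theme_cards[0]['quote'],
            'frequency': f"{len(participants)} of {total_participants} users mentioned",
        })
    return summaries

cards = [
    {'quote': "I didn't know where to start", 'source': 'P1', 'theme': 'Discovery & Onboarding'},
    {'quote': "Wish there was a tutorial", 'source': 'P4', 'theme': 'Discovery & Onboarding'},
    {'quote': "I just switched to a competitor, too slow", 'source': 'P7', 'theme': 'Performance Issues'},
]
print(summarize_themes(cards, total_participants=12))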

3. Insight Documentation

// Document and communicate insights

class InsightDocumentation {
  createInsightStatement(insight) {
    return {
      title: insight.name,
      description: insight.detailed_description,
      evidence: {
        quotes: insight.supporting_quotes,
        frequency: `${insight.frequency_count} of ${insight.total_participants} participants`,
        data_sources: ['Interviews', 'Surveys', 'Analytics']
      },
      implications: {
        for_design: insight.design_implications,
        for_product: insight.product_implications,
        for_strategy: insight.strategy_implications
      },
      recommended_actions: [
        {
          action: 'Redesign onboarding flow',
          priority: 'High',
          owner: 'Design team',
          timeline: '2 sprints'
        }
      ],
      confidence: 'High (8/12 users mentioned, consistent pattern)'
    };
  }

  createResearchReport(research_data) {
    return {
      title: 'User Research Synthesis Report',
      executive_summary: 'Key findings in 2-3 sentences',
      methodology: 'How research was conducted',
      key_insights: [
        'Insight 1 with supporting evidence',
        'Insight 2 with supporting evidence',
        'Insight 3 with supporting evidence'
      ],
      personas_informed: ['Persona 1', 'Persona 2'],
      recommendations: ['Design recommendation 1', 'Product recommendation 2'],
      appendix: ['Raw data', 'Quotes', 'Demographic breakdown']
    };
  }

  presentInsights(insights) {
    return {
      format: 'Presentation + Report',
      audience: 'Product team, stakeholders',
      duration: '30 minutes',
      structure: [
        'Research overview (5 min)',
        'Key findings (15 min)',
        'Supporting evidence (5 min)',
        'Recommendations (5 min)'
      ],
      handout: 'One-page insight summary'
    };
  }
}

4. Research Validation Matrix

Validation Matrix:

Research Finding: "Onboarding is too complex"

Supporting Evidence:
  Source 1: Interviews
    - 8 of 12 users mentioned difficulty
    - Average time to first value: 45 min vs target 10 min
    - 3 users abandoned before completing setup

  Source 2: Analytics
    - Drop-off at step 3 of onboarding: 35%
    - Bounce rate on onboarding page: 28% vs site avg 12%

  Source 3: Support Tickets
    - 15% of support tickets about onboarding
    - Most common: "How do I get started?"

Confidence Level: HIGH (consistent across 3 sources)

Action: Prioritize onboarding redesign in next quarter
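
The confidence level above can be approximated by counting how many independent sources support a finding. A minimal sketch; the three-source threshold for HIGH is an assumption used to illustrate the idea, not a fixed rule:

# Rough confidence rating from cross-source agreement (illustrative only)
def confidence_level(finding, sources):
    supporting = [s['name'] for s in sources if s['supports_finding']]
    if len(supporting) >= 3:
        level = 'HIGH'
    elif len(supporting) == 2:
        level = 'MEDIUM'
    else:
        level = 'LOW'
    return {'finding': finding, 'supporting_sources': supporting, 'confidence': level}

sources = [
    {'name': 'Interviews', 'supports_finding': True},
    {'name': 'Analytics', 'supports_finding': True},
    {'name': 'Support Tickets', 'supports_finding': True},
]
print(confidence_level("Onboarding is too complex", sources))
# {'finding': 'Onboarding is too complex', 'supporting_sources': ['Interviews', 'Analytics', 'Support Tickets'], 'confidence': 'HIGH'}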

Best Practices

✅ DO

  • Use multiple research methods
  • Triangulate findings across sources
  • Document quotes and evidence
  • Look for patterns and frequency
  • Separate findings from interpretation
  • Validate findings with users
  • Share insights across team
  • Connect to design decisions
  • Document methodology
  • Iterate research approach based on learnings

❌ DON'T

  • Over-interpret small samples
  • Ignore conflicting data
  • Base decisions on single data point
  • Skip documentation
  • Cherry-pick quotes that support assumptions
  • Present without supporting evidence
  • Forget to note limitations
  • Analyze without involving participants
  • Create insights without actionable recommendations
  • Let research sit unused

Research Analysis Tips

  • Use affinity mapping for qualitative synthesis
  • Quantify qualitative findings (frequency counts)
  • Create insight posters for sharing
  • Use direct quotes to support findings
  • Cross-check insights across data sources

Quick Install

/plugin add https://github.com/aj-geddes/useful-ai-prompts/tree/main/user-research-analysis

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

aj-geddes/useful-ai-prompts
Path: skills/user-research-analysis

Related Skills

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

Algorithmic Art Generation

Meta

This skill helps developers create algorithmic art using p5.js, focusing on generative art, computational aesthetics, and interactive visualizations. It automatically activates for topics like "generative art" or "p5.js visualization" and guides you through creating unique algorithms with features like seeded randomness, flow fields, and particle systems. Use it when you need to build reproducible, code-driven artistic patterns.

webapp-testing

Testing

This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks but run scripts directly rather than reading their source code to avoid context pollution.

requesting-code-review

Design

This skill dispatches a code-reviewer subagent to analyze code changes against requirements before proceeding. It should be used after completing tasks, implementing major features, or before merging to main. The review helps catch issues early by comparing the current implementation with the original plan.
