
six-thinking-hats

proffesor-for-testing

About

This Claude skill applies the Six Thinking Hats methodology to software testing, providing structured perspectives for analyzing test strategies, failures, and discussions. It guides you through distinct lenses like facts, risks, creativity, and process to ensure comprehensive quality analysis. Use it when designing test approaches, conducting retrospectives, or evaluating testing methods.

Documentation

Six Thinking Hats for Testing

<default_to_action>
When analyzing testing decisions:

  1. DEFINE focus clearly (specific testing question)
  2. APPLY each hat sequentially (time-boxed; see rotation below)
  3. DOCUMENT insights per hat
  4. SYNTHESIZE into action plan

Quick Hat Rotation (30 min):

🀍 WHITE (5 min) - Facts only: metrics, data, coverage
❀️ RED (3 min) - Gut feelings (no justification needed)
πŸ–€ BLACK (7 min) - Risks, gaps, what could go wrong
πŸ’› YELLOW (5 min) - Strengths, opportunities, what works
πŸ’š GREEN (7 min) - Creative ideas, alternatives
πŸ”΅ BLUE (3 min) - Action plan, next steps
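For sessions that are scripted or agent-driven, the rotation above can be captured as plain data and summed to confirm the 30-minute budget. A minimal TypeScript sketch; the field names are illustrative, not part of this skill's API:

```typescript
// Illustrative only: hat order and minutes mirror the Quick Hat Rotation above.
type Hat = "white" | "red" | "black" | "yellow" | "green" | "blue";

interface RotationStep {
  hat: Hat;
  minutes: number;
  prompt: string;
}

const rotation: RotationStep[] = [
  { hat: "white",  minutes: 5, prompt: "Facts only: metrics, data, coverage" },
  { hat: "red",    minutes: 3, prompt: "Gut feelings (no justification needed)" },
  { hat: "black",  minutes: 7, prompt: "Risks, gaps, what could go wrong" },
  { hat: "yellow", minutes: 5, prompt: "Strengths, opportunities, what works" },
  { hat: "green",  minutes: 7, prompt: "Creative ideas, alternatives" },
  { hat: "blue",   minutes: 3, prompt: "Action plan, next steps" },
];

// Sanity check: the time-boxes add up to the advertised 30 minutes.
const totalMinutes = rotation.reduce((sum, step) => sum + step.minutes, 0); // 30
```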

Example for "API Test Strategy":

  • 🀍 47 endpoints, 30% coverage, 12 integration tests
  • ❀️ Anxious about security, confident on happy paths
  • πŸ–€ No auth tests, rate limiting untested, edge cases missing
  • πŸ’› Good docs, CI/CD integrated, team experienced
  • πŸ’š Contract testing with Pact, chaos testing, property-based
  • πŸ”΅ Security tests first, contract testing next sprint
</default_to_action>
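Documenting insights per hat and synthesizing them into an action plan is easier when every session lands in the same record shape. A possible structure, sketched in TypeScript; every field name here is an assumption rather than a published schema:

```typescript
// Illustrative record for one Six Hats session (field names are assumptions).
interface ActionItem {
  priority: "critical" | "high" | "medium" | "low";
  description: string;
  owner: string;
}

interface SixHatsAnalysis {
  topic: string;          // e.g. "API Test Strategy"
  facts: string[];        // White: metrics, data, coverage
  feelings: string[];     // Red: gut reactions, no justification
  risks: string[];        // Black: gaps, what could go wrong
  strengths: string[];    // Yellow: what works, quick wins
  ideas: string[];        // Green: alternatives, new techniques
  actions: ActionItem[];  // Blue: prioritized next steps with owners
}
```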

Quick Reference Card

The Six Hats

| Hat | Focus | Key Question |
| --- | --- | --- |
| 🀍 White | Facts & Data | What do we KNOW? |
| ❀️ Red | Emotions | What do we FEEL? |
| πŸ–€ Black | Risks | What could go WRONG? |
| πŸ’› Yellow | Benefits | What's GOOD? |
| πŸ’š Green | Creativity | What ELSE could we try? |
| πŸ”΅ Blue | Process | What should we DO? |

When to Use Each Hat

| Hat | Use For |
| --- | --- |
| 🀍 White | Baseline metrics, test data inventory |
| ❀️ Red | Team confidence check, quality gut feel |
| πŸ–€ Black | Risk assessment, gap analysis, pre-mortems |
| πŸ’› Yellow | Strengths audit, quick win identification |
| πŸ’š Green | Test innovation, new approaches, brainstorming |
| πŸ”΅ Blue | Strategy planning, retrospectives, decision-making |

Hat Details

🀍 White Hat - Facts & Data

Output: Quantitative testing baseline

Questions:

  • What test coverage do we have?
  • What is our pass/fail rate?
  • What environments exist?
  • What is our defect history?

Example Output:
Coverage: 67% line, 45% branch
Test Suite: 1,247 unit, 156 integration, 23 E2E
Execution Time: Unit 3min, Integration 12min, E2E 45min
Defects: 23 open (5 critical, 8 major, 10 minor)
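When the facts come from test tooling rather than a manual audit, the baseline can be computed. A small sketch; the input shapes below are assumptions, not a real reporter API:

```typescript
// Illustrative only: TestResult and Defect are assumed shapes, not a real reporter API.
interface TestResult { name: string; passed: boolean; }
interface Defect { id: string; severity: "critical" | "major" | "minor"; }

function whiteHatBaseline(results: TestResult[], defects: Defect[]) {
  const passed = results.filter((r) => r.passed).length;
  const defectsBySeverity = defects.reduce<Record<string, number>>((acc, d) => {
    acc[d.severity] = (acc[d.severity] ?? 0) + 1;
    return acc;
  }, {});
  return {
    totalTests: results.length,
    passRate: results.length > 0 ? passed / results.length : 0,
    openDefects: defects.length,
    defectsBySeverity,
  };
}
```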

πŸ–€ Black Hat - Risks & Cautions

Output: Comprehensive risk assessment

Questions:

  • What could go wrong in production?
  • What are we NOT testing?
  • What assumptions might be wrong?
  • Where are the coverage gaps?

HIGH RISKS:
- No load testing (production outage risk)
- Auth edge cases untested (security vulnerability)
- Database failover never tested (data loss risk)

πŸ’› Yellow Hat - Benefits & Optimism

Output: Strengths and opportunities

Questions:

  • What's working well?
  • What strengths can we leverage?
  • What quick wins are available?

STRENGTHS:
- Strong CI/CD pipeline
- Team expertise in automation
- Stakeholders value quality

QUICK WINS:
- Add smoke tests (reduce incidents)
- Automate manual regression (save 2 days/release)

πŸ’š Green Hat - Creativity

Output: Innovative testing ideas

Questions:

  • How else could we test this?
  • What if we tried something completely different?
  • What emerging techniques could we adopt?

IDEAS:
1. AI-powered test generation
2. Chaos engineering for resilience
3. Property-based testing for edge cases (see the sketch below)
4. Production traffic replay
5. Synthetic monitoring
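Idea 3 is concrete enough to sketch. Below is a minimal property-based test using the fast-check library; `applyDiscount` is a hypothetical function under test, included only to make the example self-contained:

```typescript
import fc from "fast-check";

// Hypothetical function under test.
function applyDiscount(price: number, percent: number): number {
  return Math.max(0, price - price * (percent / 100));
}

// Property: for any non-negative price and a 0-100% discount,
// the result stays within [0, original price].
fc.assert(
  fc.property(
    fc.double({ min: 0, max: 1_000_000, noNaN: true }),
    fc.integer({ min: 0, max: 100 }),
    (price, percent) => {
      const discounted = applyDiscount(price, percent);
      return discounted >= 0 && discounted <= price;
    }
  )
);
```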

❀️ Red Hat - Emotions

Output: Team gut feelings (NO justification needed)

Questions:

  • How confident do you feel about quality?
  • What makes you anxious?
  • What gives you confidence?

FEELINGS:
- Confident: Unit tests, API tests
- Anxious: Authentication flow, payment processing
- Frustrated: Flaky tests, slow E2E suite

πŸ”΅ Blue Hat - Process

Output: Action plan with owners and timelines

Questions:

  • What's our strategy?
  • How should we prioritize?
  • What's the next step?

PRIORITIZED ACTIONS:
1. [Critical] Address security testing gap - Owner: Alice
2. [High] Implement contract testing - Owner: Bob
3. [Medium] Reduce flaky tests - Owner: Carol

Session Templates

Solo Session (30 min)

# Six Hats Analysis: [Topic]

## 🀍 White Hat (5 min)
Facts: [list metrics, data]

## ❀️ Red Hat (3 min)
Feelings: [gut reactions, no justification]

## πŸ–€ Black Hat (7 min)
Risks: [what could go wrong]

## πŸ’› Yellow Hat (5 min)
Strengths: [what works, opportunities]

## πŸ’š Green Hat (7 min)
Ideas: [creative alternatives]

## πŸ”΅ Blue Hat (3 min)
Actions: [prioritized next steps]

Team Session (60 min)

  • Each hat: 10 minutes
  • Rotate through the hats as a group
  • Document on a shared whiteboard
  • Blue Hat synthesizes at the end

Agent Integration

// Risk-focused analysis (Black Hat)
const risks = await Task("Identify Risks", {
  scope: 'payment-module',
  perspective: 'black-hat',
  includeMitigation: true
}, "qe-regression-risk-analyzer");

// Creative test approaches (Green Hat)
const ideas = await Task("Generate Test Ideas", {
  feature: 'new-auth-system',
  perspective: 'green-hat',
  includeEmergingTechniques: true
}, "qe-test-generator");

// Comprehensive analysis (All Hats)
const analysis = await Task("Six Hats Analysis", {
  topic: 'Q1 Test Strategy',
  hats: ['white', 'black', 'yellow', 'green', 'red', 'blue']
}, "qe-quality-analyzer");

Agent Coordination Hints

Memory Namespace

aqe/six-hats/
β”œβ”€β”€ analyses/*        - Complete hat analyses
β”œβ”€β”€ risks/*           - Black hat findings
β”œβ”€β”€ opportunities/*   - Yellow hat findings
└── innovations/*     - Green hat ideas
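One way an agent could persist its findings under this namespace is sketched below. The `MemoryClient` interface is a stand-in for whatever store the fleet actually exposes, not a documented agentic-qe API; only the key layout follows the tree above:

```typescript
// Hypothetical client: only the key layout ("aqe/six-hats/risks/*") follows the namespace above.
interface MemoryClient {
  store(key: string, value: unknown): Promise<void>;
}

async function recordBlackHatFindings(memory: MemoryClient): Promise<void> {
  await memory.store("aqe/six-hats/risks/payment-module", {
    hat: "black",
    findings: ["No load testing", "Auth edge cases untested"],
    createdAt: new Date().toISOString(),
  });
}
```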

Fleet Coordination

const analysisFleet = await FleetManager.coordinate({
  strategy: 'six-hats-analysis',
  agents: [
    'qe-quality-analyzer',        // White + Blue hats
    'qe-regression-risk-analyzer', // Black hat
    'qe-test-generator'           // Green hat
  ],
  topology: 'parallel'
});

Anti-Patterns

| ❌ Avoid | Why | βœ… Instead |
| --- | --- | --- |
| Mixing hats | Confuses thinking | One hat at a time |
| Justifying Red Hat | Kills intuition | State feelings only |
| Skipping hats | Misses insights | Use all six |
| Rushing | Shallow analysis | 5 min minimum per hat |

Remember

Separate thinking modes for clarity. Each hat reveals different insights. Red Hat intuition often catches what Black Hat analysis misses.

Everyone wears all hats. This is parallel thinking, not role-based. The goal is comprehensive analysis, not debate.

Quick Install

/plugin add https://github.com/proffesor-for-testing/agentic-qe/tree/main/six-thinking-hats

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

proffesor-for-testing/agentic-qe
Path: .claude/skills/six-thinking-hats

Related Skills

sparc-methodology

Development

The SPARC methodology provides a systematic development framework with 17 specialized modes for comprehensive software development from specification to completion. It integrates multi-agent orchestration to handle complex development workflows including architecture design, testing, and deployment. Use this skill when you need structured guidance throughout the entire development lifecycle with automated agent coordination.


performance-analysis

Other

This skill provides comprehensive performance analysis for Claude Flow swarms, detecting bottlenecks and profiling operations. It generates detailed reports and offers AI-powered optimization recommendations to improve swarm performance. Use it when you need to monitor, analyze, and optimize the efficiency of your Claude Flow implementations.


when-analyzing-performance-use-performance-analysis

Other

This skill provides comprehensive performance analysis and bottleneck detection for Claude Flow swarms. It identifies optimization opportunities and delivers actionable recommendations to improve system performance. Use it when you need to profile workflows, analyze metrics, and benchmark your swarm's efficiency.


xp-practices

Other

This skill helps developers implement Extreme Programming practices like TDD, pair programming, and continuous integration. Use it to improve team collaboration, adopt technical excellence, and establish sustainable agile workflows. It provides actionable guidance on prioritizing and adapting core XP practices for immediate value.
