six-thinking-hats
About
This Claude skill applies the Six Thinking Hats methodology to software testing, providing structured perspectives for analyzing test strategies, failures, and discussions. It guides you through distinct lenses like facts, risks, creativity, and process to ensure comprehensive quality analysis. Use it when designing test approaches, conducting retrospectives, or evaluating testing methods.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/proffesor-for-testing/agentic-qe
Or clone directly: git clone https://github.com/proffesor-for-testing/agentic-qe.git ~/.claude/skills/six-thinking-hats
Copy and paste either command into Claude Code to install the skill.
Skill Documentation
Six Thinking Hats for Testing
<default_to_action>
When analyzing testing decisions:
- DEFINE focus clearly (specific testing question)
- APPLY each hat sequentially (3-7 min each, as timed below)
- DOCUMENT insights per hat
- SYNTHESIZE into action plan
Quick Hat Rotation (30 min):
🤍 WHITE (5 min) - Facts only: metrics, data, coverage
❤️ RED (3 min) - Gut feelings (no justification needed)
🖤 BLACK (7 min) - Risks, gaps, what could go wrong
💛 YELLOW (5 min) - Strengths, opportunities, what works
💚 GREEN (7 min) - Creative ideas, alternatives
🔵 BLUE (3 min) - Action plan, next steps
Example for "API Test Strategy":
- 🤍 47 endpoints, 30% coverage, 12 integration tests
- ❤️ Anxious about security, confident on happy paths
- 🖤 No auth tests, rate limiting untested, edge cases missing
- 💛 Good docs, CI/CD integrated, team experienced
- 💚 Contract testing with Pact, chaos testing, property-based
- 🔵 Security tests first, contract testing next sprint
</default_to_action>
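For solo use, the rotation above can be kept as data and stepped through with a timer or one agent prompt per hat. A minimal JavaScript sketch in the same style as the Agent Integration examples further down; the `runRotation` helper and its `capture` callback are illustrative, not part of the skill:

```js
// The 30-minute rotation as data. Durations mirror the Quick Hat Rotation above.
const rotation = [
  { hat: 'white',  minutes: 5, prompt: 'Facts only: metrics, data, coverage' },
  { hat: 'red',    minutes: 3, prompt: 'Gut feelings, no justification' },
  { hat: 'black',  minutes: 7, prompt: 'Risks, gaps, what could go wrong' },
  { hat: 'yellow', minutes: 5, prompt: 'Strengths, opportunities, what works' },
  { hat: 'green',  minutes: 7, prompt: 'Creative ideas, alternatives' },
  { hat: 'blue',   minutes: 3, prompt: 'Action plan, next steps' },
];

// Step through the hats in order, collecting notes per hat.
// `capture` is any async function that gathers notes for one hat,
// e.g. prompting yourself, the team, or an agent.
async function runRotation(topic, capture) {
  const notes = {};
  for (const step of rotation) {
    notes[step.hat] = await capture({ topic, ...step });
  }
  return notes; // the Blue Hat step synthesizes this into an action plan
}
```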
Quick Reference Card
The Six Hats
| Hat | Focus | Key Question |
|---|---|---|
| 🤍 White | Facts & Data | What do we KNOW? |
| ❤️ Red | Emotions | What do we FEEL? |
| 🖤 Black | Risks | What could go WRONG? |
| 💛 Yellow | Benefits | What's GOOD? |
| 💚 Green | Creativity | What ELSE could we try? |
| 🔵 Blue | Process | What should we DO? |
When to Use Each Hat
| Hat | Use For |
|---|---|
| 🤍 White | Baseline metrics, test data inventory |
| ❤️ Red | Team confidence check, quality gut feel |
| 🖤 Black | Risk assessment, gap analysis, pre-mortems |
| 💛 Yellow | Strengths audit, quick win identification |
| 💚 Green | Test innovation, new approaches, brainstorming |
| 🔵 Blue | Strategy planning, retrospectives, decision-making |
Hat Details
🤍 White Hat - Facts & Data
Output: Quantitative testing baseline
Questions:
- What test coverage do we have?
- What is our pass/fail rate?
- What environments exist?
- What is our defect history?
Example Output:
Coverage: 67% line, 45% branch
Test Suite: 1,247 unit, 156 integration, 23 E2E
Execution Time: Unit 3min, Integration 12min, E2E 45min
Defects: 23 open (5 critical, 8 major, 10 minor)
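When these facts are scattered across coverage tools, CI dashboards, and the defect tracker, the Task pattern shown under Agent Integration below can pull the baseline together in one pass. A hedged sketch; the `metrics` field is an illustrative assumption, not a documented parameter:

```js
// White Hat baseline via the qe-quality-analyzer agent (see Agent Integration).
// The `metrics` list is an assumption used for illustration.
const baseline = await Task("Collect Test Baseline", {
  scope: 'whole-repo',
  perspective: 'white-hat',
  metrics: ['coverage', 'suite-size', 'execution-time', 'open-defects']
}, "qe-quality-analyzer");
```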
🖤 Black Hat - Risks & Cautions
Output: Comprehensive risk assessment
Questions:
- What could go wrong in production?
- What are we NOT testing?
- What assumptions might be wrong?
- Where are the coverage gaps?
HIGH RISKS:
- No load testing (production outage risk)
- Auth edge cases untested (security vulnerability)
- Database failover never tested (data loss risk)
💛 Yellow Hat - Benefits & Optimism
Output: Strengths and opportunities
Questions:
- What's working well?
- What strengths can we leverage?
- What quick wins are available?
STRENGTHS:
- Strong CI/CD pipeline
- Team expertise in automation
- Stakeholders value quality
QUICK WINS:
- Add smoke tests (reduce incidents)
- Automate manual regression (save 2 days/release)
💚 Green Hat - Creativity
Output: Innovative testing ideas
Questions:
- How else could we test this?
- What if we tried something completely different?
- What emerging techniques could we adopt?
IDEAS:
1. AI-powered test generation
2. Chaos engineering for resilience
3. Property-based testing for edge cases
4. Production traffic replay
5. Synthetic monitoring
❤️ Red Hat - Emotions
Output: Team gut feelings (NO justification needed)
Questions:
- How confident do you feel about quality?
- What makes you anxious?
- What gives you confidence?
FEELINGS:
- Confident: Unit tests, API tests
- Anxious: Authentication flow, payment processing
- Frustrated: Flaky tests, slow E2E suite
🔵 Blue Hat - Process
Output: Action plan with owners and timelines
Questions:
- What's our strategy?
- How should we prioritize?
- What's the next step?
PRIORITIZED ACTIONS:
1. [Critical] Address security testing gap - Owner: Alice
2. [High] Implement contract testing - Owner: Bob
3. [Medium] Reduce flaky tests - Owner: Carol
Session Templates
Solo Session (30 min)
# Six Hats Analysis: [Topic]
## 🤍 White Hat (5 min)
Facts: [list metrics, data]
## ❤️ Red Hat (3 min)
Feelings: [gut reactions, no justification]
## 🖤 Black Hat (7 min)
Risks: [what could go wrong]
## 💛 Yellow Hat (5 min)
Strengths: [what works, opportunities]
## 💚 Green Hat (7 min)
Ideas: [creative alternatives]
## 🔵 Blue Hat (3 min)
Actions: [prioritized next steps]
Team Session (60 min)
- Each hat: 10 minutes
- Rotate through hats as group
- Document on shared whiteboard
- Blue Hat synthesizes at end
Agent Integration
// Risk-focused analysis (Black Hat)
const risks = await Task("Identify Risks", {
scope: 'payment-module',
perspective: 'black-hat',
includeMitigation: true
}, "qe-regression-risk-analyzer");
// Creative test approaches (Green Hat)
const ideas = await Task("Generate Test Ideas", {
feature: 'new-auth-system',
perspective: 'green-hat',
includeEmergingTechniques: true
}, "qe-test-generator");
// Comprehensive analysis (All Hats)
const analysis = await Task("Six Hats Analysis", {
topic: 'Q1 Test Strategy',
hats: ['white', 'black', 'yellow', 'green', 'red', 'blue']
}, "qe-quality-analyzer");
Agent Coordination Hints
Memory Namespace
aqe/six-hats/
├── analyses/* - Complete hat analyses
├── risks/* - Black hat findings
├── opportunities/* - Yellow hat findings
└── innovations/* - Green hat ideas
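A sketch of filing one completed analysis under this namespace; `memoryStore` is a stand-in for whatever memory API your fleet exposes and is not provided by this skill:

```js
// Hypothetical helper: persist per-hat findings under aqe/six-hats/*.
// `memoryStore(key, value)` is assumed, not a documented API.
async function fileFindings(analysisId, findings) {
  await memoryStore(`aqe/six-hats/analyses/${analysisId}`, findings);
  await memoryStore(`aqe/six-hats/risks/${analysisId}`, findings.black);
  await memoryStore(`aqe/six-hats/opportunities/${analysisId}`, findings.yellow);
  await memoryStore(`aqe/six-hats/innovations/${analysisId}`, findings.green);
}
```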
Fleet Coordination
const analysisFleet = await FleetManager.coordinate({
strategy: 'six-hats-analysis',
agents: [
'qe-quality-analyzer', // White + Blue hats
'qe-regression-risk-analyzer', // Black hat
'qe-test-generator' // Green hat
],
topology: 'parallel'
});
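If fleet coordination is unavailable, a rough equivalent is to run the Black and Green hat Task calls from Agent Integration in parallel and merge them with the White Hat baseline; the task parameters are copied from the examples above, and the `plan` shape is an assumption for illustration:

```js
// Parallel fallback without FleetManager: reuse the individual Task calls.
const [risks, ideas] = await Promise.all([
  Task("Identify Risks", {
    scope: 'payment-module',
    perspective: 'black-hat',
    includeMitigation: true
  }, "qe-regression-risk-analyzer"),
  Task("Generate Test Ideas", {
    feature: 'new-auth-system',
    perspective: 'green-hat',
    includeEmergingTechniques: true
  }, "qe-test-generator")
]);

// `baseline` comes from the White Hat sketch earlier; hand `plan` to a
// Blue Hat pass (human or qe-quality-analyzer) for prioritization.
const plan = { topic: 'Q1 Test Strategy', facts: baseline, risks, ideas };
```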
Related Skills
- risk-based-testing - Black Hat deep dive
- exploratory-testing-advanced - Green Hat exploration
- context-driven-testing - Adapt to context
Anti-Patterns
| ❌ Avoid | Why | ✅ Instead |
|---|---|---|
| Mixing hats | Confuses thinking | One hat at a time |
| Justifying Red Hat | Kills intuition | State feelings only |
| Skipping hats | Misses insights | Use all six |
| Rushing | Shallow analysis | Respect each hat's full time box |
Remember
Separate thinking modes for clarity. Each hat reveals different insights. Red Hat intuition often catches what Black Hat analysis misses.
Everyone wears all hats. This is parallel thinking, not role-based. The goal is comprehensive analysis, not debate.
GitHub Repository
https://github.com/proffesor-for-testing/agentic-qe