context-driven-testing
About
This skill applies context-driven testing principles to help developers make testing decisions based on specific project needs rather than universal rules. It guides you to analyze your project's unique context, question existing practices, and adapt your testing approach accordingly. Use it when evaluating testing strategies, challenging dogma, or tailoring methods to address specific risks and constraints.
Documentation
Context-Driven Testing
<default_to_action>
When making testing decisions or adapting approaches:
- ANALYZE context: project goals, constraints, risks, team skills
- QUESTION practices: "Why this? What risk does it address? What's the cost?"
- INVESTIGATE, don't just check: Does the software solve the problem, or does it create new ones?
- ADAPT approach based on context, not "best practices"
- DOCUMENT discoveries, not pre-written plans
Quick Context Analysis:
- Mission: "Find important problems fast enough to matter" (not "execute test cases")
- Risk: Safety-critical = high rigor; internal tool = lighter touch
- Constraints: Startup with tight timeline ≠ enterprise with compliance
- Skills: Novice needs structure; expert adapts intuitively
Critical Success Factors:
- No "best practices" work everywhere - only good practices in context
- Testing is investigation, not script execution
- Context changes; your approach should too
</default_to_action>
Quick Reference Card
When to Use
- Making testing decisions for new project
- Questioning "that's how it's done" dogma
- Adapting approach to specific constraints
- Exploratory testing sessions
Seven Context-Driven Principles
- Value of any practice depends on its context
- Good practices in context, no universal best practices
- People working together are most important
- Projects unfold in unpredictable ways
- Product is a solution - if problem not solved, product fails
- Good testing is challenging intellectual work
- Judgment and skill determine right things at right times
Context Factors
| Factor | Questions |
|---|---|
| Project | Business goal? User needs? Failure impact? |
| Constraints | Timeline? Budget? Team skills? Legacy? |
| Risk | Safety-critical? Regulated? High volume? |
| Technical | Stack quirks? Integrations? Observability? |
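A minimal sketch of how these factors might be captured as an explicit context record is shown below; the field names and values are illustrative, not part of any agent API.

```javascript
// Illustrative context record covering the factors above (names are made up).
const projectContext = {
  project: {
    businessGoal: 'launch checkout for the EU market',
    failureImpact: 'lost revenue, chargebacks, reputational damage'
  },
  constraints: {
    timeline: '6 weeks',
    budget: 'limited',
    teamSkills: ['JavaScript', 'exploratory testing']
  },
  risk: {
    safetyCritical: false,
    regulated: ['GDPR'],
    highVolume: true
  },
  technical: {
    stackQuirks: ['legacy payment gateway'],
    integrations: ['payment provider', 'email service'],
    observability: 'logs only, no tracing'
  }
};
```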
RST Heuristics
| Heuristic | Application |
|---|---|
| SFDIPOT | Structure, Function, Data, Interfaces, Platform, Operations, Time |
| Oracles | Consistency with history, similar products, expectations, docs |
| Tours | Business District, Historical, Bad Neighborhood, Tourist, Museum |
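One way to turn SFDIPOT into a working checklist is to expand each letter into the questions it usually stands for; the question wording below is ours, so adapt it to your product.

```javascript
// SFDIPOT product coverage outline (question wording is illustrative).
const sfdipot = {
  structure:  'What is the product made of? Code, modules, configs, hardware?',
  function:   'What does it do? Core features, calculations, error handling?',
  data:       'What does it process? Inputs, outputs, lifecycles, volumes?',
  interfaces: 'How is it reached? UI, APIs, imports/exports, integrations?',
  platform:   'What does it depend on? OS, browsers, external services?',
  operations: 'How will it be used? User profiles, environments, extreme use?',
  time:       'When do things happen? Concurrency, timeouts, scheduling, timezones?'
};
```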
Context-Driven Decisions
Example: Test Automation Level
Startup Context:
- Small team, rapid changes, unclear product-market fit
- Decision: Light automation on critical paths, heavy exploratory
- Rationale: Requirements change too fast for extensive automation
Enterprise Context:
- Stable features, regulatory requirements, large team
- Decision: Comprehensive automated regression suite
- Rationale: Stability allows automation investment to pay off
Example: Documentation
Regulated (FDA/medical):
- Decision: Detailed test protocols, traceability matrices
- Rationale: Regulatory compliance isn't optional
Fast-paced startup:
- Decision: Lightweight session notes, risk logs
- Rationale: Bureaucracy slows more than it helps
Investigation vs. Checking
| Checking | Testing (Investigation) |
|---|---|
| Did API return 200? | Does API meet user needs? |
| Does button work? | What happens under load? |
| Match the spec? | Does it solve the problem? |
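To make the contrast concrete, the sketch below pairs a pure check with the investigative questions a context-driven tester would add; the endpoint and payload are hypothetical.

```javascript
import assert from 'node:assert';

// Checking: confirms one explicit expectation (endpoint and payload are made up).
const res = await fetch('https://example.test/api/orders', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ items: [{ sku: 'A1', qty: 1 }] })
});
assert.equal(res.status, 200);

// Investigation: asks what else matters in this context.
// - What does the user see if payment succeeds but order persistence fails?
// - Does response time hold when 500 orders arrive in one minute?
// - Does a duplicate submission create a duplicate charge?
```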
Red Flags: Not Context-Driven
- You follow a process "because that's how it's done"
- You can't explain why you're doing something
- You measure test cases executed, not problems found
- Your test plan could apply to any project
- You stop thinking once you have a script
Agent-Assisted Context-Driven Testing
```javascript
// Agent analyzes context and recommends approach
const context = await Task("Analyze Context", {
  project: 'e-commerce-platform',
  stage: 'startup',
  constraints: ['timeline: tight', 'budget: limited'],
  risks: ['payment-security', 'high-volume']
}, "qe-fleet-commander");

// Context-aware agent selection
// - qe-security-scanner (critical risk)
// - qe-performance-tester (high volume)
// - Skip: qe-visual-tester (low priority in startup context)

// Adaptive testing strategy
await Task("Generate Tests", {
  context: 'startup',
  focus: 'critical-paths-only',
  depth: 'smoke-tests',
  automation: 'minimal'
}, "qe-test-generator");
```
Agent Coordination Hints
Memory Namespace
```
aqe/context-driven/
├── context-analysis/*  - Project context snapshots
├── decisions/*         - Testing decisions with rationale
├── discoveries/*       - What was learned during testing
└── adaptations/*       - How approach changed over time
```
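As an illustration of how a decision might land in that namespace, the sketch below assumes a simple key-value memory client; `memory.store` is a hypothetical call, not a documented AQE API.

```javascript
// Hypothetical: persist a testing decision with its rationale so agents
// and humans can revisit it when the context changes.
await memory.store('aqe/context-driven/decisions/automation-level', {
  context: 'startup, tight timeline, unclear product-market fit',
  decision: 'light automation on critical paths, heavy exploratory testing',
  rationale: 'requirements change too fast for broad automation to pay off',
  revisitWhen: 'product-market fit stabilizes or the team grows'
});
```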
Fleet Coordination
```javascript
const contextFleet = await FleetManager.coordinate({
  strategy: 'context-driven',
  context: {
    type: 'greenfield-saas',
    stage: 'growth',
    compliance: 'gdpr-only'
  },
  agents: ['qe-test-generator', 'qe-security-scanner', 'qe-performance-tester'],
  exclude: ['qe-visual-tester', 'qe-requirements-validator'] // Not priority
});
```
Practical Tips
- Start with risk assessment - List features and ask: How likely is each to fail? How bad would failure be? How hard is it to test? (See the scoring sketch after this list.)
- Time-box exploration - e.g., 2 hours on checkout, 30 minutes on error handling, 15 minutes per browser
- Document discoveries - Not "Enter invalid email, verify error" but "Payment API returns 500 instead of 400 with no user-visible error. Bug filed."
- Talk to humans - Developers, users, support, product
- Pair with others - Different perspectives = different bugs
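A minimal scoring sketch for the risk assessment tip above; the features and 1-3 scales are made up, and the weighting should follow your own context.

```javascript
// Simple risk triage: likelihood and impact of failure on a 1-3 scale, per feature.
const features = [
  { name: 'checkout', likelihood: 2, impact: 3 },
  { name: 'search',   likelihood: 3, impact: 2 },
  { name: 'profile',  likelihood: 1, impact: 1 }
];

const ranked = features
  .map(f => ({ ...f, score: f.likelihood * f.impact }))
  .sort((a, b) => b.score - a.score);

console.log(ranked); // spend exploratory time where the score is highest
```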
Related Skills
- agentic-quality-engineering - Context-aware agent selection
- holistic-testing-pact - Adapt holistic model to context
- risk-based-testing - Context affects risk assessment
- exploratory-testing-advanced - RST techniques
Remember
Context drives decisions. No universal best practices. Skilled testers make informed decisions based on specific goals, constraints, and risks.
You're not a test script executor. You're a skilled investigator helping teams build better products.
With Agents: Agents analyze context, adapt strategies, and learn what works in your situation. Use agents to scale context-driven thinking while maintaining human judgment for critical decisions.
Quick Install
```
/plugin add https://github.com/proffesor-for-testing/agentic-qe/tree/main/context-driven-testing
```
Copy and paste this command in Claude Code to install this skill.
