# testing-guide

## About

This skill provides testing pyramid standards and best practices for systematic testing across unit, integration, system, and end-to-end levels. Use it when writing tests, discussing coverage, or defining a test strategy, so that proper testing proportions and naming conventions are applied. It offers quick-reference guides and actionable standards for developers implementing a structured testing approach.

## Documentation

### Testing Guide

This skill provides testing pyramid standards and best practices for systematic testing.
### Quick Reference

#### Testing Pyramid

```
      ┌─────────┐
      │   E2E   │ ← Fewer, slower (3%)
    ┌─┴─────────┴─┐
    │     ST      │ ← System (7%)
  ┌─┴─────────────┴─┐
  │       IT        │ ← Integration (20%)
┌─┴─────────────────┴─┐
│         UT          │ ← Unit (70%)
└─────────────────────┘
```
#### Test Levels Overview
| Level | Scope | Speed | Dependencies |
|---|---|---|---|
| UT | Single function/class | < 100ms | Mocked |
| IT | Component interaction | 1-10s | Real DB (containerized) |
| ST | Full system | Minutes | Production-like |
| E2E | User journeys | 30s+ | Everything real |
#### Coverage Targets
| Metric | Minimum | Recommended |
|---|---|---|
| Line | 70% | 85% |
| Branch | 60% | 80% |
| Function | 80% | 90% |
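For projects using Jest (one of the frameworks named later in this guide), the recommended targets can be enforced automatically. A minimal sketch of a `jest.config.ts`, assuming a TypeScript Jest setup; the numbers mirror the "Recommended" column above:

```typescript
// jest.config.ts - fail the test run if coverage drops below the targets.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 85,      // Recommended line coverage
      branches: 80,   // Recommended branch coverage
      functions: 90,  // Recommended function coverage
    },
  },
};

export default config;
```

With this in place, `jest --coverage` exits non-zero when any global threshold is missed, which makes the targets enforceable in CI rather than aspirational.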
### Detailed Guidelines

For complete standards, see the sections below.

#### Naming Conventions

##### File Naming
```
[ClassName]Tests.cs     # C#
[ClassName].test.ts     # TypeScript
[class_name]_test.py    # Python
[class_name]_test.go    # Go
```
##### Method Naming

```
[MethodName]_[Scenario]_[ExpectedResult]()
should_[behavior]_when_[condition]()
test_[method]_[scenario]_[expected]()
```
#### Test Doubles
| Type | Purpose | When to Use |
|---|---|---|
| Stub | Returns predefined values | Fixed API responses |
| Mock | Verifies interactions | Check method called |
| Fake | Simplified implementation | In-memory database |
| Spy | Records calls, delegates | Partial mocking |
##### When to Use What
- UT: Use mocks/stubs for all external deps
- IT: Use fakes for DB, stubs for external APIs
- ST: Real components, fake only external services
- E2E: Real everything
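The distinctions above can be made concrete with hand-rolled doubles. A minimal TypeScript sketch, assuming a hypothetical `UserRepository` dependency and `UserService` under test (frameworks like Jest or Moq generate such doubles for you):

```typescript
// The dependency the system under test needs (hypothetical).
interface UserRepository {
  findName(id: number): string | undefined;
  save(id: number, name: string): void;
}

// Stub: returns predefined values; no behavior is verified.
class UserRepoStub implements UserRepository {
  findName(_id: number): string | undefined { return 'Ada'; }
  save(_id: number, _name: string): void {}
}

// Fake: a simplified but working implementation (in-memory store).
class InMemoryUserRepo implements UserRepository {
  private store = new Map<number, string>();
  findName(id: number) { return this.store.get(id); }
  save(id: number, name: string) { this.store.set(id, name); }
}

// Spy: records calls while delegating to a real implementation.
class UserRepoSpy implements UserRepository {
  saveCalls: Array<[number, string]> = [];
  constructor(private inner: UserRepository) {}
  findName(id: number) { return this.inner.findName(id); }
  save(id: number, name: string) {
    this.saveCalls.push([id, name]);
    this.inner.save(id, name);
  }
}

// The system under test (hypothetical).
class UserService {
  constructor(private repo: UserRepository) {}
  greet(id: number): string {
    const name = this.repo.findName(id);
    return name ? `Hello, ${name}` : 'Hello, stranger';
  }
  rename(id: number, name: string): void {
    this.repo.save(id, name);
  }
}
```

A mock is the same shape as the spy, except the test's assertions run against the recorded calls (e.g. that `save` was invoked exactly once with the expected arguments).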
#### AAA Pattern

```typescript
test('method_scenario_expected', () => {
  // Arrange - set up test data and the system under test
  const input = createTestInput();
  const sut = new SystemUnderTest();

  // Act - execute the behavior
  const result = sut.execute(input);

  // Assert - verify the result
  expect(result).toBe(expected);
});
```
#### FIRST Principles
- Fast - Tests run quickly
- Independent - Tests don't affect each other
- Repeatable - Same result every time
- Self-validating - Clear pass/fail
- Timely - Written with production code
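"Independent" and "Repeatable" usually come down to one habit: build fresh state for every test instead of sharing a mutable instance across tests. A small sketch with a hypothetical `Cart` class:

```typescript
// Hypothetical class under test.
class Cart {
  private items: string[] = [];
  add(item: string): void { this.items.push(item); }
  count(): number { return this.items.length; }
}

// ❌ A shared module-level instance leaks state: an add() in one
//    test changes what count() returns in the next, so results
//    depend on execution order.
// ✅ A factory (or a beforeEach hook) gives each test its own Cart.
function freshCart(): Cart {
  return new Cart();
}
```

In Jest this is typically done with `beforeEach`, which re-runs the setup before every test in the file.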
#### Anti-Patterns to Avoid
- ❌ Test Interdependence (tests must run in order)
- ❌ Flaky Tests (sometimes pass, sometimes fail)
- ❌ Testing Implementation Details
- ❌ Over-Mocking
- ❌ Missing Assertions
- ❌ Magic Numbers/Strings
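The magic-number anti-pattern is cheap to fix: name the value so the test documents *why* it matters. A sketch using a hypothetical lockout policy:

```typescript
// Hypothetical policy constant - the name carries the intent.
const MAX_LOGIN_ATTEMPTS = 3;

function isLockedOut(failedAttempts: number): boolean {
  return failedAttempts >= MAX_LOGIN_ATTEMPTS;
}

// ❌ Magic number: a reader cannot tell why 3 is significant.
//    expect(isLockedOut(3)).toBe(true);

// ✅ Named constant: the boundary under test is self-documenting.
//    expect(isLockedOut(MAX_LOGIN_ATTEMPTS)).toBe(true);
//    expect(isLockedOut(MAX_LOGIN_ATTEMPTS - 1)).toBe(false);
```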
### Configuration Detection

This skill supports project-specific configuration.

#### Detection Order
1. Check `CONTRIBUTING.md` for a "Disabled Skills" section. If this skill is listed there, it is disabled for this project.
2. Check `CONTRIBUTING.md` for a "Testing Standards" section. If none is found, default to the standard coverage targets.
#### First-Time Setup

If no configuration is found and the context is unclear:

1. Ask the user: "This project hasn't configured testing standards. Would you like to customize coverage targets?"
2. After the user selects, suggest documenting the choice in `CONTRIBUTING.md`:
```markdown
## Testing Standards

### Coverage Targets

| Metric | Target |
|--------|--------|
| Line | 80% |
| Branch | 70% |
| Function | 85% |
```
#### Configuration Example

In the project's `CONTRIBUTING.md`:

```markdown
## Testing Standards

### Coverage Targets

| Metric | Target |
|--------|--------|
| Line | 80% |
| Branch | 70% |
| Function | 85% |

### Testing Framework

- Unit Tests: Jest
- Integration Tests: Supertest
- E2E Tests: Playwright
```
License: CC BY 4.0 | Source: universal-doc-standards
### Quick Install

```
/plugin add https://github.com/AsiaOstrich/universal-dev-skills/tree/main/testing-guide
```

Copy and paste this command in Claude Code to install this skill.
GitHub Repository
### Related Skills

**evaluating-llms-harness** (Testing)
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends, including HuggingFace and vLLM models.

**langchain** (Meta)
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and for deploying production systems like chatbots, autonomous agents, and question-answering services.

**Algorithmic Art Generation** (Meta)
This skill helps developers create algorithmic art using p5.js, focusing on generative art, computational aesthetics, and interactive visualizations. It automatically activates for topics like "generative art" or "p5.js visualization" and guides you through creating unique algorithms with features like seeded randomness, flow fields, and particle systems. Use it when you need to build reproducible, code-driven artistic patterns.

**webapp-testing** (Testing)
This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks, but run scripts directly rather than reading their source code to avoid context pollution.
