verification-before-completion
About
This debugging skill enforces mandatory verification before claiming any task as complete. It requires developers to run full verification commands, thoroughly examine their output, and confirm results before declaring work finished. This evidence-based approach is particularly crucial before commits, pull requests, or task completion to ensure quality and accuracy.
Documentation
Verification Before Completion
Overview
Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Violating the letter of this rule is violating the spirit of this rule.
This skill enforces mandatory verification before ANY completion claim, preventing false positives, broken builds, and trust violations.
When to Use This Skill
Activate ALWAYS before claiming:
- Success, completion, or satisfaction ("Done!", "Fixed!", "Great!")
- Tests pass, linter clean, build succeeds
- Committing, pushing, creating PRs
- Marking tasks complete or delegating to agents
Use this ESPECIALLY when:
- Under time pressure or tired
- "Quick fix" seems obvious or you're confident
- Agent reports success or tests "should" pass
The Iron Law
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
Core Principles
- Evidence Required: Every claim needs supporting evidence
- Fresh Verification: Must verify now, not rely on previous runs
- Complete Verification: Full command, not partial checks
- Honest Reporting: Report actual state, not hoped-for state
Quick Start
The five-step gate function:
- IDENTIFY: What command proves this claim?
- RUN: Execute the FULL command (fresh, complete)
- READ: Full output, check exit code, count failures
- VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
- ONLY THEN: Make the claim
Skip any step = lying, not verifying.
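The five steps above can be sketched as a small gate function. This is a minimal illustration, not part of the skill itself; the stand-in command should be replaced with your project's real verification command (e.g. pytest or npm test).

```python
import subprocess
import sys

def verify_claim(command: list[str]) -> bool:
    """Gate function sketch: RUN the full command fresh, READ its output,
    and VERIFY via the exit code before any claim is allowed."""
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout, end="")                      # READ: full output, not a skim
    print(result.stderr, end="", file=sys.stderr)
    return result.returncode == 0                     # VERIFY: exit code is the evidence

# Stand-in for a real suite (substitute e.g. ["pytest"] or ["npm", "test"]):
ok = verify_claim([sys.executable, "-c", "print('2 tests passed')"])

# ONLY THEN: make the claim, with the evidence attached
print("All tests pass" if ok else "Tests FAILING - do not claim completion")
```

The point of the sketch is that the boolean comes from a fresh exit code in the same run, never from memory of an earlier run.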
Key Patterns
Correct Pattern:
✅ [Run pytest] [Output: 34/34 passed] "All tests pass"
Incorrect Patterns:
❌ "Should pass now"
❌ "Looks correct"
❌ "Tests were passing"
❌ "I'm confident it works"
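One way to see the difference between the two patterns: a claim is only valid when it is derived from actual output. The sketch below parses a pytest-style summary line (the "N passed" format is an assumption about pytest's output) and refuses to make any claim without evidence.

```python
import re

def summarize_pytest(output: str) -> str:
    """Turn raw pytest output into an evidence-backed claim, or refuse.
    Matches pytest's summary line format, e.g. '34 passed in 1.2s'."""
    failed = re.search(r"(\d+) failed", output)
    passed = re.search(r"(\d+) passed", output)
    if failed:
        return f"{failed.group(1)} tests FAILING - not done"
    if passed:
        return f"All {passed.group(1)} tests pass (evidence: pytest summary)"
    return "No verification evidence - cannot claim anything"

print(summarize_pytest("=== 34 passed in 1.2s ==="))
```

Note that hopeful language like "Should pass now" falls through to the no-evidence branch: it is not output, so it proves nothing.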
Red Flags - STOP Immediately
If you catch yourself:
- Using "should", "probably", "seems to"
- Expressing satisfaction before verification
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
ALL of these mean: STOP. Run verification first.
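The red flags above are mechanical enough to check for. The following is a hypothetical guard (not part of this skill's implementation) that rejects completion claims containing red-flag phrasing or lacking fresh evidence:

```python
# Hypothetical guard illustrating the red-flag check; phrase list is illustrative.
RED_FLAGS = ("should", "probably", "seems to", "i'm confident", "looks correct")

def check_claim(claim: str, has_fresh_evidence: bool) -> str:
    """Reject claims that use red-flag language or lack fresh verification."""
    hits = [p for p in RED_FLAGS if p in claim.lower()]
    if hits:
        return f"STOP: red-flag phrasing {hits} - run verification first"
    if not has_fresh_evidence:
        return "STOP: no fresh evidence in this message - run verification first"
    return f"OK to claim: {claim}"

print(check_claim("Should pass now", has_fresh_evidence=False))
print(check_claim("All 34 tests pass", has_fresh_evidence=True))
```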
Why This Matters
Statistics from real-world failures:
- Verification cost: 2 minutes
- Recovery cost: 120+ minutes (60x more expensive)
- 40% of unverified "complete" claims required rework
Core violation: "If you lie, you'll be replaced"
Navigation
For detailed information:
- Gate Function: Complete five-step verification process with decision trees
- Verification Patterns: Correct verification patterns for tests, builds, deployments, and more
- Red Flags and Failures: Common failure modes, red flags, and real-world examples with time/cost data
- Integration and Workflows: Integration with other skills, CI/CD patterns, and agent delegation workflows
The Bottom Line
No shortcuts for verification.
Run the command. Read the output. THEN claim the result.
This is non-negotiable.
Quick Install
/plugin add https://github.com/bobmatnyc/claude-mpm/tree/main/verification-before-completion
Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
Verification & Quality Assurance
Other: This skill automatically verifies and scores the quality of code and agent outputs using a 0.95 accuracy threshold. It performs truth scoring, code correctness checks, and can instantly roll back changes that fail verification. Use it to ensure high-quality outputs and maintain codebase reliability in your development workflow.
testing-strategy-builder
Meta: This skill helps developers create comprehensive testing strategies for applications. It provides templates, coverage targets, and structured guidance for unit, integration, end-to-end, and performance testing. Use it when planning a new project, improving existing test coverage, or establishing quality gates.
