
verification-before-completion

bobmatnyc
Updated Yesterday
Other · verification · quality-assurance · honesty · evidence

About

This debugging skill enforces mandatory verification before claiming any task as complete. It requires developers to run full verification commands, thoroughly examine their output, and confirm results before declaring work finished. This evidence-based approach is particularly crucial before commits, pull requests, or task completion to ensure quality and accuracy.

Documentation

Verification Before Completion

Overview

Claiming work is complete without verification is dishonesty, not efficiency.

Core principle: Evidence before claims, always.

Violating the letter of this rule is violating the spirit of this rule.

This skill enforces mandatory verification before ANY completion claim, preventing false positives, broken builds, and trust violations.

When to Use This Skill

Activate ALWAYS before claiming:

  • Success, completion, or satisfaction ("Done!", "Fixed!", "Great!")
  • Tests pass, linter clean, build succeeds
  • Committing, pushing, creating PRs
  • Marking tasks complete or delegating to agents

Use this ESPECIALLY when:

  • Under time pressure or tired
  • "Quick fix" seems obvious or you're confident
  • Agent reports success or tests "should" pass

The Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

If you haven't run the verification command in this message, you cannot claim it passes.

Core Principles

  1. Evidence Required: Every claim needs supporting evidence
  2. Fresh Verification: Must verify now, not rely on previous runs
  3. Complete Verification: Full command, not partial checks
  4. Honest Reporting: Report actual state, not hoped-for state
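
Principle 2 can be enforced mechanically by timestamping evidence and rejecting anything captured before the latest change. A minimal sketch, assuming "fresh" means captured after the most recent edit (`Evidence` and `is_fresh` are hypothetical names, not part of the skill itself):

```python
import time

class Evidence:
    """Verification output tagged with when it was captured."""
    def __init__(self, output: str, captured_at: float):
        self.output = output
        self.captured_at = captured_at

def is_fresh(evidence: Evidence, last_edit_at: float) -> bool:
    # Output gathered before the most recent change proves nothing now
    return evidence.captured_at > last_edit_at

now = time.monotonic()
stale = Evidence("34/34 passed", captured_at=now - 60.0)  # from an earlier run
last_edit_at = now - 5.0                                  # file edited since then
print(is_fresh(stale, last_edit_at))
```

A perfect test run from before your last edit is exactly the "Tests were passing" trap: the evidence exists, but it no longer proves anything.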

Quick Start

The five-step gate function:

  1. IDENTIFY: What command proves this claim?
  2. RUN: Execute the FULL command (fresh, complete)
  3. READ: Full output, check exit code, count failures
  4. VERIFY: Does output confirm the claim?
    • If NO: State actual status with evidence
    • If YES: State claim WITH evidence
  5. ONLY THEN: Make the claim

Skip any step = lying, not verifying.
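
The five steps can be sketched as a single gate function. This is a minimal illustration, not the skill's own implementation: `verify_claim` is a hypothetical helper name, and the `python -c` command stands in for your real test suite.

```python
import subprocess
import sys

def verify_claim(command: list[str]) -> tuple[bool, str]:
    """Gate a completion claim behind fresh, complete verification."""
    # RUN: execute the FULL command now, not a cached or partial check
    result = subprocess.run(command, capture_output=True, text=True)
    # READ: capture the whole output and the exit code
    evidence = result.stdout + result.stderr
    # VERIFY: a zero exit code is the minimum bar for "it passed"
    return result.returncode == 0, evidence

# ONLY THEN: the claim is derived from evidence, never from hope
passed, evidence = verify_claim([sys.executable, "-c", "print('34 passed')"])
claim = "All tests pass" if passed else "Verification FAILED"
print(claim)
```

Note that the claim string cannot be constructed before `verify_claim` returns; the structure of the code itself enforces the ordering of the five steps.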

Key Patterns

Correct Pattern:

✅ [Run pytest] [Output: 34/34 passed] "All tests pass"

Incorrect Patterns:

❌ "Should pass now"
❌ "Looks correct"
❌ "Tests were passing"
❌ "I'm confident it works"
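
The contrast between the two patterns can be made mechanical: if a status message is only ever built from captured output and the exit code, then "should pass" is impossible to emit. A hypothetical sketch (`report_status` and the simulated failing command are illustrative, assuming a pytest-style "N failed" summary line):

```python
import re
import subprocess
import sys

def report_status(command: list[str]) -> str:
    """Build a status line from actual output -- never from expectation."""
    result = subprocess.run(command, capture_output=True, text=True)
    output = result.stdout + result.stderr
    # Count failures from the output itself, e.g. pytest's "N failed" summary
    match = re.search(r"(\d+) failed", output)
    failures = int(match.group(1)) if match else 0
    if result.returncode == 0 and failures == 0:
        return "All tests pass (exit 0)"
    return f"NOT done: exit {result.returncode}, {failures} failure(s)"

# Simulate a run that prints a summary but exits non-zero
status = report_status(
    [sys.executable, "-c", "print('2 failed, 32 passed'); raise SystemExit(1)"]
)
print(status)
```

Either branch of `report_status` cites evidence; neither can express confidence it has not earned.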

Red Flags - STOP Immediately

If you catch yourself:

  • Using "should", "probably", "seems to"
  • Expressing satisfaction before verification
  • About to commit/push/PR without verification
  • Trusting agent success reports
  • Relying on partial verification

ALL of these mean: STOP. Run verification first.

Why This Matters

Statistics from real-world failures:

  • Verification cost: 2 minutes
  • Recovery cost: 120+ minutes (60x more expensive)
  • 40% of unverified "complete" claims required rework

The core rule at stake: "If you lie, you'll be replaced."


The Bottom Line

No shortcuts for verification.

Run the command. Read the output. THEN claim the result.

This is non-negotiable.

Quick Install

/plugin add https://github.com/bobmatnyc/claude-mpm/tree/main/verification-before-completion

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

bobmatnyc/claude-mpm
Path: src/claude_mpm/skills/bundled/debugging/verification-before-completion

Related Skills

Verification & Quality Assurance

Other

This skill automatically verifies and scores the quality of code and agent outputs using a 0.95 accuracy threshold. It performs truth scoring, code correctness checks, and can instantly roll back changes that fail verification. Use it to ensure high-quality outputs and maintain codebase reliability in your development workflow.


testing-strategy-builder

Meta

This skill helps developers create comprehensive testing strategies for applications. It provides templates, coverage targets, and structured guidance for unit, integration, end-to-end, and performance testing. Use it when planning a new project, improving existing test coverage, or establishing quality gates.
