
sherlock-review

proffesor-for-testing
Tags: Other · investigation · evidence-based · code-review · root-cause · deduction

About

Sherlock-review is an evidence-based code review skill that uses deductive reasoning to systematically verify implementation claims, investigate bugs, and perform root cause analysis. It guides developers through a process of observation, deduction, and elimination to determine what actually happened versus what was claimed. The skill is ideal for validating bug fixes, conducting security audits, and verifying performance claims.

Documentation

Sherlock Review

<default_to_action> When investigating code claims:

  1. OBSERVE: Gather all evidence (code, tests, history, behavior)
  2. DEDUCE: What does evidence actually show vs. what was claimed?
  3. ELIMINATE: Rule out what cannot be true
  4. CONCLUDE: Does evidence support the claim?
  5. DOCUMENT: Findings with proof, not assumptions

The 3-Step Investigation:

```shell
# 1. OBSERVE: Gather evidence
git diff <commit>
npm test -- --coverage

# 2. DEDUCE: Compare claim vs reality
# Does code match description?
# Do tests prove the fix/feature?

# 3. CONCLUDE: Verdict with evidence
# SUPPORTED / PARTIALLY SUPPORTED / NOT SUPPORTED
```

Holmesian Principles:

  • "Data! Data! Data!" - Collect before concluding
  • "Eliminate the impossible" - What cannot be true?
  • "You see, but do not observe" - Run code, don't just read
  • Trust only reproducible evidence </default_to_action>

Quick Reference Card

Evidence Collection Checklist

| Category | What to Check | How |
|----------|---------------|-----|
| Claim | PR description, commit messages | Read thoroughly |
| Code | Actual file changes | `git diff` |
| Tests | Coverage, assertions | Run independently |
| Behavior | Runtime output | Execute locally |
| Timeline | When things happened | `git log`, `git blame` |
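The checklist above can be run end-to-end with ordinary git commands. The sketch below builds a throwaway repository so every command is reproducible anywhere; the file name and commit messages are synthetic stand-ins for a real investigation.

```shell
#!/bin/sh
# Evidence collection drill: compare the claim (commit message)
# against the code, timeline, and diff. All names are synthetic.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
g() { git -c user.email=qe@example.com -c user.name=QE "$@"; }
echo "v1" > handler.js; git add handler.js
g commit -q -m "claim: add mutex locking"   # CLAIM: what the message says
echo "v2" > handler.js; git add handler.js
g commit -q -m "actual change"
git log --oneline               # TIMELINE: when things happened
git diff HEAD~1 HEAD --stat     # CODE: what actually changed
git show --stat --oneline HEAD  # CLAIM vs CODE, side by side
```

Against a real branch, the same drill becomes `git diff <base>...<branch>` plus `git log -- <file>`; the point is that every row in the table maps to a command whose output you can re-run.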

Verdict Levels

| Verdict | Meaning |
|---------|---------|
| ✓ TRUE | Evidence fully supports claim |
| ⚠ PARTIALLY TRUE | Claim accurate but incomplete |
| ✗ FALSE | Evidence contradicts claim |
| ? NONSENSICAL | Claim doesn't apply to context |

Investigation Template

```markdown
## Sherlock Investigation: [Claim]

### The Claim
"[What PR/commit claims to do]"

### Evidence Examined
- Code changes: [files, lines]
- Tests added: [count, coverage]
- Behavior observed: [what actually happens]

### Deductive Analysis

**Claim**: [specific assertion]
**Evidence**: [what you found]
**Deduction**: [logical conclusion]
**Verdict**: ✓/⚠/✗

### Findings
- What works: [with evidence]
- What doesn't: [with evidence]
- What's missing: [gaps in implementation/testing]

### Recommendations
1. [Action based on findings]
```

Investigation Scenarios

Scenario 1: "This Fixed the Bug"

Steps:

  1. Reproduce bug on commit before fix
  2. Verify bug is gone on commit with fix
  3. Check if fix addresses root cause or symptom
  4. Test edge cases not in original report
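Steps 1 and 2 can be mechanized. The sketch below uses a synthetic repository, with a `grep` standing in for the real reproduction steps; substitute your actual fix commit and repro script.

```shell
#!/bin/sh
# Verify a fix by reproducing the bug on the parent commit and
# confirming it is gone on the fix commit. Repo and "bug" are synthetic.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
g() { git -c user.email=qe@example.com -c user.name=QE "$@"; }
echo "BUG" > app.txt; git add app.txt; g commit -q -m "introduce bug"
echo "ok"  > app.txt; git add app.txt; g commit -q -m "fix bug"
fix=$(git rev-parse HEAD)

reproduce() { grep -q BUG app.txt; }   # stand-in for real repro steps

git checkout -q "$fix~1"               # commit BEFORE the fix
reproduce && before=reproduced         # bug must appear here...
git checkout -q "$fix"                 # commit WITH the fix
reproduce || after=gone                # ...and be absent here
echo "before-fix: $before  after-fix: $after"
```

If the bug does not reproduce on the parent commit, the claim is untestable as stated, which is itself a finding worth documenting.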

Red Flags:

  • Fix that just removes error logging
  • Works only for specific test case
  • Workarounds instead of root cause fix
  • No regression test added

Scenario 2: "Improved Performance by 50%"

Steps:

  1. Run benchmark on baseline commit
  2. Run same benchmark on optimized commit
  3. Compare in identical conditions
  4. Verify measurement methodology
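A minimal harness for steps 1–3: run each version several times under identical conditions and compare medians, not single runs. The two workloads below are synthetic stand-ins for the baseline and optimized commits, and millisecond timing assumes GNU `date` (the `%N` format).

```shell
#!/bin/sh
# Benchmark drill: median of five runs per version, same machine,
# same shell, back to back. Workloads are synthetic stand-ins.
baseline()  { i=0; while [ $i -lt 200000 ]; do i=$((i+1)); done; }
optimized() { i=0; while [ $i -lt 100000 ]; do i=$((i+1)); done; }

median_ms() {                 # run $1 five times, print the median in ms
  for _ in 1 2 3 4 5; do
    s=$(date +%s%N); "$1"; e=$(date +%s%N)
    echo $(( (e - s) / 1000000 ))
  done | sort -n | sed -n 3p
}

b=$(median_ms baseline)
o=$(median_ms optimized)
echo "baseline: ${b}ms  optimized: ${o}ms"
```

A "50% faster" claim should show the optimized median at roughly half the baseline; if the ratio only holds on toy input sizes, that is the first red flag below.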

Red Flags:

  • Tested only on toy data
  • Different comparison conditions
  • Trade-offs not mentioned

Scenario 3: "Handles All Edge Cases"

Steps:

  1. List all edge cases in code path
  2. Check each has test coverage
  3. Test boundary conditions
  4. Verify error handling paths

Red Flags:

  • catch {} swallowing errors
  • Generic error messages
  • No logging of critical errors
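The first red flag above can be caught mechanically. The sketch below greps for empty `catch` blocks in a synthetic sample file; it is a heuristic that flags the blatant cases, not a proof that errors are handled.

```shell
#!/bin/sh
# Red-flag scan: count empty `catch` blocks that swallow errors.
# The sample file is synthetic; point the grep at your real source tree.
set -e
src=$(mktemp -d)
cat > "$src/handler.js" <<'EOF'
try { risky(); } catch (e) {}            // swallowed -- red flag
try { risky(); } catch (e) { log(e); }   // at least logged
EOF
flags=$(grep -Ec 'catch *\([^)]*\) *\{ *\}' "$src/handler.js")
echo "empty catch blocks: $flags"
```

A hit does not prove the error matters, but every hit is a place where evidence of failure is being destroyed, so it deserves a line in the findings.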

Example Investigation

```markdown
## Case: PR #123 "Fix race condition in async handler"

### Claims Examined:
1. "Eliminates race condition"
2. "Adds mutex locking"
3. "100% thread safe"

### Evidence:
- File: src/handlers/async-handler.js
- Changes: Added `async/await`, removed callbacks
- Tests: 2 new tests for async flow
- Coverage: 85% (was 75%)

### Analysis:

**Claim 1: "Eliminates race condition"**
Evidence: Added `await` to sequential operations. No actual mutex.
Deduction: Race avoided by removing concurrency, not by synchronization.
Verdict: ⚠ PARTIALLY TRUE (solved differently than claimed)

**Claim 2: "Adds mutex locking"**
Evidence: No mutex library, no lock variables, no sync primitives.
Verdict: ✗ FALSE

**Claim 3: "100% thread safe"**
Evidence: JavaScript is single-threaded. No worker threads used.
Verdict: ? NONSENSICAL (meaningless in this context)

### Conclusion:
Fix works but not for the reasons claimed. The race condition was avoided
by making operations sequential, not by adding synchronization.

### Recommendations:
1. Update PR description to accurately reflect the solution
2. Add test for concurrent request handling
3. Remove incorrect technical claims
```

Agent Integration

```javascript
// Evidence-based code review
await Task("Sherlock Review", {
  prNumber: 123,
  claims: [
    "Fixes memory leak",
    "Improves performance 30%"
  ],
  verifyReproduction: true,
  testEdgeCases: true
}, "qe-code-reviewer");

// Bug fix verification
await Task("Verify Fix", {
  bugCommit: 'abc123',
  fixCommit: 'def456',
  reproductionSteps: steps,
  testBoundaryConditions: true
}, "qe-code-reviewer");
```

Agent Coordination Hints

Memory Namespace

```
aqe/sherlock/
├── investigations/*   - Investigation reports
├── evidence/*         - Collected evidence
├── verdicts/*         - Claim verdicts
└── patterns/*         - Common deception patterns
```

Fleet Coordination

```javascript
const investigationFleet = await FleetManager.coordinate({
  strategy: 'evidence-investigation',
  agents: [
    'qe-code-reviewer',        // Code analysis
    'qe-security-auditor',     // Security claim verification
    'qe-performance-validator' // Performance claim verification
  ],
  topology: 'parallel'
});
```

Remember

"It is a capital mistake to theorize before one has data." Trust only reproducible evidence. Don't trust commit messages, documentation, or "works on my machine."

The Sherlock Standard: Every claim must be verified empirically. What does the evidence actually show?

Quick Install

/plugin add https://github.com/proffesor-for-testing/agentic-qe/tree/main/sherlock-review

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

proffesor-for-testing/agentic-qe
Path: .claude/skills/sherlock-review

Related Skills

micro-skill-creator

The micro-skill-creator rapidly generates atomic, single-purpose skills optimized with evidence-based prompting and specialist agents. It produces highly focused components using patterns like self-consistency and plan-and-solve, validated through systematic testing. This makes it ideal for developers building reliable, composable workflow elements in Claude Code.

github-code-review

This skill automates comprehensive GitHub code reviews using AI-powered swarm coordination, enabling multi-agent analysis of pull requests. It performs security and performance analysis while orchestrating specialized review agents to generate intelligent comments. Use it when you need automated PR management with quality gate enforcement beyond traditional static analysis.

code-review-quality

This skill conducts automated code reviews focused on quality, testability, and maintainability, using specialized agents for security, performance, and coverage analysis. It provides prioritized, context-driven feedback for pull requests or when establishing review practices. Developers should use it to get actionable, structured reviews that emphasize bugs and maintainability over subjective style preferences.