code-review

mrgoonie

About

This skill ensures rigorous code review practices by emphasizing technical evaluation over performative agreement. It enables systematic reviews through a code-reviewer subagent and requires evidence-based verification before making completion claims. Use it when receiving unclear feedback, after completing major features, or before declaring work finished.

Documentation

Code Review

Guide proper code review practices, emphasizing technical rigor, evidence-based claims, and verification over performative responses.

Overview

Code review requires three distinct practices:

  1. Receiving feedback - Technical evaluation over performative agreement
  2. Requesting reviews - Systematic review via code-reviewer subagent
  3. Verification gates - Evidence before any completion claims

Each practice has specific triggers and protocols detailed in reference files.

Core Principle

Technical correctness over social comfort. Verify before implementing. Ask before assuming. Evidence before claims.

When to Use This Skill

Receiving Feedback

Trigger when:

  • Receiving code review comments from any source
  • Feedback seems unclear or technically questionable
  • Multiple review items need prioritization
  • External reviewer lacks full context
  • Suggestion conflicts with existing decisions

Reference: references/code-review-reception.md

Requesting Review

Trigger when:

  • Completing tasks in subagent-driven development (after EACH task)
  • Finishing major features or refactors
  • Before merging to main branch
  • Stuck and need fresh perspective
  • After fixing complex bugs

Reference: references/requesting-code-review.md

Verification Gates

Trigger when:

  • About to claim tests pass, build succeeds, or work is complete
  • Before committing, pushing, or creating PRs
  • Moving to next task
  • Any statement suggesting success/completion
  • Expressing satisfaction with work

Reference: references/verification-before-completion.md

Quick Decision Tree

SITUATION?
│
├─ Received feedback
│  ├─ Unclear items? → STOP, ask for clarification first
│  ├─ From human partner? → Understand, then implement
│  └─ From external reviewer? → Verify technically before implementing
│
├─ Completed work
│  ├─ Major feature/task? → Request code-reviewer subagent review
│  └─ Before merge? → Request code-reviewer subagent review
│
└─ About to claim status
   ├─ Have fresh verification? → State claim WITH evidence
   └─ No fresh verification? → RUN verification command first

Receiving Feedback Protocol

Response Pattern

READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT

Key Rules

  • ❌ No performative agreement: "You're absolutely right!", "Great point!", "Thanks for [anything]"
  • ❌ No implementation before verification
  • ✅ Restate requirement, ask questions, push back with technical reasoning, or just start working
  • ✅ If unclear: STOP and ask for clarification on ALL unclear items first
  • ✅ YAGNI check: grep for usage before implementing suggested "proper" features (see the sketch below)
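
To make the YAGNI check concrete, here is a minimal shell sketch; the function name fetchConfig and the src/ path are hypothetical stand-ins for whatever the feedback targets:

# A reviewer suggests wrapping fetchConfig in a "proper" abstraction layer.
# Before implementing, check how the code is actually used today.
grep -rn "fetchConfig(" src/

# If grep shows a single call site with no sign of the suggested need,
# push back with that output as evidence instead of implementing speculatively.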

Source Handling

  • Human partner: Trusted - implement after understanding, no performative agreement
  • External reviewers: Verify technically correct, check for breakage, push back if wrong

Full protocol: references/code-review-reception.md

Requesting Review Protocol

When to Request

  • After each task in subagent-driven development
  • After major feature completion
  • Before merge to main

Process

  1. Get git SHAs: BASE_SHA=$(git rev-parse HEAD~1) and HEAD_SHA=$(git rev-parse HEAD)
  2. Dispatch code-reviewer subagent via Task tool with: WHAT_WAS_IMPLEMENTED, PLAN_OR_REQUIREMENTS, BASE_SHA, HEAD_SHA, DESCRIPTION (see the sketch below)
  3. Act on feedback: Fix Critical immediately, Important before proceeding, note Minor for later
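
A minimal shell sketch of steps 1 and 2, assuming the work under review is the most recent commit; the summary text and plan path are hypothetical examples, and the heredoc is just one way to assemble the prompt handed to the Task tool:

# Step 1: capture the SHAs bounding the change (HEAD~1 assumes a
# one-commit task; widen it for multi-commit work).
BASE_SHA=$(git rev-parse HEAD~1)
HEAD_SHA=$(git rev-parse HEAD)

# Step 2: assemble the context the code-reviewer subagent needs.
cat <<EOF
WHAT_WAS_IMPLEMENTED: Retry logic for the sync worker (example text)
PLAN_OR_REQUIREMENTS: docs/plans/sync-retry.md (example path)
BASE_SHA: $BASE_SHA
HEAD_SHA: $HEAD_SHA
DESCRIPTION: Review the diff between the SHAs for correctness and test coverage
EOF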

Full protocol: references/requesting-code-review.md

Verification Gates Protocol

The Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

Gate Function

IDENTIFY command → RUN full command → READ output → VERIFY confirms claim → THEN claim

Skip any step = lying, not verifying

Requirements

  • Tests pass: Test output shows 0 failures
  • Build succeeds: Build command exit 0
  • Bug fixed: A test reproducing the original symptom now passes
  • Requirements met: Line-by-line checklist verified
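
A minimal shell sketch of the gate applied to the first two requirements, assuming an npm project; substitute your project's own test and build commands:

# Run the FULL commands and keep the real output as evidence.
set -o pipefail

# Gate for "tests pass": claimable only if this fresh run exits 0 with 0 failures.
npm test 2>&1 | tee /tmp/test.log \
  && echo "evidence: fresh test run exited 0 (output in /tmp/test.log)" \
  || echo "tests failed: do NOT claim completion"

# Gate for "build succeeds": exit code 0, checked rather than assumed.
npm run build \
  && echo "evidence: build exited 0" \
  || echo "build failed: do NOT claim completion"

Reading the saved log before claiming 0 failures is the READ and VERIFY steps; the echo lines only mark where a claim becomes permissible.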

Red Flags - STOP

Using "should"/"probably"/"seems to", expressing satisfaction before verification, committing without verification, trusting agent reports, ANY wording implying success without running verification

Full protocol: references/verification-before-completion.md

Integration with Workflows

  • Subagent-Driven: Review after EACH task, verify before moving to next
  • Pull Requests: Verify tests pass, request code-reviewer review before merge
  • General: Apply verification gates before any status claims, push back on invalid feedback

Bottom Line

  1. Technical rigor over social performance - No performative agreement
  2. Systematic review processes - Use code-reviewer subagent
  3. Evidence before claims - Verification gates always

Verify. Question. Then implement. Evidence. Then claim.

Quick Install

/plugin add https://github.com/mrgoonie/claudekit-skills/tree/main/code-review

Copy and paste this command into Claude Code to install this skill

GitHub Repository

mrgoonie/claudekit-skills
Path: .claude/skills/code-review
