MCP Hub

code-review

mrgoonie
Updated: Today
56 views
View on GitHub

About

This skill enforces rigorous code review practice by prioritizing technical evaluation over performative agreement. It enables systematic reviews through a code-reviewer subagent and requires evidence-based verification before any completion claim. Use it when you receive unclear feedback, when you finish a major feature, or before declaring work complete.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/mrgoonie/claudekit-skills
Git clone (alternative)
git clone https://github.com/mrgoonie/claudekit-skills.git ~/.claude/skills/code-review

Copy and paste this command into Claude Code to install the skill.
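If you used the git clone route instead, you can confirm the skill files landed in the expected location (a quick check, assuming the target path from the clone command above):

ls ~/.claude/skills/code-review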

Documentation

Code Review

Guides proper code review practices, emphasizing technical rigor, evidence-based claims, and verification over performative responses.

Overview

Code review requires three distinct practices:

  1. Receiving feedback - Technical evaluation over performative agreement
  2. Requesting reviews - Systematic review via code-reviewer subagent
  3. Verification gates - Evidence before any completion claims

Each practice has specific triggers and protocols detailed in reference files.

Core Principle

Technical correctness over social comfort. Verify before implementing. Ask before assuming. Evidence before claims.

When to Use This Skill

Receiving Feedback

Trigger when:

  • Receiving code review comments from any source
  • Feedback seems unclear or technically questionable
  • Multiple review items need prioritization
  • External reviewer lacks full context
  • Suggestion conflicts with existing decisions

Reference: references/code-review-reception.md

Requesting Review

Trigger when:

  • Completing tasks in subagent-driven development (after EACH task)
  • Finishing major features or refactors
  • Before merging to main branch
  • Stuck and need fresh perspective
  • After fixing complex bugs

Reference: references/requesting-code-review.md

Verification Gates

Trigger when:

  • About to claim tests pass, build succeeds, or work is complete
  • Before committing, pushing, or creating PRs
  • Moving to next task
  • Any statement suggesting success/completion
  • Expressing satisfaction with work

Reference: references/verification-before-completion.md

Quick Decision Tree

SITUATION?
│
├─ Received feedback
│  ├─ Unclear items? → STOP, ask for clarification first
│  ├─ From human partner? → Understand, then implement
│  └─ From external reviewer? → Verify technically before implementing
│
├─ Completed work
│  ├─ Major feature/task? → Request code-reviewer subagent review
│  └─ Before merge? → Request code-reviewer subagent review
│
└─ About to claim status
   ├─ Have fresh verification? → State claim WITH evidence
   └─ No fresh verification? → RUN verification command first

Receiving Feedback Protocol

Response Pattern

READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT

Key Rules

  • ❌ No performative agreement: "You're absolutely right!", "Great point!", "Thanks for [anything]"
  • ❌ No implementation before verification
  • ✅ Restate requirement, ask questions, push back with technical reasoning, or just start working
  • ✅ If unclear: STOP and ask for clarification on ALL unclear items first
  • ✅ YAGNI check: grep for usage before implementing suggested "proper" features
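For example, the YAGNI check can be a single search for real call sites before any code is written (a minimal sketch; the function name formatTitle, the src/ layout, and the TypeScript file filter are assumptions for illustration, not part of this skill):

# Find real call sites of the thing the reviewer wants generalized
grep -rn "formatTitle(" src/ --include="*.ts" | grep -v ".test.ts"
# Zero or one caller → YAGNI: push back with that evidence instead of implementing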

Source Handling

  • Human partner: Trusted - implement after understanding, no performative agreement
  • External reviewers: Verify technically correct, check for breakage, push back if wrong

Full protocol: references/code-review-reception.md

Requesting Review Protocol

When to Request

  • After each task in subagent-driven development
  • After major feature completion
  • Before merge to main

Process

  1. Get git SHAs: BASE_SHA=$(git rev-parse HEAD~1) and HEAD_SHA=$(git rev-parse HEAD)
  2. Dispatch code-reviewer subagent via Task tool with: WHAT_WAS_IMPLEMENTED, PLAN_OR_REQUIREMENTS, BASE_SHA, HEAD_SHA, DESCRIPTION
  3. Act on feedback: Fix Critical immediately, Important before proceeding, note Minor for later
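A minimal sketch of step 1, assuming the work under review is the most recent commit (the diff and log previews are optional context you might include in the Task prompt alongside WHAT_WAS_IMPLEMENTED and the other fields; they are not required by the skill):

# Commit range the code-reviewer subagent should examine
BASE_SHA=$(git rev-parse HEAD~1)
HEAD_SHA=$(git rev-parse HEAD)

# Optional: preview what changed in that range before dispatching the review
git diff --stat "$BASE_SHA..$HEAD_SHA"
git log --oneline "$BASE_SHA..$HEAD_SHA"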

Full protocol: references/requesting-code-review.md

Verification Gates Protocol

The Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

Gate Function

IDENTIFY command → RUN full command → READ output → VERIFY confirms claim → THEN claim

Skip any step = lying, not verifying
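As a concrete sketch of the gate, assuming a JavaScript project where npm test is the verification command (substitute your project's real command):

# IDENTIFY: the command that actually proves the claim
npm test                # RUN the full command, fresh, not a cached log
status=$?               # capture the exit code after READING the output above
# VERIFY: only claim "tests pass" if the evidence confirms it
if [ "$status" -eq 0 ]; then
  echo "Tests pass (exit 0) - cite this run when claiming completion"
else
  echo "Tests still failing - no completion claim yet"
fi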

Requirements

  • Tests pass: Test output shows 0 failures
  • Build succeeds: Build command exit 0
  • Bug fixed: A test reproducing the original symptom passes
  • Requirements met: Line-by-line checklist verified

Red Flags - STOP

Using "should"/"probably"/"seems to", expressing satisfaction before verification, committing without verification, trusting agent reports, ANY wording implying success without running verification

Full protocol: references/verification-before-completion.md

Integration with Workflows

  • Subagent-Driven: Review after EACH task, verify before moving to next
  • Pull Requests: Verify tests pass, request code-reviewer review before merge
  • General: Apply verification gates before any status claims, push back on invalid feedback

Bottom Line

  1. Technical rigor over social performance - No performative agreement
  2. Systematic review processes - Use code-reviewer subagent
  3. Evidence before claims - Verification gates always

Verify. Question. Then implement. Evidence. Then claim.

GitHub Repository

mrgoonie/claudekit-skills
Path: .claude/skills/code-review

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.


creating-opencode-plugins

Meta

This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
