
receiving-code-review


About

This skill helps developers process code review feedback by emphasizing technical verification over immediate agreement. It provides a structured workflow for understanding, evaluating, and responding to suggestions, especially when feedback is unclear or technically questionable. The core principle is to ensure technical correctness through rigorous analysis before implementing any changes.

Skill Documentation

Code Review Reception

Overview

Code review requires technical evaluation, not emotional performance.

Core principle: Verify before implementing. Ask before assuming. Technical correctness over social comfort.

The Response Pattern

WHEN receiving code review feedback:

1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each

Forbidden Responses

NEVER:

  • "You're absolutely right!" (explicit CLAUDE.md violation)
  • "Great point!" / "Excellent feedback!" (performative)
  • "Let me implement that now" (before verification)

INSTEAD:

  • Restate the technical requirement
  • Ask clarifying questions
  • Push back with technical reasoning if wrong
  • Just start working (actions > words)

Handling Unclear Feedback

IF any item is unclear:
  STOP - do not implement anything yet
  ASK for clarification on unclear items

WHY: Items may be related. Partial understanding = wrong implementation.

Example:

your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.

❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."

Source-Specific Handling

From your human partner

  • Trusted - implement after understanding
  • Still ask if scope unclear
  • No performative agreement
  • Skip to action or technical acknowledgment

From External Reviewers

BEFORE implementing:
  1. Check: Technically correct for THIS codebase?
  2. Check: Breaks existing functionality?
  3. Check: Reason for current implementation?
  4. Check: Works on all platforms/versions?
  5. Check: Does reviewer understand full context?

IF suggestion seems wrong:
  Push back with technical reasoning

IF can't easily verify:
  Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"

IF conflicts with your human partner's prior decisions:
  Stop and discuss with your human partner first

your human partner's rule: "External feedback - be skeptical, but check carefully"

YAGNI Check for "Professional" Features

IF reviewer suggests "implementing properly":
  grep codebase for actual usage (see the sketch below)

  IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
  IF used: Then implement properly

your human partner's rule: "You and reviewer both report to me. If we don't need this feature, don't add it."

Implementation Order

FOR multi-item feedback:
  1. Clarify anything unclear FIRST
  2. Then implement in this order:
     - Blocking issues (breaks, security)
     - Simple fixes (typos, imports)
     - Complex fixes (refactoring, logic)
  3. Test each fix individually (see the sketch below)
  4. Verify no regressions
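
As a rough illustration of steps 3 and 4, each fix can be gated on a green test run before the next one starts; the pytest command and review items below are assumptions, not part of the skill:

  # Illustrative discipline: one review item at a time, tests after each.
  # Assumes a pytest-based project; substitute the project's real test command.
  import subprocess
  import sys

  review_items = [                     # hypothetical items, already ordered
      "blocking: reject empty payloads",
      "simple: fix broken import",
      "complex: extract validation helper",
  ]

  for item in review_items:
      input(f"Apply '{item}', then press Enter to run the tests...")
      if subprocess.run(["pytest", "-q"]).returncode != 0:
          sys.exit(f"Regression after '{item}': fix it before continuing")
      print(f"'{item}' passes with no regressions; next item")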

When To Push Back

Push back when:

  • Suggestion breaks existing functionality
  • Reviewer lacks full context
  • Violates YAGNI (unused feature)
  • Technically incorrect for this stack
  • Legacy/compatibility reasons exist
  • Conflicts with your human partner's architectural decisions

How to push back:

  • Use technical reasoning, not defensiveness
  • Ask specific questions
  • Reference working tests/code
  • Involve your human partner if architectural

If you're uncomfortable pushing back out loud, use the signal phrase: "Strange things are afoot at the Circle K"

Acknowledging Correct Feedback

When feedback IS correct:

✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]

❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression

Why no thanks: Actions speak. Just fix it. The code itself shows you heard the feedback.

If you catch yourself about to write "Thanks": DELETE IT. State the fix instead.

Gracefully Correcting Your Pushback

If you pushed back and were wrong:

✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."

❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining

State the correction factually and move on.

Common Mistakes

Mistake                        | Fix
-------------------------------|--------------------------------------
Performative agreement         | State requirement or just act
Blind implementation           | Verify against codebase first
Batch without testing          | One at a time, test each
Assuming reviewer is right     | Check if breaks things
Avoiding pushback              | Technical correctness > comfort
Partial implementation         | Clarify all items first
Can't verify, proceed anyway   | State limitation, ask for direction

Real Examples

Performative Agreement (Bad):

Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."

Technical Verification (Good):

Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"

YAGNI (Good):

Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"

Unclear Item (Good):

your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."

The Bottom Line

External feedback = suggestions to evaluate, not orders to follow.

Verify. Question. Then implement.

No performative agreement. Technical rigor always.

Quick Install

/plugin add https://github.com/lifangda/claude-plugins/tree/main/receiving-code-review

Copy and paste this command into Claude Code to install the skill.

GitHub Repository

lifangda/claude-plugins
Path: cli-tool/skills-library/collaboration/receiving-code-review

Related Skills

langchain

LangChain is a framework for building LLM applications, supporting agent, chain, and RAG development. Its core features include multi-provider model support, 500+ tool integrations, memory management, and vector retrieval. Developers can use it to quickly build chatbots, Q&A systems, and autonomous agents, covering the full path from prototype to production.

project-structure

This skill gives developers comprehensive guidance and best practices for designing project directory structures. It covers standard layouts for many project types, including monorepos, frontend and backend frameworks, libraries, and extensions. It helps teams build scalable, maintainable code architectures, and is especially useful for new project design, legacy project migration, and setting team conventions.

issue-documentation

This skill provides developers with standardized issue documentation templates and guidelines for creating bug reports and GitHub/Linear/Jira issues. It systematically captures problem state, reproduction steps, root cause, solution, and impact scope, keeping team communication clear and efficient. By applying best practices from mainstream issue trackers, it helps developers produce well-structured troubleshooting documents and incident reports.

llamaindex

LlamaIndex is a development framework purpose-built for RAG applications, offering 300+ data connectors for document ingestion, indexing, and querying. Its core features include vector indexes, query engines, and agents, supporting data-intensive applications such as document Q&A, knowledge retrieval, and chatbots. Developers can use it to quickly assemble RAG pipelines that connect private data to LLMs.