
code-review-quality

proffesor-for-testing
Category: Other | Tags: code-review, feedback, quality, testability, maintainability, pr-review

About

This skill performs automated code review focused on quality, testability, and maintainability, using specialized agents for security, performance, and coverage analysis. It provides prioritized, context-aware feedback during pull requests or when establishing review practices. Developers can use it to get actionable, structured reviews that emphasize bugs and maintainability over subjective style preferences.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/proffesor-for-testing/agentic-qe
Git clone (alternative)
git clone https://github.com/proffesor-for-testing/agentic-qe.git ~/.claude/skills/code-review-quality

Copy and paste this command into Claude Code to install the skill.

Documentation

Code Review Quality

<default_to_action> When reviewing code or establishing review practices:

  1. PRIORITIZE feedback: 🔴 Blocker (must fix) → 🟡 Major → 🟢 Minor → 💡 Suggestion
  2. FOCUS on: Bugs, security, testability, maintainability (not style preferences)
  3. ASK questions over commands: "Have you considered...?" > "Change this to..."
  4. PROVIDE context: Why this matters, not just what to change
  5. LIMIT scope: Review < 400 lines at a time for effectiveness

Quick Review Checklist:

  • Logic: Does it work correctly? Edge cases handled?
  • Security: Input validation? Auth checks? Injection risks?
  • Testability: Can this be tested? Is it tested?
  • Maintainability: Clear naming? Single responsibility? DRY?
  • Performance: O(n²) loops? N+1 queries? Memory leaks? (see the sketch after this block)

Critical Success Factors:

  • Review the code, not the person
  • Catching bugs > nitpicking style
  • Fast feedback (< 24h) > thorough feedback
</default_to_action>
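For example, the N+1 query pattern named in the checklist above typically hides in a loop that issues one query per record. A minimal sketch, assuming a generic `db.query` helper and a hypothetical users/orders schema (not part of this skill):

```javascript
// ❌ N+1: one query for the users, then one extra query per user inside the loop
const users = await db.query('SELECT * FROM users WHERE active = ?', [true]);
for (const user of users) {
  user.orders = await db.query('SELECT * FROM orders WHERE user_id = ?', [user.id]);
}

// ✅ Two queries total: fetch every needed order in one round trip, then group in memory
const userIds = users.map((u) => u.id);
const orders = await db.query('SELECT * FROM orders WHERE user_id IN (?)', [userIds]);
const ordersByUser = new Map();
for (const order of orders) {
  const list = ordersByUser.get(order.user_id) ?? [];
  list.push(order);
  ordersByUser.set(order.user_id, list);
}
for (const user of users) {
  user.orders = ordersByUser.get(user.id) ?? [];
}
```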

Quick Reference Card

When to Use

  • PR code reviews
  • Pair programming feedback
  • Establishing team review standards
  • Mentoring developers

Feedback Priority Levels

| Level | Icon | Meaning | Action |
|-------|------|---------|--------|
| Blocker | 🔴 | Bug/security/crash | Must fix before merge |
| Major | 🟡 | Logic issue/test gap | Should fix before merge |
| Minor | 🟢 | Style/naming | Nice to fix |
| Suggestion | 💡 | Alternative approach | Consider for future |
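These levels can also drive an automated merge gate. The sketch below is illustrative only: the thresholds mirror the `autoApprove: { maxBlockers: 0, maxMajor: 2 }` setting in the agent example later in this document, while `mergeGate` and the findings shape are hypothetical names:

```javascript
// Decide whether a PR may merge based on how many findings exist at each level.
function mergeGate(findings, { maxBlockers = 0, maxMajor = 2 } = {}) {
  const count = (level) => findings.filter((f) => f.level === level).length;
  if (count('blocker') > maxBlockers) return { merge: false, reason: 'Fix blockers before merge' };
  if (count('major') > maxMajor) return { merge: false, reason: 'Too many major issues' };
  return { merge: true, reason: 'Only minor issues and suggestions remain' };
}

// One blocker fails the gate regardless of anything else:
mergeGate([{ level: 'blocker' }, { level: 'minor' }]); // => { merge: false, reason: 'Fix blockers before merge' }
```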

Review Scope Limits

| Lines Changed | Recommendation |
|---------------|----------------|
| < 200 | Single review session |
| 200-400 | Review in chunks |
| > 400 | Request PR split |

What to Focus On

| ✅ Review | ❌ Skip |
|-----------|---------|
| Logic correctness | Formatting (use linter) |
| Security risks | Naming preferences |
| Test coverage | Architecture debates |
| Performance issues | Style opinions |
| Error handling | Trivial changes |

Feedback Templates

Blocker (Must Fix)

🔴 **BLOCKER: SQL Injection Risk**

This query is vulnerable to SQL injection:
```javascript
db.query(`SELECT * FROM users WHERE id = ${userId}`)
```

**Fix:** Use parameterized queries:

```javascript
db.query('SELECT * FROM users WHERE id = ?', [userId])
```

**Why:** User input directly in SQL allows attackers to execute arbitrary queries.

Major (Should Fix)

🟡 **MAJOR: Missing Error Handling**

What happens if `fetchUser()` throws? The error bubbles up unhandled.

**Suggestion:** Add try/catch with an appropriate error response:

```javascript
try {
  const user = await fetchUser(id);
  return user;
} catch (error) {
  logger.error('Failed to fetch user', { id, error });
  throw new NotFoundError('User not found');
}
```

Minor (Nice to Fix)

🟢 **minor:** Variable name could be clearer

`d` doesn't convey meaning. Consider `daysSinceLastLogin`.

Suggestion (Consider)

💡 **suggestion:** Consider extracting this to a helper

This validation logic appears in 3 places. A `validateEmail()` helper would reduce duplication. Not blocking, but might be worth a follow-up PR.
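A follow-up extraction like the one suggested above might look roughly like this (a minimal sketch; the module name, regex, and export style are illustrative, not part of this skill):

```javascript
// validators.js - one shared helper instead of the same check repeated in three places
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateEmail(email) {
  return typeof email === 'string' && EMAIL_PATTERN.test(email.trim());
}

module.exports = { validateEmail };
```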

Review Questions to Ask

Logic

  • What happens when X is null/empty/negative?
  • Is there a race condition here?
  • What if the API call fails?

Security

  • Is user input validated/sanitized?
  • Are auth checks in place?
  • Any secrets or PII exposed?

Testability

  • How would you test this?
  • Are dependencies injectable?
  • Is there a test for the happy path? Edge cases?
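One common way to make dependencies injectable is to pass collaborators in rather than hard-wiring them, so tests can substitute fakes. A minimal sketch (function and parameter names are illustrative):

```javascript
// Hard to test: the data source and the clock are hard-wired inside the function.
// function daysSinceLastLogin(userId) {
//   return Math.floor((Date.now() - db.getUser(userId).lastLogin) / 86_400_000);
// }

// Easier to test: the caller injects the collaborators the function needs.
function daysSinceLastLogin(userId, { getUser, now = Date.now }) {
  const user = getUser(userId);
  return Math.floor((now() - user.lastLogin) / 86_400_000);
}

// A test can pass a stub instead of a real database:
const days = daysSinceLastLogin('u1', {
  getUser: () => ({ lastLogin: Date.now() - 3 * 86_400_000 }),
});
// days === 3
```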

Maintainability

  • Will the next developer understand this?
  • Is this doing too many things?
  • Is there duplication we could reduce?

Agent-Assisted Reviews

```javascript
// Comprehensive code review
await Task("Code Review", {
  prNumber: 123,
  checks: ['security', 'performance', 'testability', 'maintainability'],
  feedbackLevels: ['blocker', 'major', 'minor'],
  autoApprove: { maxBlockers: 0, maxMajor: 2 }
}, "qe-quality-analyzer");

// Security-focused review
await Task("Security Review", {
  prFiles: changedFiles,
  scanTypes: ['injection', 'auth', 'secrets', 'dependencies']
}, "qe-security-scanner");

// Test coverage review
await Task("Coverage Review", {
  prNumber: 123,
  requireNewTests: true,
  minCoverageDelta: 0
}, "qe-coverage-analyzer");
```

Agent Coordination Hints

Memory Namespace

```
aqe/code-review/
├── review-history/*     - Past review decisions
├── patterns/*           - Common issues by team/repo
├── feedback-templates/* - Reusable feedback
└── metrics/*            - Review turnaround time
```
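As an illustration only, the sketch below uses a plain JavaScript `Map` to stand in for whatever coordination store agentic-qe actually provides; the keys follow the namespace layout above, but the store API and record shapes shown here are hypothetical:

```javascript
// Illustrative only: a plain Map stands in for the real coordination store.
const memory = new Map();

// A reviewer agent records the outcome of a review under the metrics namespace...
memory.set('aqe/code-review/metrics/pr-123', {
  turnaroundHours: 6,
  blockers: 0,
  majors: 1,
});

// ...and saves a reusable comment under feedback-templates for future reviews.
memory.set('aqe/code-review/feedback-templates/sql-injection', {
  level: 'blocker',
  body: 'Use parameterized queries instead of string interpolation.',
});

// Another agent can later look up past decisions for the same repository.
const metrics = memory.get('aqe/code-review/metrics/pr-123');
```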

Fleet Coordination

```javascript
const reviewFleet = await FleetManager.coordinate({
  strategy: 'code-review',
  agents: [
    'qe-quality-analyzer',    // Logic, maintainability
    'qe-security-scanner',    // Security risks
    'qe-performance-tester',  // Performance issues
    'qe-coverage-analyzer'    // Test coverage
  ],
  topology: 'parallel'
});
```

Review Etiquette

| ✅ Do | ❌ Don't |
|-------|----------|
| "Have you considered...?" | "This is wrong" |
| Explain why it matters | Just say "fix this" |
| Acknowledge good code | Only point out negatives |
| Suggest, don't demand | Be condescending |
| Review < 400 lines | Review 2000 lines at once |


Remember

Prioritize feedback: 🔴 Blocker → 🟡 Major → 🟢 Minor → 💡 Suggestion. Focus on bugs and security, not style. Ask questions, don't command. Review < 400 lines at a time. Fast feedback (< 24h) beats thorough feedback.

With Agents: Agents automate security, performance, and coverage checks, freeing human reviewers to focus on logic and design. Use agents for consistent, fast initial review.

GitHub Repository

proffesor-for-testing/agentic-qe
Path: .claude/skills/code-review-quality

Related Skills

Verification & Quality Assurance

Other

This skill automatically verifies and scores the quality of code and agent outputs using a 0.95 accuracy threshold. It performs truth scoring, code correctness checks, and can instantly roll back changes that fail verification. Use it to ensure high-quality outputs and maintain codebase reliability in your development workflow.


github-code-review

Other

This skill automates comprehensive GitHub code reviews using AI-powered swarm coordination, enabling multi-agent analysis of pull requests. It performs security and performance analysis while orchestrating specialized review agents to generate intelligent comments. Use it when you need automated PR management with quality gate enforcement beyond traditional static analysis.


testability-scoring

Other

This skill provides AI-powered testability assessment for web applications using Playwright and optional Vibium integration. It evaluates applications against 10 principles of intrinsic testability including Observability, Controllability, and Stability. Use it when assessing software testability, identifying improvements, or generating testability reports.


quick-quality-check

Testing

This Claude Skill performs rapid code quality checks by executing theater detection, linting, security scans, and basic tests in parallel. It provides developers with instant, actionable feedback on their code in under 30 seconds. Use it for fast, essential quality assurance during development to catch issues early.
