requesting-code-review-skill

Eibon7

About

This skill dispatches a code reviewer to validate implementations against requirements before proceeding. It triggers after completing a task, after implementing a major feature, or before merging, and provides feedback categorized into critical, important, and minor issues. The review ensures work meets its specification and catches problems early in the development cycle.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/Eibon7/roastr-ai

Git Clone (Alternative)
git clone https://github.com/Eibon7/roastr-ai.git ~/.claude/skills/requesting-code-review-skill

Copy and paste either command into Claude Code to install this skill.

Documentation


name: requesting-code-review-skill
description: Use when completing tasks, implementing major features, or before merging to verify work meets requirements - dispatches a code reviewer to review the implementation against the plan or requirements before proceeding

triggers:
  • "code review"
  • "review needed"
  • "before merge"
  • "major feature"

used_by:
  • orchestrator
  • all-agents

steps:
  • paso1: "Get git SHAs (BASE_SHA, HEAD_SHA)"
  • paso2: "Dispatch code-reviewer with: what was implemented, plan/requirements, SHAs, description"
  • paso3: "Review feedback: Critical → fix immediately, Important → fix before proceeding"
  • paso4: "Note Minor issues for later"
  • paso5: "Push back if the reviewer is wrong (with reasoning)"

output: "Review feedback: Strengths, Issues (Critical/Important/Minor), Assessment"

when_to_use: "After each task, major feature, before merge, when stuck"

integration:
  • subagent_driven: "Review after EACH task"
  • executing_plans: "Review after each batch (3 tasks)"
  • adhoc: "Review before merge, when stuck"

red_flags:
  • "Skip review because 'it's simple'"
  • "Ignore Critical issues"
  • "Proceed with unfixed Important issues"
  • "Argue with valid technical feedback"

pushback:
  • "If the reviewer is wrong: push back with technical reasoning"
  • "Show code/tests proving it works"
  • "Request clarification"

referencias:
  • "Source: superpowers-skills/requesting-code-review"
  • "Roastr: integrates with code-review-skill and CodeRabbit"

GitHub Repository

Eibon7/roastr-ai
Path: .claude/skills/requesting-code-review-skill.md

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
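For orientation, a minimal sketch of standing up an SGLang server; the model path is a placeholder and the port is SGLang's conventional default, so treat both as assumptions:

# Sketch only: starts an SGLang server exposing an OpenAI-compatible API.
python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000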

View skill

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
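A minimal install sketch for a Vite + React project; the package list mirrors the Content Collections documentation but should be verified there before use:

# Sketch only: core library, Vite plugin, MDX compiler, and Zod for schema validation.
npm install -D @content-collections/core @content-collections/vite @content-collections/mdx zod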

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
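A rough sketch of the kind of command-line run this skill wraps; the model ID and task list are placeholders:

# Sketch only: benchmark a HuggingFace model on MMLU and GSM8K with lm-evaluation-harness (pip install lm-eval).
lm_eval --model hf --model_args pretrained=meta-llama/Llama-3.1-8B --tasks mmlu,gsm8k --batch_size 8 --output_path results/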

View skill

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
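As a rough sketch of the vLLM deployment path mentioned above; the model ID is an assumption, and gated checkpoints also require a Hugging Face access token:

# Sketch only: serve a Llama Guard checkpoint behind vLLM's OpenAI-compatible API.
vllm serve meta-llama/Llama-Guard-3-8B --port 8000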

View skill