review-workflow

romiluz13
Updated Today

About

This review workflow skill analyzes code by first understanding user, admin, and system flows before reviewing for functionality-impacting issues. It runs bundled analysis subagents to compile an evidence-backed report, focusing on real problems rather than generic code quality. Use it only when triggered by the cc10x-orchestrator, not directly.

Documentation

Review Workflow - Functionality First

The Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

CRITICAL: Before reviewing code, understand functionality. Before claiming completion, verify with fresh evidence.

Functionality First Mandate

BEFORE reviewing code, understand functionality:

  1. What functionality is being reviewed?

    • What are the user flows?
    • What are the admin flows?
    • What are the system flows?
  2. THEN review - Review code for issues affecting that functionality

  3. Use subagents - Apply review subagents AFTER functionality is understood


Orchestrates multi-dimensional code analysis with a functionality-first approach.

Quick Start

Review code by first understanding functionality, then checking for issues affecting it.

Example:

  1. Understand functionality: File upload feature (User Flow: select → upload → confirm)
  2. Load domain skills: code-review-patterns (consolidates security-patterns, code-quality-patterns, performance-patterns), frontend-patterns (consolidates ux-patterns, ui-design, accessibility-patterns)
  3. Run subagents: code-reviewer subagent checks for issues affecting functionality
  4. Compile report: Evidence-backed findings with file:line citations
  5. Verify: Fresh evidence collected, functionality verified

Result: Comprehensive review focused on functionality-affecting issues.
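The steps above can be sketched as a sequential pipeline. This is a minimal illustration only: `run_code_reviewer` is a hypothetical stand-in for the code-reviewer subagent, and none of these names are real cc10x APIs.

```python
# Illustrative sketch of the functionality-first review pipeline.
# None of these names are real cc10x APIs; they only mirror the steps above.

def run_code_reviewer(path, flows, skills):
    """Stand-in for the code-reviewer subagent (hypothetical)."""
    return [{"severity": "HIGH",
             "issue": "unvalidated upload type",
             "location": f"{path}:42",
             "evidence": "no MIME check before save"}]

def review(feature, flows, files):
    # 1. Understand functionality before touching code.
    if not flows:
        raise ValueError("complete functionality analysis first")

    # 2. Load domain skills based on review scope.
    skills = ["code-review-patterns", "frontend-patterns"]

    # 3. Run the code-reviewer subagent sequentially over each file.
    findings = []
    for path in files:
        findings.extend(run_code_reviewer(path, flows, skills))

    # 4. Keep only evidence-backed findings with file:line citations.
    findings = [f for f in findings if f.get("evidence") and f.get("location")]

    # 5. Severity-rank the findings for the compiled report.
    order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}
    findings.sort(key=lambda f: order.get(f["severity"], 4))
    return {"feature": feature, "findings": findings}
```

Note how the pipeline refuses to proceed when no flows are supplied, mirroring the Functionality First Mandate.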

Requirements

Dependencies:

  • cc10x-orchestrator - Must be activated through the orchestrator (do not use directly)
  • Domain skills (code-review-patterns, frontend-patterns) - Loaded based on review scope
  • Analysis subagents - code-reviewer subagent (consolidates all review dimensions)

Prerequisites:

  • Phase 0 (Functionality Analysis) completed via orchestrator
  • Functionality flows understood (user flow, admin flow, system flow)

Tool Access:

  • Required tools: Read, Grep, Glob, Task, Bash
  • Task tool: Used to invoke analysis subagents

Review Scope:

  • Security - Checks security issues affecting functionality
  • Code Quality - Checks quality issues affecting maintainability
  • Performance - Checks performance issues affecting functionality
  • Accessibility - Checks accessibility issues blocking functionality

Process

For complete instructions, see plugins/cc10x/skills/cc10x-orchestrator/workflows/review.md.

Quick Reference

Decision Tree:

REVIEW NEEDED?
│
├─ Understand Functionality First
│  ├─ User/Admin/System flows identified? → Continue
│  └─ Not identified? → STOP, complete functionality analysis first
│
├─ Load Domain Skills
│  ├─ Skills loaded? → Continue
│  └─ Not loaded? → Load risk, security, performance, quality, UX, a11y skills
│
├─ Run Analysis Subagents
│  ├─ Subagents run sequentially? → Continue
│  └─ Parallel execution? → STOP, run sequentially
│
└─ Compile Report
   ├─ Evidence-backed findings? → Complete
   └─ Missing evidence? → STOP, gather evidence first

Workflow summary:

  • Loads: risk, security, performance, quality, UX, a11y skills
  • Runs: analysis subagents sequentially (no parallelism)
  • Outputs: severity-ranked findings with file:line evidence and a Verification Summary
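The gating logic in the decision tree can be expressed as explicit checks. This is an illustrative sketch, not a real cc10x function; the return strings simply echo the tree's STOP/continue branches.

```python
# The decision tree above as explicit gates (illustrative sketch, not a real API).

def review_gate(flows_identified, skills_loaded, ran_sequentially, evidence_backed):
    if not flows_identified:
        return "STOP: complete functionality analysis first"
    if not skills_loaded:
        return "LOAD: risk, security, performance, quality, UX, a11y skills"
    if not ran_sequentially:
        return "STOP: run subagents sequentially, not in parallel"
    if not evidence_backed:
        return "STOP: gather evidence first"
    return "COMPLETE"
```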

Output Format (REQUIRED)

MANDATORY TEMPLATE - Use exact structure from orchestrator:

# Review Report

## Executive Summary

[2-3 sentences summarizing total issues by severity, go/no-go recommendation, and overall code health status]

## Actions Taken

- Skills loaded: [list]
- Subagents invoked: [list]
- Files reviewed: [list]
- Tools used: [list]

## Findings / Decisions

### Security Findings

- **CRITICAL**: [Issue] at [file:line] – [Impact] – [Fix] – [Evidence]
- **HIGH**: [Issue] at [file:line] – [Impact] – [Fix] – [Evidence]
- **MEDIUM**: [Issue] at [file:line] – [Impact] – [Fix] – [Evidence]
- **LOW**: [Issue] at [file:line] – [Impact] – [Fix] – [Evidence]

### Performance Findings

[Same format as Security]

### Code Quality Findings

[Same format as Security]

### UX Findings

[Same format as Security]

### Accessibility Findings

[Same format as Security]

## Verification Summary

Scope: <files reviewed>
Criteria: <list of what was verified>
Commands:

- <command> -> exit <code>

Evidence:

- <cited file:line references>
- <tool output snippets if any>

Outstanding Questions: <if clarification needed>
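One way to produce the `<command> -> exit <code>` lines is a small shell wrapper that echoes each command alongside its exit status. This is a sketch; `true` and `false` are placeholders for the project's real test or lint commands.

```shell
# Log each verification command together with its exit code.
run_and_log() {
  if "$@"; then status=0; else status=$?; fi
  echo "$* -> exit $status"
}

# Placeholders for real commands such as the project's test or lint suite.
run_and_log true
run_and_log false
```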

## Recommendations / Next Steps

[Prioritized: CRITICAL → HIGH → MEDIUM → LOW]

## Open Questions / Assumptions

[If any conflicts detected or clarification needed]

Troubleshooting

Common Issues:

  1. Functionality not understood before review

    • Symptom: Reviewing code without understanding what it should do
    • Cause: Skipped functionality analysis
    • Fix: Complete functionality analysis first, then review
    • Prevention: Always understand functionality before reviewing
  2. Missing evidence or file:line citations

    • Symptom: Findings without evidence or citations
    • Cause: Didn't capture evidence during review
    • Fix: Review again, capture evidence and citations
    • Prevention: Always include file:line citations and evidence
  3. Generic issues instead of functionality-focused findings

    • Symptom: Finding generic code quality issues, not functionality-affecting
    • Cause: Didn't focus on functionality-affecting issues
    • Fix: Refocus review on issues affecting functionality flows
    • Prevention: Always prioritize functionality-affecting issues

If issues persist:

  • Verify functionality analysis was completed first
  • Check that evidence was captured with file:line citations
  • Ensure review focuses on functionality-affecting issues
  • Review workflow instructions in workflows/review.md

Validation Checklist:

  • Executive Summary present (2-3 sentences)
  • All findings include file:line citations
  • Verification Summary includes commands with exit codes
  • Recommendations prioritized
  • All subagents/skills documented in Actions Taken

Quick Install

/plugin add https://github.com/romiluz13/cc10x/tree/main/review-workflow

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

romiluz13/cc10x
Path: plugins/cc10x/skills/review-workflow

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
