
moai-essentials-review

modu-ai
Meta · ai · testing · automation

About

This Claude Skill automates enterprise code reviews with AI-powered quality analysis across multiple languages. It enforces TRUST 5 principles while performing security scanning, performance analysis, and test coverage validation. Use it for comprehensive automated code review with Context7 integration and feedback generation.

Quick Install

Claude Code

Plugin Command (Recommended):
/plugin add https://github.com/modu-ai/moai-adk

Git Clone (Alternative):
git clone https://github.com/modu-ai/moai-adk.git ~/.claude/skills/moai-essentials-review

Copy and paste the plugin command into Claude Code to install this skill.

Documentation

Enterprise Code Review Automation v4.0.0

Skill Metadata

Skill Name: moai-essentials-review
Version: 4.0.0 Enterprise (2025-11-12)
Core Framework: TRUST 5 principles automation
AI Integration: ✅ Context7 MCP, AI quality analysis
Auto-load: On code commit or PR creation
Languages: 25+ languages with specialized analysis
Lines of Content: 880+ with 16+ production examples
Progressive Disclosure: 3-level (automation, analysis, advanced)

What It Does

Automates the full code review process with AI-powered quality checks, TRUST 5 principle validation, security vulnerability detection, performance analysis, test coverage verification, and detailed review feedback generation.


3-Phase Automated Review

Phase 1: Automated Checks (5 minutes)

Syntax & Linting:
  ✓ Run linters (pylint, eslint, golint, etc.)
  ✓ Check code formatting (black, prettier, gofmt)
  ✓ Type checking (mypy, TypeScript, go vet)

Security Scanning:
  ✓ Dependency vulnerabilities (safety, npm audit, cargo audit)
  ✓ Credential detection (git-secrets, detect-secrets)
  ✓ OWASP Top 10 checks

Test Coverage:
  ✓ Coverage ≥85%
  ✓ Critical paths covered
  ✓ Edge cases tested
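
A minimal sketch of how the Phase 1 gate could be scripted for a Python project. The tool choices here (ruff, mypy, pip-audit, pytest-cov) are assumptions; substitute your stack's linters, scanners, and coverage runner.

import subprocess
import sys

# Each command exits non-zero on failure, so the gate stops at the first problem.
PHASE_1_CHECKS = [
    ["ruff", "check", "."],                        # syntax & linting
    ["mypy", "."],                                 # type checking
    ["pip-audit"],                                 # dependency vulnerabilities
    ["pytest", "--cov", "--cov-fail-under=85"],    # tests with coverage ≥85%
]

def run_phase_1() -> int:
    for cmd in PHASE_1_CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Phase 1 failed at: {' '.join(cmd)}")
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_phase_1())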

Phase 2: AI Quality Analysis (15 minutes)

TRUST 5 Validation:
  ✓ T - Tests present and comprehensive
  ✓ R - Code readable and maintainable
  ✓ U - Unified with codebase patterns
  ✓ S - Security best practices
  ✓ T - Trackable to SPEC and tests

Design Analysis:
  ✓ SOLID principles
  ✓ Design patterns appropriate
  ✓ Scalability concerns
  ✓ Performance implications

Phase 3: Human Review (20 minutes)

Architectural Review:
  ✓ Does solution fit architecture?
  ✓ Any alternatives considered?
  ✓ Trade-offs documented?

Business Logic:
  ✓ Does it solve the problem?
  ✓ Any edge cases missed?
  ✓ User experience impact?

Documentation:
  ✓ README updated
  ✓ API docs current
  ✓ Examples provided

AI-Powered Quality Checks

Code Quality Metrics

from dataclasses import dataclass


@dataclass
class QualityReport:
    """Aggregated quality metrics for a reviewed change."""
    metrics: dict


class CodeQualityAnalyzer:
    """AI-powered code quality analysis.

    The metric helpers (calculate_cyclomatic, assess_testability, etc.)
    are assumed to be provided by the skill's analysis modules.
    """

    async def analyze(self, code: str) -> QualityReport:
        metrics = {
            "complexity": calculate_cyclomatic(code),            # Should be <10
            "testability": assess_testability(code),             # Should be >0.85
            "maintainability": calculate_maintainability(code),  # Should be >80
            "readability": assess_readability(code),             # Should be clear
            "security_issues": scan_for_vulnerabilities(code),   # Should be 0
            "performance_concerns": detect_patterns(code),       # Should be minimal
        }

        return QualityReport(metrics)
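
A usage sketch, assuming the analyzer above and an asyncio entry point; the file path is purely illustrative:

import asyncio

async def main() -> None:
    # Hypothetical file under review; in practice the skill supplies the diff.
    source = open("app/services/pricing.py").read()
    report = await CodeQualityAnalyzer().analyze(source)
    print(report.metrics)

asyncio.run(main())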

TRUST 5 Automated Checks

T - Test First:
  ├─ Coverage ≥85%? ✓
  ├─ Happy path covered? ✓
  ├─ Edge cases tested? ✓
  └─ Error scenarios? ✓

R - Readable:
  ├─ Functions <50 lines? ✓
  ├─ Meaningful names? ✓
  ├─ Comments explain WHY? ✓
  └─ Complexity <10? ✓

U - Unified:
  ├─ Follows team patterns? ✓
  ├─ Consistent style? ✓
  ├─ Error handling aligned? ✓
  └─ Logging strategy consistent? ✓

S - Secured:
  ├─ Inputs validated? ✓
  ├─ No hardcoded secrets? ✓
  ├─ SQL injection prevention? ✓
  └─ XSS prevention? ✓

T - Trackable:
  ├─ SPEC referenced? ✓
  ├─ Tests linked? ✓
  └─ Code linked to PR? ✓
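
These checks can be collapsed into a simple merge gate. A minimal sketch, assuming the metric keys (coverage, complexity, style_violations, and so on) come from the analyzer's report; the thresholds mirror the checklist above.

def trust_gate(metrics: dict) -> dict:
    """Map quality metrics to a pass/fail flag per TRUST 5 letter (illustrative thresholds)."""
    return {
        "T_test_first": metrics.get("coverage", 0.0) >= 0.85,
        "R_readable": metrics.get("complexity", 99) < 10
                      and metrics.get("max_function_lines", 999) < 50,
        "U_unified": metrics.get("style_violations", 1) == 0,
        "S_secured": metrics.get("security_issues", 1) == 0,
        "T_trackable": metrics.get("spec_referenced", False),
    }

def ready_to_merge(metrics: dict) -> bool:
    """A change is ready to merge only when every TRUST check passes."""
    return all(trust_gate(metrics).values())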

Security Vulnerability Detection

Critical Checks:
  ✓ Hardcoded credentials (API keys, passwords)
  ✓ SQL injection vectors
  ✓ XSS vulnerabilities
  ✓ CSRF token absence
  ✓ Unsafe deserialization
  ✓ Privilege escalation paths

High Priority:
  ✓ Missing input validation
  ✓ Weak cryptography
  ✓ Insecure randomness
  ✓ Race conditions
  ✓ Dependency vulnerabilities

Medium Priority:
  ✓ Missing error messages
  ✓ Insufficient logging
  ✓ Memory leaks
  ✓ Resource exhaustion risks
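
As one concrete example, hardcoded-credential detection can be sketched with a few regular expressions. The patterns below are illustrative only and not a substitute for dedicated tools such as detect-secrets or git-secrets.

import re

# Illustrative patterns; real scanners ship far larger, curated rule sets.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),    # embedded private keys
]

def find_hardcoded_secrets(code: str) -> list[str]:
    """Return the lines that look like hardcoded credentials."""
    return [
        line for line in code.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]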

Performance Analysis

Detection Patterns:
  ✓ O(n²) algorithms in O(n) context
  ✓ Unnecessary file I/O in loops
  ✓ Blocking operations in async code
  ✓ Memory allocations in hot paths
  ✓ Inefficient string concatenation
  ✓ Database queries without indexing

Optimization Suggestions:
  ✓ Use more efficient algorithm
  ✓ Cache results
  ✓ Batch operations
  ✓ Use async/await properly
  ✓ Index database columns
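
For instance, the "inefficient string concatenation" pattern and its suggested fix look like this (a generic Python illustration, not output produced by the skill):

def build_report_slow(lines: list[str]) -> str:
    out = ""
    for line in lines:        # O(n²): each += copies the whole string built so far
        out += line + "\n"
    return out

def build_report_fast(lines: list[str]) -> str:
    return "\n".join(lines) + "\n"    # single pass, single allocation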

Automated Review Report

# Code Review Report

## Summary
✅ **Status**: APPROVED (with 2 minor notes)
- Test Coverage: 87% ✓
- Security: ✓ Clean
- Performance: ✓ No concerns
- Design: ✓ Good
- TRUST 5: All checks passed

## TRUST 5 Assessment

### T - Test First: ✓
Coverage: 87% (target ≥85%)
- Happy path: ✓ Covered
- Edge cases: ✓ 5 tests
- Error scenarios: ✓ 3 tests

### R - Readable: ✓
All functions <50 lines, clear names

### U - Unified: ✓
Consistent with team patterns

### S - Secured: ✓
- No credentials: ✓
- Input validation: ✓
- Error messages safe: ✓

### T - Trackable: ✓
- SPEC-042 referenced
- 5 tests linked
- Code linked to PR

## Detailed Findings

### Strengths
1. ✅ Excellent test coverage (87%)
2. ✅ Clean, readable code
3. ✅ Proper error handling
4. ✅ Security best practices followed

### Minor Notes
1. ⚠️ Function `calculate_discount` could use type hints
2. ⚠️ Consider adding cache for frequently called API

### Recommendations
1. Add type hints to improve IDE support
2. Consider Redis caching for API calls

## Approval
✅ **Ready to merge** - All TRUST 5 checks passed
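
To illustrate the first minor note above, adding type hints to a function like calculate_discount is a small, low-risk change. The body below is hypothetical; only the function name comes from the sample report.

def calculate_discount(price: float, rate: float, member: bool = False) -> float:
    """Apply a percentage discount, with an extra 5% off for members (illustrative logic)."""
    discount = rate + (0.05 if member else 0.0)
    return round(price * (1 - discount), 2)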

Integration with Context7

Live Security Patterns: Get latest vulnerability detection from official databases
Performance Optimization: Context7 provides version-specific optimization patterns
Language Updates: Context7 includes latest language/framework best practices


Best Practices

DO

  • ✅ Run automated checks before human review
  • ✅ Provide specific, actionable feedback
  • ✅ Explain WHY improvements are needed
  • ✅ Link to official documentation
  • ✅ Flag security issues immediately
  • ✅ Enforce TRUST 5 consistently
  • ✅ Update based on new findings
  • ✅ Track metrics over time

DON'T

  • ❌ Block human review on issues automation already catches (let the linters handle them)
  • ❌ Miss security vulnerabilities
  • ❌ Accept coverage <85%
  • ❌ Ignore deprecated patterns
  • ❌ Skip performance analysis
  • ❌ Approve without TRUST 5 validation
  • ❌ Add comments that code already explains

Related Skills

  • moai-alfred-code-reviewer (Manual review guidance)
  • moai-essentials-debug (Debugging techniques)

For detailed analysis guidelines: reference.md
For real-world examples: examples.md
Last Updated: 2025-11-12
Status: Production Ready (Enterprise v4.0.0)

GitHub Repository

modu-ai/moai-adk
Path: src/moai_adk/templates/.claude/skills/moai-essentials-review
agentic-ai · agentic-coding · agentic-workflow · claude · claudecode · vibe-coding

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
