
moai-alfred-clone-pattern

modu-ai

About

This Claude Skill implements an enterprise Master-Clone pattern for managing complex multi-step tasks with full project context. It enables autonomous delegation, parallel processing, and intelligent task distribution across large-scale workflows. Use it for complex refactoring, architecture restructuring, multi-file transformations, and parallel exploration scenarios.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/modu-ai/moai-adk

Git Clone (Alternative)
git clone https://github.com/modu-ai/moai-adk.git ~/.claude/skills/moai-alfred-clone-pattern

Copy and paste one of these commands into Claude Code to install the skill.

Documentation

Enterprise Master-Clone Pattern Skill v4.0.0

Skill Metadata

Field                    Value
Skill Name               moai-alfred-clone-pattern
Version                  4.0.0 Enterprise (2025-11-12)
Allowed tools            Read, Bash, Task
Auto-load                On demand for complex multi-step tasks
Tier                     Alfred (Orchestration)
Lines of Content         900+ with 12+ enterprise examples
Progressive Disclosure   3-level (quick-start, patterns, advanced)

What It Does

Provides comprehensive guidance for Alfred's Master-Clone pattern: a delegation mechanism where Alfred creates autonomous clones (Task-delegated agents) to handle complex multi-step tasks that don't require domain-specific expertise but benefit from:

  • Full project context and codebase understanding
  • Parallel processing capabilities
  • Independent decision-making
  • Comprehensive state tracking

When to Use (Decision Framework)

Use Clone Pattern when:

  • Task requires 5+ sequential steps OR affects 100+ files
  • No domain-specific expertise needed (not UI, Backend, DB, Security, ML)
  • Task is complex with high uncertainty
  • Parallel processing would be beneficial
  • Full project context is required for optimal decisions
  • Task can run autonomously without continuous user input

Examples:

  • Large-scale migrations (v0.14.0 → v0.15.2 affecting 200 files)
  • Refactoring across many files (100+ imports, API changes)
  • Parallel exploration/evaluation tasks
  • Complex architecture restructuring
  • Bulk file transformations with context-aware logic
  • Schema migrations affecting multiple services
  • Dependency upgrade cascades

DON'T use Clone Pattern when:

  • Domain expertise needed (use specialist agents instead)
  • Task < 5 steps (direct execution is faster)
  • Quick yes/no decision (use AskUserQuestion)
  • Single file modification (use tdd-implementer)
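
The decision framework above reduces to a small gate. A minimal sketch; the task descriptor fields (steps, affectedFiles, domain, needsContinuousUserInput) are illustrative assumptions, not part of this skill's API:

// Hypothetical decision helper encoding the thresholds above.
// All fields on `task` are assumed names for illustration only.
function shouldUseClonePattern(task) {
  const largeEnough = task.steps >= 5 || task.affectedFiles >= 100;
  const generalist = !["UI", "Backend", "DB", "Security", "ML"].includes(task.domain);
  const autonomous = !task.needsContinuousUserInput;
  return largeEnough && generalist && autonomous;
}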

Key Concepts

Master-Clone Architecture

Master Agent (Alfred)
    ↓ Creates with Task()
Clone Agent #1          Clone Agent #2          Clone Agent #3
(Parallel execution with shared context)
    ↓                       ↓                       ↓
[Exploration]        [Analysis]             [Implementation]
    ↓                       ↓                       ↓
Results aggregation & synthesis by Master
    ↓
User presentation + next steps

Master Responsibilities:

  • Analyze task scope and complexity
  • Decompose into independent parallel work
  • Create clones with appropriate context
  • Aggregate and synthesize results
  • Present findings to user

Clone Responsibilities:

  • Execute assigned sub-task autonomously
  • Use full project context for intelligent decisions
  • Track state and progress
  • Report findings with evidence
  • Handle errors gracefully
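
Taken together, the master's loop is decompose, delegate, aggregate. A schematic sketch, where decomposeIntoIndependentWork and synthesize stand in for the master's own reasoning and are not real APIs:

// Schematic master loop: the placeholders below represent the master's
// reasoning steps, not concrete library functions.
async function masterOrchestrate(task) {
  const subTasks = decomposeIntoIndependentWork(task);       // analyze scope, decompose
  const reports = await Promise.all(subTasks.map(subTask =>  // create clones in parallel
    Task({ description: subTask.title, prompt: subTask.promptWithContext })
  ));
  return synthesize(reports);                                // aggregate and present
}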

3-Level Architecture

Level 1: Simple Parallel Task

Scenario: Explore multiple implementation approaches in parallel

// Master agent: Create parallel clones for exploration
const clones = [
  Task({
    description: "Explore PostgreSQL implementation for user persistence",
    prompt: "Analyze PostgreSQL libraries, schema design, migration strategy. Provide pros/cons and code examples."
  }),
  Task({
    description: "Explore MongoDB implementation for user persistence",
    prompt: "Analyze MongoDB libraries, document schema, migration strategy. Provide pros/cons and code examples."
  }),
  Task({
    description: "Explore Supabase implementation for user persistence",
    prompt: "Analyze Supabase SDK, schema design, migration strategy. Provide pros/cons and code examples."
  })
];

// Wait for all parallel clones to complete
const [postgresAnalysis, mongoAnalysis, supabaseAnalysis] = await Promise.all(clones);

// Master synthesizes results (compare the resolved analyses, not the pending promises)
const analyses = [postgresAnalysis, mongoAnalysis, supabaseAnalysis];
const comparison = {
  options: analyses,
  recommendation: selectBestOption(analyses),
  tradeoffs: analyzeTradeoffs(analyses)
};

Level 2: Sequential Dependent Tasks

Scenario: Complex migration where later steps depend on earlier analysis

// Step 1: Analyze current state
const analysisResult = await Task({
  description: "Analyze v0.14.0 codebase structure",
  prompt: "Scan project for all imports of 'old-api'. Document usage patterns, edge cases, and dependencies. Provide summary with file-by-file breakdown."
});

// Step 2: Plan migration strategy (depends on analysis)
const planResult = await Task({
  description: "Plan migration strategy from v0.14.0 to v0.15.2",
  prompt: `Using this analysis: ${analysisResult}\n\nCreate a step-by-step migration plan with:\n- Phased approach (phase 1, 2, 3)\n- Risk mitigation\n- Testing strategy\n- Rollback procedure`
});

// Step 3: Execute migration (depends on plan)
const migrationResult = await Task({
  description: "Execute v0.14.0 → v0.15.2 migration",
  prompt: `Using this plan: ${planResult}\n\nExecute the migration:\n- Update imports\n- Modify APIs\n- Update tests\n- Verify compatibility`
});

// Step 4: Validate results (depends on migration)
const validationResult = await Task({
  description: "Validate migration completeness",
  prompt: `Using this migration report: ${migrationResult}\n\nVerify migration:\n- All imports updated\n- No breaking changes\n- Tests passing\n- Performance metrics maintained`
});

// Master reports final state
return {
  analysis: analysisResult,
  plan: planResult,
  migration: migrationResult,
  validation: validationResult,
  status: validationResult.passed ? "SUCCESS" : "NEEDS_REVIEW"
};

Level 3: Hybrid Parallel + Sequential (Advanced)

Scenario: Large refactoring with parallel analysis, synchronized implementation

// Phase 1: Parallel analysis clones
const [apiAnalysis, dbAnalysis, authAnalysis] = await Promise.all([
  Task({ description: "Analyze API layer usage...", prompt: "..." }),
  Task({ description: "Analyze DB layer usage...", prompt: "..." }),
  Task({ description: "Analyze Auth layer usage...", prompt: "..." })
]);

// Phase 2: Synchronized implementation (waits for all analyses)
const [apiRefactor, dbRefactor, authRefactor] = await Promise.all([
  Task({
    description: "Refactor API layer",
    prompt: `Based on analysis:\n${apiAnalysis}\n\nRefactor API with:\n- New patterns\n- Tests\n- Documentation`
  }),
  Task({
    description: "Refactor DB layer",
    prompt: `Based on analysis:\n${dbAnalysis}\n\nRefactor DB with:\n- Schema updates\n- Migration scripts\n- Tests`
  }),
  Task({
    description: "Refactor Auth layer",
    prompt: `Based on analysis:\n${authAnalysis}\n\nRefactor Auth with:\n- New strategy\n- Migration\n- Tests`
  })
]);

// Phase 3: Integration validation
const integrationResult = await Task({
  description: "Validate refactored layer integration",
  prompt: `Verify all refactored layers work together:\n- API uses new DB patterns\n- Auth integrates with API\n- No breaking changes\n- All tests passing`
});

return {
  phase1: { apiAnalysis, dbAnalysis, authAnalysis },
  phase2: { apiRefactor, dbRefactor, authRefactor },
  phase3: integrationResult,
  status: "COMPLETE"
};

Best Practices

DO

  • Define clear sub-tasks: Each clone should have a specific, measurable goal
  • Provide full context: Include project structure, existing patterns, constraints
  • Use sequential when needed: Depend on earlier results when necessary
  • Validate results: Always verify clone outputs before proceeding
  • Document findings: Track what each clone discovered
  • Handle failures gracefully: Plan for individual clone failures (see the sketch after this list)
  • Aggregate intelligently: Synthesize parallel results into coherent analysis
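
One way to plan for individual clone failures is Promise.allSettled, which, unlike Promise.all, resolves even when some clones fail, so the master can salvage partial results. A minimal sketch, assuming subTasks is an array of Task() argument objects and reportPartialFailure is a hypothetical reporting helper:

// Failure-tolerant aggregation: allSettled never short-circuits, so one
// rejected clone does not discard the other clones' results.
const settled = await Promise.allSettled(subTasks.map(subTask => Task(subTask)));

const succeeded = settled
  .filter(result => result.status === "fulfilled")
  .map(result => result.value);
const failed = settled
  .filter(result => result.status === "rejected")
  .map(result => result.reason);

// Master decides whether to retry failed sub-tasks or report partial results.
if (failed.length > 0) {
  reportPartialFailure(succeeded, failed); // hypothetical helper, not part of the skill
}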

DON'T

  • Over-parallelize: Creating 20 clones is overkill (use 2-5)
  • Under-specify tasks: Vague descriptions lead to mediocre results
  • Ignore dependencies: Run dependent tasks sequentially instead of forcing them in parallel
  • Skip validation: Always verify clone outputs (trust but verify)
  • Lose context: Always include relevant project information
  • Create circular dependencies: Avoid Task A waiting on Task B waiting on Task A

Implementation Patterns

Pattern 1: Exploration with Synthesis

// Create 3 clones exploring different approaches
const explorations = await Promise.all(approaches.map(approach =>
  Task({
    description: `Explore ${approach.name} approach`,
    prompt: `Research ${approach.name}...\nProvide: pros, cons, code example, learning curve`
  })
));

// Master synthesizes into comparison table
return {
  comparison: createComparisonTable(explorations),
  recommendation: selectBestApproach(explorations),
  decisionRationale: explainDecision(explorations)
};

Pattern 2: Phased Migration

// Phase 1: Analyze
const analysis = await analyzeCurrentState();

// Phase 2: Plan (depends on analysis)
const plan = await planMigration(analysis);

// Phase 3: Implement (depends on plan)
const implementation = await implementMigration(plan);

// Phase 4: Validate (depends on implementation)
const validation = await validateMigration(implementation);

return { analysis, plan, implementation, validation };

Pattern 3: Distributed Refactoring

// Split files into groups, refactor each group in parallel
const fileGroups = splitFilesIntoGroups(files, 5);
const refactorResults = await Promise.all(
  fileGroups.map(group =>
    Task({
      description: `Refactor files: ${group.join(", ")}`,
      prompt: `Refactor these files using new patterns:\n${group.join("\n")}`
    })
  )
);

// Master validates all refactored files work together
return validateIntegration(refactorResults);
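
splitFilesIntoGroups is left undefined above; a round-robin implementation is one reasonable assumption:

// Assumed round-robin chunker for Pattern 3; the skill itself does not
// define splitFilesIntoGroups, so this is one possible implementation.
function splitFilesIntoGroups(files, groupCount) {
  const groups = Array.from({ length: groupCount }, () => []);
  files.forEach((file, index) => groups[index % groupCount].push(file));
  return groups;
}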

When NOT to Use (Anti-Patterns)

Scenario                    Why                         Use Instead
Single file change          Too much overhead           Direct tdd-implementer
2-3 quick steps             Sequential simpler          Direct execution
Domain expertise required   Needs specialist            Specialist agent (security, DB, etc.)
Real-time interaction       Clones run independently    Interactive agent
Simple query                Overkill complexity         Direct lookup

Related Skills

  • moai-alfred-agent-guide (Agent architecture & delegation)
  • moai-alfred-task-decomposition (Breaking down complex tasks)
  • moai-essentials-refactor (Refactoring patterns & examples)

For detailed API specifications: reference.md
For real-world examples: examples.md
Last Updated: 2025-11-12
Status: Production Ready (Enterprise v4.0.0)

GitHub Repository

modu-ai/moai-adk
Path: src/moai_adk/templates/.claude/skills/moai-alfred-clone-pattern
Tags: agentic-ai, agentic-coding, agentic-workflow, claude, claudecode, vibe-coding
