when-orchestrating-swarm-use-swarm-orchestration
About
This skill provides advanced multi-agent swarm orchestration for complex workflows. It handles task decomposition, distributed execution across specialized agents, and result synthesis. Use it when you need to coordinate multiple AI agents to solve intricate problems requiring parallel processing.
Skill Documentation
Swarm Orchestration SOP
Overview
This skill implements complex multi-agent swarm orchestration with intelligent task decomposition, distributed execution, progress monitoring, and result synthesis. It enables coordinated execution of complex workflows across multiple specialized agents.
Agents & Responsibilities
task-orchestrator
Role: Central orchestration and task decomposition
Responsibilities:
- Decompose complex tasks into subtasks
- Assign tasks to appropriate agents
- Monitor execution progress
- Synthesize results from multiple agents
hierarchical-coordinator
Role: Hierarchical task delegation and coordination
Responsibilities:
- Manage task hierarchy
- Coordinate parent-child task relationships
- Handle task dependencies
- Ensure proper execution order
adaptive-coordinator
Role: Dynamic workload balancing and optimization
Responsibilities:
- Monitor agent workloads
- Rebalance task assignments
- Optimize resource allocation
- Adapt to changing conditions
Phase 1: Plan Orchestration
Objective
Analyze complex task requirements and create detailed decomposition plan with dependency mapping.
Evidence-Based Validation
- Task decomposition tree created
- Dependencies mapped
- Agent assignments planned
- Execution strategy defined
Scripts
# Analyze task complexity
npx claude-flow@alpha task analyze --task "Build full-stack application" --output task-analysis.json
# Generate decomposition tree
npx claude-flow@alpha task decompose \
--task "Build full-stack application" \
--max-depth 3 \
--output decomposition.json
# Visualize decomposition
npx claude-flow@alpha task visualize --input decomposition.json --output task-tree.png
# Store decomposition in memory
npx claude-flow@alpha memory store \
--key "orchestration/decomposition" \
--file decomposition.json
# Identify dependencies
npx claude-flow@alpha task dependencies \
--input decomposition.json \
--output dependencies.json
# Plan agent assignments
npx claude-flow@alpha task plan \
--decomposition decomposition.json \
--available-agents 12 \
--output execution-plan.json
Task Decomposition Strategy
Level 1: High-Level Goals
{
"task": "Build full-stack application",
"subtasks": [
"Design architecture",
"Implement backend",
"Implement frontend",
"Setup infrastructure",
"Testing and QA"
]
}
Level 2: Component Tasks
{
"task": "Implement backend",
"subtasks": [
"Design API endpoints",
"Implement authentication",
"Setup database",
"Create business logic",
"API documentation"
]
}
Level 3: Atomic Tasks
{
"task": "Implement authentication",
"subtasks": [
"Setup JWT library",
"Create user model",
"Implement login endpoint",
"Implement registration endpoint",
"Add password hashing",
"Create auth middleware"
]
}
Memory Patterns
# Store orchestration plan
npx claude-flow@alpha memory store \
--key "orchestration/plan" \
--value '{
"totalTasks": 45,
"levels": 3,
"estimatedDuration": "2h 30m",
"requiredAgents": 12
}'
# Store dependency graph
npx claude-flow@alpha memory store \
--key "orchestration/dependencies" \
--value '{
"task-003": ["task-001", "task-002"],
"task-008": ["task-003", "task-004"],
"task-012": ["task-008", "task-009"]
}'
Validation Criteria
- Task tree depth ≤ 3 levels
- All tasks have clear success criteria
- Dependencies correctly identified
- No circular dependencies (see the check sketched below)
- Agent capacity sufficient for load
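To back the "no circular dependencies" criterion with evidence, the dependency graph can be checked directly. The script below is a minimal sketch (a hypothetical helper, not a claude-flow command); it assumes dependencies.json maps each task ID to an array of prerequisite task IDs, as in the memory pattern above, and it requires bash 4+ for associative arrays.
#!/bin/bash
# check-cycles.sh -- depth-first search for circular dependencies (sketch)
# Assumes dependencies.json has the shape: { "task-003": ["task-001", "task-002"], ... }
DEPS_FILE="${1:-dependencies.json}"
declare -A STATE   # unset = unvisited, 1 = on current DFS path, 2 = fully explored

visit() {
  local task="$1" dep
  if [ "${STATE[$task]:-0}" = "2" ]; then return 0; fi
  if [ "${STATE[$task]:-0}" = "1" ]; then
    echo "Circular dependency detected involving: $task" >&2
    return 1
  fi
  STATE[$task]=1
  # Tasks absent from the file are treated as having no dependencies
  for dep in $(jq -r --arg t "$task" '.[$t] // [] | .[]' "$DEPS_FILE"); do
    visit "$dep" || return 1
  done
  STATE[$task]=2
}

for TASK in $(jq -r 'keys[]' "$DEPS_FILE"); do
  visit "$TASK" || exit 1
done
echo "No circular dependencies found in $DEPS_FILE"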
Phase 2: Initialize Swarm
Objective
Setup swarm infrastructure with appropriate topology and coordinator agents.
Evidence-Based Validation
- Swarm initialized successfully
- Topology optimized for workload
- Coordinator agents active
- Memory coordination established
Scripts
# Determine optimal topology
TASK_COUNT=$(jq '.totalTasks' decomposition.json)
if [ "$TASK_COUNT" -gt 30 ]; then
TOPOLOGY="mesh"
elif [ "$TASK_COUNT" -gt 15 ]; then
TOPOLOGY="hierarchical"
else
TOPOLOGY="star"
fi
# Initialize swarm with optimal topology
npx claude-flow@alpha swarm init \
--topology $TOPOLOGY \
--max-agents 15 \
--strategy adaptive
# Spawn task orchestrator
npx claude-flow@alpha agent spawn \
--type coordinator \
--role "task-orchestrator" \
--capabilities "task-decomposition,assignment,synthesis"
# Spawn hierarchical coordinator
npx claude-flow@alpha agent spawn \
--type coordinator \
--role "hierarchical-coordinator" \
--capabilities "hierarchy-management,delegation"
# Spawn adaptive coordinator
npx claude-flow@alpha agent spawn \
--type coordinator \
--role "adaptive-coordinator" \
--capabilities "workload-balancing,optimization"
# Verify swarm status
npx claude-flow@alpha swarm status --show-agents --show-topology
MCP Integration
// Initialize swarm
mcp__claude-flow__swarm_init({
topology: "hierarchical",
maxAgents: 15,
strategy: "adaptive"
})
// Spawn coordinators
mcp__claude-flow__agent_spawn({
type: "coordinator",
name: "task-orchestrator",
capabilities: ["task-decomposition", "assignment", "synthesis"]
})
mcp__claude-flow__agent_spawn({
type: "coordinator",
name: "hierarchical-coordinator",
capabilities: ["hierarchy-management", "delegation"]
})
mcp__claude-flow__agent_spawn({
type: "coordinator",
name: "adaptive-coordinator",
capabilities: ["workload-balancing", "optimization"]
})
Memory Patterns
# Store swarm configuration
npx claude-flow@alpha memory store \
--key "orchestration/swarm" \
--value '{
"swarmId": "swarm-12345",
"topology": "hierarchical",
"maxAgents": 15,
"coordinators": ["task-orchestrator", "hierarchical-coordinator", "adaptive-coordinator"]
}'
Validation Criteria
- Swarm operational
- All coordinators active (see the verification sketch below)
- Topology matches requirements
- Memory coordination functional
- Health checks passing
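One lightweight way to confirm that all coordinators are active is to scan the status output for each expected role. This is a rough sketch; it assumes that swarm status --show-agents lists agent roles by name.
# Verify the three coordinators are reported by the swarm (sketch)
STATUS=$(npx claude-flow@alpha swarm status --show-agents)
for ROLE in task-orchestrator hierarchical-coordinator adaptive-coordinator; do
  if echo "$STATUS" | grep -q "$ROLE"; then
    echo "OK: $ROLE active"
  else
    echo "MISSING: $ROLE not found in swarm status" >&2
  fi
done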
Phase 3: Orchestrate Execution
Objective
Coordinate distributed task execution across swarm agents with proper dependency handling.
Evidence-Based Validation
- All tasks assigned to agents
- Dependencies respected
- Execution in progress
- Progress tracked continuously
Scripts
# Spawn specialized agents based on task requirements
npx claude-flow@alpha agent spawn --type researcher --count 2
npx claude-flow@alpha agent spawn --type coder --count 5
npx claude-flow@alpha agent spawn --type reviewer --count 2
npx claude-flow@alpha agent spawn --type tester --count 2
# Orchestrate task execution
npx claude-flow@alpha task orchestrate \
--plan execution-plan.json \
--strategy adaptive \
--max-agents 12 \
--priority high
# Alternative: Orchestrate with MCP
# mcp__claude-flow__task_orchestrate({
# task: "Execute full-stack application build",
# strategy: "adaptive",
# maxAgents: 12,
# priority: "high"
# })
# Monitor orchestration status
npx claude-flow@alpha task status --detailed --json > task-status.json
# Track individual task progress
npx claude-flow@alpha task list --filter "in_progress" --show-timing
# Monitor agent workloads
npx claude-flow@alpha agent metrics --metric tasks --format table
Task Assignment Algorithm
#!/bin/bash
# assign-tasks.sh
# Read tasks from the decomposition, one compact JSON object per line,
# so that task definitions containing spaces are handled correctly
jq -c '.tasks[]' decomposition.json | while read -r TASK; do
  TASK_ID=$(echo "$TASK" | jq -r '.id')
  TASK_TYPE=$(echo "$TASK" | jq -r '.type')
  DEPENDENCIES=$(echo "$TASK" | jq -r '.dependencies[]?')
  # Check whether all dependencies have completed
  DEPS_COMPLETE=true
  for DEP in $DEPENDENCIES; do
    DEP_STATUS=$(npx claude-flow@alpha task status --task-id "$DEP" --format json | jq -r '.status')
    if [ "$DEP_STATUS" != "completed" ]; then
      DEPS_COMPLETE=false
      break
    fi
  done
  # Assign the task only if all dependencies are complete
  if [ "$DEPS_COMPLETE" = true ]; then
    # Find the least loaded agent of the required type
    AGENT_ID=$(npx claude-flow@alpha agent list \
      --filter "type=$TASK_TYPE" \
      --sort-by load \
      --format json | jq -r '.[0].id')
    # Assign the task
    npx claude-flow@alpha task assign \
      --task-id "$TASK_ID" \
      --agent-id "$AGENT_ID"
    echo "Assigned task $TASK_ID to agent $AGENT_ID"
  fi
done
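Because the script only assigns tasks whose dependencies are already complete, it has to be re-run as earlier tasks finish. A simple driver loop is sketched below; it assumes the task list supports a "pending" filter alongside the "completed" and "in_progress" filters used elsewhere in this document.
# Re-run assignment until nothing is left to assign (sketch)
while true; do
  PENDING=$(npx claude-flow@alpha task list --filter "pending" | wc -l)
  if [ "$PENDING" -eq 0 ]; then
    echo "All tasks assigned"
    break
  fi
  ./assign-tasks.sh
  sleep 30
done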
Memory Patterns
# Store task assignments
npx claude-flow@alpha memory store \
--key "orchestration/assignments" \
--value '{
"task-001": {"agent": "agent-researcher-1", "status": "in_progress", "started": "2025-10-30T10:00:00Z"},
"task-002": {"agent": "agent-coder-1", "status": "in_progress", "started": "2025-10-30T10:01:00Z"}
}'
# Store execution timeline
npx claude-flow@alpha memory store \
--key "orchestration/timeline" \
--value '{
"started": "2025-10-30T10:00:00Z",
"tasksCompleted": 12,
"tasksInProgress": 8,
"tasksPending": 25
}'
Validation Criteria
- All agents assigned tasks
- No dependency violations
- Task execution progressing
- No agent overload
- Error handling active
Phase 4: Monitor Progress
Objective
Track execution progress, identify blockers, and maintain real-time visibility.
Evidence-Based Validation
- Progress metrics collected
- Blockers identified quickly
- Agents responding properly
- Timeline on track
Scripts
# Start continuous monitoring
npx claude-flow@alpha swarm monitor \
--interval 10 \
--duration 3600 \
--output orchestration-monitor.log &
# Track task completion rate
while true; do
  COMPLETED=$(npx claude-flow@alpha task list --filter "completed" | wc -l)
  TOTAL=$(npx claude-flow@alpha task list | wc -l)
  if [ "$TOTAL" -gt 0 ]; then
    PROGRESS=$((COMPLETED * 100 / TOTAL))
    echo "Progress: $PROGRESS% ($COMPLETED/$TOTAL tasks)"
    npx claude-flow@alpha memory store \
      --key "orchestration/progress" \
      --value "{\"completed\": $COMPLETED, \"total\": $TOTAL, \"percentage\": $PROGRESS}"
    # Stop polling once everything is complete
    [ "$COMPLETED" -ge "$TOTAL" ] && break
  fi
  sleep 30
done &
# Monitor for blocked tasks
npx claude-flow@alpha task detect-blocked \
--threshold 300 \
--notify-on-block
# Monitor agent health
npx claude-flow@alpha agent health-check --all --interval 60
# Generate progress report
npx claude-flow@alpha orchestration report \
--include-timeline \
--include-agent-metrics \
--output progress-report.md
Progress Visualization
# Generate Gantt chart
npx claude-flow@alpha task gantt \
--input task-status.json \
--output gantt-chart.png
# Generate network diagram
npx claude-flow@alpha task network \
--show-dependencies \
--show-progress \
--output network-diagram.png
Memory Patterns
# Store progress snapshots
npx claude-flow@alpha memory store \
--key "orchestration/snapshot-$(date +%s)" \
--value '{
"timestamp": "'$(date -Iseconds)'",
"completed": 18,
"inProgress": 12,
"pending": 15,
"blocked": 0,
"failed": 0
}'
# Store blocker information
npx claude-flow@alpha memory store \
--key "orchestration/blockers" \
--value '{
"task-015": {"reason": "dependency-failed", "since": "2025-10-30T10:15:00Z"},
"task-022": {"reason": "agent-unresponsive", "since": "2025-10-30T10:20:00Z"}
}'
Validation Criteria
- Progress tracking accurate
- Blockers detected within 5 minutes
- No stalled tasks unnoticed
- Agent failures handled
- Progress reports generated
Phase 5: Synthesize Results
Objective
Aggregate and synthesize results from all completed tasks into coherent outputs.
Evidence-Based Validation
- All task results collected
- Results synthesized successfully
- Output validated
- Final report generated
Scripts
# Collect all task results
npx claude-flow@alpha task results --all --format json > all-results.json
# Synthesize results by category
npx claude-flow@alpha task synthesize \
--input all-results.json \
--group-by category \
--output synthesized-results.json
# Generate final outputs
npx claude-flow@alpha orchestration finalize \
--results synthesized-results.json \
--output final-output/
# Validate outputs
npx claude-flow@alpha orchestration validate \
--output final-output/ \
--criteria validation-criteria.json
# Generate final report
npx claude-flow@alpha orchestration report \
--type final \
--include-metrics \
--include-timeline \
--include-outputs \
--output final-orchestration-report.md
# Archive orchestration data
npx claude-flow@alpha orchestration archive \
--output orchestration-archive-$(date +%Y%m%d-%H%M%S).tar.gz
MCP Integration
// Get task results
mcp__claude-flow__task_results({
taskId: "all",
format: "detailed"
})
// Check final status
mcp__claude-flow__task_status({
detailed: true
})
Result Synthesis Strategy
1. Collect Results:
# Get results from each agent type
RESEARCHER_RESULTS=$(npx claude-flow@alpha task results --agent-type researcher --format json)
CODER_RESULTS=$(npx claude-flow@alpha task results --agent-type coder --format json)
REVIEWER_RESULTS=$(npx claude-flow@alpha task results --agent-type reviewer --format json)
2. Aggregate by Phase:
# Architecture phase results
ARCHITECTURE=$(jq '[.[] | select(.phase=="architecture")]' all-results.json)
# Implementation phase results
IMPLEMENTATION=$(jq '[.[] | select(.phase=="implementation")]' all-results.json)
# Testing phase results
TESTING=$(jq '[.[] | select(.phase=="testing")]' all-results.json)
3. Synthesize Final Output:
# Combine all results
jq -s '{
architecture: .[0],
implementation: .[1],
testing: .[2],
metadata: {
totalTasks: (.[0] + .[1] + .[2] | length),
completedAt: "'$(date -Iseconds)'",
successRate: 0.98
}
}' \
<(echo "$ARCHITECTURE") \
<(echo "$IMPLEMENTATION") \
<(echo "$TESTING") \
> final-synthesis.json
Memory Patterns
# Store final results
npx claude-flow@alpha memory store \
--key "orchestration/results/final" \
--file final-synthesis.json
# Store performance metrics
npx claude-flow@alpha memory store \
--key "orchestration/metrics/final" \
--value '{
"totalTasks": 45,
"completed": 44,
"failed": 1,
"duration": "2h 18m",
"avgTaskTime": "3m 5s",
"throughput": "0.32 tasks/min"
}'
Validation Criteria
- All task results accounted for
- Synthesis logic correct
- Outputs validated successfully
- No data loss (see the sanity check below)
- Final report comprehensive
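To support the "all task results accounted for" and "no data loss" criteria, the number of collected results can be compared against the planned task count. This is a minimal sketch; it assumes all-results.json is a JSON array of per-task results and that decomposition.json carries the totalTasks field read in Phase 2.
# Sanity check: collected results vs. planned task count (sketch)
PLANNED=$(jq '.totalTasks' decomposition.json)
COLLECTED=$(jq 'length' all-results.json)
if [ "$COLLECTED" -ge "$PLANNED" ]; then
  echo "All $PLANNED planned tasks have results ($COLLECTED collected)"
else
  echo "Missing results: only $COLLECTED of $PLANNED tasks accounted for" >&2
fi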
Success Criteria
Overall Validation
- Task decomposition accurate
- Swarm orchestration successful
- All tasks completed (≥95%)
- Results synthesized correctly
- Performance targets met
Performance Targets
- Task success rate: ≥95%
- Average task completion time: Within estimates ±20%
- Agent utilization: 70-90%
- Coordination overhead: <15%
- Result synthesis time: <5 minutes
Common Issues & Solutions
Issue: Task Dependencies Not Resolved
Symptoms: Tasks blocked waiting for dependencies
Solution: Verify dependency graph, check for circular dependencies
Issue: Agent Overload
Symptoms: Some agents at 100% utilization, others idle
Solution: Rebalance task assignments, spawn additional agents
Issue: Task Execution Stalled
Symptoms: Tasks remain in-progress indefinitely
Solution: Implement a timeout mechanism (see the sketch after this section) and restart stuck agents
Issue: Result Synthesis Incomplete
Symptoms: Missing results in final output
Solution: Verify all tasks completed, check result collection logic
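For stalled executions, one option is to wrap the orchestration call in a hard time budget so a hung run fails loudly instead of sitting in progress indefinitely. The sketch below uses the GNU coreutils timeout utility with an assumed 90-minute budget; GNU timeout exits with status 124 when the budget is exceeded.
# Enforce a hard time budget on orchestration (sketch; 5400s = 90 minutes is an assumed budget)
timeout 5400 npx claude-flow@alpha task orchestrate \
  --plan execution-plan.json \
  --strategy adaptive \
  --max-agents 12 \
  --priority high
if [ $? -eq 124 ]; then
  echo "Orchestration exceeded its time budget; checking for blocked tasks" >&2
  npx claude-flow@alpha task detect-blocked --threshold 300 --notify-on-block
fi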
Best Practices
- Clear Decomposition: Break tasks into atomic units
- Explicit Dependencies: Document all task dependencies
- Progress Tracking: Monitor continuously
- Error Handling: Implement retry logic (a minimal wrapper is sketched after this list)
- Result Validation: Verify outputs at each phase
- Memory Coordination: Use shared memory for state
- Agent Specialization: Assign tasks to appropriate agents
- Performance Monitoring: Track metrics throughout
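For the retry-logic practice above, transient failures in individual claude-flow calls can be absorbed with a small shell wrapper. This is a minimal sketch; the retry count and delay are arbitrary, and the final line only illustrates usage with a task and agent ID from the earlier examples.
# Retry a flaky command up to 3 times with a 10-second pause between attempts (sketch)
retry() {
  local attempts=3 delay=10 n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "Command failed after $attempts attempts: $*" >&2
      return 1
    fi
    echo "Attempt $n failed; retrying in ${delay}s..." >&2
    n=$((n + 1))
    sleep "$delay"
  done
}
# Example usage
retry npx claude-flow@alpha task assign --task-id task-015 --agent-id agent-coder-1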
Integration Points
With Other Skills
- advanced-swarm: For topology optimization
- performance-analysis: For bottleneck detection
- cascade-orchestrator: For workflow chaining
- hive-mind: For collective decision-making
With External Systems
- CI/CD pipelines for automated execution
- Project management tools for tracking
- Monitoring systems for observability
- Storage systems for result archival
Next Steps
After completing this skill:
- Analyze orchestration metrics
- Optimize task decomposition strategy
- Experiment with different topologies
- Implement custom synthesis logic
- Create reusable orchestration templates
References
- Claude Flow Documentation
- Task Decomposition Patterns
- Multi-Agent Orchestration Theory
- Distributed Systems Coordination
Quick Install
/plugin add https://github.com/DNYoussef/ai-chrome-extension/tree/main/when-orchestrating-swarm-use-swarm-orchestration
Copy and paste this command into Claude Code to install the skill.
GitHub Repository
Related Recommended Skills
when-using-advanced-swarm-use-swarm-advanced
Other: This skill provides developers with advanced swarm coordination for complex multi-agent systems, supporting dynamic topology switching and self-organizing behavior. It fits complex swarm orchestration scenarios that need adaptive topology selection and intelligent coordination, and can produce dynamic topology configurations and performance optimization reports. Effective use requires an understanding of swarm topologies and experience with multi-agent coordination.
when-chaining-agent-pipelines-use-stream-chain
Other: This skill builds chained agent pipelines, feeding the output of an upstream agent into downstream agents to orchestrate data flows. It offers sequential and parallel execution modes and fits complex data-processing and workflow-coordination scenarios. Developers can configure the pipeline architecture to optimize data flow and task execution efficiency.
sparc-methodology
Development: The SPARC Methodology gives developers a set of 17 development modes spanning requirements analysis through deployment monitoring, using multi-agent collaboration for systematic software development. It supports TDD workflows and architecture design and fits full-lifecycle management of complex projects. Developers can invoke specific modes on demand to improve code quality and development efficiency.
when-bridging-web-cli-use-web-cli-teleport
Other: This skill helps developers build a two-way bridge between web interfaces and CLI workflows, so web applications can trigger CLI commands and CLI tools can display web interfaces. It provides REST API and WebSocket real-time communication and fits scenarios that integrate traditional command-line tools with modern web applications. It can establish a secure bidirectional communication channel for state synchronization and cross-platform workflow integration.
