Dispatching Parallel Agents
About

This skill enables parallel execution of multiple Claude agents to investigate and fix independent problems concurrently. It's designed for scenarios with 3+ unrelated failures across different test files or subsystems that have no shared state dependencies. Developers can dispatch focused agents to work in parallel, then review their summaries and integrate the fixes.

Documentation

Overview

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.

Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.

When to Use This Skill

Activate this skill when you're facing:

  • 3+ test files failing with different root causes
  • Multiple subsystems broken independently
  • Each problem is self-contained - can be understood without context from others
  • No shared state between investigations
  • Clear domain boundaries - fixing one won't affect others

Don't use when:

  • Failures are related (fixing one might fix others)
  • Need to understand full system state first
  • Agents would interfere with each other (editing same files)
  • Exploratory debugging (don't know what's broken yet)

The Iron Law

One agent, one problem domain, one clear outcome.
Never overlap scopes. Never share state. Always integrate consciously.

Core Principles

Independence is Key

Problems must be truly independent - no shared files, no related root causes, no dependencies between fixes.

Focus Over Breadth

Each agent gets narrow scope: one test file, one subsystem, one clear goal. Broad tasks lead to confusion.

Clear Output Required

Every agent must return a summary: what was found, what was fixed, what changed. No silent fixes.

Conscious Integration

Don't blindly merge agent work. Review summaries, check for conflicts, run the full suite, verify compatibility.

Quick Start

1. Identify Independent Domains

Group failures by what's broken:

File A tests: Tool approval flow
File B tests: Batch completion behavior
File C tests: Abort functionality

Each domain is independent - fixing tool approval doesn't affect abort tests.
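As a sketch, this grouping can be written down explicitly before dispatching anything. The domain names and test files below are illustrative, not from a real suite:

// Hypothetical grouping of failures into independent domains.
interface Domain {
  name: string;
  failingTests: string[];
  filesSharedWithOtherDomains: string[]; // must be empty for parallel dispatch
}

const domains: Domain[] = [
  { name: "tool-approval", failingTests: ["tool-approval-race-conditions.test.ts"], filesSharedWithOtherDomains: [] },
  { name: "batch-completion", failingTests: ["batch-completion-behavior.test.ts"], filesSharedWithOtherDomains: [] },
  { name: "abort", failingTests: ["agent-tool-abort.test.ts"], filesSharedWithOtherDomains: [] },
];

// Parallel dispatch is only safe when no domain touches another's files.
const safeToParallelize = domains.every(d => d.filesSharedWithOtherDomains.length === 0);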

2. Create Focused Agent Tasks

Each agent gets:

  • Specific scope: One test file or subsystem
  • Clear goal: Make these tests pass
  • Constraints: Don't change other code
  • Expected output: Summary of what you found and fixed

See agent-prompts.md for prompt templates and examples.
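As a sketch of that structure, a focused prompt might be assembled like this (the helper and its wording are illustrative, not the templates from agent-prompts.md):

// Hypothetical prompt builder covering scope, goal, constraints, and output.
function buildAgentPrompt(testFile: string): string {
  return [
    `Scope: only ${testFile} and the code it directly exercises.`,
    `Goal: make every test in ${testFile} pass.`,
    "Constraints: do not modify other test files or shared utilities.",
    "Output: summarize the root cause, the fix, and every file you changed.",
  ].join("\n");
}

Task(buildAgentPrompt("agent-tool-abort.test.ts"));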

3. Dispatch in Parallel

// In Claude Code, issue every Task call in a single message
// so the subagents are dispatched together rather than one at a time.
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently

See coordination-patterns.md for dispatch strategies.
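The same pattern can also be driven from a script outside the Task tool. A minimal sketch, assuming the claude CLI's non-interactive -p (print) mode (verify the flag against your installed version):

// Sketch: run three independent fix sessions concurrently.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

const prompts = [
  "Fix agent-tool-abort.test.ts failures",
  "Fix batch-completion-behavior.test.ts failures",
  "Fix tool-approval-race-conditions.test.ts failures",
];

// Promise.all starts every session before awaiting any of them,
// so the investigations overlap in wall-clock time.
const results = await Promise.all(prompts.map(p => run("claude", ["-p", p])));
results.forEach((r, i) => console.log(`--- Agent ${i + 1} summary ---\n${r.stdout}`));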

4. Review and Integrate

When agents return:

  • Read each summary - understand what changed
  • Verify fixes don't conflict - check for same file edits
  • Run full test suite - ensure compatibility
  • Spot-check changes - agents can make systematic errors

See troubleshooting.md for conflict resolution.
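One way to mechanize the conflict check, assuming each agent committed its work to its own git branch (the branch names here are hypothetical):

// Sketch: flag any file touched by more than one agent branch.
import { execFileSync } from "node:child_process";

const branches = ["agent/abort", "agent/batch", "agent/approval"];

const changedFiles = (branch: string): string[] =>
  execFileSync("git", ["diff", "--name-only", `main...${branch}`])
    .toString().trim().split("\n").filter(Boolean);

const editors = new Map<string, string[]>();
for (const branch of branches) {
  for (const file of changedFiles(branch)) {
    editors.set(file, [...(editors.get(file) ?? []), branch]);
  }
}

for (const [file, bs] of editors) {
  if (bs.length > 1) console.warn(`Scope overlap: ${file} edited by ${bs.join(", ")}`);
}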

Decision Tree

Multiple failures?
  └→ Are they independent?
      ├→ NO (related) → Single agent investigates all
      └→ YES → Can they work in parallel?
          ├→ NO (shared state) → Sequential agents
          └→ YES → Parallel dispatch ✓
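The same tree as a small function, if you want the decision recorded in code (the inputs stand for whatever checks you run by hand):

// Sketch: the decision tree above, encoded directly.
type Strategy = "single-agent" | "sequential-agents" | "parallel-dispatch";

function chooseStrategy(independent: boolean, sharedState: boolean): Strategy {
  if (!independent) return "single-agent";      // related failures: one agent investigates all
  if (sharedState) return "sequential-agents";  // independent, but can't run concurrently
  return "parallel-dispatch";
}

chooseStrategy(true, false); // "parallel-dispatch"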

Key Benefits

  1. Parallelization - Multiple investigations happen simultaneously
  2. Focus - Each agent has narrow scope, less context to track
  3. Independence - Agents don't interfere with each other
  4. Speed - N problems solved in roughly the time of one, plus dispatch and integration overhead

Navigation

Pattern Reference

Agent Management

  • Agent Prompts - Prompt structure, templates, common mistakes, constraints

Learning Resources

  • Examples - Real-world scenarios, case studies, time savings analysis

Problem Solving

  • Troubleshooting - Conflict resolution, verification strategies, common pitfalls

Key Reminders

  1. Independence is mandatory - Related failures need single-agent investigation
  2. Focus beats breadth - Narrow scope per agent prevents confusion
  3. Always verify integration - Don't blindly merge agent work
  4. Clear outputs required - Every agent returns summary of changes
  5. Parallelization has overhead - Only worth it for 3+ independent problems

Red Flags - STOP

STOP immediately if:

  • Agents are editing the same files (scope overlap)
  • Fixes from one agent break another's work (hidden dependencies)
  • You can't clearly separate problem domains (not independent)
  • Agents return no summary (can't verify changes)
  • Integration requires major refactoring (conflicts)

When in doubt: Start with one agent, understand the landscape, then dispatch if truly independent.

Integration with Other Skills

  • Prerequisite: Basic understanding of problem domains and test structure
  • Complementary: pm-workflow for coordinating multiple agents
  • Domain-specific: Testing skills for understanding test failures

Real-World Impact

From a debugging session (2025-10-03):

  • 6 failures across 3 test files
  • 3 agents dispatched in parallel
  • All investigations completed concurrently
  • Zero conflicts between agent changes
  • Time saved: three investigations ran concurrently instead of back-to-back

See examples.md for the detailed case study.

Quick Install

/plugin add https://github.com/bobmatnyc/claude-mpm/tree/main/dispatching-parallel-agents

Paste this command into Claude Code to install the skill.

GitHub Repository

bobmatnyc/claude-mpm
Path: src/claude_mpm/skills/bundled/collaboration/dispatching-parallel-agents
