Writing Plans

bobmatnyc

About

This Claude Skill creates detailed implementation plans for engineers who lack codebase context. It generates bite-sized tasks with specific file references, code examples, and testing instructions. Use it once a design is complete to give engineers unfamiliar with the codebase comprehensive implementation guidance.

Documentation

Writing Plans

Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, docs they might need to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer but know almost nothing about our toolset or problem domain, and that they are not well versed in good test design.

Announce at start: "I'm using the Writing Plans skill to create the implementation plan."

Context: This should be run in a dedicated worktree (created by the brainstorming skill).

Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
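
For example (feature name and date are illustrative), the brainstorming skill might have created the worktree with:

    git worktree add -b user-auth ../user-auth

and the plan for that feature would then be saved as:

    docs/plans/2025-03-14-user-auth.md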

Quick Reference

Plan header template: See Plan Structure & Templates

Task template: See Plan Structure & Templates

Granularity guide: Each step = 2-5 minutes. See Best Practices

Core Principles

  • Exact file paths always - Not "in the user module" but "src/models/user.py"
  • Complete code in plan - Not "add validation" but show the validation code (see the example task below)
  • Exact commands with expected output - "pytest tests/file.py -v" with what you'll see
  • Reference relevant skills - Use @ syntax: @skills/category/skill-name
  • DRY, YAGNI, TDD, frequent commits - Every task follows this pattern

For detailed guidance: Best Practices & Guidelines
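
To make these principles concrete, here is a minimal sketch of a single plan task. The codebase, file paths, and validation rule are all hypothetical; the point is the level of detail a zero-context engineer needs:

Task 3: Add email validation to User model (~3 minutes)

Files: src/models/user.py, tests/models/test_user.py

Step 1 - Write the failing test in tests/models/test_user.py:

    import pytest
    from src.models.user import User

    def test_rejects_email_without_at_sign():
        # Constructing a User with a malformed address should raise
        with pytest.raises(ValueError):
            User(email="not-an-email")

Step 2 - Run it and confirm it fails: pytest tests/models/test_user.py -v
Expected output: 1 failed (User does not validate yet)

Step 3 - Implement the check in src/models/user.py:

    class User:
        def __init__(self, email: str):
            # Deliberately minimal rule; a real plan would spell out the exact one
            if "@" not in email:
                raise ValueError(f"invalid email: {email!r}")
            self.email = email

Step 4 - Re-run and confirm it passes: pytest tests/models/test_user.py -v
Expected output: 1 passed

Step 5 - Commit: git commit -am "Add email validation to User model"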

Execution Handoff

After saving the plan, offer execution choice:

"Plan complete and saved to docs/plans/<filename>.md. Two execution options:

1. Subagent-Driven (this session) - I dispatch a fresh subagent per task, review between tasks, fast iteration

2. Parallel Session (separate) - Open new session with executing-plans, batch execution with checkpoints

Which approach?"

If Subagent-Driven chosen:

  • Use @skills/collaboration/subagent-driven-development
  • Stay in this session
  • Fresh subagent per task + code review

If Parallel Session chosen:

  • Guide them to open new session in worktree
  • New session uses @skills/collaboration/executing-plans

Remember

  • Write for zero-context engineers (specify everything)
  • Complete code blocks, not instructions
  • Exact commands with expected output
  • Test first, then implement, then commit (see the command sketch after this list)
  • Reference existing patterns in codebase
  • Keep tasks bite-sized (2-5 minutes each)
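
As a sketch of that per-task rhythm (paths and commit messages are hypothetical), every task should spell out the same loop:

    # 1. Write the test first, run it, confirm it fails
    pytest tests/models/test_user.py -v   # expect: 1 failed

    # 2. Implement just enough code to make it pass
    pytest tests/models/test_user.py -v   # expect: 1 passed

    # 3. Commit immediately so history stays granular
    git add src/models/user.py tests/models/test_user.py
    git commit -m "Add email validation to User model"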

Need examples? See Plan Structure & Templates for complete task examples.

Need patterns? See Best Practices for error handling, logging, test design, and more.

Quick Install

/plugin add https://github.com/bobmatnyc/claude-mpm/tree/main/writing-plans

Copy and paste this command into Claude Code to install the skill.

GitHub Repository

bobmatnyc/claude-mpm
Path: src/claude_mpm/skills/bundled/collaboration/writing-plans

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
