writing-plans
About
The writing-plans skill creates detailed implementation plans for engineers after design is complete, especially when they lack context on the codebase. It generates comprehensive tasks with exact file paths, complete code examples, and verification steps. It assumes minimal domain knowledge and emphasizes TDD, frequent commits, and bite-sized implementation tasks.
Quick Install
Claude Code
Recommended:

```
/plugin add https://github.com/LerianStudio/ring
```

Or clone the repository directly:

```
git clone https://github.com/LerianStudio/ring.git ~/.claude/skills/writing-plans
```

Copy and paste the command into Claude Code to install the skill.
Documentation
# Writing Plans

## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, what code to write, which docs to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are skilled developers who know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This skill should be run in a dedicated worktree (created by the brainstorming skill).

Save plans to: `docs/plans/YYYY-MM-DD-<feature-name>.md`
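For example, a plan for a hypothetical user-authentication feature written on 2025-06-01 would be saved as:

```
docs/plans/2025-06-01-user-authentication.md
```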
## Zero-Context Test
Before finalizing ANY plan, verify:
Can someone execute this if they:
□ Never saw our codebase
□ Don't know our framework
□ Only have this document
□ Have no context about our domain
If NO to any → Add more detail
Every task must be executable in isolation.
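As an illustration (hypothetical task, not from a real plan), compare a task that fails this test with one that passes it:

```markdown
<!-- Fails: assumes codebase and domain knowledge -->
- Add validation to the signup form

<!-- Passes: executable in isolation -->
- Modify `src/forms/signup.py:42` to reject emails that do not match
  `^[^@]+@[^@]+\.[^@]+$`, then run `pytest tests/forms/test_signup.py -v`
```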
## Bite-Sized Task Granularity
Each step is one action (2-5 minutes):
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
Every plan MUST start with this header:
````markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use ring:executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

**Global Prerequisites:**
- Environment: [OS, runtime versions]
- Tools: [Exact commands to verify: `python --version`, `npm --version`]
- Access: [Any API keys, services that must be running]
- State: [Branch to work from, any required setup]

**Verification before starting:**
```bash
# Run ALL these commands and verify output:
python --version   # Expected: Python 3.8+
npm --version      # Expected: 7.0+
git status         # Expected: clean working tree
pytest --version   # Expected: 7.0+
```
````
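For instance, a filled-in header for a hypothetical CSV-export feature might look like this (feature, versions, paths, and branch names are all illustrative):

```markdown
# CSV Export Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use ring:executing-plans to implement this plan task-by-task.

**Goal:** Let users download their report data as a CSV file.

**Architecture:** Add an `/export/csv` endpoint that streams rows from the
existing report query through Python's `csv` module. No schema changes.

**Tech Stack:** Python 3.11, Flask, pytest

**Global Prerequisites:**
- Environment: Linux/macOS, Python 3.11
- Tools: `python --version`, `pytest --version`
- Access: none (local development only)
- State: branch `feature/csv-export` off `main`, clean working tree
```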
## Task Structure
````markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Prerequisites:**
- Tools: pytest v7.0+, Python 3.8+
- Files must exist: `src/config.py`, `tests/conftest.py`
- Environment: `TESTING=true` must be set

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected output:
```
FAILED tests/path/test.py::test_name - NameError: name 'function' is not defined
```
If you see a different error: check file paths and imports

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected output:
```
PASSED tests/path/test.py::test_name
```

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
````
**If Task Fails:**
1. **Test won't run:**
- Check: `ls tests/path/` (file exists?)
- Fix: Create missing directories first
- Rollback: `git checkout -- .`
2. **Implementation breaks other tests:**
- Run: `pytest` (check what broke)
- Rollback: `git reset --hard HEAD`
- Revisit: Design may conflict with existing code
3. **Can't recover:**
- Document: What failed and why
- Stop: Return to human partner
- Don't: Try to fix without understanding
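When you do stop and return to your human partner, a short structured note keeps the handoff useful. A minimal sketch (wording, task number, and file names are illustrative):

```markdown
**Task 3 blocked:** `pytest tests/export/test_csv.py` fails with
`ImportError: cannot import name 'stream_csv'` even after creating the file.
Suspect a circular import between `src/export.py` and `src/reports.py`.
Rolled back with `git reset --hard HEAD`; no changes committed.
```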
## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax (see the example after this list)
- DRY, YAGNI, TDD, frequent commits
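For example, a plan step might point the executor at a related skill like this (this assumes the plugin's `@` reference syntax; the skill name is illustrative):

```markdown
Before writing tests, see @ring:test-driven-development for the red-green-refactor loop.
```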
## Required Patterns
This skill uses these universal patterns:
- **State Tracking:** See `skills/shared-patterns/state-tracking.md`
- **Failure Recovery:** See `skills/shared-patterns/failure-recovery.md`
- **Exit Criteria:** See `skills/shared-patterns/exit-criteria.md`
- **TodoWrite:** See `skills/shared-patterns/todowrite-integration.md`
Apply ALL patterns when using this skill.
## Execution Handoff
After saving the plan, offer execution choice:
**"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**
**1. Subagent-Driven (this session)** - I dispatch fresh subagent per task, review between tasks, fast iteration
**2. Parallel Session (separate)** - Open new session with executing-plans, batch execution with checkpoints
**Which approach?"**
**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use ring:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review
**If Parallel Session chosen:**
- Guide them to open new session in worktree
- **REQUIRED SUB-SKILL:** New session uses ring:executing-plans
GitHub Repository
Related Skills
content-collections
Meta: This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
Meta: This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
evaluating-llms-harness
Testing: This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
sglang
Meta: SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
