skill-builder
About
This Claude Skill helps developers create, refine, and validate production-ready Claude Code skills by following official Anthropic best practices and the amplihack philosophy. It automatically activates when you mention building, creating, or designing a new skill. The skill orchestrates the development process using specialized agents to clarify requirements and design the structure.
Documentation
Skill Builder
Purpose
Helps users create production-ready Claude Code skills that follow best practices from official Anthropic documentation and amplihack's ruthless simplicity philosophy.
When I Activate
I automatically load when you mention:
- "build a skill" or "create a skill"
- "generate a skill" or "make a skill"
- "design a skill" or "develop a skill"
- "skill builder" or "new skill"
- "skill for [purpose]"
What I Do
I orchestrate the skill creation process using amplihack's specialized agents:

1. Clarify Requirements (prompt-writer agent)
   - Understand skill purpose and scope
   - Define target users and use cases
   - Identify skill type (agent, command, scenario)

2. Design Structure (architect agent)
   - Plan YAML frontmatter fields
   - Design skill organization (single vs multi-file)
   - Calculate token budget allocation
   - Choose appropriate templates

3. Generate Skill (builder agent)
   - Create SKILL.md with proper YAML frontmatter
   - Write clear instructions and examples
   - Include supporting files if needed
   - Follow progressive disclosure pattern

4. Validate Quality (reviewer agent)
   - Check YAML frontmatter syntax
   - Verify token budget (<5,000 tokens core)
   - Ensure philosophy compliance (>85% score)
   - Test description quality for discovery

5. Create Tests (tester agent)
   - Define activation test cases
   - Create edge case validations
   - Document expected behaviors
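As an illustration of the "Generate Skill" step, a produced SKILL.md typically opens with YAML frontmatter followed by instructions. The sketch below is hypothetical — the skill name, description, and body are invented for illustration; consult the official Claude Code skills documentation for the authoritative frontmatter schema:

```markdown
---
name: changelog-writer
description: Drafts changelog entries from commit messages. Use when the user asks to summarize recent changes or prepare release notes.
---

# Changelog Writer

## Instructions
1. Read the recent commit history the user provides.
2. Group changes into Added / Changed / Fixed sections.
3. Keep each entry to one line; link supporting detail in examples.md.
```

The description field matters most for discovery: Claude matches user intent against it, so it should state both what the skill does and when to use it.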
Skill Types Supported
- skill: Claude Code skills in .claude/skills/ (auto-discovery)
- agent: Specialized agents in .claude/agents/amplihack/specialized/
- command: Slash commands in .claude/commands/amplihack/
- scenario: Production tools in .claude/scenarios/
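The four locations above can be pictured as one tree (directory names are from the list above; the individual file names shown are hypothetical placeholders):

```
.claude/
├── skills/
│   └── my-skill/
│       └── SKILL.md
├── agents/amplihack/specialized/
│   └── my-agent.md
├── commands/amplihack/
│   └── my-command.md
└── scenarios/
    └── my-tool/
```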
See examples.md for detailed examples of each type.
Command Interface
For explicit invocation:
/amplihack:skill-builder <skill-name> <skill-type> <description>
Examples in examples.md.
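A hypothetical invocation, with a made-up skill name and description for illustration, might look like:

```
/amplihack:skill-builder changelog-writer skill "Generates changelog entries from git history"
```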
Documentation
Supporting Files (progressive disclosure):
- reference.md: Architecture, patterns, YAML spec, best practices
- examples.md: Real-world usage, testing, troubleshooting
Original Documentation Sources (embedded in reference.md):
- Official Claude Code Skills: https://code.claude.com/docs/en/skills
- Anthropic Agent SDK Skills: https://docs.claude.com/en/docs/agent-sdk/skills
- Agent Skills Engineering Blog: https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills
- Claude Cookbooks - Skills: https://github.com/anthropics/claude-cookbooks/tree/main/skills
- Skills Custom Development Notebook: https://github.com/anthropics/claude-cookbooks/blob/main/skills/notebooks/03_skills_custom_development.ipynb
- metaskills/skill-builder (Reference): https://github.com/metaskills/skill-builder
All documentation is embedded in reference.md for offline access. Links provided for updates and verification.
Note: This skill automatically loads when Claude detects skill building intent. For explicit control, use /amplihack:skill-builder.
Quick Install
/plugin add https://github.com/rysweet/MicrosoftHackathon2025-AgenticCoding/tree/main/skill-builder

Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends, including HuggingFace and vLLM models.
llamaguard
LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
