commands

vinnie357

About

This skill provides a guide for developers to create custom slash commands that extend Claude Code functionality. It enables the creation of markdown-based command files containing static prompts, dynamic content with arguments, and multi-step workflows. Use this skill when building, organizing, or debugging custom commands like `/commit` or `/review` for specific development workflows.

Documentation

Claude Code Commands

Guide for creating custom slash commands that extend Claude Code functionality.

When to Use This Skill

Activate this skill when:

  • Creating new custom slash commands
  • Understanding command structure and syntax
  • Organizing commands for plugins
  • Implementing command workflows
  • Debugging command execution

What Are Commands?

Commands are custom slash commands (like /commit, /review) that users can invoke to trigger specific workflows or expand prompts. They are markdown files that can contain:

  • Static prompt text
  • Dynamic content based on arguments
  • Multi-step workflows
  • Integration with tools and scripts

Command File Structure

Location

Commands are defined in markdown files located in:

  • Plugin: <plugin-root>/commands/
  • User-level: .claude/commands/

File Naming

  • Use kebab-case: my-command.md
  • File name becomes the command name: my-command.md → /my-command
  • Avoid conflicts with built-in commands
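
As a concrete sketch, a user-level layout might look like this (the file names here are hypothetical):

```
.claude/commands/
├── analyze-security.md   → /analyze-security
├── create-component.md   → /create-component
└── review-pr.md          → /review-pr
```

Each .md file provides exactly one slash command.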

Basic Command Format

Simple Static Command

# /my-command

This is the prompt that will be expanded when the user types /my-command.

The entire content of this file will replace the slash command in the conversation.

Command with Description

<!--
description: Brief description of what this command does
-->

# /my-command

Command prompt goes here...

Command Arguments

Commands can accept arguments that users provide when invoking the command.

Single Argument

# /greet

Hello, {{arg}}! Welcome to the project.

Usage: /greet Alice → "Hello, Alice! Welcome to the project."

Multiple Arguments

# /create-file

Create a new file at {{arg1}} with the following content:

{{arg2}}

Usage: /create-file src/main.rs "fn main() {}"

Named Arguments

# /deploy

Deploy {{environment}} environment to {{region}}.

Configuration:
- Environment: {{environment}}
- Region: {{region}}
- Branch: {{branch}}

Usage: /deploy --environment=production --region=us-east-1 --branch=main

Advanced Features

Conditional Content

# /analyze

Analyze the {{language}} codebase.

{{#if verbose}}
Provide detailed analysis including:
- Code complexity metrics
- Dependency analysis
- Security vulnerabilities
{{else}}
Provide a summary analysis.
{{/if}}

Including Files

Reference other files or command outputs:

# /context

Here is the current project structure:

{{file:PROJECT_STRUCTURE.md}}

And the current git status:

{{shell:git status}}

Multi-Step Workflows

# /full-review

I'll perform a comprehensive code review:

1. First, let me check the git diff:
{{shell:git diff}}

2. Now analyzing code quality...

3. Checking for security issues...

4. Final recommendations:

Best Practices

Clear Command Names

  • Use descriptive, action-oriented names
  • /analyze-security not /sec
  • /create-component not /comp

Provide Context

Always include what the command will do:

# /commit

I'll analyze the current git changes and create a conventional commit message.

Current changes:
{{shell:git diff --staged}}

Based on these changes, here's my suggested commit message:

Handle Edge Cases

# /deploy

{{#if staging}}
Deploying to staging environment (safe for testing)
{{else if production}}
⚠️ WARNING: Deploying to PRODUCTION
Are you sure you want to continue? This will affect live users.
{{else}}
Error: Unknown environment. Please specify --staging or --production
{{/if}}

Document Arguments

<!--
description: Deploy application to specified environment
usage: /deploy [--environment=<env>] [--region=<region>]
arguments:
  - environment: Target environment (staging, production)
  - region: AWS region (us-east-1, eu-west-1, etc.)
-->

# /deploy

Command Organization

Plugin Commands

In plugin.json:

{
  "commands": [
    "./commands/deploy.md",
    "./commands/analyze.md",
    "./commands/review.md"
  ]
}

Directory-Based Commands

{
  "commands": ["./commands"]
}

This loads all .md files in the commands/ directory.
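
For example, with the directory form above and the three files from the earlier plugin.json example, the mapping would be:

```
commands/
├── deploy.md    → /deploy
├── analyze.md   → /analyze
└── review.md    → /review
```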

Namespaced Commands

Organize related commands in subdirectories:

commands/
├── git/
│   ├── commit.md
│   ├── review.md
│   └── cleanup.md
├── deploy/
│   ├── staging.md
│   └── production.md

Common Command Patterns

Git Commit Message Generator

# /gcm

I'll analyze the staged changes and generate a conventional commit message.

{{shell:git diff --staged}}

Based on these changes, here's my commit message:

Code Review Command

# /review-pr

I'll review the pull request changes.

PR Number: {{pr_number}}

{{shell:gh pr diff {{pr_number}}}}

Review checklist:
- [ ] Code quality and style
- [ ] Security considerations
- [ ] Test coverage
- [ ] Documentation updates

Project Scaffolding

# /new-component

Creating a new {{component_type}} component named {{name}}.

I'll create:
1. Component file at src/components/{{name}}.tsx
2. Test file at src/components/{{name}}.test.tsx
3. Storybook file at src/components/{{name}}.stories.tsx

Testing Commands

Manual Testing

  1. Install the plugin locally
  2. Reload Claude Code
  3. Type your command in the chat
  4. Verify the expansion is correct

Debugging

If a command doesn't work:

  1. Check file location matches plugin.json
  2. Verify markdown syntax
  3. Test argument substitution
  4. Check for conflicts with existing commands
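
For the first check, the path listed in plugin.json must match where the file actually lives relative to the plugin root. A minimal matching pair, using a hypothetical my-command.md:

```json
{
  "commands": ["./commands/my-command.md"]
}
```

Here the file must exist at <plugin-root>/commands/my-command.md, and it will be invoked as /my-command.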

Command Templates

Analysis Command Template

<!--
description: Analyze {{target}} for {{criteria}}
-->

# /analyze-{{target}}

I'll analyze the {{target}} codebase for {{criteria}}.

{{shell:find {{target}} -type f -name "*.{{extension}}"}}

Analysis results:

Workflow Command Template

<!--
description: Execute {{workflow}} workflow
-->

# /{{workflow}}

Starting {{workflow}} workflow...

Step 1: {{step1_description}}
{{step1_action}}

Step 2: {{step2_description}}
{{step2_action}}

Workflow complete!

Integration with Skills

Commands can reference skills:

# /elixir-review

I'll review this Elixir code using my Phoenix and OTP knowledge.

Please provide the code to review, and I'll check for:
- Phoenix best practices
- OTP design patterns
- Elixir anti-patterns
- Performance considerations

Security Considerations

Avoid Sensitive Data

Never hardcode:

  • API keys
  • Passwords
  • Tokens
  • Private URLs
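
Instead, read secrets from the environment at invocation time. A hedged sketch using the {{shell:...}} syntax from this guide, where DEPLOY_TOKEN and DEPLOY_STATUS_URL are hypothetical environment variables set outside the command file:

```markdown
# /deploy-status

<!-- DEPLOY_TOKEN and DEPLOY_STATUS_URL are read from the environment,
     never hardcoded in this file -->
{{shell:curl -H "Authorization: Bearer $DEPLOY_TOKEN" "$DEPLOY_STATUS_URL"}}
```

The command file stays safe to commit because it contains only the variable names, never the values.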

Validate Input

# /deploy

{{#unless environment}}
Error: --environment is required
{{/unless}}

{{#if (validate_environment environment)}}
Proceeding with deployment...
{{else}}
Error: Invalid environment. Must be staging or production.
{{/if}}

Safe Shell Commands

Be cautious with shell command execution:

# /safe-deploy

<!-- Only allow whitelisted commands -->
{{shell:./scripts/deploy.sh {{environment}}}}


Quick Install

/plugin add https://github.com/vinnie357/claude-skills/tree/main/commands

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

vinnie357/claude-skills
Path: claude-code/skills/commands
