
hooks

vinnie357

About

This skill enables developers to create hooks that trigger shell commands in response to Claude Code events such as tool calls and user prompts. It is designed for event-driven automation, custom validation, and integration with external tools. Key capabilities include tool call hooks, lifecycle hooks, and user prompt hooks that execute automatically during development workflows.

Documentation

Claude Code Hooks

Guide for creating hooks that execute shell commands or scripts in response to Claude Code events and tool calls.

When to Use This Skill

Activate this skill when:

  • Creating event-driven automations
  • Implementing custom validation or formatting
  • Integrating with external tools and services
  • Setting up project-specific workflows
  • Responding to tool execution events

What Are Hooks?

Hooks are shell commands that execute automatically in response to specific events:

  • Tool Call Hooks: Trigger before/after specific tool calls
  • Lifecycle Hooks: Trigger on plugin install/uninstall
  • User Prompt Hooks: Trigger when users submit prompts
  • Custom Events: Application-specific trigger points

Hook Configuration

Location

Hooks are configured in:

  • Plugin: <plugin-root>/.claude-plugin/hooks.json
  • User-level: .claude/hooks.json
  • Plugin manifest: Inline in plugin.json

File Structure

Standalone hooks.json:

{
  "onToolCall": {
    "Write": {
      "before": ["./hooks/format-check.sh"],
      "after": ["./hooks/lint.sh"]
    },
    "Bash": {
      "before": ["./hooks/validate-command.sh"]
    }
  },
  "onInstall": ["./hooks/setup.sh"],
  "onUninstall": ["./hooks/cleanup.sh"],
  "onUserPromptSubmit": ["./hooks/log-prompt.sh"]
}

Inline in plugin.json:

{
  "hooks": {
    "onToolCall": {
      "Write": {
        "after": ["prettier --write {{file_path}}"]
      }
    }
  }
}

Hook Types

Tool Call Hooks

Execute before or after specific tool calls.

Available Tools:

  • Read, Write, Edit, MultiEdit
  • Bash, BashOutput
  • Glob, Grep
  • Task, Skill, SlashCommand
  • TodoWrite
  • WebFetch, WebSearch
  • AskUserQuestion

Example:

{
  "onToolCall": {
    "Write": {
      "before": [
        "echo 'Writing file: {{file_path}}'",
        "./hooks/backup.sh {{file_path}}"
      ],
      "after": [
        "prettier --write {{file_path}}",
        "git add {{file_path}}"
      ]
    },
    "Edit": {
      "after": ["eslint --fix {{file_path}}"]
    }
  }
}
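
The config above references ./hooks/backup.sh, which is not shown. A minimal sketch of what such a script might look like (the behavior here is an illustrative assumption, not part of the original example):

#!/bin/bash

# Hypothetical backup.sh: copy the target file aside before Write overwrites it.
FILE="$1"

if [[ -z "$FILE" ]]; then
  echo "ERROR: no file path provided"
  exit 1
fi

# A brand-new file has nothing to back up; exit 0 so the Write is not blocked.
if [[ ! -f "$FILE" ]]; then
  exit 0
fi

cp -- "$FILE" "$FILE.backup"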

Lifecycle Hooks

Execute during plugin installation/uninstallation.

{
  "onInstall": [
    "./hooks/setup-dependencies.sh",
    "npm install",
    "echo 'Plugin installed successfully'"
  ],
  "onUninstall": [
    "./hooks/cleanup.sh",
    "echo 'Plugin uninstalled'"
  ]
}
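
The setup script referenced above is plugin-specific. A hedged sketch of a setup-dependencies.sh, assuming the plugin only needs Node.js and a .claude directory, could be:

#!/bin/bash

# Hypothetical setup-dependencies.sh: verify prerequisites before the plugin is used.
set -euo pipefail

# Fail early if a required tool is missing.
if ! command -v node >/dev/null 2>&1; then
  echo "ERROR: node is required but was not found on PATH"
  exit 1
fi

# Create the directory other hooks in this guide write logs into.
mkdir -p .claude

echo "Dependencies OK"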

User Prompt Submit Hook

Execute when user submits a prompt:

{
  "onUserPromptSubmit": [
    "./hooks/log-interaction.sh '{{prompt}}'",
    "./hooks/check-context.sh"
  ]
}
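
A minimal sketch of the log-interaction.sh script referenced above, assuming the prompt arrives as the first positional argument (as the '{{prompt}}' quoting suggests):

#!/bin/bash

# Hypothetical log-interaction.sh: append each submitted prompt to a local log.
PROMPT="$1"
LOG_FILE=".claude/prompts.log"

mkdir -p "$(dirname "$LOG_FILE")"
echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ") - $PROMPT" >> "$LOG_FILE"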

Hook Variables

Hooks have access to context-specific variables using {{variable}} syntax.

Tool Call Variables

Different tools provide different variables:

Write Tool:

  • {{file_path}}: Path to file being written
  • {{content}}: Content being written (before hooks only)

Edit Tool:

  • {{file_path}}: Path to file being edited
  • {{old_string}}: String being replaced
  • {{new_string}}: Replacement string

Bash Tool:

  • {{command}}: Command being executed

Read Tool:

  • {{file_path}}: Path to file being read

Global Variables

Available in all hooks:

  • {{cwd}}: Current working directory
  • {{timestamp}}: Current Unix timestamp
  • {{user}}: Current user
  • {{plugin_root}}: Plugin installation directory

User Prompt Variables

  • {{prompt}}: User's submitted prompt text
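
As a sketch of how these substitutions reach a script: a hook command such as ./hooks/report.sh '{{cwd}}' '{{timestamp}}' '{{user}}' delivers the expanded values as ordinary positional arguments. The script name and wiring below are illustrative assumptions:

#!/bin/bash

# Hypothetical report.sh: reads substituted hook variables from its arguments.
CWD="$1"
TIMESTAMP="$2"
USER_NAME="$3"

echo "Working directory: $CWD"
echo "Unix timestamp: $TIMESTAMP"
echo "User: $USER_NAME"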

Hook Examples

Auto-Format on Write

{
  "onToolCall": {
    "Write": {
      "after": [
        "prettier --write {{file_path}}",
        "eslint --fix {{file_path}}"
      ]
    }
  }
}

Pre-Commit Validation

{
  "onToolCall": {
    "Bash": {
      "before": ["./hooks/validate-git-command.sh '{{command}}'"]
    }
  }
}

validate-git-command.sh:

#!/bin/bash

COMMAND="$1"

# Block force push to main/master (the branch pattern must be unquoted so | acts as regex alternation)
if [[ "$COMMAND" =~ "git push --force" ]] && [[ "$COMMAND" =~ (main|master) ]]; then
  echo "ERROR: Force push to main/master is not allowed"
  exit 1
fi

exit 0

Automatic Backups

{
  "onToolCall": {
    "Write": {
      "before": ["test -f '{{file_path}}' && cp '{{file_path}}' '{{file_path}}.backup' || true"]
    },
    "Edit": {
      "before": ["cp '{{file_path}}' '{{file_path}}.backup'"]
    }
  }
}

The test -f ... || true guard keeps the before hook from blocking Write when the target file does not exist yet, and the single quotes protect paths that contain spaces.

Logging and Analytics

{
  "onToolCall": {
    "Write": {
      "after": ["./hooks/log-file-change.sh {{file_path}}"]
    }
  },
  "onUserPromptSubmit": ["./hooks/log-prompt.sh '{{prompt}}'"]
}

log-file-change.sh:

#!/bin/bash

FILE="$1"
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

echo "$TIMESTAMP - Modified: $FILE" >> .claude/file-changes.log

Integration with External Tools

{
  "onToolCall": {
    "Write": {
      "after": [
        "notify-send 'File Updated' 'Modified {{file_path}}'",
        "curl -X POST https://api.example.com/notify -d 'file={{file_path}}'"
      ]
    }
  }
}

Hook Execution

Execution Order

Multiple hooks execute in array order:

{
  "onToolCall": {
    "Write": {
      "after": [
        "echo 'Step 1'",  // Runs first
        "echo 'Step 2'",  // Runs second
        "echo 'Step 3'"   // Runs third
      ]
    }
  }
}

Exit Codes

Before Hooks:

  • Exit code 0: Continue with tool execution
  • Exit code non-zero: Block tool execution, show error to user

After Hooks:

  • Exit codes are logged but don't affect tool execution
  • Tool has already completed

Error Handling

#!/bin/bash

# Before hook - blocks tool on error
if [[ ! -f "$1" ]]; then
  echo "ERROR: File does not exist"
  exit 1  # Blocks tool execution
fi

# Validation passed
exit 0

Best Practices

Keep Hooks Fast

Hooks run synchronously and add latency to every matching event, so keep them lightweight:

{
  "onToolCall": {
    "Write": {
      // ✅ Fast linter
      "after": ["eslint --fix {{file_path}}"]

      // ❌ Slow test suite
      // "after": ["npm test"]
    }
  }
}

Use Absolute Paths

Resolve script paths from the plugin root so they are absolute regardless of the current working directory:

{
  "onInstall": ["${CLAUDE_PLUGIN_ROOT}/hooks/setup.sh"]
}

Validate Input

Always validate hook variables:

#!/bin/bash

FILE="$1"

if [[ -z "$FILE" ]]; then
  echo "ERROR: No file path provided"
  exit 1
fi

if [[ ! -f "$FILE" ]]; then
  echo "ERROR: File does not exist: $FILE"
  exit 1
fi

Provide Clear Feedback

#!/bin/bash

echo "Running pre-commit checks..."

if ! npm run lint; then
  echo "❌ Linting failed. Please fix errors before committing."
  exit 1
fi

echo "✅ All checks passed"
exit 0

Handle Edge Cases

#!/bin/bash

# Handle files with spaces in names
FILE="$1"

# Validate file type
if [[ ! "$FILE" =~ \.(js|ts|jsx|tsx)$ ]]; then
  # Skip non-JavaScript files silently
  exit 0
fi

# Run formatter
prettier --write "$FILE"

Security Considerations

Validate Commands

Before hooks can block dangerous operations:

{
  "onToolCall": {
    "Bash": {
      "before": ["./hooks/validate-command.sh '{{command}}'"]
    }
  }
}

validate-command.sh:

#!/bin/bash

COMMAND="$1"

# Block dangerous patterns
DANGEROUS_PATTERNS=(
  "rm -rf /"
  "dd if="
  "mkfs"
  "> /dev/sda"
)

for pattern in "${DANGEROUS_PATTERNS[@]}"; do
  if [[ "$COMMAND" =~ $pattern ]]; then
    echo "ERROR: Dangerous command blocked: $pattern"
    exit 1
  fi
done

exit 0
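
You can exercise the validator from a terminal before wiring it into hooks.json; for example:

# Dangerous command: the validator should print an error and exit 1
./hooks/validate-command.sh "rm -rf /"
echo $?

# Harmless command: the validator should exit 0
./hooks/validate-command.sh "ls -la"
echo $?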

Limit Hook Scope

Only hook necessary tools:

{
  // ✅ Specific tools only
  "onToolCall": {
    "Write": { "after": ["./format.sh {{file_path}}"] }
  }

  // ❌ Don't hook everything unnecessarily
}

Sanitize Variables

#!/bin/bash

# Resolve the path (collapses any ../ components)
FILE=$(realpath "$1")

# Ensure the resolved file is inside the project directory
if [[ "$FILE" != "$(pwd)"/* ]]; then
  echo "ERROR: File outside project directory"
  exit 1
fi

Debugging Hooks

Enable Verbose Output

{
  "onToolCall": {
    "Write": {
      "before": ["set -x; ./hooks/debug.sh {{file_path}}; set +x"]
    }
  }
}

Log Hook Execution

#!/bin/bash

LOG_FILE=".claude/hooks.log"
TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

echo "$TIMESTAMP - Hook: $0, Args: $@" >> "$LOG_FILE"

# Rest of hook logic...

Test Hooks Manually

# Test hook with sample data
./hooks/format.sh "src/main.js"

# Check exit code
echo $?

Common Hook Patterns

Auto-Format Pipeline

{
  "onToolCall": {
    "Write": {
      "after": [
        "prettier --write {{file_path}}",
        "eslint --fix {{file_path}}"
      ]
    },
    "Edit": {
      "after": [
        "prettier --write {{file_path}}",
        "eslint --fix {{file_path}}"
      ]
    }
  }
}

Test on Write

{
  "onToolCall": {
    "Write": {
      "after": ["./hooks/run-relevant-tests.sh {{file_path}}"]
    }
  }
}
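
The run-relevant-tests.sh script above is project-specific. A hedged sketch, assuming a Jest-style layout where src/foo.js has a sibling src/foo.test.js:

#!/bin/bash

# Hypothetical run-relevant-tests.sh: run only the test file matching the written source file.
FILE="$1"

# Skip non-JavaScript/TypeScript files.
if [[ ! "$FILE" =~ \.(js|ts|jsx|tsx)$ ]]; then
  exit 0
fi

# Derive the sibling test file, e.g. src/foo.js -> src/foo.test.js
TEST_FILE="${FILE%.*}.test.${FILE##*.}"

# Skip silently if there is no matching test file.
if [[ ! -f "$TEST_FILE" ]]; then
  exit 0
fi

npx jest "$TEST_FILE"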

Git Integration

{
  "onToolCall": {
    "Write": {
      "after": ["git add {{file_path}}"]
    },
    "Edit": {
      "after": ["git add {{file_path}}"]
    }
  }
}

Troubleshooting

Hook Not Executing

  • Check hook file has execute permissions: chmod +x hooks/script.sh
  • Verify path is correct relative to plugin root
  • Check JSON syntax in hooks.json
  • Look for errors in Claude Code logs

Hook Blocking Tool

  • Check exit code of before hooks
  • Add debug logging
  • Test hook script manually
  • Verify validation logic

Variables Not Substituting

  • Check variable name spelling: {{file_path}} not {{filepath}}
  • Verify variable is available for that tool
  • Quote variables in bash: "{{file_path}}"

Quick Install

/plugin add https://github.com/vinnie357/claude-skills/tree/main/hooks

Copy and paste this command into Claude Code to install this skill

GitHub Repository

vinnie357/claude-skills
Path: claude-code/skills/hooks
