managing-snapshot-tests
About
This skill manages snapshot test failures by intelligently analyzing diffs to distinguish intentional changes from regressions. It enables selective snapshot updates and supports the Jest, Vitest, Playwright, and Storybook frameworks. Use it when you are dealing with snapshot test failures or need to update snapshots.
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus
Or clone directly:
git clone https://github.com/jeremylongshore/claude-code-plugins-plus.git ~/.claude/skills/managing-snapshot-tests
Copy and paste the command into Claude Code to install the skill.
Documentation
Overview
This skill empowers Claude to efficiently manage snapshot tests by analyzing differences, selectively updating snapshots based on intentional changes, and identifying potential regressions. It provides a streamlined approach to maintain snapshot test suites across various JavaScript testing frameworks.
How It Works
- Analyzing Failures: Reviews failed snapshot diffs, highlighting intentional and unintentional changes with side-by-side comparisons.
- Selective Updating: Updates specific snapshots that reflect intentional UI or code changes, while preserving snapshots that have caught regressions (a command sketch follows this list).
- Batch Processing: Allows for batch updating of related snapshots to streamline the update process.
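As a minimal sketch of what selective updating can look like on the command line (assuming a standard Jest, Vitest, or Playwright setup; the test name and file path below are placeholders):

```bash
# Jest: update only snapshots for tests whose names match a pattern
npx jest -t "Button renders" --updateSnapshot

# Vitest: the same idea with its own runner
npx vitest run -t "Button renders" --update

# Playwright: regenerate snapshot baselines for a single spec file
npx playwright test tests/button.spec.ts --update-snapshots
```

Targeting updates this narrowly keeps unrelated snapshots untouched, which is what makes diff analysis of the remaining failures meaningful.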
When to Use This Skill
This skill activates when you need to:
- Analyze snapshot test failures after code changes.
- Update snapshot tests to reflect intentional UI changes.
- Identify and preserve snapshots that are catching regressions.
Examples
Example 1: Updating Snapshots After UI Changes
User request: "I've made some UI changes and now my snapshot tests are failing. Can you update the snapshots?"
The skill will:
- Analyze the snapshot failures, identifying the diffs caused by the UI changes.
- Update the relevant snapshot files to reflect the new UI.
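For a request like this, the update step typically comes down to re-recording snapshots only for the affected test files. A minimal sketch, assuming Jest and a hypothetical component test path:

```bash
# Re-run only the affected test file and write new snapshots for it
npx jest src/components/Header.test.tsx --updateSnapshot
```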
Example 2: Investigating Unexpected Snapshot Changes
User request: "My snapshot tests are failing, but I don't expect any UI changes. Can you help me figure out what's going on?"
The skill will:
- Analyze the snapshot failures, highlighting the unexpected diffs.
- Present the diffs to the user for review, indicating potential regressions.
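When the changes are unexpected, the safer move is to inspect the diffs without writing anything. A minimal sketch, assuming Jest and a git checkout:

```bash
# CI mode: missing snapshots fail the run instead of being silently written
npx jest --ci

# If snapshot files were already rewritten locally, review exactly what changed
git diff -- '*.snap'
```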
Best Practices
- Clear Communication: Clearly state the intention behind updating or analyzing snapshots.
- Framework Awareness: Specify the testing framework (Jest, Vitest, etc.) if known for more accurate analysis.
- Selective Updates: Avoid blindly updating all snapshots. Focus on intentional changes and investigate unexpected diffs.
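One way to follow the selective-updates practice by hand is interactive snapshot review in watch mode (sketched here for Jest; Vitest's watch mode offers a similar `u` shortcut):

```bash
# Start watch mode, then:
#   press `i` to step through failing snapshots one at a time and accept or skip each
#   press `u` to update all failing snapshots at once (use sparingly)
npx jest --watch
```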
Integration
This skill works independently but can be used in conjunction with other code analysis and testing tools to provide a comprehensive testing workflow.
GitHub Repository
Related Skills
content-collections
Meta: This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
Meta: This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
evaluating-llms-harness
Test: This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
sglang
Meta: SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
