managing-database-partitions
About
This skill helps developers design and manage database table partitioning strategies, optimize query performance, and handle large datasets efficiently. It activates when you work with tables exceeding 100GB, process time-series data, or need a data archiving solution. The skill automates partition maintenance and provides production-ready implementations that follow database best practices.
Quick Install
Claude Code
Recommended:
    /plugin add https://github.com/jeremylongshore/claude-code-plugins-plus
Manual alternative:
    git clone https://github.com/jeremylongshore/claude-code-plugins-plus.git ~/.claude/skills/managing-database-partitions
Copy and paste the command into Claude Code to install the skill.
Documentation
Overview
This skill automates the design, implementation, and management of database table partitioning strategies. It helps optimize query performance, manage time-series data, and reduce maintenance windows for massive datasets.
How It Works
- Analyze Requirements: Claude analyzes the user's request to understand the specific partitioning needs, including data size, query patterns, and maintenance requirements.
- Design Partitioning Strategy: Based on the analysis, Claude designs an appropriate partitioning strategy (e.g., range, list, hash) and determines the optimal partition key.
- Implement Partitioning: Claude generates the necessary SQL scripts or configuration files to implement the partitioning strategy on the target database (a representative sketch follows this list).
- Optimize Queries: Claude provides guidance on optimizing queries to take advantage of the partitioning scheme, including suggestions for partition pruning and index creation.
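As an illustration of steps 2–4, here is a minimal sketch of the kind of SQL the skill might generate, assuming PostgreSQL 11+ declarative partitioning and a hypothetical events table; the actual output depends on your database engine and schema.

    -- Hypothetical range-partitioned table (PostgreSQL 11+); names are illustrative
    CREATE TABLE events (
        event_id    bigint      NOT NULL,
        occurred_at timestamptz NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (occurred_at);

    -- One partition per month, created ahead of the data that will land in it
    CREATE TABLE events_2024_01 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- An index on the parent is propagated to existing and future partitions
    CREATE INDEX ON events (occurred_at);

    -- Filtering on the partition key lets the planner prune non-matching partitions
    EXPLAIN SELECT count(*) FROM events
    WHERE occurred_at >= '2024-01-10' AND occurred_at < '2024-01-20';

Range partitioning by time is the most common choice for append-mostly data; list and hash partitioning follow the same CREATE TABLE ... PARTITION BY pattern with different partition bounds.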
When to Use This Skill
This skill activates when you need to:
- Manage tables exceeding 100GB with slow query performance.
- Implement time-series data archival strategies (IoT, logs, metrics).
- Optimize queries that filter by date ranges or specific values.
- Reduce database maintenance windows.
Examples
Example 1: Optimizing Time-Series Data
User request: "Create database partitions for my IoT sensor data to improve query performance."
The skill will:
- Analyze the data schema and query patterns for the IoT sensor data.
- Design a range-based partitioning strategy using the timestamp column as the partition key.
- Generate SQL scripts to create partitioned tables and indexes (sketched below).
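A possible sketch of the generated DDL, assuming a PostgreSQL target and a hypothetical sensor_readings schema; the real column names and partition granularity would come from your data.

    -- Hypothetical IoT schema; monthly range partitions on the timestamp column
    CREATE TABLE sensor_readings (
        device_id   text             NOT NULL,
        recorded_at timestamptz      NOT NULL,
        reading     double precision
    ) PARTITION BY RANGE (recorded_at);

    CREATE TABLE sensor_readings_2024_05 PARTITION OF sensor_readings
        FOR VALUES FROM ('2024-05-01') TO ('2024-06-01');
    CREATE TABLE sensor_readings_2024_06 PARTITION OF sensor_readings
        FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');

    -- A default partition catches rows that fall outside the defined ranges
    CREATE TABLE sensor_readings_default PARTITION OF sensor_readings DEFAULT;

    -- Composite index matching the dominant query pattern (per device, per time range)
    CREATE INDEX ON sensor_readings (device_id, recorded_at);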
Example 2: Managing Large Order History Table
User request: "Implement table partitioning for my order history table to reduce maintenance window."
The skill will:
- Analyze the size and growth rate of the order history table.
- Design a list-based partitioning strategy based on order status or region.
- Generate SQL scripts to create partitioned tables and migrate existing data (sketched below).
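A rough sketch of what that could look like in PostgreSQL, with hypothetical table names and region values; a real migration would typically be batched and validated before the cutover.

    -- Hypothetical list-partitioned replacement for an existing order_history table
    CREATE TABLE order_history_partitioned (
        order_id   bigint      NOT NULL,
        region     text        NOT NULL,
        status     text        NOT NULL,
        ordered_at timestamptz NOT NULL
    ) PARTITION BY LIST (region);

    CREATE TABLE order_history_emea PARTITION OF order_history_partitioned
        FOR VALUES IN ('EMEA');
    CREATE TABLE order_history_amer PARTITION OF order_history_partitioned
        FOR VALUES IN ('AMER', 'LATAM');
    CREATE TABLE order_history_apac PARTITION OF order_history_partitioned
        FOR VALUES IN ('APAC');

    -- Copy existing rows (in production this is usually done in batches),
    -- then swap names with ALTER TABLE ... RENAME during a short cutover
    INSERT INTO order_history_partitioned
    SELECT order_id, region, status, ordered_at FROM order_history;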
Best Practices
- Partition Key Selection: Choose a partition key that is frequently used in queries and evenly distributes data across partitions.
- Partition Size: Determine the optimal partition size based on query patterns and storage capacity.
- Maintenance: Implement automated partition maintenance tasks, such as creating new partitions and archiving old partitions (see the sketch below).
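For example, a scheduled job could roll partitions forward each month. The block below is a minimal plpgsql sketch assuming PostgreSQL and the hypothetical sensor_readings table from the earlier example; extensions such as pg_partman provide similar automation off the shelf.

    -- Sketch of a monthly maintenance job; table name and retention period are assumptions
    DO $$
    DECLARE
        next_month date := (date_trunc('month', now()) + interval '1 month')::date;
        old_month  date := (date_trunc('month', now()) - interval '13 months')::date;
    BEGIN
        -- Create next month's partition ahead of time
        EXECUTE format(
            'CREATE TABLE IF NOT EXISTS sensor_readings_%s PARTITION OF sensor_readings
                 FOR VALUES FROM (%L) TO (%L)',
            to_char(next_month, 'YYYY_MM'), next_month, next_month + interval '1 month');

        -- Detach the partition that has aged past the retention window
        -- (assumes that partition exists; archive or drop the detached table afterwards)
        EXECUTE format(
            'ALTER TABLE sensor_readings DETACH PARTITION sensor_readings_%s',
            to_char(old_month, 'YYYY_MM'));
    END $$;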
Integration
This skill can be integrated with other database management tools for monitoring partition performance and managing data lifecycle. It can also work with data migration tools to efficiently move data between partitions.
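As one concrete example of the monitoring side, partition sizes can be read straight from the PostgreSQL system catalogs; the table name below is a placeholder.

    -- List each partition of a hypothetical sensor_readings table with its on-disk size
    SELECT child.relname AS partition_name,
           pg_size_pretty(pg_relation_size(child.oid)) AS size
    FROM pg_inherits
    JOIN pg_class parent ON parent.oid = pg_inherits.inhparent
    JOIN pg_class child  ON child.oid  = pg_inherits.inhrelid
    WHERE parent.relname = 'sensor_readings'
    ORDER BY child.relname;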
GitHub Repository: https://github.com/jeremylongshore/claude-code-plugins-plus
Related Skills
content-collections
Meta: This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
Meta: This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
evaluating-llms-harness
Test: This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
sglang
Meta: SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
