
analyzing-query-performance

jeremylongshore

About

This skill analyzes database EXPLAIN plans to identify performance bottlenecks like slow queries or missing indexes. It provides specific optimization recommendations when users discuss query performance issues or share execution plans. Use it to improve query execution speed and database resource utilization.

Documentation

Overview

This skill empowers Claude to act as a database performance expert. By analyzing EXPLAIN plans and query metrics, Claude can pinpoint inefficiencies and recommend targeted improvements to database queries.

How It Works

  1. Receive Input: The user provides an EXPLAIN plan, a slow query, or a description of a performance problem.
  2. Analyze Performance: The query-performance-analyzer plugin analyzes the provided information, identifying potential bottlenecks such as full table scans, missing indexes, or inefficient join operations (a sample plan illustrating this appears after this list).
  3. Provide Recommendations: The plugin generates specific optimization recommendations, including suggesting new indexes, rewriting queries, or adjusting database configuration parameters.
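
For instance, a PostgreSQL EXPLAIN ANALYZE plan that contains a full table scan might look like the following. The table, row counts, and timings here are illustrative only, not output from a real database:

    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 123;

    Seq Scan on orders  (cost=0.00..25840.00 rows=12 width=96) (actual time=0.512..142.301 rows=12 loops=1)
      Filter: (customer_id = 123)
      Rows Removed by Filter: 999988
    Planning Time: 0.110 ms
    Execution Time: 142.358 ms

A Seq Scan node combined with a large "Rows Removed by Filter" count is the classic sign that an index on customer_id would help.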

When to Use This Skill

This skill activates when you need to:

  • Analyze the EXPLAIN plan of a slow-running query.
  • Identify performance bottlenecks in a database query.
  • Obtain recommendations for optimizing database query performance.

Examples

Example 1: Analyzing a Slow Query

User request: "Here's the EXPLAIN plan for my slow query. Can you help me optimize it? EXPLAIN SELECT * FROM orders WHERE customer_id = 123 AND order_date > '2023-01-01';"

The skill will:

  1. Analyze the provided EXPLAIN plan using the query-performance-analyzer plugin.
  2. Identify potential issues, such as a missing index on customer_id or order_date, and suggest creating appropriate indexes, for example the composite index sketched below.
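
A typical recommendation for this query is a composite index covering both filter columns. The index name below is a hypothetical choice; adapt it to your own naming conventions:

    -- Equality column first, range column second
    CREATE INDEX idx_orders_customer_date
        ON orders (customer_id, order_date);

    -- Re-run EXPLAIN afterwards to confirm the planner switches to an index scan
    EXPLAIN SELECT * FROM orders
    WHERE customer_id = 123 AND order_date > '2023-01-01';

Placing the equality predicate (customer_id) ahead of the range predicate (order_date) lets a single B-tree index satisfy both conditions.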

Example 2: Identifying a Bottleneck

User request: "My query is taking a long time. It's a simple SELECT statement, but it's still slow. What could be the problem?"

The skill will:

  1. Prompt the user to provide the EXPLAIN plan for the query (see below for one way to capture it).
  2. Analyze the EXPLAIN plan and identify potential bottlenecks, such as a full table scan or an inefficient join. It might suggest creating an index or rewriting the query to use a more efficient join algorithm.
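
If you are unsure how to capture a plan, prefix the query with EXPLAIN; EXPLAIN ANALYZE additionally executes the query and reports actual timings. The tables and columns below are placeholders:

    -- PostgreSQL: include actual timings and buffer usage
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'pending';

    -- MySQL 8.0.18+ offers a similar feature: EXPLAIN ANALYZE SELECT ...;

Paste the resulting plan into the conversation along with the original query so the skill has the full picture.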

Best Practices

  • Provide Complete Information: Include the full EXPLAIN plan and the query itself for the most accurate analysis.
  • Describe the Problem: Clearly articulate the performance issue you're experiencing (e.g., slow query, high CPU usage).
  • Test Recommendations: After implementing the suggested optimizations, re-run the EXPLAIN plan to verify the improvements (see the example below).
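
One lightweight way to verify a change is to compare the plan before and after, for example on the query from Example 1. The expected plan nodes are an assumption about typical planner behavior, not a guarantee:

    -- After creating the composite index, re-check the plan
    EXPLAIN ANALYZE
    SELECT * FROM orders
    WHERE customer_id = 123 AND order_date > '2023-01-01';
    -- Look for an Index Scan or Bitmap Index Scan on idx_orders_customer_date
    -- and a lower Execution Time than the original Seq Scan plan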

Integration

This skill integrates well with other database tools and plugins within the Claude Code ecosystem. For example, it can be used in conjunction with a database schema explorer to identify potential indexing opportunities or with a query builder to rewrite inefficient queries.

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/query-performance-analyzer

Copy and paste this command into Claude Code to install this skill

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/skills-batch-20251204-000554/plugins/database/query-performance-analyzer/skills/query-performance-analyzer
Tags: ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill