detecting-performance-regressions
About
This skill automatically detects performance regressions in CI/CD pipelines by analyzing metrics like response time and throughput. It compares current performance against baselines or thresholds and performs statistical significance analysis to identify degradation. Use it to catch performance issues early when users mention regression detection, baseline comparison, or performance budget violations.
Documentation
Overview
This skill automates the detection of performance regressions within a CI/CD pipeline. It utilizes various methods, including baseline comparison, statistical analysis, and threshold violation checks, to identify performance degradation. The skill provides insights into potential performance bottlenecks and helps maintain application performance.
How It Works
- Analyze Performance Data: The skill gathers performance metrics (e.g., response time, throughput) from the CI/CD environment.
- Detect Regressions: It applies baseline comparison, statistical significance analysis, and threshold checks to spot degradation.
- Report Findings: It generates a report summarizing the detected regressions and their potential impact.
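The three steps above can be sketched as a single comparison pass. This is a minimal illustration, not the skill's actual implementation; the helper name `detect_regression` and the 10% default threshold are assumptions for the example:

```python
import statistics

def detect_regression(baseline, current, threshold_pct=10.0):
    """Flag a regression when the current mean exceeds the baseline
    mean by more than threshold_pct percent (hypothetical helper)."""
    base_mean = statistics.mean(baseline)
    curr_mean = statistics.mean(current)
    change_pct = (curr_mean - base_mean) / base_mean * 100
    return {
        "baseline_mean": base_mean,
        "current_mean": curr_mean,
        "change_pct": change_pct,
        "regression": change_pct > threshold_pct,
    }

# Response times (ms) from a historical baseline vs. the latest build
report = detect_regression([100, 102, 98, 101], [118, 121, 119, 122])
print(report["regression"])  # True: roughly 20% slower than baseline
```

A real pipeline would pull the baseline from stored historical runs rather than hard-coding it, but the comparison logic stays the same shape.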
When to Use This Skill
This skill activates when you need to:
- Identify performance regressions in a CI/CD pipeline.
- Analyze performance metrics for potential degradation.
- Compare current performance against historical baselines.
Examples
Example 1: Identifying a Response Time Regression
User request: "Detect performance regressions in the latest build. Specifically, check for increases in response time."
The skill will:
- Analyze response time metrics from the latest build.
- Compare the response times against a historical baseline.
- Report any statistically significant increases in response time that exceed a defined threshold.
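One way to make the "statistically significant" part of this example concrete is a Welch's t statistic over the two samples. The sketch below is an assumption about how such a check could look, using only the standard library and a rough |t| > 2 rule of thumb in place of a full p-value lookup:

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples of metric values."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
    return (mb - ma) / se

baseline_ms = [212, 208, 215, 210, 209, 213]  # historical response times
latest_ms = [251, 248, 255, 250, 252, 249]    # latest build

t = welch_t(baseline_ms, latest_ms)
# Rough rule of thumb: t > 2 suggests the increase is unlikely to be noise
if t > 2:
    print(f"Response-time regression detected (t = {t:.1f})")
```

Pairing the significance check with a minimum percentage increase (as in the threshold check above) avoids flagging tiny but statistically real shifts.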
Example 2: Detecting Throughput Degradation
User request: "Analyze throughput for performance regressions after the recent code merge."
The skill will:
- Gather throughput data (requests per second) from the post-merge CI/CD run.
- Compare the throughput to pre-merge values, looking for statistically significant drops.
- Generate a report highlighting any throughput degradation, indicating a potential performance regression.
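The throughput case mirrors the response-time case with the comparison direction flipped: a drop below the pre-merge mean is the regression signal. A hedged sketch, with the helper name and 5% tolerance chosen for illustration:

```python
import statistics

def throughput_drop(pre_rps, post_rps, max_drop_pct=5.0):
    """Flag a regression when post-merge throughput (requests/sec) falls
    more than max_drop_pct percent below the pre-merge mean."""
    pre_mean = statistics.mean(pre_rps)
    post_mean = statistics.mean(post_rps)
    drop_pct = (pre_mean - post_mean) / pre_mean * 100
    return drop_pct, drop_pct > max_drop_pct

# Pre-merge vs. post-merge requests per second
drop, regressed = throughput_drop([1200, 1180, 1210], [1050, 1065, 1040])
print(f"throughput down {drop:.1f}%, regression: {regressed}")
```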
Best Practices
- Define Baselines: Establish clear and representative performance baselines for accurate comparison.
- Set Thresholds: Configure appropriate thresholds for identifying significant performance regressions.
- Monitor Key Metrics: Focus on monitoring critical performance metrics relevant to the application's behavior.
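These three practices come together in a performance budget: a per-metric set of limits checked on every build. The metric names and limits below are hypothetical examples, not values the skill ships with:

```python
# Hypothetical performance budget: one entry per monitored metric
BUDGET = {
    "p95_response_ms": {"max": 300},   # upper bound on latency
    "throughput_rps":  {"min": 1000},  # lower bound on throughput
    "error_rate_pct":  {"max": 1.0},   # upper bound on error rate
}

def check_budget(metrics, budget=BUDGET):
    """Return a list of human-readable budget violations."""
    violations = []
    for name, limits in budget.items():
        value = metrics[name]
        if "max" in limits and value > limits["max"]:
            violations.append(f"{name}={value} exceeds max {limits['max']}")
        if "min" in limits and value < limits["min"]:
            violations.append(f"{name}={value} below min {limits['min']}")
    return violations

print(check_budget({"p95_response_ms": 340,
                    "throughput_rps": 1150,
                    "error_rate_pct": 0.4}))
```

Failing the CI job when `check_budget` returns a non-empty list is a common way to enforce the budget automatically.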
Integration
This skill can be integrated with other CI/CD tools to automatically trigger regression detection upon new builds or code merges. It can also be combined with reporting plugins to generate detailed performance reports.
Quick Install
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/performance-regression-detector

Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
llamaguard
LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
