Creating APM Dashboards
About
This skill helps developers create Application Performance Monitoring (APM) dashboards for platforms like Grafana and Datadog. It is triggered by requests to set up or expand monitoring solutions, assisting in defining key metrics and visualizations. The skill covers essential areas including golden signals, resource utilization, database/cache metrics, and error tracking.
Quick Install
Claude Code
Recommended (copy and paste this command in Claude Code to install this skill):
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus
Manual alternative (clone the skill into your local skills directory):
git clone https://github.com/jeremylongshore/claude-code-plugins-plus.git ~/.claude/skills/"Creating APM Dashboards"
Documentation
Overview
This skill automates the creation of Application Performance Monitoring (APM) dashboards, providing a structured approach to visualizing critical application metrics. By defining key performance indicators and generating dashboard configurations, this skill simplifies the process of monitoring application health and performance.
How It Works
- Identify Requirements: Determine the specific metrics and visualizations needed for the APM dashboard based on the user's request.
- Define Dashboard Components: Select relevant components such as golden signals (latency, traffic, errors, saturation), request metrics, resource utilization, database metrics, cache metrics, business metrics, and error tracking.
- Generate Configuration: Create the dashboard configuration file based on the selected components and user preferences.
- Deploy Dashboard: Deploy the generated configuration to the target monitoring platform (e.g., Grafana, Datadog). A minimal sketch of the generate-and-deploy flow follows this list.
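As a concrete illustration of the last two steps, here is a minimal sketch that builds a small Grafana dashboard definition and pushes it through Grafana's dashboard API. The host, API token, metric names, schema version, and panel layout are placeholder assumptions for illustration, not values the skill prescribes.

```python
# Sketch of "Generate Configuration" + "Deploy Dashboard" for Grafana.
# GRAFANA_URL, GRAFANA_TOKEN, and the Prometheus queries are assumptions.
import json
import requests

GRAFANA_URL = "https://grafana.example.com"   # assumed Grafana host
GRAFANA_TOKEN = "service-account-token-here"  # assumed API token

def panel(title, expr, panel_id, x, y):
    """Build one time-series panel backed by a Prometheus query."""
    return {
        "id": panel_id,
        "type": "timeseries",
        "title": title,
        "gridPos": {"h": 8, "w": 12, "x": x, "y": y},
        "targets": [{"expr": expr, "refId": "A"}],
    }

dashboard = {
    "dashboard": {
        "id": None,   # None lets Grafana create a new dashboard
        "uid": None,
        "title": "Web App Performance",
        "panels": [
            panel("Request rate", 'sum(rate(http_requests_total[5m]))', 1, 0, 0),
            panel("Error rate (5xx)",
                  'sum(rate(http_requests_total{status=~"5.."}[5m]))', 2, 12, 0),
        ],
        "schemaVersion": 39,  # varies with Grafana version
        "refresh": "30s",
    },
    "overwrite": True,
}

# Generate Configuration: write the JSON file the skill would produce.
with open("webapp-dashboard.json", "w") as f:
    json.dump(dashboard, f, indent=2)

# Deploy Dashboard: push the same payload to Grafana's dashboard API.
resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    headers={"Authorization": f"Bearer {GRAFANA_TOKEN}"},
    json=dashboard,
    timeout=10,
)
resp.raise_for_status()
print("Deployed:", resp.json().get("url"))
```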
When to Use This Skill
This skill activates when you need to:
- Create a new APM dashboard for an application.
- Define key metrics and visualizations for monitoring application performance.
- Generate dashboard configurations for Grafana, Datadog, or other monitoring platforms.
Examples
Example 1: Creating a Grafana Dashboard
User request: "Create a Grafana dashboard for monitoring my web application's performance."
The skill will:
- Identify the need for a Grafana dashboard focused on web application performance.
- Define dashboard components including request rate, response times, error rates, and resource utilization (CPU, memory).
- Generate a Grafana dashboard configuration file with predefined visualizations for these metrics, as in the sketch after this list.
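One way such a dashboard could be backed, assuming the web application exposes standard Prometheus client metrics; the metric and label names below are assumptions about the instrumentation, not something the skill guarantees.

```python
# Hypothetical panel set for Example 1. Adjust the metric names to whatever
# your application actually exports (e.g., different label than "status").
WEB_APP_PANELS = {
    "Request rate (req/s)":
        'sum(rate(http_requests_total[5m]))',
    "p95 response time (s)":
        'histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))',
    "Error rate (5xx/s)":
        'sum(rate(http_requests_total{status=~"5.."}[5m]))',
    "CPU usage (cores)":
        'rate(process_cpu_seconds_total[5m])',
    "Resident memory (bytes)":
        'process_resident_memory_bytes',
}
```

Each entry would become one panel in the generated dashboard JSON, for example via a helper like the panel() function in the earlier sketch.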
Example 2: Setting up a Datadog Dashboard
User request: "Set up a Datadog dashboard to track the golden signals for my microservice."
The skill will:
- Identify the need for a Datadog dashboard focused on golden signals.
- Define dashboard components including latency, traffic, errors, and saturation metrics.
- Generate a Datadog dashboard configuration file with predefined visualizations for these metrics, as in the sketch after this list.
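A hedged sketch of what that Datadog dashboard could look like, created through Datadog's v1 dashboard API. The service name and the trace.http.request.* metric names are assumptions that depend on your APM tracer's operation names; the API keys come from your own account.

```python
# Minimal sketch of a Datadog "golden signals" dashboard for one microservice.
import os
import requests

SERVICE = "checkout-service"  # hypothetical service name

def timeseries(title, query):
    """One timeseries widget with a single query."""
    return {"definition": {"type": "timeseries", "title": title,
                           "requests": [{"q": query, "display_type": "line"}]}}

payload = {
    "title": f"{SERVICE} golden signals",
    "layout_type": "ordered",
    "widgets": [
        timeseries("Latency (avg)",
                   f"avg:trace.http.request.duration{{service:{SERVICE}}}"),
        timeseries("Traffic (req/s)",
                   f"sum:trace.http.request.hits{{service:{SERVICE}}}.as_rate()"),
        timeseries("Errors (err/s)",
                   f"sum:trace.http.request.errors{{service:{SERVICE}}}.as_rate()"),
        timeseries("Saturation (CPU %)",
                   f"avg:system.cpu.user{{service:{SERVICE}}}"),
    ],
}

resp = requests.post(
    "https://api.datadoghq.com/api/v1/dashboard",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("Created dashboard:", resp.json().get("url"))
```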
Best Practices
- Specificity: Provide detailed information about the application and metrics to be monitored.
- Platform Selection: Clearly specify the target monitoring platform (Grafana, Datadog, etc.) to ensure compatibility.
- Iteration: Review and refine the generated dashboard configuration to meet specific monitoring needs.
Integration
This skill can be integrated with other plugins that manage infrastructure or application deployment to automatically create APM dashboards as part of the deployment process. It can also work with alerting plugins to define alert rules based on the metrics displayed in the generated dashboards.
GitHub Repository: https://github.com/jeremylongshore/claude-code-plugins-plus
Related Skills
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
content-collections
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
llamaguard
LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
