MCP Hub

quality-metrics

proffesor-for-testing
Category: Other. Tags: metrics, dora, quality-gates, dashboards, kpis, measurement

About

The quality-metrics skill helps developers establish actionable quality dashboards and KPIs by focusing on outcome-based measurement such as DORA metrics. It guides users to avoid vanity metrics, set effective quality gates, and track trends over time. This optimized skill is ideal for evaluating test effectiveness and defining key performance indicators.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/proffesor-for-testing/agentic-qe
Git clone (alternative)
git clone https://github.com/proffesor-for-testing/agentic-qe.git ~/.claude/skills/quality-metrics

Copy and paste this command into Claude Code to install the skill.

Documentation

Quality Metrics

<default_to_action> When measuring quality or building dashboards:

  1. MEASURE outcomes (bug escape rate, MTTD) not activities (test count)
  2. FOCUS on DORA metrics: Deployment frequency, Lead time, MTTD, MTTR, Change failure rate
  3. AVOID vanity metrics: 100% coverage means nothing if tests don't catch bugs
  4. SET thresholds that drive behavior (quality gates block bad code)
  5. TREND over time: Direction matters more than absolute numbers

Quick Metric Selection:

  • Speed: Deployment frequency, lead time for changes
  • Stability: Change failure rate, MTTR
  • Quality: Bug escape rate, defect density, test effectiveness
  • Process: Code review time, flaky test rate

Critical Success Factors:

  • Metrics without action are theater
  • What you measure is what you optimize
  • Trends matter more than snapshots </default_to_action>

Quick Reference Card

When to Use

  • Building quality dashboards
  • Defining quality gates
  • Evaluating testing effectiveness
  • Justifying quality investments

Meaningful vs Vanity Metrics

✅ Meaningful            ❌ Vanity
Bug escape rate          Test case count
MTTD (detection)         Lines of test code
MTTR (recovery)          Test executions
Change failure rate      Coverage % (alone)
Lead time for changes    Requirements traced

DORA Metrics

Metric               Elite      High      Medium     Low
Deploy Frequency     On-demand  Weekly    Monthly    Yearly
Lead Time            < 1 hour   < 1 week  < 1 month  > 6 months
Change Failure Rate  < 5%       < 15%     < 30%      > 45%
MTTR                 < 1 hour   < 1 day   < 1 week   > 1 month
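
As an illustration, here is a minimal sketch in plain JavaScript that maps measured values onto the tiers above. The function names and unit conventions are hypothetical; the thresholds mirror the table, with boundaries simplified where the table leaves gaps.

// Minimal sketch (hypothetical names) classifying DORA measurements.
const HOUR = 1, DAY = 24, WEEK = 24 * 7, MONTH = 24 * 30; // units in hours

// Lead time for changes, in hours
// (the table leaves a gap between 1 and 6 months; this sketch folds it into Low)
function classifyLeadTime(hours) {
  if (hours < HOUR) return 'Elite';
  if (hours < WEEK) return 'High';
  if (hours < MONTH) return 'Medium';
  return 'Low';
}

// Change failure rate, as a percentage
function classifyChangeFailureRate(pct) {
  if (pct < 5) return 'Elite';
  if (pct < 15) return 'High';
  if (pct < 30) return 'Medium';
  return 'Low';
}

// Mean time to restore, in hours
function classifyMttr(hours) {
  if (hours < HOUR) return 'Elite';
  if (hours < DAY) return 'High';
  if (hours < WEEK) return 'Medium';
  return 'Low';
}

console.log(classifyLeadTime(72));          // "High" (3 days < 1 week)
console.log(classifyChangeFailureRate(12)); // "High"
console.log(classifyMttr(4));               // "High"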

Quality Gate Thresholds

Metric             Blocking Threshold   Warning
Test pass rate     100%                 -
Critical coverage  > 80%                > 70%
Security critical  0                    -
Performance p95    < 200ms              < 500ms
Flaky tests        < 2%                 < 5%

Core Metrics

Bug Escape Rate

Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100

Target: < 10% (90% caught before production)

Test Effectiveness

Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100

Target: > 70%

Defect Density

Defect Density = Defects / KLOC

Good: < 1 defect per KLOC

Mean Time to Detect (MTTD)

MTTD = Time(Bug Reported) - Time(Bug Introduced)

Target: < 1 day for critical, < 1 week for others
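
The formulas above translate directly into code. Below is a minimal sketch in plain JavaScript; the input shapes and function names are illustrative assumptions, not part of any agentic-qe API.

// Sketch only: core metric calculations from raw counts.
function bugEscapeRate({ productionBugs, totalBugs }) {
  return (productionBugs / totalBugs) * 100;      // target: < 10%
}

function testEffectiveness({ bugsFoundByTests, totalBugs }) {
  return (bugsFoundByTests / totalBugs) * 100;    // target: > 70%
}

function defectDensity({ defects, linesOfCode }) {
  return defects / (linesOfCode / 1000);          // defects per KLOC; good: < 1
}

function mttdHours({ introducedAt, reportedAt }) {
  return (reportedAt - introducedAt) / 3_600_000; // milliseconds -> hours
}

// Example with made-up numbers:
console.log(bugEscapeRate({ productionBugs: 4, totalBugs: 50 }));        // 8
console.log(testEffectiveness({ bugsFoundByTests: 38, totalBugs: 50 })); // 76
console.log(defectDensity({ defects: 12, linesOfCode: 15000 }));         // 0.8
console.log(mttdHours({
  introducedAt: Date.parse('2024-01-01T00:00:00Z'),
  reportedAt:   Date.parse('2024-01-01T18:00:00Z'),
}));                                                                     // 18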

Dashboard Design

// Agent generates quality dashboard
await Task("Generate Dashboard", {
  metrics: {
    delivery: ['deployment-frequency', 'lead-time', 'change-failure-rate'],
    quality: ['bug-escape-rate', 'test-effectiveness', 'defect-density'],
    stability: ['mttd', 'mttr', 'availability'],
    process: ['code-review-time', 'flaky-test-rate', 'coverage-trend']
  },
  visualization: 'grafana',
  alerts: {
    critical: { bug_escape_rate: '>20%', mttr: '>24h' },
    warning: { coverage: '<70%', flaky_rate: '>5%' }
  }
}, "qe-quality-analyzer");

Quality Gate Configuration

{
  "qualityGates": {
    "commit": {
      "coverage": { "min": 80, "blocking": true },
      "lint": { "errors": 0, "blocking": true }
    },
    "pr": {
      "tests": { "pass": "100%", "blocking": true },
      "security": { "critical": 0, "blocking": true },
      "coverage_delta": { "min": 0, "blocking": false }
    },
    "release": {
      "e2e": { "pass": "100%", "blocking": true },
      "performance_p95": { "max_ms": 200, "blocking": true },
      "bug_escape_rate": { "max": "10%", "blocking": false }
    }
  }
}
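
To make the gate semantics concrete, here is a hypothetical evaluator for a config shaped like the one above. It is a sketch only; the actual qe-quality-gate agent may implement gating differently. `measured` is assumed to map each metric name to a numeric value, with percentages as plain numbers (100 for "100%") and latency in milliseconds.

// Hypothetical gate evaluator: a sketch, not the real qe-quality-gate logic.
const config = require('./quality-gates.json'); // the JSON above; path is illustrative

const toNum = (v) => (typeof v === 'string' ? parseFloat(v) : v);

// Interpret each rule shape used in the config above.
function checkRule(rule, value) {
  if ('min' in rule) return value >= toNum(rule.min);
  if ('max' in rule) return value <= toNum(rule.max);
  if ('max_ms' in rule) return value <= toNum(rule.max_ms);
  if ('pass' in rule) return value >= toNum(rule.pass);         // required pass rate
  if ('errors' in rule) return value <= toNum(rule.errors);     // max allowed errors
  if ('critical' in rule) return value <= toNum(rule.critical); // max critical findings
  return true;
}

function evaluateStage(gates, stage, measured) {
  const results = Object.entries(gates[stage]).map(([metric, rule]) => ({
    metric,
    ok: checkRule(rule, measured[metric]),
    blocking: !!rule.blocking,
  }));
  // Only failing *blocking* rules stop the pipeline; the rest just warn.
  return { results, blocked: results.some((r) => !r.ok && r.blocking) };
}

// Example: a PR with all tests passing, no critical findings, slightly lower coverage.
const verdict = evaluateStage(config.qualityGates, 'pr', {
  tests: 100,
  security: 0,
  coverage_delta: -2,
});
console.log(verdict.blocked); // false: coverage_delta fails but is non-blocking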

Agent-Assisted Metrics

// Calculate quality trends
await Task("Quality Trend Analysis", {
  timeframe: '90d',
  metrics: ['bug-escape-rate', 'mttd', 'test-effectiveness'],
  compare: 'previous-90d',
  predictNext: '30d'
}, "qe-quality-analyzer");

// Evaluate quality gate
await Task("Quality Gate Evaluation", {
  buildId: 'build-123',
  environment: 'staging',
  metrics: currentMetrics,
  policy: qualityPolicy
}, "qe-quality-gate");

Agent Coordination Hints

Memory Namespace

aqe/quality-metrics/
├── dashboards/*         - Dashboard configurations
├── trends/*             - Historical metric data
├── gates/*              - Gate evaluation results
└── alerts/*             - Triggered alerts
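
The layout above is a key naming convention; how agents read and write it depends on the runtime. The snippet below is purely illustrative, using an in-memory Map as a stand-in for the shared agent memory rather than any real agentic-qe API.

// Illustrative only: a Map standing in for shared agent memory.
const memory = new Map();

function storeGateResult(buildId, result) {
  memory.set(`aqe/quality-metrics/gates/${buildId}`, {
    ...result,
    recordedAt: new Date().toISOString(),
  });
}

function latestTrend(metric) {
  return memory.get(`aqe/quality-metrics/trends/${metric}`);
}

storeGateResult('build-123', { blocked: false, stage: 'pr' });
console.log(memory.get('aqe/quality-metrics/gates/build-123'));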

Fleet Coordination

const metricsFleet = await FleetManager.coordinate({
  strategy: 'quality-metrics',
  agents: [
    'qe-quality-analyzer',         // Trend analysis
    'qe-test-executor',            // Test metrics
    'qe-coverage-analyzer',        // Coverage data
    'qe-production-intelligence',  // Production metrics
    'qe-quality-gate'              // Gate decisions
  ],
  topology: 'mesh'
});

Common Traps

Trap              Problem                           Solution
Coverage worship  100% coverage, bugs still escape  Measure bug escape rate instead
Test count focus  Many tests, slow feedback         Measure execution time
Activity metrics  Busy work, no outcomes            Measure outcomes (MTTD, MTTR)
Point-in-time     Snapshot without context          Track trends over time

Remember

Measure outcomes, not activities. Bug escape rate > test count. MTTD/MTTR > coverage %. Trends > snapshots. Set gates that block bad code. What you measure is what you optimize.

With Agents: Agents track metrics automatically, analyze trends, trigger alerts, and make gate decisions. Use agents to maintain continuous quality visibility.

GitHub Repository

proffesor-for-testing/agentic-qe
Path: .claude/skills/quality-metrics
Tags: agentic-qe, agentics-foundation, agents, quality-engineering

Related Skills

Verification & Quality Assurance


This skill automatically verifies and scores the quality of code and agent outputs using a 0.95 accuracy threshold. It performs truth scoring, code correctness checks, and can instantly roll back changes that fail verification. Use it to ensure high-quality outputs and maintain codebase reliability in your development workflow.


performance-analysis


This skill provides comprehensive performance analysis for Claude Flow swarms, detecting bottlenecks and profiling operations. It generates detailed reports and offers AI-powered optimization recommendations to improve swarm performance. Use it when you need to monitor, analyze, and optimize the efficiency of your Claude Flow implementations.


test-reporting-analytics


This skill provides advanced test reporting and analytics dashboards for quality engineering metrics, including predictive analytics and trend analysis. It's designed for communicating quality status, tracking trends, and supporting data-driven decisions about software quality. Developers should use it when building automated reports or dashboards that highlight key metrics and actionable insights for teams or executives.
