production-readiness
About
This Claude Skill performs comprehensive pre-deployment validation to ensure code is production-ready. It runs a complete audit pipeline including security scans, performance benchmarks, and documentation checks. Use it as a final deployment gate to generate a deployment checklist and verify all production requirements are met.
Skill Documentation
Production Readiness
Purpose
Comprehensive pre-deployment validation to ensure code is production-ready.
Specialist Agent
I am a production readiness specialist ensuring deployment safety.
Methodology (Deployment Gate Pattern):
- Complete quality audit (theater → functionality → style)
- Security deep-dive (vulnerabilities, secrets, unsafe patterns)
- Performance benchmarking (load testing, bottlenecks)
- Documentation validation (README, API docs, deployment docs)
- Dependency audit (outdated, vulnerable packages)
- Configuration check (environment variables, secrets management)
- Monitoring setup (logging, metrics, alerts)
- Rollback plan verification
- Generate deployment checklist
- Final go/no-go decision
Quality Gates (all must pass):
- ✅ All tests passing (100%)
- ✅ Code quality ≥ 85/100
- ✅ Test coverage ≥ 80%
- ✅ Zero critical security issues
- ✅ Zero high-severity bugs
- ✅ Performance within SLAs
- ✅ Documentation complete
- ✅ Rollback plan documented
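The all-gates-must-pass rule can be sketched in bash; the gate names and values here are illustrative, not the skill's actual data:

```shell
#!/bin/bash
# Illustrative gate aggregation: deployment is a go only when every gate is 1.
declare -A GATES=( [tests]=1 [quality]=1 [coverage]=0 )
PASSED=0
for gate in "${!GATES[@]}"; do
  PASSED=$((PASSED + GATES[$gate]))
done
if [ "$PASSED" -eq "${#GATES[@]}" ]; then
  DECISION="GO"
else
  DECISION="NO-GO"
fi
echo "$PASSED/${#GATES[@]} gates passed: $DECISION"  # → 2/3 gates passed: NO-GO
```

A single failing gate flips the decision to NO-GO; there is no partial credit.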
Input Contract
input:
  target_path: string (directory to validate, required)
  environment: enum[staging, production] (default: production)
  skip_performance: boolean (default: false)
  strict_mode: boolean (default: true)
Output Contract
output:
  ready_for_deployment: boolean
  quality_gates: object
    tests_passing: boolean
    code_quality: number
    test_coverage: number
    security_clean: boolean
    performance_ok: boolean
    docs_complete: boolean
  blocking_issues: array[issue]
  warnings: array[warning]
  deployment_checklist: array[task]
  rollback_plan: markdown
Execution Flow
#!/bin/bash
set -e
TARGET_PATH="${1:-./}"
ENVIRONMENT="${2:-production}"
SKIP_PERFORMANCE="${3:-false}"
READINESS_DIR="production-readiness-$(date +%s)"
mkdir -p "$READINESS_DIR"
echo "================================================================"
echo "Production Readiness Check"
echo "Environment: $ENVIRONMENT"
echo "================================================================"
# Initialize quality gates
declare -A GATES
GATES[tests]=0
GATES[quality]=0
GATES[coverage]=0
GATES[security]=0
GATES[performance]=0
GATES[docs]=0
# STEP 1: Complete quality audit (gates 1-3)
echo "[1/10] Running complete quality audit..."
npx claude-flow audit-pipeline "$TARGET_PATH" \
  --phase all \
  --model codex-auto \
  --output "$READINESS_DIR/quality-audit.json"
# Check tests
TESTS_PASSED=$(jq -r '.functionality_audit.all_passed' "$READINESS_DIR/quality-audit.json")
if [ "$TESTS_PASSED" = "true" ]; then
  GATES[tests]=1
  echo "✅ GATE 1: Tests passing"
else
  echo "❌ GATE 1: Tests failing"
fi
# Check code quality
QUALITY_SCORE=$(jq '.style_audit.quality_score' "$READINESS_DIR/quality-audit.json")
if [ "$QUALITY_SCORE" -ge 85 ]; then
  GATES[quality]=1
  echo "✅ GATE 2: Code quality $QUALITY_SCORE/100"
else
  echo "❌ GATE 2: Code quality too low: $QUALITY_SCORE/100 (need ≥85)"
fi
# Check test coverage
TEST_COVERAGE=$(jq '.functionality_audit.coverage_percent' "$READINESS_DIR/quality-audit.json")
if [ "$TEST_COVERAGE" -ge 80 ]; then
  GATES[coverage]=1
  echo "✅ GATE 3: Test coverage $TEST_COVERAGE%"
else
  echo "❌ GATE 3: Test coverage too low: $TEST_COVERAGE% (need ≥80%)"
fi
# STEP 2: Security deep-dive (gate 4)
echo "[2/10] Running security deep-dive..."
npx claude-flow security-scan "$TARGET_PATH" \
  --deep true \
  --check-secrets true \
  --check-dependencies true \
  --output "$READINESS_DIR/security-scan.json"
CRITICAL_SECURITY=$(jq '.critical_issues' "$READINESS_DIR/security-scan.json")
HIGH_SECURITY=$(jq '.high_issues' "$READINESS_DIR/security-scan.json")
if [ "$CRITICAL_SECURITY" -eq 0 ] && [ "$HIGH_SECURITY" -eq 0 ]; then
  GATES[security]=1
  echo "✅ GATE 4: Security scan clean"
else
  echo "❌ GATE 4: Security issues found (Critical: $CRITICAL_SECURITY, High: $HIGH_SECURITY)"
fi
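One failure mode worth guarding against: jq prints `null` when a key is absent, and a `[ null -eq 0 ]` test then aborts the script under `set -e`. A defensive sketch (the helper is illustrative; jq's `// 0` alternative operator achieves the same inside the query itself):

```shell
#!/bin/bash
# Normalize jq output so a missing key counts as zero issues.
normalize_count() {
  case "$1" in
    ''|null) echo 0 ;;
    *) echo "$1" ;;
  esac
}
CRITICAL=$(normalize_count "null")  # e.g. key absent from the scan report
HIGH=$(normalize_count "3")
echo "critical=$CRITICAL high=$HIGH"  # → critical=0 high=3
```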
# STEP 3: Performance benchmarking (gate 5)
if [ "$SKIP_PERFORMANCE" != "true" ]; then
  echo "[3/10] Running performance benchmarks..."
  # Baseline performance
  npx claude-flow analysis performance-report \
    --detailed true \
    --export "$READINESS_DIR/performance-baseline.json"
  # Bottleneck detection
  npx claude-flow bottleneck detect \
    --threshold 10 \
    --export "$READINESS_DIR/bottlenecks.json"
  # Check SLA compliance
  AVG_RESPONSE_TIME=$(jq '.avg_response_time' "$READINESS_DIR/performance-baseline.json")
  P95_RESPONSE_TIME=$(jq '.p95_response_time' "$READINESS_DIR/performance-baseline.json")
  # SLAs: avg < 200ms, p95 < 500ms
  if [ "$AVG_RESPONSE_TIME" -lt 200 ] && [ "$P95_RESPONSE_TIME" -lt 500 ]; then
    GATES[performance]=1
    echo "✅ GATE 5: Performance within SLAs"
  else
    echo "❌ GATE 5: Performance exceeds SLAs (avg: ${AVG_RESPONSE_TIME}ms, p95: ${P95_RESPONSE_TIME}ms)"
  fi
else
  echo "[3/10] Skipping performance benchmarks (--skip-performance)"
  GATES[performance]=1  # Pass if skipped
fi
# STEP 4: Documentation validation (gate 6)
echo "[4/10] Validating documentation..."
# Check for required docs, relative to the target path
DOCS_COMPLETE=true
if [ ! -f "$TARGET_PATH/README.md" ]; then
  echo "⚠️ Missing README.md"
  DOCS_COMPLETE=false
fi
if [ ! -f "$TARGET_PATH/docs/deployment.md" ] && [ ! -f "$TARGET_PATH/DEPLOYMENT.md" ]; then
  echo "⚠️ Missing deployment documentation"
  DOCS_COMPLETE=false
fi
if [ "$ENVIRONMENT" = "production" ]; then
  if [ ! -f "$TARGET_PATH/docs/rollback.md" ] && [ ! -f "$TARGET_PATH/ROLLBACK.md" ]; then
    echo "⚠️ Missing rollback plan"
    DOCS_COMPLETE=false
  fi
fi
if [ "$DOCS_COMPLETE" = "true" ]; then
  GATES[docs]=1
  echo "✅ GATE 6: Documentation complete"
else
  echo "❌ GATE 6: Documentation incomplete"
fi
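The per-file checks above generalize to a table of required docs with accepted alternates. A sketch that runs in a throwaway directory so the outcome is deterministic (the doc list mirrors the checks above):

```shell
#!/bin/bash
# Each entry lists acceptable locations for one required doc, separated by "|".
tmp=$(mktemp -d)
cd "$tmp"
touch README.md  # simulate a repo that has a README but no deployment guide
REQUIRED_DOCS=("README.md" "docs/deployment.md|DEPLOYMENT.md")
DOCS_COMPLETE=true
for entry in "${REQUIRED_DOCS[@]}"; do
  found=false
  IFS='|' read -ra alternates <<< "$entry"
  for f in "${alternates[@]}"; do
    [ -f "$f" ] && found=true
  done
  if [ "$found" = "false" ]; then
    echo "missing: $entry"
    DOCS_COMPLETE=false
  fi
done
echo "docs_complete=$DOCS_COMPLETE"  # → docs_complete=false
```

Adding a new required document then means appending one array entry rather than another if-block.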
# STEP 5: Dependency audit
echo "[5/10] Auditing dependencies..."
if command -v npm &> /dev/null && [ -f "package.json" ]; then
  # Keep stderr out of the JSON so jq can parse it; default missing counts to 0
  npm audit --json > "$READINESS_DIR/npm-audit.json" 2>/dev/null || true
  VULNERABLE_DEPS=$(jq '(.metadata.vulnerabilities.high // 0) + (.metadata.vulnerabilities.critical // 0)' "$READINESS_DIR/npm-audit.json")
  if [ "$VULNERABLE_DEPS" -gt 0 ]; then
    echo "⚠️ Found $VULNERABLE_DEPS vulnerable dependencies"
  else
    echo "✅ No vulnerable dependencies"
  fi
fi
# STEP 6: Configuration check
echo "[6/10] Checking configuration..."
# Check for .env.example
if [ ! -f ".env.example" ] && [ -f ".env" ]; then
  echo "⚠️ Missing .env.example file"
fi
# Check for hardcoded secrets
echo "Scanning for hardcoded secrets..."
grep -r "api_key\|password\|secret\|token" "$TARGET_PATH" --include="*.js" --include="*.ts" \
  --exclude-dir=node_modules \
  | grep -v "test" | grep -v "example" || echo "✅ No obvious hardcoded secrets"
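The scan above only prints matches; to feed a gate it needs a count. A sketch against an inline demo file (not a real codebase, and still a heuristic rather than a real secret scanner):

```shell
#!/bin/bash
# Count candidate secret lines so the result can drive a pass/fail decision.
tmp=$(mktemp -d)
cat > "$tmp/config.js" <<'EOF'
const api_key = "sk-demo-not-real";
const greeting = "hello";
EOF
HITS=$(grep -E 'api_key|password|secret|token' "$tmp/config.js" \
  | grep -cv -e test -e example || true)
echo "candidate secret lines: $HITS"  # → candidate secret lines: 1
```

`grep -c` exits nonzero when the count is zero, so the `|| true` keeps the pipeline alive under `set -e`.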
# STEP 7: Monitoring setup
echo "[7/10] Validating monitoring setup..."
# Check for a logging framework
if grep -r "logger\|console.log\|winston\|pino" "$TARGET_PATH" --include="*.js" --include="*.ts" > /dev/null; then
  echo "✅ Logging detected"
else
  echo "⚠️ No logging framework detected"
fi
# STEP 8: Error handling
echo "[8/10] Checking error handling..."
# Count try-catch blocks (a rough signal, not a blocking gate)
TRYCATCH_COUNT=$(grep -r "try {" "$TARGET_PATH" --include="*.js" --include="*.ts" | wc -l)
echo "Found $TRYCATCH_COUNT try-catch blocks"
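Grepping for `try {` misses the compact `try{` form and promise-style `.catch()` handlers. A broader, still heuristic pattern, demoed on an inline sample (pattern and sample are illustrative):

```shell
#!/bin/bash
# Count both try blocks (any spacing) and promise .catch() handlers.
tmp=$(mktemp -d)
cat > "$tmp/app.js" <<'EOF'
try { risky(); } catch (e) { log(e); }
fetch(url).then(ok).catch(err);
try{compact();}catch(e){}
EOF
COUNT=$(grep -rEo 'try[[:space:]]*\{|\.catch\(' "$tmp" --include='*.js' | wc -l)
COUNT=$((COUNT))  # normalize any whitespace in wc output
echo "error handlers found: $COUNT"  # → error handlers found: 3
```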
# STEP 9: Load testing (if not skipped)
if [ "$SKIP_PERFORMANCE" != "true" ] && [ "$ENVIRONMENT" = "production" ]; then
  echo "[9/10] Running load tests..."
  # Placeholder for load testing
  echo "⚠️ Manual load testing required"
else
  echo "[9/10] Skipping load tests"
fi
# STEP 10: Generate deployment checklist
echo "[10/10] Generating deployment checklist..."
cat > "$READINESS_DIR/DEPLOYMENT-CHECKLIST.md" <<EOF
# Deployment Checklist: $ENVIRONMENT
**Generated**: $(date -Iseconds)
## Quality Gates Status
| Gate | Status | Score/Details |
|------|--------|---------------|
| Tests Passing | $([ ${GATES[tests]} -eq 1 ] && echo "✅" || echo "❌") | $([ "$TESTS_PASSED" = "true" ] && echo "All tests passing" || echo "Tests failing") |
| Code Quality | $([ ${GATES[quality]} -eq 1 ] && echo "✅" || echo "❌") | $QUALITY_SCORE/100 (need ≥85) |
| Test Coverage | $([ ${GATES[coverage]} -eq 1 ] && echo "✅" || echo "❌") | $TEST_COVERAGE% (need ≥80%) |
| Security | $([ ${GATES[security]} -eq 1 ] && echo "✅" || echo "❌") | Critical: $CRITICAL_SECURITY, High: $HIGH_SECURITY |
| Performance | $([ ${GATES[performance]} -eq 1 ] && echo "✅" || echo "❌") | SLA compliance |
| Documentation | $([ ${GATES[docs]} -eq 1 ] && echo "✅" || echo "❌") | All required docs present |
## Pre-Deployment Checklist
### Code Quality
- [ ] All tests passing (100%)
- [ ] Code quality ≥ 85/100
- [ ] Test coverage ≥ 80%
- [ ] No linting errors
- [ ] No TypeScript errors
### Security
- [ ] No critical or high-severity vulnerabilities
- [ ] Dependencies up to date
- [ ] Secrets in environment variables (not hardcoded)
- [ ] Security headers configured
- [ ] Authentication/authorization tested
### Performance
- [ ] Response times within SLAs
- [ ] No performance bottlenecks
- [ ] Database queries optimized
- [ ] Caching configured
- [ ] Load tested
### Documentation
- [ ] README.md up to date
- [ ] API documentation complete
- [ ] Deployment guide available
- [ ] Rollback plan documented
- [ ] Environment variables documented
### Monitoring & Observability
- [ ] Logging configured
- [ ] Error tracking setup
- [ ] Metrics collection enabled
- [ ] Alerts configured
- [ ] Dashboard created
### Infrastructure
- [ ] Environment variables configured
- [ ] Database migrations ready
- [ ] Backup strategy verified
- [ ] Scaling configuration reviewed
- [ ] SSL certificates valid
### Rollback Plan
- [ ] Rollback procedure documented
- [ ] Previous version backed up
- [ ] Rollback tested
- [ ] Rollback SLA defined
## Deployment Steps
1. **Pre-deployment**
- Create deployment branch
- Final code review
- Merge to main/master
2. **Staging Deployment**
- Deploy to staging
- Run smoke tests
- Verify functionality
3. **Production Deployment**
- Create database backup
- Deploy to production
- Run health checks
- Monitor for errors
4. **Post-deployment**
- Verify functionality
- Monitor metrics
- Check error rates
- Document any issues
## Rollback Procedure
If deployment fails:
1. Stop deployment immediately
2. Execute rollback: \`./scripts/rollback.sh\`
3. Verify previous version restored
4. Investigate root cause
5. Fix issues before retry
## Sign-off
- [ ] **Development Lead**: Code review approved
- [ ] **QA Lead**: Testing complete
- [ ] **Security Team**: Security review approved
- [ ] **DevOps**: Infrastructure ready
- [ ] **Product Owner**: Features approved
---
🤖 Generated by Claude Code Production Readiness Check
EOF
# Calculate overall readiness
GATES_PASSED=$((${GATES[tests]} + ${GATES[quality]} + ${GATES[coverage]} + ${GATES[security]} + ${GATES[performance]} + ${GATES[docs]}))
TOTAL_GATES=6
READY_FOR_DEPLOYMENT="false"
if [ "$GATES_PASSED" -eq "$TOTAL_GATES" ]; then
  READY_FOR_DEPLOYMENT="true"
fi
# Generate summary
echo ""
echo "================================================================"
echo "Production Readiness Assessment"
echo "================================================================"
echo ""
echo "Environment: $ENVIRONMENT"
echo "Gates Passed: $GATES_PASSED/$TOTAL_GATES"
echo ""
echo "Quality Gates:"
echo " Tests: $([ ${GATES[tests]} -eq 1 ] && echo "✅" || echo "❌")"
echo " Quality: $([ ${GATES[quality]} -eq 1 ] && echo "✅" || echo "❌") ($QUALITY_SCORE/100)"
echo " Coverage: $([ ${GATES[coverage]} -eq 1 ] && echo "✅" || echo "❌") ($TEST_COVERAGE%)"
echo " Security: $([ ${GATES[security]} -eq 1 ] && echo "✅" || echo "❌") (Critical: $CRITICAL_SECURITY, High: $HIGH_SECURITY)"
echo " Performance: $([ ${GATES[performance]} -eq 1 ] && echo "✅" || echo "❌")"
echo " Documentation: $([ ${GATES[docs]} -eq 1 ] && echo "✅" || echo "❌")"
echo ""
if [ "$READY_FOR_DEPLOYMENT" = "true" ]; then
  echo "🚀 READY FOR DEPLOYMENT!"
  echo ""
  echo "Next steps:"
  echo "1. Review deployment checklist: $READINESS_DIR/DEPLOYMENT-CHECKLIST.md"
  echo "2. Get required sign-offs"
  echo "3. Schedule deployment window"
  echo "4. Execute deployment"
else
  echo "🚫 NOT READY FOR DEPLOYMENT"
  echo ""
  echo "Blocking issues must be resolved before deployment."
  echo "See detailed reports in: $READINESS_DIR/"
  exit 1
fi
Integration Points
Cascades
- Final stage in /feature-dev-complete cascade
- Part of /release-preparation cascade
- Used by /deploy-to-production cascade
Commands
- Uses: /audit-pipeline, /security-scan, /performance-report
- Uses: /bottleneck-detect, /test-coverage
Other Skills
- Invokes: quick-quality-check, code-review-assistant
- Output to: deployment-automation, rollback-planner
Usage Example
# Check production readiness
production-readiness . production
# Staging environment
production-readiness ./dist staging
# Skip performance tests (third positional argument)
production-readiness . production true
Failure Modes
- Tests failing: Block deployment, fix tests
- Security issues: Block deployment, fix vulnerabilities
- Poor quality: Block deployment, improve code
- Missing docs: Warning, but can proceed with approval
- Performance issues: Warning for staging, blocking for production
Quick Install
/plugin add https://github.com/DNYoussef/ai-chrome-extension/tree/main/production-readiness
Copy and paste this command into Claude Code to install the skill.
GitHub Repository
