twelve-factor-app

majiayu000
Updated 2 days ago
View on GitHub
Meta · ai · api · design

About

This Claude Skill provides the Twelve-Factor App methodology, offering 51 rules across 12 categories for building portable, scalable cloud-native applications. Use it when designing backend services, APIs, or microservices to guide decisions on configuration, deployment, logging, and infrastructure. It's particularly helpful for containerization, setting up CI/CD, and planning scaling strategies.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/majiayu000/claude-skill-registry

Git Clone (Alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/twelve-factor-app

Copy and paste one of these commands into Claude Code to install this skill

Documentation

Community Cloud-Native Applications Best Practices

Comprehensive methodology for building modern software-as-a-service applications that are portable, scalable, and maintainable. Contains 51 rules across 12 categories, covering the entire application lifecycle from codebase management to production operations.

When to Apply

Reference these guidelines when:

  • Designing new backend services or APIs
  • Containerizing applications for Kubernetes or Docker
  • Setting up CI/CD pipelines
  • Managing configuration across environments (see the configuration sketch after this list)
  • Implementing logging and monitoring
  • Planning application scaling strategy
  • Debugging deployment or environment issues
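
For example, the configuration guidance boils down to reading deploy-specific settings from the environment instead of from files checked into the codebase. A minimal sketch in Python (variable names are placeholders, not rules taken from this skill):

```python
import os

# Twelve-factor configuration: deploy-specific values come from the
# environment, never from code or checked-in config files.
DATABASE_URL = os.environ["DATABASE_URL"]  # fail fast if it is missing
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```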

Rule Categories by Priority

Priority  Category                     Impact    Prefix
1         Codebase & Version Control   CRITICAL  code-
2         Dependencies                 CRITICAL  dep-
3         Configuration                CRITICAL  config-
4         Backing Services             HIGH      svc-
5         Build, Release, Run          HIGH      build-
6         Processes & State            HIGH      proc-
7         Concurrency & Scaling        HIGH      scale-
8         Disposability                HIGH      disp-
9         Port Binding                 MEDIUM    port-
10        Dev/Prod Parity              MEDIUM    parity-
11        Logging                      MEDIUM    log-
12        Admin Processes              MEDIUM    admin-

Quick Reference

1. Codebase & Version Control (CRITICAL)

2. Dependencies (CRITICAL)

3. Configuration (CRITICAL)

4. Backing Services (HIGH)

5. Build, Release, Run (HIGH)

6. Processes & State (HIGH)
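
Processes in a twelve-factor app are stateless and share-nothing; anything worth keeping lives in a backing service. A minimal sketch assuming the redis-py client and a REDIS_URL variable (both illustrative, not rules from this skill):

```python
import os
import redis  # any shared backing store works; redis-py is only an example

# Stateless processes: session data goes to a backing service, never
# to process memory or the local filesystem.
store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

def remember_cart_item(session_id: str, item: str) -> None:
    # Works across restarts and across any number of app instances.
    store.rpush(f"cart:{session_id}", item)
```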

7. Concurrency & Scaling (HIGH)

8. Disposability (HIGH)
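
Disposability means processes start fast and shut down gracefully when the platform sends SIGTERM. A minimal sketch using only the Python standard library (illustrative, not a rule from this skill):

```python
import signal
import sys

# Disposability: handle SIGTERM so the platform can stop and replace
# instances at will without dropping in-flight work.
def handle_sigterm(signum, frame):
    # Finish or hand back current work here, then exit cleanly.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```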

9. Port Binding (MEDIUM)
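
Port binding means the app is self-contained and exports its service by listening on a port supplied by the environment, rather than relying on a web server injected at runtime. A minimal sketch using Python's standard library (the PORT variable and handler are illustrative):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# Port binding: the process itself listens on a port from the environment.
port = int(os.environ.get("PORT", "8080"))
HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()
```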

10. Dev/Prod Parity (MEDIUM)

11. Logging (MEDIUM)
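
Logging in a twelve-factor app treats logs as an event stream written to stdout; the execution environment (Docker, Kubernetes, a log router) handles collection and storage. A minimal sketch (illustrative, not a rule from this skill):

```python
import logging
import sys

# Logs as an event stream: write to stdout and let the platform route them.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logging.info("request handled")
```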

12. Admin Processes (MEDIUM)

How to Use

Read individual reference files for detailed explanations and code examples:

Reference Files

File                           Description
references/_sections.md        Category definitions and ordering
assets/templates/_template.md  Template for new rules
metadata.json                  Version and reference information

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/data/12-factor-app

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.

View skill

creating-opencode-plugins

Meta

This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.

View skill

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill