grey-haven-deployment-cloudflare
About
This skill deploys TanStack Start applications to Cloudflare Workers/Pages using GitHub Actions with integrated secrets management via Doppler. It handles database migrations and includes rollback procedures for reliable deployments. Use this specifically when deploying Grey Haven applications to Cloudflare's edge network.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/greyhaven-ai/claude-code-config
Alternative: git clone https://github.com/greyhaven-ai/claude-code-config.git ~/.claude/skills/grey-haven-deployment-cloudflare
Copy and paste the command into Claude Code to install this skill.
Documentation
Grey Haven Cloudflare Deployment
Deploy TanStack Start applications to Cloudflare Workers using GitHub Actions, Doppler for secrets, and Wrangler CLI.
Deployment Architecture
TanStack Start on Cloudflare Workers
- SSR: Server-side rendering with TanStack Start server functions
- Edge Runtime: Global deployment on Cloudflare's edge network
- Database: PostgreSQL (PlanetScale) with connection pooling
- Cache: Cloudflare KV for sessions, R2 for file uploads
- Secrets: Managed via Doppler, injected in GitHub Actions
Core Infrastructure
- Workers: Edge compute (TanStack Start)
- KV Storage: Session management
- R2 Storage: File uploads and assets
- D1 Database: Edge data (optional)
- Queues: Background jobs (optional)
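These bindings surface in server code through the Worker env object. Below is a minimal TypeScript sketch of that interface; SESSIONS and UPLOADS match the wrangler.toml in the next section, while DB and JOBS are assumed placeholders for the optional D1 and Queues resources (types come from @cloudflare/workers-types).
// env.d.ts - type the bindings declared in wrangler.toml so server code can use them safely
// (assumes @cloudflare/workers-types is listed under compilerOptions.types in tsconfig.json)
interface Env {
  // [vars]
  ENVIRONMENT: string;
  DATABASE_POOL_MIN: string;
  DATABASE_POOL_MAX: string;
  // [[kv_namespaces]] - session storage
  SESSIONS: KVNamespace;
  // [[r2_buckets]] - file uploads and assets
  UPLOADS: R2Bucket;
  // Optional bindings (only if D1 / Queues are configured)
  DB?: D1Database;
  JOBS?: Queue;
}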
Wrangler Configuration
Basic wrangler.toml
name = "grey-haven-app"
main = "dist/server/index.js"
compatibility_date = "2025-01-15"
compatibility_flags = ["nodejs_compat"]
[vars]
ENVIRONMENT = "production"
DATABASE_POOL_MIN = "2"
DATABASE_POOL_MAX = "10"
# KV namespace for session storage
[[kv_namespaces]]
binding = "SESSIONS"
id = "your-kv-namespace-id"
preview_id = "your-preview-kv-namespace-id"
# R2 bucket for file uploads
[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "grey-haven-uploads"
preview_bucket_name = "grey-haven-uploads-preview"
# Routes
routes = [
{ pattern = "app.greyhaven.studio/*", zone_name = "greyhaven.studio" }
]
Environment-Specific Configs
- Development: wrangler.toml with ENVIRONMENT = "development"
- Staging: wrangler.staging.toml with staging routes
- Production: wrangler.production.toml with production routes
Doppler Integration
Required GitHub Secrets
- DOPPLER_TOKEN: Doppler service token for CI/CD
- CLOUDFLARE_API_TOKEN: Wrangler deployment token
Required Doppler Secrets (Production)
# Application
BETTER_AUTH_SECRET=<random-secret>
BETTER_AUTH_URL=https://app.greyhaven.studio
JWT_SECRET_KEY=<random-secret>
# Database (PlanetScale)
DATABASE_URL=postgresql://user:pass@host/db
DATABASE_URL_ADMIN=postgresql://admin:pass@host/db
# Redis (Upstash)
REDIS_URL=redis://user:pass@host:port
# Email (Resend)
RESEND_API_KEY=re_...
# OAuth Providers
GOOGLE_CLIENT_ID=...
GOOGLE_CLIENT_SECRET=...
GITHUB_CLIENT_ID=...
GITHUB_CLIENT_SECRET=...
# Cloudflare
CLOUDFLARE_ACCOUNT_ID=...
CLOUDFLARE_API_TOKEN=...
# Monitoring (optional)
SENTRY_DSN=https://<public-key>@<org>.ingest.sentry.io/<project-id>
AXIOM_TOKEN=xaat-...
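Because Doppler injects these values as plain environment variables, the Node-side scripts (build, migrations, smoke tests) can validate them at startup so a missing secret fails fast instead of surfacing at runtime. A hedged sketch using Zod; the module path and the exact set of required keys are assumptions:
// src/lib/env.server.ts (hypothetical path) - validate Doppler-injected secrets before using them
import { z } from "zod";

const envSchema = z.object({
  BETTER_AUTH_SECRET: z.string().min(32),
  BETTER_AUTH_URL: z.string().url(),
  JWT_SECRET_KEY: z.string().min(32),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  RESEND_API_KEY: z.string().startsWith("re_"),
  // Monitoring secrets are optional
  SENTRY_DSN: z.string().url().optional(),
  AXIOM_TOKEN: z.string().optional(),
});

// Throws a readable error listing every missing or malformed variable.
export const env = envSchema.parse(process.env);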
GitHub Actions Deployment
Production Deployment Flow
# .github/workflows/deploy-production.yml
- Checkout code
- Setup Node.js 22 with cache
- Install dependencies (npm ci)
- Install Doppler CLI
- Run tests (doppler run --config test)
- Build (doppler run --config production)
- Run database migrations
- Deploy to Cloudflare Workers
- Inject secrets from Doppler
- Run smoke tests
- Rollback on failure
Key Deployment Commands
# Build with Doppler secrets
doppler run --config production -- npm run build
# Run migrations before deployment
doppler run --config production -- npm run db:migrate
# Deploy to Cloudflare
npx wrangler deploy --config wrangler.production.toml
# Inject secrets into Workers
doppler secrets download --no-file --config production --format json > secrets.json
cat secrets.json | jq -r 'to_entries | .[] | "\(.key)=\(.value)"' | while read -r line; do
key=$(echo "$line" | cut -d= -f1)
value=$(echo "$line" | cut -d= -f2-)
echo "$value" | npx wrangler secret put "$key"
done
rm secrets.json
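The "Run smoke tests" step in the workflow can be a small script along these lines; the /api/health route and DEPLOY_URL variable are assumptions, not part of the template:
// scripts/smoke-test.ts - fail the workflow if the deployed Worker is not serving traffic
const baseUrl = process.env.DEPLOY_URL ?? "https://app.greyhaven.studio";

async function main() {
  // Hypothetical health endpoint; point this at whatever route the app actually exposes.
  const res = await fetch(`${baseUrl}/api/health`, { signal: AbortSignal.timeout(10_000) });
  if (!res.ok) {
    throw new Error(`Health check failed: ${res.status} ${res.statusText}`);
  }
  console.log(`Smoke test passed for ${baseUrl}`);
}

main().catch((err) => {
  console.error("Smoke test failed:", err);
  process.exit(1); // non-zero exit lets GitHub Actions trigger the rollback step
});
Run it the same way as the other scripts, e.g. doppler run --config production -- tsx scripts/smoke-test.ts.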
Database Migrations
Drizzle Migrations (TanStack Start)
// scripts/migrate.ts
import { drizzle } from "drizzle-orm/node-postgres";
import { migrate } from "drizzle-orm/node-postgres/migrator";
import { Pool } from "pg";
const pool = new Pool({
connectionString: process.env.DATABASE_URL_ADMIN,
});
const db = drizzle(pool);
async function main() {
console.log("Running migrations...");
await migrate(db, { migrationsFolder: "./drizzle/migrations" });
console.log("Migrations complete!");
await pool.end();
}
main().catch((err) => {
console.error("Migration failed:", err);
process.exit(1);
});
package.json scripts:
{
"scripts": {
"db:migrate": "tsx scripts/migrate.ts",
"db:migrate:production": "doppler run --config production -- tsx scripts/migrate.ts"
}
}
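The migrate script expects generated SQL files in ./drizzle/migrations. A drizzle-kit config along these lines produces them (run npx drizzle-kit generate); the schema path is an assumption:
// drizzle.config.ts - read by drizzle-kit when generating migrations
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./src/db/schema.ts", // assumed schema location
  out: "./drizzle/migrations", // must match migrationsFolder in scripts/migrate.ts
  dbCredentials: {
    url: process.env.DATABASE_URL_ADMIN!, // provided by doppler run
  },
});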
Rollback Procedures
Wrangler Rollback
# List recent deployments
npx wrangler deployments list --config wrangler.production.toml
# Rollback to previous deployment
npx wrangler rollback --config wrangler.production.toml
# Rollback to specific deployment ID
npx wrangler rollback --deployment-id abc123 --config wrangler.production.toml
Database Rollback
# Drizzle - drizzle-kit has no built-in rollback; apply a hand-written reverse migration
doppler run --config production --command 'psql "$DATABASE_URL_ADMIN" -f drizzle/migrations/<reverse-migration>.sql'
# Alembic - rollback one migration
doppler run --config production -- alembic downgrade -1
Emergency Rollback Playbook
- Identify issue: Check Cloudflare Workers logs, Sentry
- Rollback Workers: npx wrangler rollback
- Rollback database (if needed): apply the reverse migration (see Database Rollback above)
- Verify rollback: Run smoke tests
- Notify team: Update Linear issue
- Root cause analysis: Create postmortem
Cloudflare Resources Setup
KV Namespace (Session Storage)
# Create KV namespace
npx wrangler kv:namespace create "SESSIONS" --config wrangler.production.toml
npx wrangler kv:namespace create "SESSIONS" --preview --config wrangler.production.toml
# List KV namespaces
npx wrangler kv:namespace list
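With the SESSIONS namespace created and bound in wrangler.toml, session reads and writes go through the Workers KV API. A minimal sketch; the key prefix and TTL are assumptions, and Env refers to the interface sketched earlier under Core Infrastructure:
// src/lib/sessions.server.ts (hypothetical module) - session helpers over the SESSIONS KV binding
const SESSION_TTL_SECONDS = 60 * 60 * 24; // 24 hours; adjust to the app's auth policy

export async function saveSession(env: Env, sessionId: string, data: Record<string, unknown>) {
  // expirationTtl lets KV expire stale sessions without a cleanup job
  await env.SESSIONS.put(`session:${sessionId}`, JSON.stringify(data), {
    expirationTtl: SESSION_TTL_SECONDS,
  });
}

export async function loadSession(env: Env, sessionId: string) {
  // "json" parses the stored value; returns null when the key is missing or expired
  return env.SESSIONS.get<Record<string, unknown>>(`session:${sessionId}`, "json");
}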
R2 Bucket (File Uploads)
# Create R2 bucket
npx wrangler r2 bucket create grey-haven-uploads
npx wrangler r2 bucket create grey-haven-uploads-preview
# List R2 buckets
npx wrangler r2 bucket list
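Uploads then go through the UPLOADS R2 binding. A minimal sketch, assuming the object key is generated elsewhere:
// src/lib/uploads.server.ts (hypothetical module) - store and serve files via the UPLOADS R2 binding
export async function saveUpload(
  env: Env,
  key: string,
  body: ReadableStream | ArrayBuffer,
  contentType: string,
) {
  // httpMetadata is stored with the object so it can be served back with the right Content-Type
  await env.UPLOADS.put(key, body, { httpMetadata: { contentType } });
}

export async function getUpload(env: Env, key: string): Promise<Response> {
  const object = await env.UPLOADS.get(key);
  if (!object) return new Response("Not found", { status: 404 });
  return new Response(object.body, {
    headers: { "Content-Type": object.httpMetadata?.contentType ?? "application/octet-stream" },
  });
}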
Monitoring
Wrangler Tail (Real-time Logs)
# Stream production logs
npx wrangler tail --config wrangler.production.toml
# Filter by status code
npx wrangler tail --status error --config wrangler.production.toml
Sentry Integration (Error Tracking)
import * as Sentry from "@sentry/browser";
Sentry.init({
dsn: process.env.SENTRY_DSN,
environment: process.env.ENVIRONMENT,
tracesSampleRate: 1.0,
});
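For server-side errors, exceptions can be reported around handlers using the same DSN. A hedged sketch (the wrapper shape is an assumption; Sentry also ships a dedicated @sentry/cloudflare package for Workers that may be a better fit):
// Report unexpected errors before the framework returns its normal error response.
import * as Sentry from "@sentry/browser";

export async function withErrorReporting<T>(handler: () => Promise<T>): Promise<T> {
  try {
    return await handler();
  } catch (err) {
    Sentry.captureException(err); // send the error to Sentry
    await Sentry.flush(2000); // allow up to 2s for the event to flush before the request ends
    throw err; // rethrow so the caller still sees the failure
  }
}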
Local Development
Wrangler Dev (Local Workers)
# Run Workers locally with Doppler
doppler run --config dev -- npx wrangler dev
# Run with remote mode (uses production KV/R2)
doppler run --config dev -- npx wrangler dev --remote
Supporting Documentation
All supporting files are under 500 lines per Anthropic best practices:
- examples/ - Complete deployment examples
  - github-actions-workflow.md - Full CI/CD workflows
  - wrangler-config.md - Complete wrangler.toml examples
  - doppler-secrets.md - Secret management patterns
  - migrations.md - Database migration examples
  - INDEX.md - Examples navigation
- reference/ - Deployment references
  - rollback-procedures.md - Rollback strategies
  - monitoring.md - Monitoring and alerting
  - troubleshooting.md - Common issues and fixes
  - INDEX.md - Reference navigation
- templates/ - Copy-paste ready templates
  - wrangler.toml - Basic wrangler config
  - deploy-production.yml - GitHub Actions workflow
- checklists/ - Deployment checklists
  - deployment-checklist.md - Pre-deployment validation
When to Apply This Skill
Use this skill when:
- Deploying TanStack Start to Cloudflare Workers
- Setting up CI/CD with GitHub Actions
- Configuring Doppler multi-environment secrets
- Running database migrations in production
- Rolling back failed deployments
- Setting up KV namespaces or R2 buckets
- Troubleshooting deployment failures
- Configuring monitoring and alerting
Template Reference
These patterns are from Grey Haven's production templates:
- cvi-template: TanStack Start + Cloudflare Workers
- cvi-backend-template: FastAPI + Python Workers
Critical Reminders
- Doppler for ALL secrets: Never commit secrets to git
- Migrations BEFORE deployment: Run db:migrate before wrangler deploy
- Smoke tests AFTER deployment: Validate production after deploy
- Automated rollback: GitHub Actions rolls back on failure
- Connection pooling: Match wrangler.toml pool settings with database
- Environment-specific configs: Separate wrangler files per environment
- KV/R2 bindings: Configure in wrangler.toml, create with CLI
- Custom domains: Use Cloudflare Proxy for DDoS protection
- Monitoring: Set up Sentry + Axiom + Wrangler tail
- Emergency playbook: Know how to rollback both Workers and database