fairdb-backup-manager
About
This skill automatically manages PostgreSQL backups for FairDB using pgBackRest and Wasabi S3 storage. It handles configuration, scheduling, restoration, and integrity verification of backups. Use it when you need to implement or manage data protection and disaster recovery for FairDB databases.
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus
Manual install:
git clone https://github.com/jeremylongshore/claude-code-plugins-plus.git ~/.claude/skills/fairdb-backup-manager
Copy and paste the recommended command into Claude Code to install this skill, or clone the repository manually.
Documentation
FairDB Backup Manager
Purpose
I automatically handle all backup-related operations for FairDB PostgreSQL databases, including scheduling, verification, restoration, and monitoring of pgBackRest backups with Wasabi S3 storage.
Activation Triggers
I activate when you:
- Mention "backup", "restore", "pgbackrest", or "recovery" in the context of FairDB
- Work with PostgreSQL backup configurations
- Need to verify backup integrity
- Discuss disaster recovery or data protection
- Experience data loss or corruption issues
Core Capabilities
Backup Operations
- Configure pgBackRest with Wasabi S3
- Execute full, differential, and incremental backups
- Manage backup schedules and retention policies
- Compress and encrypt backup data
- Monitor backup health and success rates
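A rough sketch of what the first two items above can look like in practice. The stanza name `fairdb`, the bucket, the Wasabi endpoint and region, the retention counts, and the PostgreSQL data path are all assumptions to be replaced with real values; only the option names and commands are standard pgBackRest.

```bash
# Illustrative pgBackRest repository config for Wasabi S3.
# Stanza name, bucket, endpoint, region, retention, and pg1-path are assumptions.
sudo tee /etc/pgbackrest/pgbackrest.conf > /dev/null <<'EOF'
[global]
repo1-type=s3
repo1-s3-bucket=fairdb-backups
repo1-s3-endpoint=s3.us-east-1.wasabisys.com
repo1-s3-region=us-east-1
repo1-s3-key=<WASABI_ACCESS_KEY>
repo1-s3-key-secret=<WASABI_SECRET_KEY>
repo1-path=/pgbackrest
repo1-retention-full=4
repo1-retention-diff=7
compress-type=zst
compress-level=3
process-max=4
start-fast=y

[fairdb]
pg1-path=/var/lib/postgresql/16/main
EOF

# Initialize the stanza, then run each backup type on demand.
sudo -u postgres pgbackrest --stanza=fairdb stanza-create
sudo -u postgres pgbackrest --stanza=fairdb --type=full backup
sudo -u postgres pgbackrest --stanza=fairdb --type=diff backup
sudo -u postgres pgbackrest --stanza=fairdb --type=incr backup
```

WAL archiving must also be enabled on the PostgreSQL side for point-in-time recovery; a sketch of that appears under Backup Strategy below.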
Restore Operations
- Perform point-in-time recovery (PITR)
- Restore specific databases or tables
- Test restore procedures without impacting production
- Validate restored data integrity
- Document recovery time objectives (RTO)
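A point-in-time recovery could look roughly like this; the stanza name and target timestamp are illustrative, and the cluster should be stopped before restoring into its live data directory.

```bash
# Illustrative PITR restore (stanza name and timestamp are assumptions).
sudo systemctl stop postgresql

# --delta reuses unchanged files already on disk; --target-action=promote
# brings the cluster out of recovery once the target time is reached.
sudo -u postgres pgbackrest --stanza=fairdb restore \
  --delta \
  --type=time \
  --target="2024-03-14 15:00:00" \
  --target-action=promote

sudo systemctl start postgresql
```

For restoring a single database out of a cluster, pgBackRest's `--db-include` restore option limits which databases are made usable; table-level recovery typically means restoring to a separate instance and copying the table back.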
Monitoring & Verification
- Check backup completion status
- Verify backup integrity with test restores
- Monitor backup size and growth trends
- Alert on backup failures or delays
- Generate backup compliance reports
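These checks map closely onto pgBackRest's built-in commands; the stanza name is again an assumption, and `jq` is only needed for the JSON example.

```bash
# Health and integrity checks (stanza name is an assumption).
sudo -u postgres pgbackrest --stanza=fairdb check    # validates WAL archiving and repository access
sudo -u postgres pgbackrest --stanza=fairdb info     # backup inventory, types, and sizes
sudo -u postgres pgbackrest --stanza=fairdb verify   # checksums repository files (recent pgBackRest releases)

# Machine-readable status for dashboards or alerting.
sudo -u postgres pgbackrest --stanza=fairdb info --output=json \
  | jq '.[0].backup[-1] | {label, type, stop: .timestamp.stop, size: .info.size}'
```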
Automated Workflows
When activated, I will:
1. Assess Current State
   - Check existing backup configuration
   - Review backup history and success rate
   - Identify any failed or missing backups
   - Analyze storage usage and costs
2. Optimize Configuration
   - Adjust retention policies based on requirements
   - Configure optimal compression settings
   - Set up parallel backup processes
   - Implement incremental backup strategies
3. Execute Operations
   - Run scheduled backups automatically
   - Perform test restores monthly
   - Clean up old backups per retention policy
   - Monitor and alert on issues
4. Document & Report
   - Maintain backup/restore runbooks
   - Generate compliance reports
   - Track metrics and trends
   - Document recovery procedures
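The cleanup step in "Execute Operations" is normally driven by the retention settings in the repository configuration; when a manual sweep is needed it is a single command (stanza name assumed).

```bash
# Retention is applied automatically after each backup; this forces a sweep now.
sudo -u postgres pgbackrest --stanza=fairdb expire
```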
Integration with FairDB Commands
I work seamlessly with these FairDB commands:
- /fairdb-setup-backup - Initial configuration
- /fairdb-onboard-customer - Customer-specific backups
- /fairdb-emergency-response - Disaster recovery
- /fairdb-health-check - Backup health monitoring
Best Practices I Enforce
Backup Strategy
- Full backups weekly (Sunday 2 AM)
- Differential backups daily
- Incremental backups hourly during business hours
- WAL archiving for point-in-time recovery
- Geographical redundancy with Wasabi regions
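One way to implement this cadence is a crontab for the postgres user plus WAL archiving in PostgreSQL. The stanza name, the 09:00-17:00 "business hours" window, and the service name are assumptions, and the heredoc replaces that user's existing crontab.

```bash
# Schedule matching the cadence above (note: replaces the postgres user's crontab).
crontab -u postgres - <<'EOF'
# Full backup: Sunday 02:00
0 2 * * 0       pgbackrest --stanza=fairdb --type=full backup
# Differential backup: Monday-Saturday 02:00
0 2 * * 1-6     pgbackrest --stanza=fairdb --type=diff backup
# Incremental backups: hourly, 09:00-17:00 on weekdays
0 9-17 * * 1-5  pgbackrest --stanza=fairdb --type=incr backup
EOF

# WAL archiving for point-in-time recovery (archive_mode requires a restart).
sudo -u postgres psql -c "ALTER SYSTEM SET archive_mode = 'on';"
sudo -u postgres psql -c "ALTER SYSTEM SET archive_command = 'pgbackrest --stanza=fairdb archive-push %p';"
sudo systemctl restart postgresql
```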
Security
- AES-256 encryption for all backups
- Secure key management
- Access control and audit logging
- Encrypted transport to S3
- Immutable backup storage
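Repository-side encryption is a pgBackRest setting rather than something layered on top; the passphrase below is a placeholder, and since existing backups cannot be re-encrypted it should be chosen and stored securely before the first backup runs.

```bash
# Add these two lines to the [global] section of /etc/pgbackrest/pgbackrest.conf
# (placeholder passphrase; pick it before the first backup):
#
#   repo1-cipher-type=aes-256-cbc
#   repo1-cipher-pass=<STRONG_RANDOM_PASSPHRASE>

# Generate a strong passphrase to use for repo1-cipher-pass:
openssl rand -base64 48
```

Transport to Wasabi is already encrypted with the S3 repository type (HTTPS), so the cipher settings cover data at rest in the bucket.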
Testing
- Monthly restore tests
- Quarterly disaster recovery drills
- Automated integrity verification
- Performance benchmarking
- Documentation updates
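A non-disruptive restore test can target a scratch directory instead of the live data directory; the paths and stanza name below are assumptions, and a fuller drill would also start a throwaway PostgreSQL instance on the restored directory.

```bash
# Restore the latest backup into a scratch directory (paths/stanza are assumptions).
SCRATCH=/var/tmp/fairdb-restore-test
sudo -u postgres mkdir -p "$SCRATCH"

# --type=immediate stops recovery as soon as the backup is consistent.
sudo -u postgres pgbackrest --stanza=fairdb restore \
  --pg1-path="$SCRATCH" \
  --type=immediate

# Minimal sanity check, then clean up.
sudo -u postgres test -d "$SCRATCH/base" && echo "restore test: data files present"
sudo rm -rf "$SCRATCH"
```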
Proactive Monitoring
I continuously monitor for:
- Backup failures or delays
- Storage capacity issues
- Unusual backup sizes
- Performance degradation
- Compliance violations
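A sketch of how a staleness alert could be wired up; the threshold, stanza name, and alert hook are assumptions, `jq` is required, and at least one backup must already exist.

```bash
#!/usr/bin/env bash
# Warn when the newest backup is older than MAX_AGE_HOURS.
# Stanza name, threshold, and the alert hook are assumptions.
set -euo pipefail

STANZA=fairdb
MAX_AGE_HOURS=26   # daily cadence plus slack

# Epoch seconds of the most recent backup's stop time.
last_stop=$(pgbackrest --stanza="$STANZA" info --output=json \
  | jq '.[0].backup[-1].timestamp.stop')

age_hours=$(( ( $(date +%s) - last_stop ) / 3600 ))

if (( age_hours > MAX_AGE_HOURS )); then
  echo "WARNING: last ${STANZA} backup is ${age_hours}h old" >&2
  # Hook real alerting in here (mail, webhook, pager), for example:
  # curl -s -X POST "$ALERT_WEBHOOK" -d "stale backup for ${STANZA}"
  exit 1
fi
echo "OK: last ${STANZA} backup is ${age_hours}h old"
```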
Emergency Response
During data loss incidents, I:
- Assess the extent of data loss
- Identify the best recovery point
- Execute restore procedures
- Verify data integrity
- Document incident and recovery
Reporting
I generate these reports automatically:
- Daily backup status summary
- Weekly storage utilization report
- Monthly compliance audit
- Quarterly DR test results
- Annual backup strategy review
Cost Optimization
I help reduce backup costs by:
- Implementing intelligent retention policies
- Using compression effectively
- Cleaning up unnecessary backups
- Optimizing transfer methods
- Selecting appropriate storage tiers
Example Scenarios
Scenario: New Customer Onboarding
User: "Setting up database for new customer Acme Corp"
Me: I'll configure automated backups for Acme Corp:
- Creating dedicated pgBackRest stanza
- Setting up daily full backups with 30-day retention
- Configuring WAL archiving for PITR
- Scheduling test restores monthly
- Adding monitoring for backup success
Scenario: Backup Failure Alert
System: "Backup failed for customer_db"
Me: I'm investigating the backup failure:
- Checking disk space: OK
- Verifying S3 connectivity: Failed
- Issue: Wasabi credentials expired
- Action: Rotating credentials and retrying backup
- Result: Backup completed successfully
Scenario: Recovery Request
User: "Need to restore CustomerX database to yesterday 3 PM"
Me: Initiating point-in-time recovery:
- Target time: 2024-03-14 15:00:00
- Latest backup: Full backup from 2024-03-14 02:00:00
- Applying WAL logs to target time
- Restoration complete in 12 minutes
- Data verified and customer notified
Success Metrics
I track and optimize:
- Backup success rate (target: >99.9%)
- Recovery time objective (target: <1 hour)
- Recovery point objective (target: <5 minutes)
- Storage efficiency (compression ratio >3:1)
- Cost per GB backed up
Continuous Improvement
I learn from each operation to:
- Refine backup schedules
- Improve recovery procedures
- Optimize resource usage
- Enhance monitoring alerts
- Update documentation