data-privacy-guardian
About
This skill helps developers implement data privacy measures through PII detection, sensitive data masking, and the implementation of consent management and encryption. Use it when designing or updating systems that must comply with security requirements and data protection standards. It provides structured plans and deliverables to ensure personal information is handled properly.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/majiayu000/claude-skill-registry
Alternative: git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/data-privacy-guardian
Copy and paste the command into Claude Code to install the skill.
Documentation
Data Privacy Guardian
Purpose
- Detect PII, mask sensitive data, and manage consent and encryption.
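As an illustration of the detection and masking steps, below is a minimal Python sketch using hand-rolled regular expressions. The pattern set and mask token are assumptions for this example; a production system would typically rely on a vetted PII detection library or classifier rather than ad-hoc regexes.

```python
import re

# Illustrative patterns only; real deployments should use a vetted PII library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    """Return PII matches found in free text, keyed by category."""
    return {name: pattern.findall(text)
            for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

def mask_pii(text: str, mask: str = "[REDACTED]") -> str:
    """Replace every detected PII value with a fixed mask token."""
    for pattern in PII_PATTERNS.values():
        text = pattern.sub(mask, text)
    return text

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com or 555-123-4567."
    print(detect_pii(sample))   # {'email': [...], 'phone': [...]}
    print(mask_pii(sample))     # Contact [REDACTED] or [REDACTED].
```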
Preconditions
- Access to system context (repos, infra, environments)
- Confirmed requirements and constraints
- Required approvals for security, compliance, or governance
Inputs
- Problem statement and scope
- Current architecture or system constraints
- Non-functional requirements (performance, security, compliance)
- Target stack and environment
Outputs
- Design or implementation plan
- Required artifacts (diagrams, configs, specs, checklists)
- Validation steps and acceptance criteria
Detailed Step-by-Step Procedures
- Clarify scope, constraints, and success metrics.
- Review current system state, dependencies, and integration points.
- Select patterns, tools, and architecture options that match constraints.
- Produce primary artifacts (docs/specs/configs/code stubs).
- Validate against requirements and known risks.
- Provide rollout and rollback guidance.
Decision Trees and Conditional Logic
- If compliance or regulatory scope applies -> add required controls and audit steps.
- If latency budget is strict -> choose low-latency storage and caching.
- Else -> prefer cost-optimized storage and tiering.
- If data consistency is critical -> prefer transactional boundaries and strong consistency.
- Else -> evaluate eventual consistency or async processing.
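The decision tree above can be encoded as explicit, reviewable rules; the following Python sketch is one way to do that. The Requirements fields and the 100 ms latency threshold are illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass

@dataclass
class Requirements:
    regulated: bool           # compliance or regulatory scope applies
    latency_budget_ms: int    # end-to-end latency budget
    strong_consistency: bool  # data consistency is critical

def plan_controls(req: Requirements) -> list[str]:
    """Turn the decision tree into an explicit list of planned controls."""
    plan: list[str] = []
    if req.regulated:
        plan += ["add required compliance controls", "add audit steps"]
    if req.latency_budget_ms < 100:  # assumed threshold for "strict"
        plan.append("use low-latency storage with caching")
    else:
        plan.append("use cost-optimized storage with tiering")
    if req.strong_consistency:
        plan.append("use transactional boundaries and strong consistency")
    else:
        plan.append("evaluate eventual consistency or async processing")
    return plan
```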
Error Handling and Edge Cases
- Partial failures across dependencies -> isolate blast radius and retry with backoff (see the sketch after this list).
- Data corruption or loss risk -> enable backups and verify restore path.
- Limited access to systems -> document gaps and request access early.
- Legacy dependencies with limited change tolerance -> use adapters and phased rollout.
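A minimal sketch of the retry-with-backoff pattern referenced above, assuming a synchronous Python call to the flaky dependency; the attempt count and delays are illustrative and should be tuned per dependency.

```python
import random
import time

def retry_with_backoff(call, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry a flaky dependency call with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # give up and surface the failure to the caller
            # exponential backoff plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))
```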
Tool Requirements and Dependencies
- CLI and SDK tooling for the target stack
- Credentials or access tokens for required environments
- Diagramming or spec tooling when producing docs
Stack Profiles
- Use Profile A, B, or C from skills/STACK_PROFILES.md.
- Note the selected profile in outputs for traceability.
Validation
- Requirements coverage check
- Security and compliance review
- Performance and reliability review
- Peer or stakeholder sign-off
Rollback Procedures
- Revert config or deployment to last known good state.
- Roll back database migrations if applicable.
- Verify service health, data integrity, and error rates after rollback.
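One possible post-rollback verification sketch, assuming an HTTP service with a health endpoint and a plain-text error-rate metric; the /healthz and /metrics/error_rate paths and the threshold are hypothetical and must match the actual monitoring setup.

```python
import urllib.error
import urllib.request

def verify_rollback(base_url: str, max_error_rate: float = 0.01) -> bool:
    """Check that the service is healthy and error rate is within threshold."""
    try:
        # Placeholder paths; substitute the system's real health/metrics endpoints.
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=5):
            pass
        with urllib.request.urlopen(f"{base_url}/metrics/error_rate", timeout=5) as resp:
            error_rate = float(resp.read().decode().strip())
    except (urllib.error.URLError, ValueError):
        return False  # unreachable service or unparseable metric
    return error_rate <= max_error_rate
```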
Success Metrics
- Measurable outcomes (latency, error rate, uptime, cost)
- Acceptance thresholds defined with stakeholders
Example Workflows and Use Cases
- Minimal: apply the skill to a small service or single module.
- Production: apply the skill to a multi-service or multi-tenant system.
GitHub Repository
Related Skills
content-collections
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
polymarket
This skill enables developers to build applications with the Polymarket prediction markets platform, including API integration for trading and market data. It also provides real-time data streaming via WebSocket to monitor live trades and market activity. Use it for implementing trading strategies or creating tools that process live market updates.
hybrid-cloud-networking
This skill configures secure hybrid cloud networking between on-premises infrastructure and cloud platforms like AWS, Azure, and GCP. Use it when connecting data centers to the cloud, building hybrid architectures, or implementing secure cross-premises connectivity. It supports key capabilities such as VPNs and dedicated connections like AWS Direct Connect for high-performance, reliable setups.
llamaindex
LlamaIndex is a data framework for building RAG-powered LLM applications, specializing in document ingestion, indexing, and querying. It provides key features like vector indices, query engines, and agents, and supports over 300 data connectors. Use it for document Q&A, chatbots, and knowledge retrieval when building data-centric applications.
