
detecting-infrastructure-drift

jeremylongshore
Updated Today

About

This skill enables Claude to detect discrepancies between your current infrastructure and its desired state as defined by tools like Terraform. It is triggered by requests for drift detection, change analysis, or drift reports, and uses a `drift-detect` command to identify configuration mismatches. Use it in DevOps workflows to maintain infrastructure consistency and prevent configuration errors.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus
Git Clone (Alternative)
git clone https://github.com/jeremylongshore/claude-code-plugins-plus.git ~/.claude/skills/detecting-infrastructure-drift

Copy and paste one of these commands into Claude Code to install this skill.

Documentation

Overview

This skill empowers Claude to identify and report on deviations between the current state of your infrastructure and its defined desired state. By leveraging the `drift-detect` command, it provides insights into configuration inconsistencies, helping maintain infrastructure integrity and prevent unexpected issues.

How It Works

  1. Invocation: The user requests drift detection.
  2. Drift Analysis: Claude executes the `drift-detect` command.
  3. Report Generation: The command analyzes the infrastructure and identifies any deviations from the defined configuration.
  4. Result Presentation: Claude presents a report detailing the detected drift, including affected resources and configuration differences.
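The flow above can be sketched in Python, assuming `drift-detect` wraps something like `terraform plan -detailed-exitcode` (an assumption for illustration; the command's internals aren't documented here). With that flag, Terraform exits 0 when the plan is empty, 2 when changes are pending, and 1 on error:

```python
import subprocess

def interpret_plan_exit(code: int) -> str:
    """Map a `terraform plan -detailed-exitcode` result to a drift status.

    Exit codes: 0 = no changes (no drift), 2 = changes present (drift),
    anything else = error.
    """
    return {0: "no-drift", 2: "drift-detected"}.get(code, "error")

def detect_drift(workdir: str = ".") -> str:
    """Run a plan pass and classify the result.

    Hypothetical wrapper: assumes Terraform is installed and the
    working directory has already been initialised (`terraform init`).
    """
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    return interpret_plan_exit(result.returncode)
```

The exit-code mapping is the piece worth internalising: a non-empty plan against unchanged code is, by definition, drift.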

When to Use This Skill

This skill activates when you need to:

  • Identify infrastructure drift in your environment.
  • Ensure that your infrastructure configuration matches the desired state.
  • Generate a report detailing discrepancies between the current and desired infrastructure configurations.

Examples

Example 1: Checking for Infrastructure Drift

User request: "Check for infrastructure drift in my production environment."

The skill will:

  1. Execute the `drift-detect` command.
  2. Present a report detailing any detected drift, including resource changes and configuration differences.
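One way such a report could be assembled is by parsing the machine-readable plan that `terraform show -json <planfile>` emits, whose `resource_changes` array lists each resource's address and planned actions (a sketch, not necessarily how `drift-detect` itself works):

```python
import json

def summarize_drift(plan_json: str) -> list[str]:
    """List resources whose planned actions are not pure no-ops.

    Expects the JSON document produced by `terraform show -json`,
    which contains a `resource_changes` array.
    """
    plan = json.loads(plan_json)
    drifted = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if actions and actions != ["no-op"]:
            drifted.append(f"{rc['address']}: {'/'.join(actions)}")
    return drifted

# Synthetic plan fragment for illustration only.
sample = json.dumps({
    "resource_changes": [
        {"address": "aws_s3_bucket.logs", "change": {"actions": ["update"]}},
        {"address": "aws_iam_role.app", "change": {"actions": ["no-op"]}},
    ]
})
```

Here `summarize_drift(sample)` would flag only `aws_s3_bucket.logs`, since the IAM role's plan is a no-op.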

Example 2: Identifying Configuration Changes

User request: "Are there any configuration changes that haven't been applied to my infrastructure?"

The skill will:

  1. Execute the `drift-detect` command.
  2. Provide a summary of configuration changes that are present in the desired state but not reflected in the current infrastructure.
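Conceptually, this comparison reduces to diffing two attribute maps, desired versus live. A toy model (the real tooling works against provider state, but the shape of the result is similar):

```python
def config_diff(desired: dict, current: dict) -> dict:
    """Return attributes whose desired value differs from the live value.

    Both states are modelled as flattened attribute dicts; a missing
    key on either side also counts as drift.
    """
    keys = desired.keys() | current.keys()
    return {
        k: {"desired": desired.get(k), "current": current.get(k)}
        for k in keys
        if desired.get(k) != current.get(k)
    }

# Illustrative example: the instance type was changed out-of-band.
desired = {"instance_type": "t3.small", "tags.env": "prod"}
current = {"instance_type": "t3.medium", "tags.env": "prod"}
```

Running `config_diff(desired, current)` reports only `instance_type`, since the tag matches on both sides.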

Best Practices

  • Regular Monitoring: Schedule regular drift detection checks to proactively identify and address configuration inconsistencies.
  • Version Control: Ensure your infrastructure-as-code configurations are version-controlled to track changes and facilitate rollbacks.
  • Automated Remediation: Implement automated remediation workflows to automatically correct detected drift and maintain infrastructure consistency.
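The "regular monitoring" practice is commonly wired up as a scheduled CI job. A minimal illustrative sketch as a GitHub Actions workflow (names and schedule are placeholders; `-detailed-exitcode` makes the job fail when drift is found, surfacing it through normal CI notifications):

```yaml
# .github/workflows/drift-check.yml (illustrative)
name: nightly-drift-check
on:
  schedule:
    - cron: "0 4 * * *"   # daily at 04:00 UTC
jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -input=false
      # Exit code 2 (non-empty plan) fails the step, flagging drift.
      - run: terraform plan -detailed-exitcode -input=false
```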

Integration

This skill can be integrated with other DevOps tools and plugins to automate infrastructure management workflows. For example, it can be used in conjunction with configuration management tools like Ansible or Puppet to automatically remediate detected drift. It also complements infrastructure-as-code tools like Terraform by providing a mechanism for verifying that the deployed infrastructure matches the defined configuration.

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/skills-migration-20251108-070147/plugins/devops/infrastructure-drift-detector/skills/infrastructure-drift-detector
Tags: ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
