
Building CI/CD Pipelines

jeremylongshore
Meta, ai, testing, automation, design

About

This skill enables Claude to generate CI/CD pipeline configurations for platforms like GitHub Actions, GitLab CI, and Jenkins. It automates software delivery by creating pipelines for stages such as testing, building, security scanning, and deployment. Use it when you need to set up or automate a CI/CD workflow for multi-environment deployments.

Documentation

Overview

This skill empowers Claude to build production-ready CI/CD pipelines, automating software development workflows. It supports multiple platforms and incorporates best practices for testing, building, security, and deployment.

How It Works

  1. Receiving User Request: Claude receives a request for a CI/CD pipeline, including the target platform and desired stages.
  2. Generating Configuration: Claude generates the CI/CD pipeline configuration file (e.g., YAML for GitHub Actions or GitLab CI, Groovy for Jenkins).
  3. Presenting Configuration: Claude presents the generated configuration to the user for review and deployment.

When to Use This Skill

This skill activates when you need to:

  • Create a CI/CD pipeline for a software project.
  • Generate a CI/CD configuration file for GitHub Actions, GitLab CI, or Jenkins.
  • Automate testing, building, security scanning, and deployment processes.

Examples

Example 1: Creating a GitHub Actions Pipeline

User request: "Create a GitHub Actions pipeline with test, build, and deploy stages."

The skill will:

  1. Generate a GitHub Actions workflow file (e.g., github-actions.yml, placed under .github/workflows/) with defined test, build, and deploy stages.
  2. Present the generated YAML configuration to the user (a sketch of such a workflow appears below).
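A minimal sketch of the kind of workflow such a request might yield, assuming a Node.js project (the job names, toolchain setup, and placeholder deploy command are illustrative and not actual output of the skill):

    # .github/workflows/ci.yml -- illustrative sketch only
    name: CI
    on:
      push:
        branches: [main]
      pull_request:
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          - run: npm ci
          - run: npm test
      build:
        needs: test
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: npm ci && npm run build
      deploy:
        needs: build
        runs-on: ubuntu-latest
        steps:
          - run: echo "replace with your deployment command"   # placeholder deploy step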

Example 2: Generating a GitLab CI Configuration

User request: "Generate a GitLab CI configuration that includes security scanning."

The skill will:

  1. Generate a .gitlab-ci.yml file with test, build, security, and deploy stages, including vulnerability scanning.
  2. Present the generated YAML configuration to the user (see the sketch below).
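A minimal sketch of such a configuration, assuming a Node.js project and GitLab's built-in SAST template for static scanning (the dependency-audit job, images, and scripts are illustrative placeholders):

    # .gitlab-ci.yml -- illustrative sketch only
    stages:
      - test
      - build
      - security
      - deploy

    include:
      - template: Security/SAST.gitlab-ci.yml   # adds GitLab's managed SAST jobs (they run in the test stage by default)

    test:
      stage: test
      image: node:20
      script:
        - npm ci
        - npm test

    build:
      stage: build
      image: node:20
      script:
        - npm ci
        - npm run build

    dependency_audit:
      stage: security
      image: node:20
      script:
        - npm audit --audit-level=high   # placeholder vulnerability check; swap in your preferred scanner

    deploy:
      stage: deploy
      script:
        - echo "replace with your deployment command"   # placeholder
      when: manual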

Best Practices

  • Security: Integrate static and dynamic analysis tools into the pipeline to identify vulnerabilities early.
  • Testing: Include unit, integration, and end-to-end tests to ensure code quality.
  • Deployment: Use infrastructure-as-code tools to automate infrastructure provisioning and deployment (see the sketch below).
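As one illustration of the infrastructure-as-code point, a deployment job can delegate provisioning to a tool such as Terraform. A hedged sketch as a GitHub Actions job (the infra/ directory, the state backend, and cloud credentials are assumptions left out for brevity):

      deploy-infra:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: hashicorp/setup-terraform@v3   # installs the Terraform CLI on the runner
          - run: terraform init
            working-directory: infra             # hypothetical directory containing the Terraform code
          - run: terraform plan -out=tfplan
            working-directory: infra
          - run: terraform apply tfplan          # applying a saved plan runs without an interactive prompt
            working-directory: infra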

Integration

This skill can be used in conjunction with other plugins to automate infrastructure provisioning, security scanning, and deployment processes. For example, it can work with a cloud deployment plugin to automatically deploy applications to AWS, Azure, or GCP after the CI/CD pipeline successfully builds and tests the code.
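In GitHub Actions, one common way to express that hand-off is a separate deployment workflow that triggers only after the CI workflow completes successfully. A minimal sketch (the workflow name "CI" and the deployment command are assumptions, not something this skill prescribes):

    # .github/workflows/deploy.yml -- illustrative sketch only
    name: Deploy
    on:
      workflow_run:
        workflows: ["CI"]          # name of the CI workflow defined elsewhere in the repository
        types: [completed]
    jobs:
      deploy:
        if: ${{ github.event.workflow_run.conclusion == 'success' }}
        runs-on: ubuntu-latest
        steps:
          - run: echo "invoke your cloud deployment plugin or tooling here"   # placeholder for AWS/Azure/GCP deployment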

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/skill-adapter

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/plugin-enhancements/plugin-backups/ci-cd-pipeline-builder_20251020_065941/skills/skill-adapter
ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
