automating-api-testing

jeremylongshore
Updated 3 days ago
Meta, ai, testing, api, automation, design

About

This skill automatically generates and executes comprehensive test suites for REST and GraphQL APIs. It validates endpoints against OpenAPI specifications, covering CRUD operations, authentication, and security. Use it when you need automated API testing, contract validation, or OpenAPI compliance checks.

Documentation

Overview

This skill empowers Claude to automatically generate and execute comprehensive API tests for REST and GraphQL endpoints. It ensures thorough validation, covers various authentication methods, and performs contract testing against OpenAPI specifications.

How It Works

  1. Analyze API Definition: The skill parses the provided API definition (e.g., OpenAPI/Swagger file, code files) or infers it from usage.
  2. Generate Test Cases: Based on the API definition, it creates test cases covering different scenarios, including CRUD operations, authentication, and error handling.
  3. Execute Tests and Validate Responses: The skill executes the generated tests and validates the responses against expected status codes, headers, and body structures.
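
As a rough illustration of step 3, a generated test asserts on the status code, selected headers, and the shape of the response body. The sketch below uses Node's built-in test runner and fetch; the base URL and /health endpoint are hypothetical placeholders rather than output the skill guarantees.

  // Minimal sketch of a generated response-validation test.
  // BASE_URL and the /health endpoint are hypothetical placeholders.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';

  const BASE_URL = process.env.BASE_URL ?? 'http://localhost:3000';

  test('GET /health returns a JSON status payload', async () => {
    const res = await fetch(`${BASE_URL}/health`);

    // Validate status code and headers.
    assert.equal(res.status, 200);
    assert.match(res.headers.get('content-type') ?? '', /application\/json/);

    // Validate body structure.
    const body = await res.json();
    assert.equal(typeof body.status, 'string');
  });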

When to Use This Skill

This skill activates when you need to:

  • Generate comprehensive API tests for REST endpoints.
  • Create GraphQL API tests covering queries, mutations, and subscriptions.
  • Validate API contracts against OpenAPI/Swagger specifications.
  • Test API authentication flows, including login, refresh, and protected endpoints.
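
For the authentication case, a generated flow test might log in, capture the issued token, and replay it against a protected route. A minimal sketch follows; the /auth/login and /me paths, the credentials, and the token field name are illustrative assumptions.

  // Sketch of an authentication-flow test; endpoint paths and field names are assumed.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';

  const BASE_URL = process.env.BASE_URL ?? 'http://localhost:3000';

  test('login issues a token that grants access to a protected endpoint', async () => {
    // 1. Log in with test credentials.
    const login = await fetch(`${BASE_URL}/auth/login`, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ email: 'test@example.com', password: 'secret' }),
    });
    assert.equal(login.status, 200);
    const { token } = await login.json();

    // 2. The protected endpoint rejects anonymous calls...
    const denied = await fetch(`${BASE_URL}/me`);
    assert.equal(denied.status, 401);

    // ...and accepts the issued token.
    const allowed = await fetch(`${BASE_URL}/me`, {
      headers: { authorization: `Bearer ${token}` },
    });
    assert.equal(allowed.status, 200);
  });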

Examples

Example 1: Generating REST API Tests

User request: "Generate API tests for the user management endpoints in src/routes/users.js"

The skill will:

  1. Analyze the user management endpoints in the specified file.
  2. Generate a test suite covering CRUD operations (create, read, update, delete) for user resources.
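
The resulting suite could look roughly like the sketch below, which walks a hypothetical /users resource through create, read, update, and delete; the routes and fields are illustrative and not taken from src/routes/users.js.

  // Illustrative CRUD suite for a hypothetical /users resource.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';

  const BASE_URL = process.env.BASE_URL ?? 'http://localhost:3000';
  const json = { 'content-type': 'application/json' };

  test('user CRUD lifecycle', async () => {
    // Create
    const created = await fetch(`${BASE_URL}/users`, {
      method: 'POST',
      headers: json,
      body: JSON.stringify({ name: 'Ada', email: 'ada@example.com' }),
    });
    assert.equal(created.status, 201);
    const user = await created.json();

    // Read
    const fetched = await fetch(`${BASE_URL}/users/${user.id}`);
    assert.equal(fetched.status, 200);

    // Update
    const updated = await fetch(`${BASE_URL}/users/${user.id}`, {
      method: 'PUT',
      headers: json,
      body: JSON.stringify({ name: 'Ada Lovelace' }),
    });
    assert.equal(updated.status, 200);

    // Delete, then confirm the resource is gone.
    const removed = await fetch(`${BASE_URL}/users/${user.id}`, { method: 'DELETE' });
    assert.equal(removed.status, 204);

    const gone = await fetch(`${BASE_URL}/users/${user.id}`);
    assert.equal(gone.status, 404);
  });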

Example 2: Creating GraphQL API Tests

User request: "Create GraphQL API tests for the product queries and mutations"

The skill will:

  1. Analyze the product queries and mutations in the GraphQL schema.
  2. Generate tests to verify the functionality and data integrity of these operations.
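
GraphQL tests generally POST query and mutation documents to a single endpoint and check both data and errors. The sketch below assumes a hypothetical products query and createProduct mutation, not a schema the skill ships with.

  // Sketch of GraphQL query and mutation tests; the schema fields are assumed.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';

  const GRAPHQL_URL = process.env.GRAPHQL_URL ?? 'http://localhost:3000/graphql';

  async function gql(query: string, variables: Record<string, unknown> = {}) {
    const res = await fetch(GRAPHQL_URL, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ query, variables }),
    });
    assert.equal(res.status, 200);
    return res.json();
  }

  test('products query returns a list', async () => {
    const { data, errors } = await gql(`{ products { id name price } }`);
    assert.equal(errors, undefined);
    assert.ok(Array.isArray(data.products));
  });

  test('createProduct mutation persists the product', async () => {
    const { data, errors } = await gql(
      `mutation ($input: ProductInput!) { createProduct(input: $input) { id name } }`,
      { input: { name: 'Widget', price: 9.99 } },
    );
    assert.equal(errors, undefined);
    assert.equal(data.createProduct.name, 'Widget');
  });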

Best Practices

  • API Definition: Provide a clear and accurate API definition (e.g., OpenAPI/Swagger file) for optimal test generation.
  • Authentication Details: Specify the authentication method and credentials required to access the API endpoints.
  • Contextual Information: Provide context about the API's purpose and usage to guide the test generation process.
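
When an OpenAPI definition is supplied, response bodies can also be checked against the schemas it declares. The sketch below uses Ajv as one possible JSON Schema validator; the skill may use a different mechanism internally, and the schema shown is a hand-written stand-in for one extracted from a spec.

  // Sketch of contract validation against a schema derived from an OpenAPI definition.
  // Ajv is used here as an assumed choice of validator.
  import { test } from 'node:test';
  import assert from 'node:assert/strict';
  import Ajv from 'ajv';

  const ajv = new Ajv();

  // Stand-in for a schema taken from the spec's components section.
  const userSchema = {
    type: 'object',
    required: ['id', 'name', 'email'],
    properties: {
      id: { type: 'string' },
      name: { type: 'string' },
      email: { type: 'string' },
    },
  };

  test('GET /users/{id} response matches the contract', async () => {
    const res = await fetch(`${process.env.BASE_URL ?? 'http://localhost:3000'}/users/123`);
    assert.equal(res.status, 200);

    const validate = ajv.compile(userSchema);
    const body = await res.json();
    assert.ok(validate(body), JSON.stringify(validate.errors));
  });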

Integration

This skill can integrate with other plugins to retrieve API definitions from various sources, such as code repositories or API gateways. It can also be combined with reporting tools to generate detailed test reports and dashboards.
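
As a simple illustration of the reporting side, test outcomes can be serialized to a file that an external dashboard ingests; the result shape and file path below are hypothetical and not a format this skill prescribes.

  // Hypothetical sketch: persist a test-run summary for an external reporting dashboard.
  import { writeFile } from 'node:fs/promises';

  interface TestResult {
    name: string;
    passed: boolean;
    durationMs: number;
  }

  export async function writeReport(results: TestResult[], path = 'api-test-report.json') {
    const summary = {
      generatedAt: new Date().toISOString(),
      total: results.length,
      passed: results.filter(r => r.passed).length,
      results,
    };
    await writeFile(path, JSON.stringify(summary, null, 2));
  }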

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/api-test-automation

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/skills-batch-20251204-000554/plugins/testing/api-test-automation/skills/api-test-automation
ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
