
automating-api-testing

jeremylongshore
Updated Today

About

This Claude Skill automates API testing by generating requests and validating responses for both REST and GraphQL endpoints. It's designed for developers to test API contracts, validate OpenAPI specifications, and ensure endpoint reliability. Trigger it with phrases like "test the API" or "generate API tests" to streamline your testing workflow.

Quick Install

Claude Code

Plugin command (recommended):

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus

Git clone (alternative):

git clone https://github.com/jeremylongshore/claude-code-plugins-plus.git ~/.claude/skills/automating-api-testing

Copy and paste one of these commands into Claude Code to install the skill.

Documentation

Prerequisites

Before using this skill, ensure you have:

  • API definition files (OpenAPI/Swagger, GraphQL schema, or endpoint documentation)
  • Base URL for the API service (development, staging, or test environment)
  • Authentication credentials or API keys if endpoints require authorization
  • Testing framework installed (Jest, Mocha, Supertest, or equivalent)
  • Network connectivity to the target API service

Instructions

Step 1: Analyze API Definition

Examine the API structure and endpoints:

  1. Use Read tool to load OpenAPI/Swagger specifications from {baseDir}/api-specs/
  2. Identify all available endpoints, HTTP methods, and request/response schemas
  3. Document authentication requirements and rate limiting constraints
  4. Note any deprecated endpoints or breaking changes
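
As a sketch of this step, a few lines of JavaScript can enumerate the endpoints declared in an OpenAPI document and flag deprecated operations. The inline spec below is a hypothetical stand-in for a file loaded from {baseDir}/api-specs/:

```javascript
// Hypothetical OpenAPI fragment standing in for a real spec file.
const spec = {
  paths: {
    "/users": {
      get: { summary: "List users" },
      post: { summary: "Create user" },
    },
    "/users/{id}": {
      get: { summary: "Fetch one user", deprecated: true },
    },
  },
};

// Walk every path and collect each declared HTTP method,
// carrying the deprecated flag through for later reporting.
function listEndpoints(openapi) {
  const methods = ["get", "post", "put", "patch", "delete"];
  const endpoints = [];
  for (const [path, ops] of Object.entries(openapi.paths || {})) {
    for (const m of methods) {
      if (ops[m]) {
        endpoints.push({ method: m.toUpperCase(), path, deprecated: !!ops[m].deprecated });
      }
    }
  }
  return endpoints;
}

console.log(listEndpoints(spec));
```

The resulting list is a natural input for the test-generation step: one test group per path, one case per method.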

Step 2: Generate Test Cases

Create comprehensive test coverage:

  1. Generate CRUD operation tests (Create, Read, Update, Delete)
  2. Add authentication flow tests (login, token refresh, logout)
  3. Include edge case tests (invalid inputs, boundary conditions, malformed requests)
  4. Create contract validation tests against OpenAPI schemas
  5. Add performance tests for critical endpoints
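
Edge-case tests (item 3) can be generated mechanically from a field's schema constraints. A minimal sketch, using JSON Schema keyword names (`minLength`, `maxLength`, `minimum`) as the assumed constraint format:

```javascript
// Derive boundary-condition inputs from JSON Schema-style field constraints.
// Each case records whether the API is expected to accept it.
function edgeCases(field) {
  const cases = [];
  if (field.type === "string") {
    cases.push({ value: "", valid: (field.minLength || 0) === 0 });
    if (field.maxLength !== undefined) {
      cases.push({ value: "x".repeat(field.maxLength), valid: true });      // at the limit
      cases.push({ value: "x".repeat(field.maxLength + 1), valid: false }); // just over it
    }
  }
  if (field.type === "integer" && field.minimum !== undefined) {
    cases.push({ value: field.minimum, valid: true });      // at the minimum
    cases.push({ value: field.minimum - 1, valid: false }); // just under it
  }
  return cases;
}

console.log(edgeCases({ type: "string", maxLength: 64 }));
```

Cases marked `valid: false` should produce 4xx responses; asserting that turns boundary conditions into concrete test expectations.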

Step 3: Execute Test Suite

Run automated API tests:

  1. Use Bash(test:api-*) to execute test framework with generated test files
  2. Validate HTTP status codes match expected responses (200, 201, 400, 401, 404, 500)
  3. Verify response headers (Content-Type, Cache-Control, CORS headers)
  4. Validate response body structure against schemas using JSON Schema validation
  5. Test authentication token expiration and renewal flows
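
Items 2–4 above amount to checking a captured response against an expectation object. A sketch, assuming a hypothetical `{ status, headers, body }` response shape such as Supertest returns:

```javascript
// Collect every mismatch between a response and its expectation,
// rather than failing on the first one, so reports stay informative.
function validateResponse(res, expected) {
  const errors = [];
  if (res.status !== expected.status) {
    errors.push(`status ${res.status}, expected ${expected.status}`);
  }
  const contentType = res.headers["content-type"] || "";
  if (expected.contentType && !contentType.includes(expected.contentType)) {
    errors.push(`content-type "${contentType}" does not include "${expected.contentType}"`);
  }
  for (const field of expected.requiredFields || []) {
    if (!(field in res.body)) errors.push(`response body missing "${field}"`);
  }
  return errors;
}

const res = {
  status: 200,
  headers: { "content-type": "application/json; charset=utf-8" },
  body: { id: 1 },
};
console.log(validateResponse(res, {
  status: 200,
  contentType: "application/json",
  requiredFields: ["id", "name"],
}));
```

For full schema checks, the `requiredFields` shortcut would be replaced by a JSON Schema validator such as Ajv (see Resources).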

Step 4: Generate Test Report

Document results in {baseDir}/test-reports/api/:

  • Test execution summary with pass/fail counts
  • Coverage metrics by endpoint and HTTP method
  • Failed test details with request/response payloads
  • Performance benchmarks (response times, throughput)
  • Contract violation details if schema mismatches detected
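
The pass/fail summary at the top of the report can be folded out of raw results in a few lines; a sketch with an illustrative result shape:

```javascript
// Reduce raw test results into the summary section of the report.
// Failed tests keep their detail string so payloads can be attached later.
function summarize(results) {
  const passed = results.filter(r => r.pass).length;
  return {
    total: results.length,
    passed,
    failed: results.length - passed,
    passRate: results.length ? Math.round((passed / results.length) * 100) : 0,
    failures: results.filter(r => !r.pass).map(r => ({ name: r.name, detail: r.detail })),
  };
}

const summary = summarize([
  { name: "GET /users returns 200", pass: true },
  { name: "POST /users validates body", pass: false, detail: "expected 400, got 500" },
]);
console.log(summary);
```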

Output

The skill generates structured API test artifacts:

Test Suite Files

Generated test files organized by resource:

  • {baseDir}/tests/api/users.test.js - User endpoint tests
  • {baseDir}/tests/api/products.test.js - Product endpoint tests
  • {baseDir}/tests/api/auth.test.js - Authentication flow tests

Test Coverage Report

  • Endpoint coverage percentage (target: 100% for critical paths)
  • HTTP method coverage per endpoint (GET, POST, PUT, PATCH, DELETE)
  • Authentication scenario coverage (authenticated vs. unauthenticated)
  • Error condition coverage (4xx and 5xx responses)

Contract Validation Results

  • OpenAPI schema compliance status for each endpoint
  • Breaking changes detected between specification versions
  • Undocumented endpoints or parameters found in implementation
  • Response schema violations with diff details

Performance Metrics

  • Average response time per endpoint
  • 95th and 99th percentile latencies
  • Requests per second throughput measurements
  • Timeout occurrences and slow endpoint identification
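
The percentile figures above can be computed with a simple nearest-rank calculation over the recorded response times; a sketch:

```javascript
// Nearest-rank percentile over a list of response times in milliseconds:
// sort ascending, take the value at rank ceil(p/100 * n).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const times = [12, 15, 11, 90, 14, 13, 250, 16, 12, 18];
console.log(percentile(times, 50)); // 14
console.log(percentile(times, 95)); // 250
```

Note how a single slow outlier (250 ms) dominates the tail percentiles while leaving the median untouched, which is exactly why p95/p99 are reported alongside the average.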

Error Handling

Common issues and solutions:

Connection Refused

  • Error: Cannot connect to API service at specified base URL
  • Solution: Verify service is running using Bash(test:api-healthcheck); check network connectivity and firewall rules

Authentication Failures

  • Error: 401 Unauthorized or 403 Forbidden on protected endpoints
  • Solution: Verify API keys are valid and not expired; ensure bearer token format is correct; check scope permissions

Schema Validation Errors

  • Error: Response does not match OpenAPI schema definition
  • Solution: Update OpenAPI specification to match actual API behavior; file bug if API implementation is incorrect

Timeout Errors

  • Error: Request exceeded configured timeout threshold
  • Solution: Increase timeout for slow endpoints; investigate performance issues on API server; add retry logic for transient failures
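
The retry logic suggested above can be sketched as an exponential-backoff wrapper around any async request function (names and defaults here are illustrative, not part of the skill):

```javascript
// Retry a request-like async function with exponential backoff:
// 100ms, 200ms, 400ms, ... between attempts, rethrowing after the last retry.
async function withRetry(request, { retries = 3, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await request();
    } catch (err) {
      if (attempt >= retries) throw err; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Simulated transient failure: the call succeeds on the third attempt.
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("timeout (transient)");
  return "ok";
}, { baseDelayMs: 10 }).then(result => console.log(result, "after", calls, "attempts"));
```

Reserve retries for transient failures (timeouts, 502/503); retrying deterministic 4xx errors only hides real bugs and slows the suite down.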

Resources

API Testing Frameworks

  • Supertest for Node.js HTTP assertion testing
  • REST-assured for Java API testing
  • Postman/Newman for collection-based API testing
  • Pact for contract testing and consumer-driven contracts

Validation Libraries

  • Ajv for JSON Schema validation
  • OpenAPI Schema Validator for spec compliance
  • Joi for Node.js schema validation
  • GraphQL Schema validation tools

Best Practices

  • Test against non-production environments to avoid data corruption
  • Use test data factories to create consistent test fixtures
  • Implement proper test isolation with database cleanup between tests
  • Version control test suites alongside API specifications
  • Run tests in CI/CD pipeline for continuous validation

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: plugins/testing/api-test-automation/skills/api-test-automation
Tags: ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.


Algorithmic Art Generation

Meta

This skill helps developers create algorithmic art using p5.js, focusing on generative art, computational aesthetics, and interactive visualizations. It automatically activates for topics like "generative art" or "p5.js visualization" and guides you through creating unique algorithms with features like seeded randomness, flow fields, and particle systems. Use it when you need to build reproducible, code-driven artistic patterns.
