
pytest Test Framework

FortiumPartners
Metatesting

About

This Claude Skill generates and executes pytest tests for Python projects. It supports key pytest features such as fixtures, parametrization, and mocking. Use it to quickly create test files from templates and run tests with structured JSON output.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/FortiumPartners/ai-mesh
Git Clone (Alternative)
git clone https://github.com/FortiumPartners/ai-mesh.git ~/.claude/skills/pytest-test

Copy and paste this command in Claude Code to install this skill

Documentation

pytest Test Framework

Purpose

Provide pytest test execution and generation for Python projects, supporting:

  • Test file generation from templates
  • Test execution with structured output
  • Fixtures and parametrized tests
  • Mock and monkeypatch support
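The features above look like an ordinary pytest file. As a minimal sketch (the `Calculator` class and test names are hypothetical illustrations, not output of the skill):

```python
import pytest


class Calculator:
    """Minimal class under test (hypothetical example)."""

    def divide(self, a, b):
        return a / b


@pytest.fixture
def calc():
    """Fixture: provides a fresh Calculator to each test."""
    return Calculator()


@pytest.mark.parametrize("a,b,expected", [(6, 3, 2), (9, 3, 3)])
def test_divide(calc, a, b, expected):
    # Parametrization: one test function, several input/output cases
    assert calc.divide(a, b) == expected


def test_divide_by_zero(calc):
    # pytest.raises asserts that the expected exception is thrown
    with pytest.raises(ZeroDivisionError):
        calc.divide(1, 0)


def test_divide_mocked(calc, monkeypatch):
    # monkeypatch: replace divide for the duration of this test only
    monkeypatch.setattr(Calculator, "divide", lambda self, a, b: 42)
    assert calc.divide(1, 0) == 42
```

Running `pytest` against such a file is what the skill's generated tests boil down to.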

Usage

Generate Test File

python generate-test.py \
  --source src/calculator.py \
  --output tests/test_calculator.py \
  --type unit \
  --description "Calculator fails to handle division by zero"

Execute Tests

python run-test.py \
  --file tests/test_calculator.py \
  --config pytest.ini

Output Format

Test Generation

{
  "success": true,
  "testFile": "tests/test_calculator.py",
  "testCount": 3,
  "template": "unit-test"
}

Test Execution

{
  "success": false,
  "passed": 2,
  "failed": 1,
  "total": 3,
  "duration": 0.234,
  "failures": [
    {
      "test": "test_divide_by_zero",
      "error": "AssertionError: Expected ZeroDivisionError",
      "file": "tests/test_calculator.py",
      "line": 15
    }
  ]
}
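A caller can consume this report programmatically. The sketch below parses the JSON shape shown above (field names taken from this document; `summarize` is a hypothetical helper, not part of the skill):

```python
import json

# Sample report, copied from the execution output example above
REPORT = """
{
  "success": false,
  "passed": 2,
  "failed": 1,
  "total": 3,
  "duration": 0.234,
  "failures": [
    {
      "test": "test_divide_by_zero",
      "error": "AssertionError: Expected ZeroDivisionError",
      "file": "tests/test_calculator.py",
      "line": 15
    }
  ]
}
"""


def summarize(report_json):
    """Turn a run-test.py-style report into one-line summaries."""
    report = json.loads(report_json)
    lines = [f"{report['passed']}/{report['total']} passed in {report['duration']}s"]
    for f in report["failures"]:
        lines.append(f"FAIL {f['test']} ({f['file']}:{f['line']}): {f['error']}")
    return lines


for line in summarize(REPORT):
    print(line)
```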

Integration

Used by deep-debugger for Python project testing:

  1. Invoke test-detector to identify pytest
  2. Invoke generate-test.py to create failing test
  3. Invoke run-test.py to validate test fails
  4. Re-run after fix to verify passing
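The red-green cycle in steps 2 through 4 can be expressed as a small driver. In this sketch the test runner is injected as a callable returning an execution report, so how `run-test.py` is actually invoked (e.g. via `subprocess`) remains an assumption:

```python
def verify_fix(run_tests, apply_fix):
    """TDD-style check: the new test must fail before the fix and pass after.

    run_tests: callable returning a report dict shaped like run-test.py output.
    apply_fix: callable that applies the candidate fix.
    """
    before = run_tests()
    if before["success"]:
        raise RuntimeError("expected the new test to fail before the fix")
    apply_fix()
    after = run_tests()
    return after["success"]


# Usage with stand-in callables (no real scripts are invoked here):
state = {"fixed": False}

def fake_run():
    ok = state["fixed"]
    return {"success": ok, "passed": 3 if ok else 2,
            "failed": 0 if ok else 1, "total": 3}

def fake_fix():
    state["fixed"] = True

print(verify_fix(fake_run, fake_fix))  # True when the fix makes tests pass
```

Failing before the fix guards against a test that passes vacuously and would "verify" nothing.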

GitHub Repository

FortiumPartners/ai-mesh
Path: skills/pytest-test

Related Skills

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.

View skill

webapp-testing

Testing

This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks but run scripts directly rather than reading their source code to avoid context pollution.

View skill

finishing-a-development-branch

Testing

This skill helps developers complete finished work by verifying tests pass and then presenting structured integration options. It guides the workflow for merging, creating PRs, or cleaning up branches after implementation is done. Use it when your code is ready and tested to systematically finalize the development process.

View skill