Test Framework Detector

FortiumPartners
Tags: testing, automation

About

This skill automatically detects test frameworks like Jest, pytest, RSpec, and xUnit by analyzing project configuration files and dependencies. It's designed for agents like deep-debugger to determine the correct framework for test generation or execution. The tool scans package manifests, configuration files, and directory structures to identify frameworks with a confidence score.

Documentation

Test Framework Detector

Purpose

Automatically identify which test framework(s) a project uses by examining:

  • Package manifests (package.json, requirements.txt, Gemfile, *.csproj)
  • Test configuration files (jest.config.js, pytest.ini, spec_helper.rb, xunit.runner.json)
  • Directory structure and naming conventions

Usage

This skill is invoked by agents (like deep-debugger) when they need to determine which test framework to use for test generation or execution.

Detection Script

Run the detection script with the project path:

node detect-framework.js /path/to/project

Output Format

The script returns a JSON object with detected frameworks:

{
  "detected": true,
  "frameworks": [
    {
      "name": "jest",
      "confidence": 0.95,
      "version": "29.7.0",
      "configFiles": ["jest.config.js", "package.json"],
      "testDirectory": "tests/",
      "testPattern": "**/*.test.js"
    }
  ],
  "primary": "jest"
}

Detection Patterns

The skill uses pattern-based detection defined in framework-patterns.json. Each framework has:

  • Package indicators: Dependencies that suggest framework presence
  • Config files: Framework-specific configuration files
  • Test file patterns: Common test file naming conventions
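For illustration, an entry in framework-patterns.json might look like the following. This is a hypothetical sketch assembled from the indicators listed in this document; the actual schema and field names may differ.

{
  "jest": {
    "packageIndicators": ["jest", "@types/jest", "ts-jest"],
    "configFiles": ["jest.config.js", "jest.config.ts"],
    "testFilePatterns": ["*.test.js", "*.spec.js"],
    "testDirectories": ["tests/"]
  }
}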

Supported Frameworks

  1. Jest (JavaScript/TypeScript)

    • Config: jest.config.js, jest.config.ts, package.json (jest section)
    • Dependencies: jest, @types/jest, ts-jest
    • Patterns: *.test.js, *.spec.js, tests/
  2. pytest (Python)

    • Config: pytest.ini, pyproject.toml, setup.cfg, tox.ini
    • Dependencies: pytest in requirements.txt or pyproject.toml
    • Patterns: test_*.py, *_test.py, tests/
  3. RSpec (Ruby)

    • Config: .rspec, spec/spec_helper.rb
    • Dependencies: rspec in Gemfile
    • Patterns: *_spec.rb, spec/*
  4. xUnit (C#/.NET)

    • Config: xunit.runner.json, *.csproj
    • Dependencies: xunit, xunit.runner.visualstudio in *.csproj
    • Patterns: *Tests.cs, *Test.cs, Tests/

Confidence Scoring

Confidence scores (0.0-1.0) are calculated based on:

  • Config file presence: +0.4
  • Package dependency found: +0.3
  • Test directory exists: +0.2
  • Test files found: +0.1

Multiple frameworks may be detected (e.g., Jest + pytest in monorepos).
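As a rough illustration, the additive scoring above might be implemented along these lines. This is a sketch only; the function and field names are assumptions, not taken from detect-framework.js.

// Illustrative sketch -- names are assumptions, not the actual implementation.
function scoreFramework(evidence) {
  let confidence = 0;
  if (evidence.configFileFound) confidence += 0.4;      // e.g. jest.config.js present
  if (evidence.dependencyFound) confidence += 0.3;      // e.g. "jest" listed in package.json
  if (evidence.testDirectoryExists) confidence += 0.2;  // e.g. a tests/ directory exists
  if (evidence.testFilesFound) confidence += 0.1;       // e.g. files matching *.test.js
  return confidence;                                    // 1.0 when all four signals are present
}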

Example Invocations

Detect framework in current directory:

node skills/test-detector/detect-framework.js .

Detect framework with verbose output:

DEBUG=true node skills/test-detector/detect-framework.js /path/to/project

Integration with Agents

Agents should invoke this skill before test generation:

1. Invoke test-detector skill with project path
2. Parse JSON output to get primary framework
3. Invoke appropriate test framework skill (jest-test, pytest-test, etc.)
4. Generate or execute tests using framework-specific patterns
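For example, steps 1-3 might look like this in a Node.js agent. This is a minimal sketch that assumes the detection script is run as a child process; the invocation mechanism agents actually use may differ.

// Minimal sketch -- the invocation mechanism is an assumption, not this skill's defined API.
const { execFileSync } = require('node:child_process');

function detectPrimaryFramework(projectPath) {
  const output = execFileSync(
    'node',
    ['skills/test-detector/detect-framework.js', projectPath],
    { encoding: 'utf8' }
  );
  const result = JSON.parse(output);
  if (!result.detected) {
    // No framework found: fall back to specifying the framework manually (see Error Handling).
    throw new Error(result.message);
  }
  return result.primary; // e.g. "jest" -> dispatch to the matching framework skill
}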

Error Handling

If no framework is detected:

{
  "detected": false,
  "frameworks": [],
  "primary": null,
  "message": "No test framework detected. Please specify framework manually."
}

Quick Install

/plugin add https://github.com/FortiumPartners/ai-mesh/tree/main/test-detector

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

FortiumPartners/ai-mesh
Path: skills/test-detector

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

Algorithmic Art Generation

Meta

This skill helps developers create algorithmic art using p5.js, focusing on generative art, computational aesthetics, and interactive visualizations. It automatically activates for topics like "generative art" or "p5.js visualization" and guides you through creating unique algorithms with features like seeded randomness, flow fields, and particle systems. Use it when you need to build reproducible, code-driven artistic patterns.

View skill

webapp-testing

Testing

This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks but run scripts directly rather than reading their source code to avoid context pollution.

View skill