testing-prpm-cli

pr-pm

About

This skill helps developers test PRPM CLI commands locally by building the package, setting up the environment, and executing workflows against a local registry. It includes contract testing to verify documented behavior matches implementation and comprehensive cross-format conversion testing. Use it for debugging CLI behavior, testing new features, or validating format converters before committing changes.

Documentation

Testing PRPM CLI

Overview

Test PRPM CLI commands against a local registry by building the package, setting the registry URL, and invoking the CLI directly. Includes comprehensive cross-format conversion testing patterns and contract testing to verify documented behavior matches implementation.

When to Use

  • Testing new CLI commands or features
  • Debugging CLI behavior
  • Verifying CLI changes before committing
  • Testing against local registry data
  • Validating cross-format conversions
  • Testing new format converters
  • Contract testing: verifying documented behavior matches implementation

Quick Reference

| Step | Command |
|------|---------|
| Build CLI | `npm run build --workspace=packages/cli` |
| Start local registry | `npm run dev --workspace=packages/registry` |
| Set local registry | `export PRPM_REGISTRY_URL=http://127.0.0.1:3111` |
| Run CLI directly | `node /Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js <command>` |
| Run via npm link | `prpm <command>` (after linking) |
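
The direct-invocation path is long and machine-specific. A shell variable keeps commands short; later examples in this doc assume it (adjust the path to your checkout):

# Point CLI at the built entrypoint used in the examples below
CLI="/Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js"
node $CLI --help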

Workflow

1. Build the CLI (Required First Step)

Always rebuild before testing to ensure you're testing current code:

npm run build --workspace=packages/cli

2. Start Local Registry

Start the registry server in the background:

npm run dev --workspace=packages/registry &
sleep 3
lsof -i :3111  # Verify it's running
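
A fixed sleep can race on slower machines; polling the same lsof check waits only as long as needed:

# Poll up to ~15s instead of a fixed sleep
for i in $(seq 1 30); do
  lsof -i :3111 >/dev/null 2>&1 && { echo "registry up"; break; }
  sleep 0.5
done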

3. Configure Local Registry

Point CLI to local registry instead of production:

export PRPM_REGISTRY_URL=http://127.0.0.1:3111

Verify it's set:

echo $PRPM_REGISTRY_URL

4. Run CLI Commands

Option A: Direct invocation (recommended for testing)

node /Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js search typescript
node /Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js install some-package

Option B: npm link (for interactive testing)

cd packages/cli
npm link
prpm search typescript
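
The link points at dist/, so rebuild after each code change for prpm to pick it up:

# Rebuild so the linked prpm binary runs the new output
npm run build --workspace=packages/cli

To remove the link when finished, npm rm -g prpm should work, assuming the package name in packages/cli/package.json is prpm.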

Comprehensive Conversion Testing

Use Self-Improving Skill to Find Test Packages

Before testing conversions, use the self-improving skill to download a diverse set of packages:

# Define the CLI path before first use (same path as the Quick Reference)
CLI="/Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js"

# Search for packages of different types
node $CLI search "claude" --limit 10
node $CLI search "cursor" --limit 10
node $CLI search "agent" --limit 10
node $CLI search "skill" --limit 10

Create Test Directory and Install Diverse Packages

mkdir -p /tmp/prpm-conversion-tests
cd /tmp/prpm-conversion-tests
export PRPM_REGISTRY_URL=http://127.0.0.1:3111
CLI="/Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js"

# Install packages of different subtypes
node $CLI install @prpm/agent-builder-skill --as claude      # Skill
node $CLI install @prpm/creating-cursor-rules --as cursor    # Cursor Rule
node $CLI install @camoneart/context-engineering-agent --as claude  # Agent
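
Before converting, confirm the installs produced the files the later commands glob over:

# These are the exact paths the conversion loops below expect
ls .claude/skills/*/SKILL.md   # Skill
ls .cursor/rules/*.mdc         # Cursor rule
ls .claude/agents/*.md         # Agent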

Supported Formats (CLI_SUPPORTED_FORMATS)

Test conversions across ALL supported formats:

| Format | Description |
|--------|-------------|
| cursor | Cursor IDE rules (.mdc) |
| claude | Claude Code (skills, agents, commands) |
| windsurf | Windsurf rules |
| continue | Continue rules |
| copilot | GitHub Copilot instructions |
| kiro | Kiro steering files |
| agents.md | Agents.md format |
| gemini | Gemini CLI extensions |
| ruler | Ruler format |
| zed | Zed editor extensions |
| opencode | OpenCode rules |
| aider | Aider conventions |
| trae | Trae rules |
| replit | Replit agent rules |
| zencoder | ZenCoder rules |
| droid | Factory/Droid rules |

Conversion Test Matrix

Run comprehensive conversion tests:

cd /tmp/prpm-conversion-tests
mkdir -p /tmp/conversions
CLI="/Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js"

# Claude Skill → all other formats (agents.md and ruler are omitted here; add them if the converter supports those targets)
for format in cursor windsurf kiro gemini zed continue copilot opencode aider trae replit zencoder droid; do
  node $CLI convert .claude/skills/*/SKILL.md --to $format -o /tmp/conversions/skill-to-$format.md 2>&1
done

# Cursor Rule → Multiple formats
for format in claude windsurf gemini zed; do
  node $CLI convert .cursor/rules/*.mdc --to $format -o /tmp/conversions/cursor-to-$format.md 2>&1
done

# Claude Agent → Multiple formats
for format in cursor gemini windsurf; do
  node $CLI convert .claude/agents/*.md --to $format -o /tmp/conversions/agent-to-$format.md 2>&1
done
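
Once the loops finish, a quick scan of the output directory flags failed or truncated conversions without reading every log line:

# Empty files usually indicate a failed or unsupported conversion
ls -l /tmp/conversions/
find /tmp/conversions -type f -empty | sed 's/^/EMPTY: /'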

Round-Trip Testing

Verify content preservation through round-trip conversions:

# Claude → Cursor → Claude
node $CLI convert .claude/skills/*/SKILL.md --to cursor -o /tmp/conversions/step1-cursor.mdc
node $CLI convert /tmp/conversions/step1-cursor.mdc --to claude -o /tmp/conversions/step2-claude.md

# Compare file sizes (expect some reduction but not dramatic)
wc -c .claude/skills/*/SKILL.md /tmp/conversions/step1-cursor.mdc /tmp/conversions/step2-claude.md
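
Size alone can hide structural loss; comparing heading outlines is a cheap structural check (assumes a single skill matches the glob):

# Compare markdown heading structure before and after the round trip
diff <(grep '^#' .claude/skills/*/SKILL.md) <(grep '^#' /tmp/conversions/step2-claude.md) \
  && echo "headings preserved" || echo "headings differ"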

Validation Checklist

For each conversion, verify (a scripted sketch of these checks follows the list):

  • Command succeeds - Exit code 0, no errors
  • Output file created - File exists at specified path
  • Content preserved - Core markdown structure intact
  • Format-specific frontmatter - Correct fields for target format
  • File size reasonable - Not truncated (compare with source)
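
A minimal bash sketch that automates the checklist for one conversion (assumes $CLI is set and a single skill matches the glob; frontmatter correctness still needs a manual look):

# check_conversion <source-file> <target-format>: scripted version of the checklist above
check_conversion() {
  local src="$1" fmt="$2"
  local out="/tmp/conversions/check-$fmt.md"
  if ! node "$CLI" convert "$src" --to "$fmt" -o "$out"; then
    echo "FAIL: convert command ($fmt)"; return 1
  fi
  if [ ! -s "$out" ]; then
    echo "FAIL: output missing or empty ($fmt)"; return 1
  fi
  echo "PASS: $fmt ($(wc -c < "$out") bytes; source $(wc -c < "$src") bytes)"
}

check_conversion .claude/skills/*/SKILL.md cursor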

Expected File Sizes

| Conversion Type | Expected Size Ratio |
|-----------------|---------------------|
| Claude → Cursor | ~95-100% |
| Claude → Windsurf | ~95-100% |
| Claude → Gemini | ~95-100% |
| Claude → OpenCode | May be smaller (format limits) |
| Claude → Droid | May be smaller (format limits) |
| Round-trip | ~50-70% (metadata loss expected) |
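
To compute the ratio for a single conversion (a sketch; picks the first installed skill as the source):

# Output size as a percentage of source size
src=$(ls .claude/skills/*/SKILL.md | head -1)
awk -v s="$(wc -c < "$src")" -v o="$(wc -c < /tmp/conversions/skill-to-cursor.md)" \
  'BEGIN { printf "%.0f%%\n", 100 * o / s }'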

Test Report Template

Document results in this format:

## Conversion Test Report

### Test Setup
- Registry: Local (port 3111)
- Test packages: [list installed packages]

### Results

| Source | Target | Status | Output Size |
|--------|--------|--------|-------------|
| Claude Skill | Gemini | Pass/Fail | X bytes |
| ... | ... | ... | ... |

### Observations
- [Note any issues, truncations, or unexpected behavior]

Contract Testing (CRITICAL)

Contract testing ensures documented behavior matches implementation. This is the most important type of testing for features with configurable behavior.

Why Contract Testing Matters

The eager/lazy loading bug is a case study: documentation described a precedence chain (CLI > file > package > default), but implementation only handled CLI flags. Tests passed because they only tested the CLI flag path. Contract testing would have caught this.

Contract Testing Principles

  1. Test EVERY documented behavior path, not just the happy path
  2. Test precedence chains completely - if docs say "A > B > C > default", test all 4 cases
  3. Test behavior WITHOUT flags - verify defaults and configuration-driven behavior
  4. Test WITH and WITHOUT explicit settings - don't assume flag presence

Contract Testing Checklist

For ANY feature with documented behavior:

  • Read the documentation first - what does it claim to do?
  • List all behavior paths - every if/else/default mentioned
  • Create test for each path - one test per documented behavior
  • Test WITHOUT user flags - verify package/config defaults work
  • Test precedence - verify higher-priority overrides lower
  • Verify error messages match - documented errors should occur

Example: Eager/Lazy Loading Contract Tests

Documentation states: "Precedence: CLI flag > package-level > default (lazy)"

Required tests:

# Setup test directory
mkdir -p /tmp/prpm-contract-tests
cd /tmp/prpm-contract-tests
export PRPM_REGISTRY_URL=http://127.0.0.1:3111
CLI="/Users/khaliqgant/Projects/prpm/app/packages/cli/dist/index.js"

# Test 1: CLI --eager flag (highest priority)
rm -rf .openskills AGENTS.md
node $CLI install @prpm/some-skill --as agents.md --eager
# VERIFY: AGENTS.md contains activation="eager" or priority="0"
grep -q 'activation="eager"\|priority="0"' AGENTS.md && echo "PASS: CLI --eager works" || echo "FAIL: CLI --eager"

# Test 2: CLI --lazy flag overrides package eager
rm -rf .openskills AGENTS.md
# Install a package that has eager:true in prpm.json with --lazy flag
node $CLI install @prpm/eager-package --as agents.md --lazy
# VERIFY: Should be lazy despite package setting
grep -q 'activation="lazy"\|priority="1"' AGENTS.md && echo "PASS: CLI --lazy overrides" || echo "FAIL: CLI --lazy"

# Test 3: Package-level eager (NO CLI flag) - THIS IS THE BUG THAT WAS MISSED
rm -rf .openskills AGENTS.md
# Install a package that has eager:true in its prpm.json WITHOUT --eager flag
node $CLI install @prpm/eager-package --as agents.md
# VERIFY: Should be eager based on package setting
grep -q 'activation="eager"\|priority="0"' AGENTS.md && echo "PASS: Package eager works" || echo "FAIL: Package eager"

# Test 4: Default (lazy) when no flags and no package setting
rm -rf .openskills AGENTS.md
node $CLI install @prpm/normal-skill --as agents.md
# VERIFY: Should be lazy by default
grep -q 'activation="lazy"\|priority="1"' AGENTS.md && echo "PASS: Default lazy works" || echo "FAIL: Default"

Contract Test Template

For any new feature, create tests in this format:

## Contract Tests for [Feature Name]

### Documentation Claims:
1. [Claim 1 from docs]
2. [Claim 2 from docs]
3. [Precedence/default behavior from docs]

### Test Cases:

| # | Documented Behavior | Test Setup | Expected Result | Pass/Fail |
|---|---------------------|------------|------------------|-----------|
| 1 | [Claim 1] | [How to test] | [What to verify] | |
| 2 | [Claim 2] | [How to test] | [What to verify] | |
| 3 | Default behavior | [No flags/config] | [Default result] | |

### Results:
- All tests must pass before feature is considered complete
- Document any deviations between docs and implementation

Anti-Patterns to Avoid

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Only testing with flags | Misses config/default paths | Test without flags first |
| Assuming documentation is implementation | Docs may describe intent, not reality | Verify each claim with a test |
| Testing happy path only | Misses precedence bugs | Test all documented paths |
| Skipping default behavior test | Defaults are often broken | Always test the "no config" case |
| Not reading docs before testing | Misses documented behaviors | Read docs, list claims, test each |

Common Mistakes

| Mistake | Symptom | Fix |
|---------|---------|-----|
| Forgetting to build | Old behavior; changes not reflected | Run `npm run build --workspace=packages/cli` |
| Missing registry env | Commands hit the production registry | Set `PRPM_REGISTRY_URL` before running |
| Stale npm link | Wrong version running | Re-run `npm link` after rebuilding |
| Local registry not running | Connection refused errors | Start registry: `npm run dev --workspace=packages/registry` |
| Testing a single format only | Misses format-specific bugs | Test ALL formats in CLI_SUPPORTED_FORMATS |
| No round-trip testing | Misses content-loss bugs | Always verify round-trip preservation |
| No contract testing | Docs ≠ implementation | Test EVERY documented behavior |
| Only testing with CLI flags | Misses config/default bugs | Test without flags to verify defaults |

One-Liner Setup

npm run build --workspace=packages/cli && export PRPM_REGISTRY_URL=http://127.0.0.1:3111
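
This assumes the local registry from step 2 is already running; if not, start it first:

npm run dev --workspace=packages/registry &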

Then run test commands from the repo root:

node packages/cli/dist/index.js --help

Unit Test Commands

Run converter unit tests:

# All converter tests
npm run test --workspace=packages/converters

# Specific test files
npm run test --workspace=packages/converters -- --testPathPattern="file-references"
npm run test --workspace=packages/converters -- --testPathPattern="security"
npm run test --workspace=packages/converters -- --testPathPattern="cross-format"
npm run test --workspace=packages/converters -- --testPathPattern="zed"
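
The --testPathPattern flag indicates a Jest runner, so standard Jest flags should pass through the same way (worth confirming against the workspace config):

# Watch mode while iterating on a converter (assumes Jest)
npm run test --workspace=packages/converters -- --watch

# Coverage report for the converter package (assumes Jest)
npm run test --workspace=packages/converters -- --coverage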

Quick Install

/plugin add https://github.com/pr-pm/prpm/tree/main/testing-prpm-cli

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

pr-pm/prpm
Path: .claude/skills/testing-prpm-cli
Tags: claude, claude-code, cursor, cursor-ai-edit, cursorrules, package-manager
