qwen_cli_refactor

Foundup
About

This skill refactors monolithic CLI applications by extracting command modules from large main() functions using Qwen 1.5B for strategic analysis. It's designed for CLI files exceeding 1,000 lines and reduces main() size by over 70% while preserving functionality. The process includes Gemma validation to ensure pattern fidelity above 90%.

Documentation

Qwen CLI Refactoring Skill

Agent: Qwen 1.5B (strategic analysis + code extraction)
Validation: Gemma 270M (pattern fidelity check)
Token Budget: 1,300 tokens (800 extraction + 400 refactoring + 100 validation)


Skill Purpose

Refactor monolithic CLI files (>1,000 lines) by extracting logical command modules while preserving all functionality. Uses Qwen for strategic analysis and module extraction, with Gemma validation for pattern fidelity.

Trigger Source: Manual invocation by 0102 when CLI files exceed WSP 49 limits

Success Criteria:

  • Reduce main() function size by >70%
  • Extract 5+ independent command modules
  • Zero regressions (all flags work identically)
  • Pattern fidelity >90% (Gemma validation)

Input Context

{
  "file_path": "path/to/cli.py",
  "current_lines": 1470,
  "main_function_lines": 1144,
  "target_reduction_percent": 70,
  "preserve_flags": ["--search", "--index", "--all-67-flags"],
  "output_directory": "path/to/cli/commands/"
}
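As a rough illustration only, the input context could be checked before the run with a small dataclass; the field names mirror the JSON above, while the class name and the specific sanity checks are assumptions rather than part of the skill definition.

# context_check.py - hedged sketch; mirrors the input-context JSON above
from dataclasses import dataclass
from typing import List

@dataclass
class RefactorContext:
    file_path: str
    current_lines: int
    main_function_lines: int
    target_reduction_percent: int
    preserve_flags: List[str]
    output_directory: str

    def validate(self) -> None:
        # The 1,000-line trigger comes from the skill purpose statement.
        if self.current_lines <= 1000:
            raise ValueError("skill targets CLI files exceeding 1,000 lines")
        if not self.preserve_flags:
            raise ValueError("preserve_flags must list every flag to keep")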

Micro Chain-of-Thought Steps

Step 1: Analyze CLI Structure (200 tokens)

Qwen Analysis Task: Read cli.py and identify:

  1. Command-line argument groups (search, index, holodae, etc.)
  2. Logical sections in main() function
  3. Shared dependencies between sections
  4. Natural module boundaries

Output:

{
  "total_lines": 1470,
  "main_function_lines": 1144,
  "argument_groups": [
    {"name": "search", "flags": ["--search", "--limit"], "lines": [601, 750]},
    {"name": "index", "flags": ["--index-all", "--index-code"], "lines": [751, 900]},
    {"name": "holodae", "flags": ["--start-holodae", "--stop-holodae"], "lines": [901, 1050]},
    {"name": "module", "flags": ["--link-modules", "--query-modules"], "lines": [1051, 1200]},
    {"name": "codeindex", "flags": ["--code-index-report"], "lines": [1201, 1350]}
  ],
  "shared_dependencies": ["throttler", "reward_events", "args"],
  "extraction_priority": ["search", "index", "holodae", "module", "codeindex"]
}
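A hedged sketch of how the flag inventory behind this analysis could be approximated mechanically, assuming the CLI registers its flags through argparse add_argument calls; the group-by-first-word heuristic is an assumption, and the real boundary analysis remains Qwen's job.

# group_flags.py - sketch only; assumes argparse add_argument("--flag", ...) calls
import ast
from collections import defaultdict

def find_flag_groups(path: str) -> dict:
    """Group --flags by their leading word as a rough proxy for command groups."""
    tree = ast.parse(open(path, encoding="utf-8").read())
    groups = defaultdict(list)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)
                and node.func.attr == "add_argument"):
            for arg in node.args:
                if isinstance(arg, ast.Constant) and str(arg.value).startswith("--"):
                    groups[str(arg.value).lstrip("-").split("-")[0]].append(arg.value)
    return dict(groups)

# e.g. {"search": ["--search"], "index": ["--index-all", "--index-code"], ...}
print(find_flag_groups("cli.py"))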

Step 2: Extract Command Modules (400 tokens)

Qwen Extraction Task: For each command group:

  1. Extract code from main() function
  2. Create commands/{name}.py file
  3. Convert to class-based command pattern
  4. Preserve all flag handling logic

Template Pattern:

# commands/search.py
from typing import Any, Dict
from ..core import HoloIndex

class SearchCommand:
    def __init__(self, holo_index: HoloIndex):
        self.holo_index = holo_index

    def execute(self, args, throttler, add_reward_event) -> Dict[str, Any]:
        \"\"\"Execute search command with preserved flag logic\"\"\"
        # [EXTRACTED CODE FROM MAIN() LINES 601-750]
        results = self.holo_index.search(args.search, limit=args.limit)
        return {"results": results, "success": True}

Output: 5 new command module files created
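Each extracted module can then be exercised in isolation before main() is touched; the smoke test below assumes the SearchCommand template above, and FakeHoloIndex plus the argparse Namespace are illustrative stand-ins.

# test_search_command.py - illustrative smoke test; FakeHoloIndex is a stand-in
from argparse import Namespace
from commands.search import SearchCommand

class FakeHoloIndex:
    def search(self, query, limit=10):
        return [{"query": query, "limit": limit}]

def test_search_flag_logic_preserved():
    cmd = SearchCommand(FakeHoloIndex())
    args = Namespace(search="WSP 49", limit=5)
    result = cmd.execute(args, throttler=None, add_reward_event=lambda *a, **kw: None)
    assert result["success"] is True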


Step 3: Refactor main() Function (200 tokens)

Qwen Refactoring Task:

  1. Remove extracted code from main()
  2. Add command routing logic
  3. Instantiate command classes
  4. Delegate execution to appropriate command

New main() Structure:

def main() -> None:
    args = parser.parse_args()
    throttler = AgenticOutputThrottler()

    # Initialize HoloIndex
    holo_index = HoloIndex(...)

    # Command routing
    if args.search:
        from .commands.search import SearchCommand
        cmd = SearchCommand(holo_index)
        result = cmd.execute(args, throttler, add_reward_event)
    elif args.index or args.index_all:
        from .commands.index import IndexCommand
        cmd = IndexCommand(holo_index)
        result = cmd.execute(args, throttler, add_reward_event)
    # ... etc for other commands

    # Render output (preserved logic)
    render_response(throttler, result, args)

Output: Refactored main.py (reduced from 1,144 → ~300 lines)
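If the if/elif chain grows with every new command, the same routing can also be expressed as a flag-to-command table. This is an optional variant sketched here, not the skill's prescribed output; only SearchCommand and IndexCommand are named in the examples above, so any further entries are assumptions.

# Optional variant of the routing above: a dispatch table keyed by argparse attribute
from .commands.search import SearchCommand
from .commands.index import IndexCommand

COMMAND_TABLE = {
    "search": SearchCommand,
    "index_all": IndexCommand,
    # ... one entry per extracted flag group
}

def route(args, holo_index, throttler, add_reward_event):
    for attr, command_cls in COMMAND_TABLE.items():
        if getattr(args, attr, None):
            return command_cls(holo_index).execute(args, throttler, add_reward_event)
    return None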


Step 4: Gemma Pattern Fidelity Validation (100 tokens)

Gemma Validation Task: Compare original vs refactored:

  1. All 67 flags still recognized
  2. Execution flow unchanged
  3. Output format identical
  4. No missing imports

Validation Checks:

# extract_flags is a helper assumed by this check (a sketch follows the output below);
# pattern_fidelity is Gemma's similarity score for the original vs refactored code.
original_flags = extract_flags("cli.py")
refactored_flags = extract_flags("cli/main.py") + extract_flags("cli/commands/*.py")

assert set(original_flags) == set(refactored_flags), "Missing flags detected"
assert pattern_fidelity >= 0.90, "Pattern fidelity below threshold"

Output:

{
  "pattern_fidelity": 0.95,
  "flags_preserved": 67,
  "missing_flags": [],
  "regressions_detected": 0,
  "validation_passed": true
}
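A minimal sketch of the extract_flags helper assumed in the checks above: a regex over add_argument calls is enough for a flag inventory, and glob expands the commands/*.py pattern. The function name simply mirrors the pseudocode; it is not an existing API.

# extract_flags sketch - regex over add_argument(...) calls, glob expands wildcards
import glob
import re

FLAG_RE = re.compile(r"add_argument\(\s*['\"](--[\w-]+)['\"]")

def extract_flags(pattern: str) -> list:
    flags = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as fh:
            flags.extend(FLAG_RE.findall(fh.read()))
    return flags

missing = set(extract_flags("cli.py")) - set(
    extract_flags("cli/main.py") + extract_flags("cli/commands/*.py"))
assert not missing, f"Missing flags detected: {sorted(missing)}"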

Step 5: Generate Migration Report (100 tokens)

Report Contents:

  1. Files created (5 command modules)
  2. main() reduction (1,144 → 300 lines, 74% reduction)
  3. Validation results (fidelity: 95%)
  4. Token cost (actual vs estimated)
  5. Next steps (testing, documentation)

Output:

# CLI Refactoring Report

**Date**: 2025-10-25
**File**: holo_index/cli.py
**Status**: COMPLETE ✅

## Changes Summary
- main() reduced: 1,144 → 300 lines (74% reduction)
- Command modules created: 5
- Total lines: 1,470 → 1,350 (distributed across 6 files)
- Pattern fidelity: 95% (Gemma validated)

## Files Created
1. cli/commands/search.py (200 lines)
2. cli/commands/index.py (180 lines)
3. cli/commands/holodae.py (190 lines)
4. cli/commands/module.py (210 lines)
5. cli/commands/codeindex.py (170 lines)

## Validation
- ✅ All 67 flags preserved
- ✅ Zero regressions detected
- ✅ Pattern fidelity: 95%
- ✅ Imports resolved

## Token Cost
- Estimated: 1,300 tokens
- Actual: 1,150 tokens (12% under budget)

## Next Steps
1. Run integration tests
2. Update documentation
3. Commit with 0102 approval

Execution Constraints

Authorized Actions (Autonomous)

  • ✅ Create new files in cli/commands/ directory
  • ✅ Extract code from main() function
  • ✅ Update imports in main.py
  • ✅ Run Gemma validation checks

Requires 0102 Approval

  • ❌ Modifying flag names
  • ❌ Removing any flags
  • ❌ Changing command behavior
  • ❌ Committing changes to git

Safety Guardrails

  1. Backup: Create cli.py.backup before modification
  2. Validation: Gemma fidelity must be ≥90%
  3. Rollback: Restore backup if validation fails
  4. Reporting: Report progress after each extraction
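A minimal sketch of the backup/rollback guardrail, assuming a plain file copy; the 0.90 threshold mirrors the fidelity rule above, and the callable names are placeholders.

# Backup -> refactor -> validate -> rollback; shutil copy is an assumed mechanism
import shutil

def guarded_refactor(cli_path, refactor, validate) -> bool:
    backup_path = cli_path + ".backup"
    shutil.copy2(cli_path, backup_path)        # 1. Backup before modification
    refactor(cli_path)                         # Qwen extraction + main() rewrite
    fidelity = validate(cli_path)              # 2. Gemma fidelity score (0.0-1.0)
    if fidelity < 0.90:
        shutil.copy2(backup_path, cli_path)    # 3. Rollback: restore backup
        return False
    return True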

Pattern Memory Storage

After successful execution, store refactoring pattern:

{
  "pattern_name": "cli_refactoring",
  "original_size": 1470,
  "refactored_size": 1350,
  "main_reduction": 0.74,
  "modules_extracted": 5,
  "token_cost": 1150,
  "fidelity": 0.95,
  "success": true,
  "learned": "Extract commands by flag groups, preserve shared state via dependency injection"
}
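How this record is persisted is defined by the WRE pattern memory, not here; as one possibility, it could be appended to a JSON-lines file such as the hypothetical path below.

# Illustrative persistence only; the memory/ path is a hypothetical example
import json
from pathlib import Path

def store_pattern(record: dict,
                  memory_file: str = "memory/cli_refactoring_patterns.jsonl") -> None:
    path = Path(memory_file)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")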

Example Invocation

Via WRE Master Orchestrator:

from modules.infrastructure.wre_core.wre_master_orchestrator import WREMasterOrchestrator

orchestrator = WREMasterOrchestrator()

result = orchestrator.execute_skill(
    skill_name="qwen_cli_refactor",
    agent="qwen",
    input_context={
        "file_path": "holo_index/cli.py",
        "current_lines": 1470,
        "main_function_lines": 1144,
        "target_reduction_percent": 70,
        "output_directory": "holo_index/cli/commands/"
    }
)

print(f"Refactoring {'succeeded' if result['success'] else 'failed'}")
print(f"Pattern fidelity: {result['pattern_fidelity']}")
print(f"Token cost: {result['token_cost']}")

WSP Compliance

References:

  • WSP 49: Module Structure (file size limits)
  • WSP 72: Block Independence (command isolation)
  • WSP 50: Pre-Action Verification (backup before modification)
  • WSP 96: WRE Skills Protocol (this skill definition)

Success Metrics

| Metric | Target | Actual (Expected) |
|--------|--------|-------------------|
| main() reduction | >70% | 74% |
| Modules extracted | 5 | 5 |
| Pattern fidelity | >90% | 95% |
| Token cost | <1,500 | 1,150 |
| Regressions | 0 | 0 |

Next Evolution: After 10+ successful executions, promote from prototype → production

Quick Install

/plugin add https://github.com/Foundup/Foundups-Agent/tree/main/qwen_cli_refactor

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

Foundup/Foundups-Agent
Path: .claude/skills/qwen_cli_refactor
Tags: bitcoin, blockchain-technology, daes, dao, foundups, partifact

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill