evaluating-code-models
About
This skill runs standardized code-generation benchmarks such as HumanEval and MBPP to evaluate model performance using pass@k metrics. It is the industry-standard tool from the BigCode project for comparing coding ability, testing multi-language support, and measuring code quality. Use it when benchmarking models, comparing their capabilities, or reproducing HuggingFace leaderboard evaluations.
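For context, pass@k is usually reported with the unbiased estimator from the Codex paper: generate n samples per problem, count the c that pass the unit tests, and estimate the chance that at least one of k drawn samples passes. A minimal sketch of that calculation (illustrative only, not the skill's own code; the sample counts below are made up):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n generated samples per task, c of them correct."""
    if n - c < k:
        return 1.0  # every possible k-subset contains at least one correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples for one task, 37 of them pass the tests
print(pass_at_k(200, 37, 1))   # 0.185
print(pass_at_k(200, 37, 10))  # higher, since any of 10 drawn samples may pass
```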
Quick Install
Claude Code
Recommended:
npx skills add majiayu000/claude-skill-registry -a claude-code

Alternatives:
/plugin add https://github.com/majiayu000/claude-skill-registry
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/evaluating-code-models

Copy one of these commands and paste it into Claude Code to install this skill.
GitHub Repository
Related Skills
evaluating-code-models
This skill benchmarks code generation models using industry-standard evaluations like HumanEval and MBPP across multiple programming languages. It calculates pass@k metrics for comparing model performance, testing multi-language support, and measuring code quality. Developers should use it when rigorously evaluating or comparing coding models, as it's the same tool powering HuggingFace's code leaderboards.
langsmith-observability
LangSmith provides LLM observability for tracing, evaluating, and monitoring AI applications. Developers should use it for debugging prompts and chains, systematic output evaluation, and monitoring production systems. Its key capabilities include performance tracing, dataset testing, and analysis of latency and token usage.
phoenix-observability
Phoenix is an open-source AI observability platform for tracing, evaluating, and monitoring LLM applications. It provides detailed traces for debugging, runs evaluations on datasets, and offers real-time monitoring for production systems. Key capabilities include experiment pipelines and self-hosted observability without vendor lock-in.
evaluating-llms-harness
This skill runs standardized LLM evaluations across 60+ academic benchmarks like MMLU and GSM8K using the industry-standard lm-evaluation-harness. Use it for benchmarking model quality, comparing different models, or tracking training progress with support for HuggingFace, vLLM, and API-based models. It provides a consistent, widely-adopted method for reporting academic results.
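The lm-evaluation-harness also exposes a Python API alongside its CLI; a rough sketch of a programmatic run (assuming a recent harness version where simple_evaluate is available; the checkpoint, task, and limit below are placeholder choices for illustration):

```python
import lm_eval  # pip install lm-eval

# Placeholder model and task for illustration; any HuggingFace checkpoint
# and supported task names follow the same pattern.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=20,  # evaluate only a few examples to keep the smoke test fast
)

# Per-task metrics live under the "results" key
print(results["results"]["gsm8k"])
```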
