
weights-and-biases

davila7
Updated 11 days ago
432 views
View on GitHub
Design, MLOps, Weights And Biases, WandB, Experiment Tracking, Hyperparameter Tuning, Model Registry, Collaboration, Real-Time Visualization, PyTorch, TensorFlow, HuggingFace

About

This skill integrates Weights & Biases for comprehensive ML experiment tracking and MLOps. It automatically logs metrics, visualizes training in real time, and manages hyperparameter sweeps as well as model versions. Use it to compare runs, optimize models, and collaborate in team workspaces directly from your development environment.
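
For orientation, a minimal sketch of the logging pattern this skill automates, assuming the wandb Python package is installed; the project name, config values, and metric below are illustrative placeholders:

import wandb

# Start a run; project name and config values are placeholders for illustration.
run = wandb.init(project="demo-project", config={"lr": 1e-3, "epochs": 3})

for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1)  # placeholder metric for illustration
    # Logged metrics appear in the W&B dashboard in real time.
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()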

Quick Installation

Claude Code

Recommended
Primary
npx skills add davila7/claude-code-templates -a claude-code
Plugin Command
Alternative
/plugin add https://github.com/davila7/claude-code-templates
Git Clone
Alternative
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/weights-and-biases

Copy this command and paste it into Claude Code to install this skill.

GitHub Repository

davila7/claude-code-templates
Pfad: cli-tool/components/skills/ai-research/mlops-weights-and-biases
anthropic, anthropic-claude, claude, claude-code

Related Skills

quantizing-models-bitsandbytes

Other

This skill quantizes LLMs to 8-bit or 4-bit precision using bitsandbytes, achieving 50-75% memory reduction with minimal accuracy loss. It's ideal for running larger models on limited GPU memory or accelerating inference, supporting formats like INT8, NF4, and FP4. The skill integrates with HuggingFace Transformers and enables QLoRA training and 8-bit optimizers.
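
A minimal sketch of the 4-bit NF4 loading described above, via HuggingFace Transformers and bitsandbytes; the model name is an illustrative placeholder and exact arguments may differ across library versions:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization; compute in bfloat16 to limit accuracy loss.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The model name is a placeholder; any causal LM on the Hub loads the same way.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)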

View skill

huggingface-tokenizers

Documents

This skill provides high-performance tokenization using HuggingFace's Rust-based library, processing 1GB of text in under 20 seconds. It supports BPE, WordPiece, and Unigram algorithms while enabling custom tokenizer training and alignment tracking. Use it when you need production-fast tokenization or to build custom tokenizers integrated with the transformers ecosystem.
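
A minimal sketch of training a BPE tokenizer with the tokenizers library; the corpus path, vocab size, and special tokens are illustrative assumptions:

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build a BPE tokenizer with an unknown-token fallback.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train on local text files (path and vocab size are placeholders).
trainer = BpeTrainer(vocab_size=30000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)

# Offsets map each token back to the original string for alignment tracking.
encoding = tokenizer.encode("Hello, tokenizers!")
print(encoding.tokens, encoding.offsets)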

View skill

fine-tuning-with-trl

Other

This skill enables fine-tuning of LLMs using TRL's reinforcement learning methods including SFT, DPO, and PPO for RLHF and preference alignment. It's designed for aligning models with human feedback and works with HuggingFace Transformers. Use it when you need to implement RLHF, optimize with rewards, or train from human preferences.
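
A minimal SFT sketch with TRL; the base model and dataset are placeholders, and trainer arguments vary between TRL versions:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder conversational dataset; swap in your own preference or SFT data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",  # placeholder base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="./sft-out", max_steps=100),
)
trainer.train()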

View skill

crewai-multi-agent

Meta

CrewAI is a lightweight multi-agent orchestration framework for building teams of specialized AI agents that collaborate autonomously on complex tasks. It enables role-based agent collaboration with memory and supports sequential or hierarchical workflows for production use. The framework is built without LangChain dependencies for lean, fast execution.
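
A minimal sketch of a sequential two-agent crew; the roles, goals, and task texts are illustrative assumptions, and an LLM API key is assumed to be configured in the environment:

from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Collect key points on experiment tracking tools",
    backstory="An analyst who follows ML infrastructure news.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short report",
    backstory="A technical writer focused on clarity.",
)

research = Task(
    description="List the main capabilities of current experiment trackers.",
    expected_output="A bullet list of findings.",
    agent=researcher,
)
report = Task(
    description="Summarize the findings in one paragraph.",
    expected_output="A concise paragraph.",
    agent=writer,
)

# Sequential process by default: tasks run in order and share context.
crew = Crew(agents=[researcher, writer], tasks=[research, report])
print(crew.kickoff())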

View skill