llama-factory

davila7
Updated Today
Tags: Design, Fine-Tuning, LLaMA Factory, LLM, WebUI, No-Code, QLoRA, LoRA, Multimodal, HuggingFace, Llama, Qwen, Gemma

About

This skill provides expert guidance for fine-tuning LLMs using the LLaMA-Factory framework, including its no-code WebUI and support for 100+ models with QLoRA quantization. Use it when implementing, debugging, or learning best practices for LLaMA-Factory solutions in your development workflow.
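To make the workflow concrete, here is a sketch of a minimal QLoRA supervised fine-tuning config of the kind LLaMA-Factory consumes. Field names follow the example configs shipped with the framework, but the model, dataset, and hyperparameter values below are illustrative placeholders — verify them against the examples/ directory of your installed version.

```yaml
### model: 4-bit quantized base model (QLoRA)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4

### method: LoRA adapters on all linear layers
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset: a built-in demo dataset and chat template
dataset: alpaca_en_demo
template: llama3
cutoff_len: 1024

### output
output_dir: saves/llama3-8b/lora/sft

### training hyperparameters (illustrative values)
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

A config like this is typically launched with `llamafactory-cli train <config>.yaml`, or you can configure the same run interactively through the no-code WebUI (`llamafactory-cli webui`).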

Quick Install

Claude Code

Plugin Command (Recommended)

/plugin add https://github.com/davila7/claude-code-templates

Git Clone (Alternative)

git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/llama-factory

Copy and paste the plugin command into Claude Code to install this skill.

Documentation

Llama-Factory Skill

Comprehensive assistance with llama-factory development, generated from official documentation.

When to Use This Skill

This skill should be triggered when:

  • Working with llama-factory
  • Asking about llama-factory features or APIs
  • Implementing llama-factory solutions
  • Debugging llama-factory code
  • Learning llama-factory best practices

Quick Reference

Common Patterns

Quick reference patterns will be added as you use the skill.

Reference Files

This skill includes comprehensive documentation in references/:

  • _images.md - Images documentation
  • advanced.md - Advanced documentation
  • getting_started.md - Getting Started documentation
  • other.md - Other documentation

Use the `view` tool to read specific reference files when detailed information is needed.

Working with This Skill

For Beginners

Start with the getting_started reference file for foundational concepts.

For Specific Features

Use the appropriate category reference file (e.g. advanced.md, other.md) for detailed information.

For Code Examples

The quick reference section above contains common patterns extracted from the official docs.

Resources

references/

Organized documentation extracted from official sources. These files contain:

  • Detailed explanations
  • Code examples with language annotations
  • Links to original documentation
  • Table of contents for quick navigation

scripts/

Add helper scripts here for common automation tasks.

assets/

Add templates, boilerplate, or example projects here.

Notes

  • This skill was automatically generated from official documentation
  • Reference files preserve the structure and examples from source docs
  • Code examples include language detection for better syntax highlighting
  • Quick reference patterns are extracted from common usage examples in the docs

Updating

To refresh this skill with updated documentation:

  1. Re-run the scraper with the same configuration
  2. The skill will be rebuilt with the latest information

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/ai-research/fine-tuning-llama-factory
Tags: anthropic, anthropic-claude, claude, claude-code

Related Skills

quantizing-models-bitsandbytes

Other

This skill quantizes LLMs to 8-bit or 4-bit precision using bitsandbytes, reducing memory usage by 50-75% with minimal accuracy loss for GPU-constrained environments. It supports multiple formats (INT8, NF4, FP4) and enables QLoRA training and 8-bit optimizers. Use it with HuggingFace Transformers when you need to fit larger models into limited memory or accelerate inference.


axolotl

Design

This skill provides expert guidance for fine-tuning LLMs using the Axolotl framework, helping developers configure YAML files and implement advanced techniques like LoRA/QLoRA and DPO/KTO. Use it when working with Axolotl features, debugging code, or learning best practices for fine-tuning across 100+ models. It offers comprehensive assistance including multimodal support and performance optimization.


peft-fine-tuning

Other

This skill enables parameter-efficient fine-tuning of large language models using LoRA, QLoRA, and other adapter methods, drastically reducing GPU memory requirements. It's ideal for fine-tuning 7B-70B models on consumer hardware by training less than 1% of parameters while maintaining accuracy. The integration with Hugging Face's ecosystem supports multi-adapter serving and rapid iteration with task-specific adapters.


weights-and-biases

Design

This skill enables ML experiment tracking and MLOps using Weights & Biases, automatically logging metrics and visualizing training in real-time. It helps developers optimize hyperparameters with sweeps, compare runs, and manage a versioned model registry. Use it for collaborative ML project management with full artifact lineage tracking.
