
llama-cpp

davila7
Tags: Other, Inference Serving, Llama.cpp, CPU Inference, Apple Silicon, Edge Deployment, GGUF, Quantization, Non-NVIDIA, AMD GPUs, Intel GPUs, Embedded

About

llama-cpp enables LLM inference on CPU, Apple Silicon, and consumer GPUs without requiring NVIDIA hardware or CUDA. It's ideal for edge deployment, Macs, or AMD/Intel systems, offering GGUF quantization for reduced memory and significant speedups over PyTorch on CPU. Use this when you need to run models on non-NVIDIA hardware or in resource-constrained environments.

Quick Install

Claude Code

Plugin Command (Recommended)

/plugin add https://github.com/davila7/claude-code-templates

Git Clone (Alternative)

git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/llama-cpp

Copy and paste the plugin command into Claude Code to install this skill.

Documentation

llama.cpp

Pure C/C++ LLM inference with minimal dependencies, optimized for CPUs and non-NVIDIA hardware.

When to use llama.cpp

Use llama.cpp when:

  • Running on CPU-only machines
  • Deploying on Apple Silicon (M1/M2/M3/M4)
  • Using AMD or Intel GPUs (no CUDA)
  • Edge deployment (Raspberry Pi, embedded systems)
  • Need simple deployment without Docker/Python

Use TensorRT-LLM instead when:

  • Have NVIDIA GPUs (A100/H100)
  • Need maximum throughput (100K+ tok/s)
  • Running in a datacenter with CUDA

Use vLLM instead when:

  • Have NVIDIA GPUs
  • Need Python-first API
  • Want PagedAttention

Quick start

Installation

# macOS/Linux
brew install llama.cpp

# Or build from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# With Metal (Apple Silicon)
make LLAMA_METAL=1

# With CUDA (NVIDIA)
make LLAMA_CUDA=1

# With ROCm (AMD)
make LLAMA_HIPBLAS=1
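
Newer llama.cpp releases build with CMake instead of the Makefile shown above. A rough equivalent, assuming a recent checkout (backend flag names have changed across versions):

# CMake build (recent llama.cpp; Metal is on by default for Apple Silicon)
cmake -B build -DGGML_CUDA=ON      # or -DGGML_METAL=ON / -DGGML_HIP=ON
cmake --build build --config Release -j

# Binaries land in build/bin/ (llama-cli, llama-server, llama-bench, ...)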

Download model

# Download from HuggingFace (GGUF format)
huggingface-cli download \
    TheBloke/Llama-2-7B-Chat-GGUF \
    llama-2-7b-chat.Q4_K_M.gguf \
    --local-dir models/

# Or convert from HuggingFace
python convert_hf_to_gguf.py models/llama-2-7b-chat/

Run inference

# Simple chat
./llama-cli \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    -p "Explain quantum computing" \
    -n 256  # Max tokens

# Interactive chat
./llama-cli \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    --interactive
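
Generation is tuned with the usual sampling flags; the values below are illustrative, not recommendations:

# Adjust sampling (temperature, nucleus sampling, repetition penalty)
./llama-cli \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    -p "Explain quantum computing" \
    -n 256 \
    --temp 0.7 --top-p 0.9 --repeat-penalty 1.1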

Server mode

# Start OpenAI-compatible server
./llama-server \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    --host 0.0.0.0 \
    --port 8080 \
    -ngl 32  # Offload 32 layers to GPU

# Client request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-2-7b-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "max_tokens": 100
  }'
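
The same endpoint supports OpenAI-style streaming; set stream to true and read the response as server-sent events (minimal sketch, same server as above):

# Streaming request (-N disables curl buffering)
curl -N http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Write a haiku about CPUs"}],
    "stream": true
  }'

# Chunks arrive as "data: {...}" lines, terminated by "data: [DONE]"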

Quantization formats

GGUF format overview

Format  | Bits | Size (7B) | Speed   | Quality   | Use Case
Q4_K_M  | 4.5  | 4.1 GB    | Fast    | Good      | Recommended default
Q4_K_S  | 4.3  | 3.9 GB    | Faster  | Lower     | Speed critical
Q5_K_M  | 5.5  | 4.8 GB    | Medium  | Better    | Quality critical
Q6_K    | 6.5  | 5.5 GB    | Slower  | Best      | Maximum quality
Q8_0    | 8.0  | 7.0 GB    | Slow    | Excellent | Minimal degradation
Q2_K    | 2.5  | 2.7 GB    | Fastest | Poor      | Testing only

Choosing quantization

# General use (balanced)
Q4_K_M  # 4-bit, medium quality

# Maximum speed (more degradation)
Q2_K or Q3_K_M

# Maximum quality (slower)
Q6_K or Q8_0

# Very large models (70B, 405B)
Q3_K_M or Q4_K_S  # Lower bits to fit in memory
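
To produce one of these files yourself, convert to an unquantized GGUF first and then quantize it with the tool bundled with llama.cpp (named llama-quantize in recent builds, quantize in older ones). Paths below are illustrative:

# 1. HF checkpoint -> unquantized GGUF (F16)
python convert_hf_to_gguf.py models/llama-2-7b-chat/ \
    --outtype f16 \
    --outfile models/llama-2-7b-chat.f16.gguf

# 2. Unquantized GGUF -> chosen quantization
./llama-quantize \
    models/llama-2-7b-chat.f16.gguf \
    models/llama-2-7b-chat.Q4_K_M.gguf \
    Q4_K_M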

Hardware acceleration

Apple Silicon (Metal)

# Build with Metal
make LLAMA_METAL=1

# Run with GPU acceleration (automatic)
./llama-cli -m model.gguf -ngl 999  # Offload all layers

# Performance: M3 Max 40-60 tokens/sec (Llama 2-7B Q4_K_M)

NVIDIA GPUs (CUDA)

# Build with CUDA
make LLAMA_CUDA=1

# Offload layers to GPU
./llama-cli -m model.gguf -ngl 35  # Offload 35/40 layers

# Hybrid CPU+GPU for large models
./llama-cli -m llama-70b.Q4_K_M.gguf -ngl 20  # GPU: 20 layers, CPU: rest

AMD GPUs (ROCm)

# Build with ROCm
make LLAMA_HIP=1

# Run with AMD GPU
./llama-cli -m model.gguf -ngl 999

Common patterns

Batch processing

# Process multiple prompts from file
cat prompts.txt | ./llama-cli \
    -m model.gguf \
    --batch-size 512 \
    -n 100
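
For many independent prompts, the server is often the simpler route: llama-server can decode several requests concurrently, splitting the context across slots. A minimal sketch, assuming a recent build:

# 4 parallel slots sharing an 8K context (~2K tokens per request)
./llama-server \
    -m model.gguf \
    -c 8192 \
    -np 4

# Clients then issue concurrent HTTP requests as usual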

Constrained generation

# JSON output with grammar
./llama-cli \
    -m model.gguf \
    -p "Generate a person: " \
    --grammar-file grammars/json.gbnf

# Outputs valid JSON only

Context size

# Increase context (default 512)
./llama-cli \
    -m model.gguf \
    -c 4096  # 4K context window

# Very long context (if model supports)
./llama-cli -m model.gguf -c 32768  # 32K context

Performance benchmarks

CPU performance (Llama 2-7B Q4_K_M)

CPU               | Threads | Speed    | Cost
Apple M3 Max      | 16      | 50 tok/s | $0 (local)
AMD Ryzen 9 7950X | 32      | 35 tok/s | $0.50/hour
Intel i9-13900K   | 32      | 30 tok/s | $0.40/hour
AWS c7i.16xlarge  | 64      | 40 tok/s | $2.88/hour

GPU acceleration (Llama 2-7B Q4_K_M)

GPU                  | Speed     | vs CPU | Cost
NVIDIA RTX 4090      | 120 tok/s | 3-4×   | $0 (local)
NVIDIA A10           | 80 tok/s  | 2-3×   | $1.00/hour
AMD MI250            | 70 tok/s  | -      | $2.00/hour
Apple M3 Max (Metal) | 50 tok/s  | ~Same  | $0 (local)
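
Throughput varies a lot with thread count, context length, quantization, and build flags, so treat these tables as rough guides. llama.cpp ships llama-bench for measuring your own hardware:

# Measure prompt processing and token generation speed locally
./llama-bench \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    -p 512 -n 128 \
    -ngl 99  # offload all layers; use -ngl 0 for CPU-only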

Supported models

LLaMA family:

  • Llama 2 (7B, 13B, 70B)
  • Llama 3 / 3.1 (8B, 70B, 405B)
  • Code Llama

Mistral family:

  • Mistral 7B
  • Mixtral 8x7B, 8x22B

Other:

  • Falcon, BLOOM, GPT-J
  • Phi-3, Gemma, Qwen
  • LLaVA (vision), Whisper (audio, via the companion whisper.cpp project)

Find models: https://huggingface.co/models?library=gguf


Resources

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/ai-research/inference-serving-llama-cpp
Topics: anthropic, anthropic-claude, claude, claude-code

Related Skills

quantizing-models-bitsandbytes

Other

This skill quantizes LLMs to 8-bit or 4-bit precision using bitsandbytes, reducing memory usage by 50-75% with minimal accuracy loss for GPU-constrained environments. It supports multiple formats (INT8, NF4, FP4) and enables QLoRA training and 8-bit optimizers. Use it with HuggingFace Transformers when you need to fit larger models into limited memory or accelerate inference.


sglang

Meta

SGLang is a high-performance LLM serving framework that enables fast structured generation with JSON/regex outputs and constrained decoding. It's ideal for agentic workflows with tool calls and multi-turn conversations, offering significantly faster inference through RadixAttention prefix caching. Use it when you need production-scale performance with shared context across requests.


hqq-quantization

Other

HQQ enables fast, calibration-free quantization of LLMs down to 2-bit precision without needing a dataset. It's ideal for rapid quantization workflows and for deployment with vLLM or HuggingFace Transformers. Key advantages include significantly faster quantization than methods like GPTQ and support for fine-tuning quantized models.


tensorrt-llm

Other

TensorRT-LLM is an NVIDIA-optimized library for deploying LLMs on NVIDIA GPUs, delivering up to 100x faster inference than PyTorch. Use it for production serving where you need maximum throughput, low latency, and support for features like quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
