awq-quantization

majiayu000
Updated 13 days ago
Other, Optimization, AWQ, Quantization, 4-Bit, Activation-Aware, Memory Optimization, Fast Inference, vLLM Integration, Marlin Kernels

About

AWQ is a 4-bit weight quantization technique that uses activation patterns to preserve critical weights, enabling faster inference with minimal accuracy loss. It's ideal for deploying large models on memory-constrained GPUs, offering better speed and accuracy than alternatives like GPTQ. This award-winning method works well with instruction-tuned and multimodal models.
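As a rough sketch of the workflow this skill covers, the snippet below quantizes a Hugging Face checkpoint with the AutoAWQ library (`pip install autoawq`). The model path, output path, and quantization settings are placeholder assumptions, not values mandated by this skill; group size 128 with zero-point is a common AWQ default.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder: any HF causal LM
quant_path = "mistral-7b-instruct-awq"             # placeholder output directory

# 4-bit weights, group size 128, zero-point enabled (common AWQ defaults)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate on activation statistics, then quantize weights to 4-bit
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```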

Quick Install

Claude Code

Recommended (Primary)
npx skills add majiayu000/claude-skill-registry -a claude-code
Plugin Command (Alternative)
/plugin add https://github.com/majiayu000/claude-skill-registry
Git Clone (Alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/awq-quantization

Copy and paste one of these commands into Claude Code to install this skill

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/awq

Related Skills

quantizing-models-bitsandbytes

Other

This skill quantizes LLMs to 8-bit or 4-bit precision using bitsandbytes, achieving 50-75% memory reduction with minimal accuracy loss. It's ideal for running larger models on limited GPU memory or accelerating inference, supporting formats like INT8, NF4, and FP4. The skill integrates with HuggingFace Transformers and enables QLoRA training and 8-bit optimizers.
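For orientation, here is a minimal sketch of 4-bit NF4 loading through HuggingFace Transformers' `BitsAndBytesConfig`; the model ID is a placeholder and the settings mirror common QLoRA-style defaults rather than anything this skill prescribes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization with double quantization (QLoRA-style settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```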

View skill

gguf-quantization

Design

This skill enables GGUF quantization for efficient model deployment on consumer hardware like CPUs and Apple Silicon. It provides flexible 2-8 bit quantization options without requiring GPU acceleration. Use it when optimizing models for local inference tools or resource-constrained environments.
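As a hedged illustration of local inference with a GGUF file, the sketch below uses the llama-cpp-python bindings (`pip install llama-cpp-python`); the model path is a placeholder, and the quantization step itself is typically done beforehand with llama.cpp's quantize tool.

```python
from llama_cpp import Llama

# Load a 4-bit GGUF model for CPU or Apple Silicon inference
llm = Llama(
    model_path="./model.Q4_K_M.gguf",  # placeholder path to a quantized GGUF file
    n_ctx=4096,       # context window
    n_gpu_layers=0,   # 0 = pure CPU; raise to offload layers to Metal/CUDA
)

out = llm("Q: What is GGUF? A:", max_tokens=64)
print(out["choices"][0]["text"])
```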

View skill

awq-quantization

Other

AWQ is a 4-bit weight quantization technique that uses activation patterns to preserve critical weights, enabling 3x faster inference with minimal accuracy loss. It's ideal for deploying large models (7B-70B) on limited GPU memory and is particularly effective for instruction-tuned and multimodal models. This skill integrates with vLLM and Marlin kernels for optimized deployment.
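A minimal serving sketch with vLLM, assuming an AWQ checkpoint published on the Hub (the repo name below is a placeholder); recent vLLM versions can route AWQ weights to Marlin kernels automatically on hardware that supports them.

```python
from vllm import LLM, SamplingParams

# Load an AWQ-quantized checkpoint; quantization="awq" selects the AWQ path
llm = LLM(model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ", quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain AWQ in one sentence."], params)
print(outputs[0].outputs[0].text)
```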

View skill

sglang

Meta

SGLang is a high-performance LLM serving framework that uses RadixAttention for automatic prefix caching, enabling significantly faster structured generation. It's ideal for developers needing JSON/regex outputs, constrained decoding, or building agentic workflows with tool calls. Use it when you require up to 5× faster inference than alternatives like vLLM in scenarios with shared prefixes.
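To make the structured-generation claim concrete, here is a sketch using SGLang's frontend DSL against a locally launched server (e.g. `python -m sglang.launch_server --model-path <model>`); the endpoint URL, question, and regex constraint are illustrative assumptions.

```python
import sglang as sgl

@sgl.function
def qa(s, question):
    s += "Q: " + question + "\n"
    # Constrain the generated answer to match a regex (here: yes/no)
    s += "A: " + sgl.gen("answer", max_tokens=8, regex=r"(yes|no)")

# Point the DSL at a running SGLang server (placeholder endpoint)
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = qa.run(question="Is AWQ a 4-bit quantization method?")
print(state["answer"])
```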

View skill