
LLM Compressor


llmcompressor is an easy-to-use library for optimizing models for deployment with vllm, including:

  • Comprehensive set of quantization algorithms for weight-only and activation quantization
  • Seamless integration with Hugging Face models and repositories
  • safetensors-based file format compatible with vllm
  • Large model support via accelerate

✨ Read the announcement blog here! ✨

LLM Compressor Flow


💬 Join us on the vLLM Community Slack and share your questions, thoughts, or ideas in:

  • #sig-quantization
  • #llm-compressor

🚀 What's New!

Big updates have landed in LLM Compressor! To get a more in-depth look, check out the LLM Compressor overview.

Some of the exciting new features include:

  • Batched Calibration Support: LLM Compressor now supports calibration with batch sizes > 1. A new batch_size argument has been added to the dataset arguments, giving the option to speed up quantization. The default batch_size is currently 1 (a minimal sketch is shown after this list).
  • New Model-Free PTQ Pathway: A new model-free PTQ pathway, model_free_ptq, has been added to LLM Compressor. It allows you to quantize a model without requiring a Hugging Face model definition and is especially useful in cases where oneshot may fail. This pathway currently supports data-free pathways only (i.e., FP8 quantization) and was used to quantize the Mistral Large 3 model. Additional examples have been added illustrating how LLM Compressor can be used for Kimi K2.
  • Extended KV Cache and Attention Quantization Support: LLM Compressor now supports attention quantization. KV cache quantization, which previously supported only per-tensor scales, has been extended to support any quantization scheme, including a new per-head scheme. Support for these checkpoints is ongoing in vLLM, and scripts to get started have been added to the experimental folder.
  • Generalized AWQ Support: The AWQModifier has been updated to support quantization schemes beyond W4A16 (e.g., W4AFp8). In particular, AWQ no longer requires that group_size, symmetric, and num_bits be the same across all config_groups in the quantization config.
  • AutoRound Quantization Support: Added AutoRoundModifier for quantization using AutoRound, an advanced post-training algorithm that optimizes rounding and clipping ranges through sign-gradient descent. This approach combines the efficiency of post-training quantization with the adaptability of parameter tuning, delivering robust compression for large language models while maintaining strong performance.
  • Experimental MXFP4 Support: Models can now be quantized using an MXFP4 preset scheme. Examples can be found under the experimental folder. This pathway is still experimental, as support and validation with vLLM are still a work in progress.
  • R3 Transform Support: LLM Compressor now supports applying transforms to attention in the style of SpinQuant's R3 rotation. Note: this feature is not yet supported in vLLM. An example applying R3 can be found in the experimental folder.
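
A minimal sketch of the batched calibration flow referenced above. The model stub, dataset, recipe, and calibration sizes here are illustrative assumptions and can be swapped for your own:

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Any calibration-based recipe works the same way; a W4A16 GPTQ recipe is used for illustration.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative model stub
    dataset="open_platypus",  # Hugging Face calibration dataset
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    batch_size=4,  # new dataset argument: calibrate with batches > 1 (default is 1)
    output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",
)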

Supported Formats

  • Activation Quantization: W8A8 (int8 and fp8)
  • Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
  • 2:4 Semi-structured and Unstructured Sparsity
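
These formats correspond to preset scheme names used when building a recipe. A minimal sketch, assuming the compressed-tensors preset names (exact availability may vary by version):

from llmcompressor.modifiers.quantization import QuantizationModifier, GPTQModifier

# Data-free FP8: dynamic per-token activation scales, FP8 weights.
fp8_recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

# Weight-only INT4 (W4A16) with GPTQ; requires calibration data.
w4a16_recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])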

Supported Algorithms

  • Simple PTQ
  • GPTQ
  • AWQ
  • SmoothQuant
  • SparseGPT
  • AutoRound
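
Algorithms can also be composed into a single recipe, for example pairing SmoothQuant with GPTQ for INT8 weight and activation quantization. A minimal sketch; the smoothing strength and scheme are illustrative:

from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier

# SmoothQuant migrates activation outliers into the weights,
# then GPTQ quantizes weights and activations to INT8 (W8A8).
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]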

When to Use Which Optimization

Please refer to compression_schemes.md for detailed information about available optimization schemes and their use cases.

Installation

pip install llmcompressor

Get Started

End-to-End Examples

Applying quantization with llmcompressor:

User Guides

Deep dives into advanced usage of llmcompressor:

Quick Tour

Let's quantize Qwen3-30B-A3B with FP8 weights and activations using the Round-to-Nearest algorithm.

Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe may be changed to target different quantization algorithms or formats.

Apply Quantization

Quantization is applied by selecting an algorithm and calling the oneshot API.

from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation

MODEL_ID = "Qwen/Qwen3-30B-A3B"

# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Configure the quantization algorithm and scheme.
# In this case, we:
# * quantize the weights to FP8 using RTN with block_size 128
# * quantize the activations dynamically to FP8 during inference
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_BLOCK",
    ignore=["lm_head", "re:.*mlp.gate$"],
)

# Apply quantization.
oneshot(model=model, recipe=recipe)

# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to(
    model.device
)
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")

# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-BLOCK"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)

Inference with vLLM

The checkpoints created by llmcompressor can be loaded and run in vllm:

Install:

pip install vllm

Run:

from vllm import LLM
model = LLM("Qwen/Qwen3-30B-A3B-FP8-BLOCK")
output = model.generate("My name is")
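
The generate call returns a list of RequestOutput objects; a minimal way to inspect the generated text (assuming the standard vLLM API):

# Each RequestOutput holds the completions for one prompt; print the first.
print(output[0].outputs[0].text)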

Questions / Contribution

  • If you have any questions or requests, open an issue and we will add an example or documentation.
  • We appreciate contributions to the code, examples, integrations, and documentation, as well as bug reports and feature requests! Learn how here.

Citation

If you find LLM Compressor useful in your research or projects, please consider citing it:

@software{llmcompressor2024,
 title={{LLM Compressor}},
 author={Red Hat AI and vLLM Project},
 year={2024},
 month={8},
 url={https://github.com/vllm-project/llm-compressor},
}
