This repository was archived by the owner on Sep 10, 2025. It is now read-only.
Enable torchao.experimental EmbeddingQuantization #1520
Open
Assignees
@Jack-Khuu
Description
🚀 The feature, motivation and pitch
Quantization is a technique used to reduce the size and memory requirements of a model and often improve its inference speed. torchao is PyTorch's native quantization library for inference and training.
There are new experimental quantizations in torchao that we would like to enable in torchchat. Specifically this task is for enabling EmbeddingQuantizer and SharedEmbeddingQuantizer.
Entrypoint: `quantize_model` in torchchat/torchchat/utils/quantize.py (line 101 at commit 1384f7d)
Task: Using ExecuTorch as a reference (pytorch/executorch#9548), add support for EmbeddingQuantizer and SharedEmbeddingQuantizer.
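As a rough illustration of the shape this change could take: `quantize_model` dispatches on scheme names from a quantization recipe, so enabling the new quantizers amounts to registering handlers for them. The sketch below is hypothetical and self-contained — the handler body, the `QUANT_HANDLERS` registry name, and the option names (`bitwidth`, `groupsize`) are assumptions for illustration, not torchchat's actual API; a real handler would construct and apply the torchao.experimental quantizer instead of tagging a dict.

```python
# Hypothetical sketch of a name -> handler dispatch, mirroring the kind of
# scheme table quantize_model consumes. Not torchchat's real code.

def apply_embedding_quantizer(model, **kwargs):
    # Stand-in for applying torchao.experimental's EmbeddingQuantizer;
    # here we just record what would have been applied.
    model["quantized"] = ("embedding", kwargs)
    return model

def apply_shared_embedding_quantizer(model, **kwargs):
    # Stand-in for SharedEmbeddingQuantizer (shares weights between the
    # embedding table and the unembedding/lm-head projection).
    model["quantized"] = ("shared_embedding", kwargs)
    return model

# New scheme names registered alongside the existing ones.
QUANT_HANDLERS = {
    "embedding": apply_embedding_quantizer,
    "shared_embedding": apply_shared_embedding_quantizer,
}

def quantize_model(model, recipe):
    """Apply each scheme in the recipe to the model, in order."""
    for name, options in recipe.items():
        handler = QUANT_HANDLERS.get(name)
        if handler is None:
            raise ValueError(f"unknown quantization scheme: {name}")
        model = handler(model, **options)
    return model

# Example recipe; option names are illustrative only.
model = quantize_model({}, {"embedding": {"bitwidth": 4, "groupsize": 32}})
```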
cc: @metascroy, @manuelcandales
Alternatives
No response
Additional context
No response
RFC (Optional)
No response
Metadata
Assignees
Labels
Type
Projects
Status
Ready