Support lowering quantized checkpoint from HF Hub #67


Merged: 1 commit into huggingface:main on May 28, 2025

Conversation

@guangy10 (Collaborator) commented May 7, 2025

Users can lower a HF model with quantization in one of two ways:

  1. Load the fp32 checkpoint and apply the quantization, export, and lowering recipe to generate the PTE. This path is supported by Introduce 8da4w quant for decoder-only text models #62.
  2. Load a pre-quantized checkpoint and directly lower it to ExecuTorch via the exact same from_pretrained API. Behind the scenes, the quantization step is bypassed because the checkpoint has already been quantized. This path shows the composability of the different recipes.

This PR handles path 2 above; a minimal usage sketch follows.
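For illustration, path 2 would look roughly like the sketch below, using optimum-executorch's `ExecuTorchModelForCausalLM.from_pretrained` with the XNNPACK recipe. The pre-quantized repo id is a placeholder, not an artifact of this PR:

```python
from optimum.executorch import ExecuTorchModelForCausalLM

# Load a checkpoint whose weights are already quantized (e.g. 8da4w)
# and lower it directly to an ExecuTorch PTE. Quantization is skipped
# because the checkpoint already carries its quantization config.
model = ExecuTorchModelForCausalLM.from_pretrained(
    "your-org/SmolLM2-135M-8da4w",  # placeholder: any pre-quantized repo
    recipe="xnnpack",
)
```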

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@guangy10 (Collaborator, Author) commented May 7, 2025

E           ValueError: Failed to find class AOPerModuleConfig in any of the allowed modules: torchao.quantization, torchao.sparsity.sparse_api, torchao.prototype.quantization

Will need to upgrade the pinned torchao.
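For context, the failing class lookup happens when transformers deserializes the torchao quantization config stored with the pre-quantized checkpoint, so the installed torchao must export that config class. A checkpoint of this kind could be produced roughly as below; this is a hedged sketch assuming a recent transformers/torchao where `TorchAoConfig` accepts a torchao config instance, and the repo ids are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, TorchAoConfig
from torchao.quantization import Int8DynamicActivationInt4WeightConfig

# 8da4w: int8 dynamic activations, int4 weights (assumed config class name).
quant_config = TorchAoConfig(
    quant_type=Int8DynamicActivationInt4WeightConfig(group_size=32),
)
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M",  # placeholder model id
    torch_dtype=torch.bfloat16,
    quantization_config=quant_config,
)

# torchao tensor subclasses are not safetensors-compatible, so use
# non-safetensors serialization when uploading the quantized weights.
model.push_to_hub("your-org/SmolLM2-135M-8da4w", safe_serialization=False)
```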

@guangy10 force-pushed the support_quantized_ckp branch from 84cf672 to 2af1cd4 on May 7, 2025 19:24
@guangy10 (Collaborator, Author) commented May 7, 2025

Rebased after #68 was merged.

@guangy10 changed the title from "Support lowering quantized checkpoint from Hub" to "Support lowering quantized checkpoint from HF Hub" on May 7, 2025
@guangy10 force-pushed the support_quantized_ckp branch from 2af1cd4 to 7b8833f on May 7, 2025 19:43
@guangy10 marked this pull request as ready for review on May 7, 2025 19:43
@guangy10 (Collaborator, Author) commented May 7, 2025

cc @metascroy @jerryzh168 for review.

@guangy10 force-pushed the support_quantized_ckp branch from 7b8833f to 1151915 on May 7, 2025 21:40
@guangy10 force-pushed the support_quantized_ckp branch 2 times, most recently from 4a80eba to 976ccfd on May 20, 2025 21:51
@guangy10 force-pushed the support_quantized_ckp branch from 976ccfd to 0bd3471 on May 27, 2025 23:38
@guangy10 merged commit 34cece4 into huggingface:main on May 28, 2025
106 of 107 checks passed
@guangy10 deleted the support_quantized_ckp branch on May 28, 2025 19:05