hqq-quantization
Enables fast, calibration-free quantization of LLMs to 4-, 3-, and 2-bit precision, reducing memory use and speeding up inference.
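To make the idea concrete, here is a minimal, purely illustrative sketch of the affine (scale + zero-point) quantization step that low-bit schemes like HQQ build on; HQQ itself refines the zero-point with a half-quadratic solver, which is omitted here. All names are hypothetical and this is not the `hqq` library API.

```python
# Illustrative 4-bit affine quantization: map floats to integer codes in
# [0, 15] plus a (scale, zero) pair, then reconstruct approximate values.
# This is a sketch of the general technique, not the hqq library's code.

def quantize_4bit(weights):
    """Quantize a list of floats to 4-bit codes with a scale and zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0   # 15 = 2**4 - 1; guard constant rows
    zero = lo
    q = [round((w - zero) / scale) for w in weights]
    return q, scale, zero

def dequantize(q, scale, zero):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale + zero for v in q]

weights = [-0.9, -0.1, 0.0, 0.3, 0.8]
q, scale, zero = quantize_4bit(weights)
approx = dequantize(q, scale, zero)
max_err = max(abs(w, ) if False else abs(w - a) for w, a in zip(weights, approx))
print(q)        # integer codes, each in [0, 15]
print(max_err)  # per-weight error is bounded by scale / 2
```

Because the scale and zero-point are derived from the weights alone, no calibration data is needed; lower bit-widths (3-bit, 2-bit) simply shrink the code range and grow the per-weight error bound.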
Files: 3
GitHub Stars: 22.3K
Category: development
Updated: April 4, 2026
Repository: davila7/claude-code-templates