awq-quantization by davila7

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
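The standard workflow is to quantize a full-precision checkpoint once, then deploy the 4-bit artifact. As a minimal sketch of the quantization step, assuming the AutoAWQ library (casper-hansen/AutoAWQ); the model and output paths are illustrative:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative; any supported causal LM
quant_path = "mistral-7b-instruct-awq"             # illustrative output directory

# 4-bit weights with group size 128, the common AWQ configuration
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run activation-aware quantization (AutoAWQ uses a built-in calibration set by default)
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized checkpoint alongside the tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```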

Coding · 15.7K Stars · 1.4K Forks · Updated Jan 12, 2026, 05:31 AM

Why Use This

This skill provides guidance for compressing large language models to 4-bit weights with AWQ and deploying the quantized checkpoints.

Use Cases

  • Deploying large models (7B-70B) on GPUs with limited memory
  • Quantizing instruction-tuned and multimodal models with minimal accuracy loss
  • Serving 4-bit checkpoints for faster inference than GPTQ (see the sketch after this list)
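For the deployment side, recent versions of Hugging Face Transformers can load AWQ checkpoints directly when autoawq is installed. A sketch, reusing the illustrative quant_path from the block above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_path = "mistral-7b-instruct-awq"  # illustrative; the directory produced above

# Transformers detects the AWQ quantization config in the checkpoint
# and loads the 4-bit weights, keeping GPU memory usage low
model = AutoModelForCausalLM.from_pretrained(
    quant_path, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

inputs = tokenizer(
    "Explain activation-aware quantization in one sentence.",
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```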

Skill Snapshot

Automated scan of skill assets; informational only.

SKILL.md: valid (checked against the SKILL.md specification)

Source & Community

Skill Version: main

Skill Stats

SKILL.md: 311 lines
Total Files: 1
Total Size: 0 B
License: MIT