
Model Quantization Guide

## Overview

Quantization reduces model precision from FP32/FP16 to INT8/INT4, dramatically cutting memory usage and improving inference speed with minimal quality loss.

## Quantization Types

| Format | Bits | Memory Reduction | Quality Loss |
|--------|------|------------------|--------------|
| FP16   | 16   | 2x               | None         |
| INT8   | 8    | 4x               | Minimal      |
| INT4   | 4    | 8x               | Small        |
| GPTQ   | 4    | 8x               | Small        |
| AWQ    | 4    | 8x               | Very small   |
| GGUF   | 2-8  | Variable         | Depends on level |

## BitsAndBytes (Easy)

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization
model_8bit = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    load_in_8bit=True,
    device_map="auto",
)

# 4-bit quantization (NF4 weights, bfloat16 compute)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```

## GPTQ (Pre-quantized)

```python
from transformers import AutoModelForCausalLM

# Load a pre-quantized GPTQ model
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-v0.1-GPTQ",
    device_map="auto",
)
```

## AWQ (Activation-aware)

```python
from awq import AutoAWQForCausalLM

model = AutoAWQForCausalLM.from_quantized(
    "TheBloke/Mistral-7B-v0.1-AWQ",
    fuse_layers=True,
)
```

## GGUF (llama.cpp)

For CPU inference with Ollama or llama.cpp: ...
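The original post is truncated at this point, so the following is only a minimal sketch of what GGUF inference on CPU can look like, assuming `llama-cpp-python` is installed and a quantized `.gguf` file has already been downloaded (the file name below is a placeholder, not from the original post):

```python
from llama_cpp import Llama

# Load a local GGUF file for CPU inference (path and settings are placeholders)
llm = Llama(
    model_path="mistral-7b-v0.1.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=2048,    # context window size
    n_threads=8,   # number of CPU threads
)

# Run a single completion and print the generated text
output = llm(
    "Explain quantization in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```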
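To sanity-check the memory-reduction figures from the table above, one option is a quick footprint check; this sketch assumes the 4-bit `model_4bit` from the BitsAndBytes example is already loaded:

```python
# Report the quantized model's parameter/buffer memory in GB
footprint_gb = model_4bit.get_memory_footprint() / 1024**3
print(f"Approximate memory footprint: {footprint_gb:.1f} GB")

# Rough back-of-the-envelope comparison:
# 7B params * 2 bytes (FP16) ≈ 14 GB, versus ~0.5 bytes/param in 4-bit ≈ 3.5 GB
```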

December 1, 2025 · 2 min · 231 words · BlogIA Team