
Qwen3-235B-A22B

Architecture: Mixture of Experts (MoE)
Total Parameters: 235.0B
Active Parameters: 22.0B

Model Specifications

Layers: 94
Hidden Dimension: 4,096
Attention Heads: 64
KV Heads: 4
Max Context: 40K tokens
Vocabulary Size: 151,936
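With only 4 KV heads against 64 query heads (grouped-query attention), the KV cache is 16x smaller than it would be under full multi-head attention. A minimal sketch of the cache sizing these specs imply, assuming a per-head dimension of 128 (Qwen3's published head size, which is not the hidden dimension divided by the head count); it reproduces the FP16-cache increments in the table below:

```python
def kv_cache_gib(context_tokens, bytes_per_elem=2,
                 layers=94, kv_heads=4, head_dim=128):
    """Size of the K and V tensors across all layers, in GiB."""
    # Keys and values each store context x head_dim per KV head per layer,
    # hence the leading factor of 2.
    elems = 2 * layers * kv_heads * head_dim * context_tokens
    return elems * bytes_per_elem / 2**30

# FP16 cache (2 bytes/element) at the four context lengths in the table:
for ctx in (8192, 16384, 32768, 40960):
    print(f"{ctx:>6} tokens: {kv_cache_gib(ctx):.2f} GiB")
# 8192 -> 1.47, 16384 -> 2.94, 32768 -> 5.88, 40960 -> 7.34
```

Doubling `bytes_per_elem` to 4 gives the FP32 column (e.g. 2.94 GiB at 8K); the quantized cache formats (Q8_0, FP8, Q4_0) add per-block scale overhead, so they do not follow this formula exactly.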

VRAM Requirements

VRAM usage for every combination of weight quantization and KV-cache format. All totals below include a base overhead of 1.5 GB (CUDA context + activations).

| Quantization | Cache Format | Model Weights | 8K Context | 16K Context | 32K Context | 40K Context |
|---|---|---|---|---|---|---|
| FP16 (16.0 bpw) | FP32 | 493.5 GB | 497.94 GB (+2.94 KV) | 500.88 GB (+5.88 KV) | 506.75 GB (+11.75 KV) | 509.69 GB (+14.69 KV) |
| FP16 (16.0 bpw) | FP16 | 493.5 GB | 496.47 GB (+1.47 KV) | 497.94 GB (+2.94 KV) | 500.88 GB (+5.88 KV) | 502.34 GB (+7.34 KV) |
| FP16 (16.0 bpw) | Q8_0 | 493.5 GB | 495.81 GB (+0.81 KV) | 496.62 GB (+1.62 KV) | 498.23 GB (+3.23 KV) | 499.04 GB (+4.04 KV) |
| FP16 (16.0 bpw) | FP8 (Exp) | 493.5 GB | 495.73 GB (+0.73 KV) | 496.47 GB (+1.47 KV) | 497.94 GB (+2.94 KV) | 498.67 GB (+3.67 KV) |
| FP16 (16.0 bpw) | Q4_0 (Exp) | 493.5 GB | 495.44 GB (+0.44 KV) | 495.88 GB (+0.88 KV) | 496.76 GB (+1.76 KV) | 497.2 GB (+2.2 KV) |
| Q8_0 (8.0 bpw) | FP32 | 246.75 GB | 251.19 GB (+2.94 KV) | 254.12 GB (+5.88 KV) | 260.0 GB (+11.75 KV) | 262.94 GB (+14.69 KV) |
| Q8_0 (8.0 bpw) | FP16 | 246.75 GB | 249.72 GB (+1.47 KV) | 251.19 GB (+2.94 KV) | 254.12 GB (+5.88 KV) | 255.59 GB (+7.34 KV) |
| Q8_0 (8.0 bpw) | Q8_0 | 246.75 GB | 249.06 GB (+0.81 KV) | 249.87 GB (+1.62 KV) | 251.48 GB (+3.23 KV) | 252.29 GB (+4.04 KV) |
| Q8_0 (8.0 bpw) | FP8 (Exp) | 246.75 GB | 248.98 GB (+0.73 KV) | 249.72 GB (+1.47 KV) | 251.19 GB (+2.94 KV) | 251.92 GB (+3.67 KV) |
| Q8_0 (8.0 bpw) | Q4_0 (Exp) | 246.75 GB | 248.69 GB (+0.44 KV) | 249.13 GB (+0.88 KV) | 250.01 GB (+1.76 KV) | 250.45 GB (+2.2 KV) |
| Q4_K_M (4.65 bpw) | FP32 | 143.42 GB | 147.86 GB (+2.94 KV) | 150.8 GB (+5.88 KV) | 156.67 GB (+11.75 KV) | 159.61 GB (+14.69 KV) |
| Q4_K_M (4.65 bpw) | FP16 | 143.42 GB | 146.39 GB (+1.47 KV) | 147.86 GB (+2.94 KV) | 150.8 GB (+5.88 KV) | 152.27 GB (+7.34 KV) |
| Q4_K_M (4.65 bpw) | Q8_0 | 143.42 GB | 145.73 GB (+0.81 KV) | 146.54 GB (+1.62 KV) | 148.15 GB (+3.23 KV) | 148.96 GB (+4.04 KV) |
| Q4_K_M (4.65 bpw) | FP8 (Exp) | 143.42 GB | 145.66 GB (+0.73 KV) | 146.39 GB (+1.47 KV) | 147.86 GB (+2.94 KV) | 148.6 GB (+3.67 KV) |
| Q4_K_M (4.65 bpw) | Q4_0 (Exp) | 143.42 GB | 145.36 GB (+0.44 KV) | 145.8 GB (+0.88 KV) | 146.69 GB (+1.76 KV) | 147.13 GB (+2.2 KV) |
| Q4_K_S (4.58 bpw) | FP32 | 141.26 GB | 145.7 GB (+2.94 KV) | 148.64 GB (+5.88 KV) | 154.51 GB (+11.75 KV) | 157.45 GB (+14.69 KV) |
| Q4_K_S (4.58 bpw) | FP16 | 141.26 GB | 144.23 GB (+1.47 KV) | 145.7 GB (+2.94 KV) | 148.64 GB (+5.88 KV) | 150.11 GB (+7.34 KV) |
| Q4_K_S (4.58 bpw) | Q8_0 | 141.26 GB | 143.57 GB (+0.81 KV) | 144.38 GB (+1.62 KV) | 146.0 GB (+3.23 KV) | 146.8 GB (+4.04 KV) |
| Q4_K_S (4.58 bpw) | FP8 (Exp) | 141.26 GB | 143.5 GB (+0.73 KV) | 144.23 GB (+1.47 KV) | 145.7 GB (+2.94 KV) | 146.44 GB (+3.67 KV) |
| Q4_K_S (4.58 bpw) | Q4_0 (Exp) | 141.26 GB | 143.21 GB (+0.44 KV) | 143.65 GB (+0.88 KV) | 144.53 GB (+1.76 KV) | 144.97 GB (+2.2 KV) |
| Q3_K_M (3.91 bpw) | FP32 | 120.6 GB | 125.04 GB (+2.94 KV) | 127.97 GB (+5.88 KV) | 133.85 GB (+11.75 KV) | 136.79 GB (+14.69 KV) |
| Q3_K_M (3.91 bpw) | FP16 | 120.6 GB | 123.57 GB (+1.47 KV) | 125.04 GB (+2.94 KV) | 127.97 GB (+5.88 KV) | 129.44 GB (+7.34 KV) |
| Q3_K_M (3.91 bpw) | Q8_0 | 120.6 GB | 122.91 GB (+0.81 KV) | 123.71 GB (+1.62 KV) | 125.33 GB (+3.23 KV) | 126.14 GB (+4.04 KV) |
| Q3_K_M (3.91 bpw) | FP8 (Exp) | 120.6 GB | 122.83 GB (+0.73 KV) | 123.57 GB (+1.47 KV) | 125.04 GB (+2.94 KV) | 125.77 GB (+3.67 KV) |
| Q3_K_M (3.91 bpw) | Q4_0 (Exp) | 120.6 GB | 122.54 GB (+0.44 KV) | 122.98 GB (+0.88 KV) | 123.86 GB (+1.76 KV) | 124.3 GB (+2.2 KV) |
| Q2_K (2.63 bpw) | FP32 | 81.12 GB | 85.56 GB (+2.94 KV) | 88.49 GB (+5.88 KV) | 94.37 GB (+11.75 KV) | 97.31 GB (+14.69 KV) |
| Q2_K (2.63 bpw) | FP16 | 81.12 GB | 84.09 GB (+1.47 KV) | 85.56 GB (+2.94 KV) | 88.49 GB (+5.88 KV) | 89.96 GB (+7.34 KV) |
| Q2_K (2.63 bpw) | Q8_0 | 81.12 GB | 83.43 GB (+0.81 KV) | 84.23 GB (+1.62 KV) | 85.85 GB (+3.23 KV) | 86.66 GB (+4.04 KV) |
| Q2_K (2.63 bpw) | FP8 (Exp) | 81.12 GB | 83.35 GB (+0.73 KV) | 84.09 GB (+1.47 KV) | 85.56 GB (+2.94 KV) | 86.29 GB (+3.67 KV) |
| Q2_K (2.63 bpw) | Q4_0 (Exp) | 81.12 GB | 83.06 GB (+0.44 KV) | 83.5 GB (+0.88 KV) | 84.38 GB (+1.76 KV) | 84.82 GB (+2.2 KV) |

Total VRAM = Model Weights + KV Cache + 1.5 GB overhead. Actual usage may vary by ±5% depending on the inference engine and its optimizations.
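As a sanity check, any table entry can be reproduced from this formula; a minimal sketch using the Q4_K_M / FP16-cache row at 16K context:

```python
def total_vram_gb(weights_gb, kv_gb, overhead_gb=1.5):
    # Total VRAM = model weights + KV cache + base overhead.
    return weights_gb + kv_gb + overhead_gb

# Q4_K_M weights (143.42 GB) with an FP16 cache at 16K context (+2.94 KV):
print(round(total_vram_gb(143.42, 2.94), 2))  # 147.86, matching the table
```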

Check if your GPU can run Qwen3-235B-A22B

Use our calculator to see if this model fits your specific hardware configuration.