Qwen3-Coder-Next-oQ2

An oQ2 mixed-precision MLX quantization of Qwen3-Coder-Next, produced with oMLX.

  • Quantization: oQ2 (sensitivity-driven, group_size=64)
  • Format: MLX safetensors, loadable with mlx-vlm and mlx-lm

Usage

pip install mlx-vlm
python3 -m mlx_vlm generate --model bearzi/Qwen3-Coder-Next-oQ2 --prompt "Your prompt here" --max-tokens 512
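The weights can also be loaded programmatically. Below is a minimal sketch using mlx-lm's standard load/generate API (pip install mlx-lm); the repository id and prompt are illustrative, and whether mlx-lm or mlx-vlm is the right entry point depends on the model architecture being text-only or multimodal.

from mlx_lm import load, generate

# Fetch the quantized weights and tokenizer from the Hub (or pass a local path).
model, tokenizer = load("bearzi/Qwen3-Coder-Next-oQ2")

prompt = "Write a function that reverses a linked list."
# Chat/instruct models generally expect the chat template to be applied first.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
print(text)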

About oQ

oQ measures per-layer quantization sensitivity through calibration inference and allocates bits where they matter most — critical layers stay at higher precision, tolerant layers compress aggressively. See oMLX docs.
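The general idea can be illustrated with the toy sketch below (plain NumPy, illustrative only; the actual oQ measurement and bit-allocation logic is described in the oMLX docs). Each layer is trial-quantized at a low bit-width, the output error on a small calibration batch is recorded, and the most sensitive layers are promoted to higher precision while the rest stay at the aggressive base setting (here 2-bit, group_size=64). All layer names, shapes, and thresholds below are made up for the example.

import numpy as np

def quantize_dequantize(w, bits, group_size=64):
    # Symmetric per-group quantization followed by dequantization,
    # so the error it introduces can be measured directly.
    flat = w.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0
    q = np.clip(np.round(flat / scale), -qmax - 1, qmax)
    return (q * scale).reshape(w.shape)

def sensitivity(w, calib, bits):
    # Relative output error of a linear layer on calibration activations
    # when its weights are quantized at `bits`.
    w_q = quantize_dequantize(w, bits)
    return np.linalg.norm(calib @ (w - w_q)) / np.linalg.norm(calib @ w)

rng = np.random.default_rng(0)
# Toy "model": four linear layers and a small calibration batch.
layers = {f"layers.{i}.proj": rng.standard_normal((256, 256)).astype(np.float32)
          for i in range(4)}
calib = rng.standard_normal((32, 256)).astype(np.float32)

# Score every layer by how much 2-bit quantization distorts its output.
scores = {name: sensitivity(w, calib, bits=2) for name, w in layers.items()}

# Allocate bits: the most sensitive quarter of layers gets 4 bits,
# everything else keeps the aggressive 2-bit setting.
ranked = sorted(scores, key=scores.get, reverse=True)
bits_per_layer = {name: (4 if ranked.index(name) < len(ranked) // 4 else 2)
                  for name in ranked}
print(bits_per_layer)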
