Error - (json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0))
#11 opened 2 days ago by Paval
test
#10 opened 4 days ago by HOSAMMAHDY
For all blackwell owners, gemma-3-12b-qat-abliterated-sikaworld-fp4-ltx2
1 · #9 opened 8 days ago by Sikaworld1990
Gemma fp8 problematic structure: question regarding the selective "mixed precision" quantization in specific layers (e.g., Layer 46), comfy_quant
#8 opened 9 days ago by Sikaworld1990
Gemma lora
7 · #7 opened 11 days ago by RuneXX
Why squish?
#6 opened about 1 month ago by kabachuha
"Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj"
7 · #4 opened 2 months ago by MrRyukami
XPU not working: "No backend can handle 'dequantize_per_tensor_fp8': eager: x: device xpu not in {'cuda', 'cpu'}"
3 · #3 opened 2 months ago by AI-Joe-git
Create README.md
#2 opened 2 months ago by dayz1593572159
Fp8 text encoder
👍🔥 9 · 5 · #1 opened 2 months ago by kakkkarotto