Quick-start instructions for using Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF",
    filename="Qwen3.5-0.8B/Qwen3.5-0.8B-PRISM-DQ.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16

# Run inference directly in the terminal:
./llama-cli -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Use Docker
```sh
docker model run hf.co/Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
- LM Studio
- Jan
- vLLM
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker

```sh
docker model run hf.co/Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
- Ollama
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with Ollama:
```sh
ollama run hf.co/Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
- Unsloth Studio
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF to start chatting
```
- Pi
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the server to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16" }
      ]
    }
  }
}
```

Run Pi

```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with Docker Model Runner:
```sh
docker model run hf.co/Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
- Lemonade
How to use Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16
```
Run and chat with the model
```sh
lemonade run user.Qwen3.5-PRISM-Dynamic-Quant-GGUF-BF16
```
List all available models
```sh
lemonade list
```
# Qwen3.5 PRISM Dynamic Quantization (GGUF)
PRISM Dynamic Quantization (PRISM-DQ) applies per-tensor-class bit allocation based on structural weight analysis — no calibration data or importance matrices required. Each tensor class (attention keys, FFN gates, SSM components, etc.) receives a quantization type proportional to its measured sensitivity, while staying within a target bits-per-weight budget.
This repo contains PRISM-DQ quantized GGUFs for the full Qwen3.5 vision-language model family (0.8B, 2B, 4B, 9B), plus multimodal projection weights (mmproj) for vision capabilities.
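As a toy illustration of what a per-tensor-class assignment implies for the overall budget, the parameter-weighted average BPW can be computed from a class-to-type map. The class names below follow GGUF conventions and the per-type BPW figures match llama.cpp's quant layouts (Q3_K = 3.4375, IQ4_XS = 4.25, Q4_K = 4.5), but the specific assignments and parameter counts are invented for illustration, not taken from PRISM-DQ:

```python
# Toy illustration: parameter-weighted average bits-per-weight for a
# per-tensor-class quantization map. The assignments and parameter
# counts are made up; only the per-type BPW values are real.

# bits-per-weight of each assigned quant type (llama.cpp layouts)
class_bits = {
    "attn_k": 3.4375,     # Q3_K
    "attn_v": 4.5,        # Q4_K
    "attn_output": 4.25,  # IQ4_XS
    "ffn_down": 3.4375,   # Q3_K
    "ffn_up": 3.4375,     # Q3_K
}

# invented per-class parameter counts (millions)
class_params = {"attn_k": 30, "attn_v": 30, "attn_output": 60,
                "ffn_down": 200, "ffn_up": 200}

def average_bpw(bits, params):
    """Parameter-weighted mean bits per weight across tensor classes."""
    return sum(bits[c] * params[c] for c in params) / sum(params.values())

print(f"{average_bpw(class_bits, class_params):.3f} BPW")
```

The large FFN classes dominate the average, which is why PRISM-DQ can afford a higher-bit type on small, sensitive classes like `attn_v` while staying inside the budget.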
## Benchmark Results
### Perplexity Comparison (UltraChat, 5 chunks, 512 ctx)

| Model | Method | BPW | PPL | Size |
|---|---|---|---|---|
| Qwen3.5-0.8B | Q3_K_M | 4.96 | 12.14 | 470 MB |
| | PRISM-DQ | 4.94 | 11.42 | 468 MB |
| | Q3_K_M (imatrix) | 4.96 | 11.31 | 470 MB |
| | UD-Q3_K_XL | 5.19 | 10.94 | 492 MB |
| | IQ4_XS (imatrix) | 5.20 | 10.35 | 493 MB |
| | UD-Q4_K_XL | 5.89 | 10.07 | 559 MB |
| Qwen3.5-2B | Q3_K_M | 4.69 | 9.35 | 1107 MB |
| | PRISM-DQ | 4.68 | 9.26 | 1104 MB |
| | Q3_K_M (imatrix) | 4.69 | 8.40 | 1107 MB |
| | UD-Q3_K_XL | 4.91 | 8.27 | 1159 MB |
| | IQ4_XS (imatrix) | 4.97 | 8.12 | 1173 MB |
| | UD-Q4_K_XL | 5.68 | 8.07 | 1340 MB |
| Qwen3.5-4B | Q3_K_M | 4.36 | 6.88 | 2293 MB |
| | PRISM-DQ | 4.31 | 6.82 | 2271 MB |
| | Q3_K_M (imatrix) | 4.36 | 6.62 | 2293 MB |
| | UD-Q3_K_XL | 4.63 | 6.66 | 2436 MB |
| | IQ4_XS (imatrix) | 4.70 | 6.51 | 2477 MB |
| | UD-Q4_K_XL | 5.53 | 6.56 | 2912 MB |
| Qwen3.5-9B | Q3_K_M | 4.17 | 6.25 | 4674 MB |
| | PRISM-DQ | 4.15 | 6.18 | 4652 MB |
| | Q3_K_M (imatrix) | 4.17 | 5.96 | 4674 MB |
| | UD-Q3_K_XL | 4.51 | 6.01 | 5054 MB |
| | IQ4_XS (imatrix) | 4.61 | 6.03 | 5169 MB |
| | UD-Q4_K_XL | 5.33 | 5.86 | 5966 MB |
### Key Findings
- PRISM-DQ beats uniform Q3_K_M on all 4 models (1-6% PPL improvement) at same or lower BPW
- Smallest file size at competitive perplexity across the Qwen3.5 family
- No calibration data needed — allocation decisions are purely weight-analysis-based
- When combined with importance matrices, PRISM-DQ+imatrix achieves Pareto-optimal results on 4B and 9B
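The relative improvement in the first bullet follows directly from the perplexity table; the arithmetic can be reproduced with values copied from the table:

```python
# (uniform Q3_K_M PPL, PRISM-DQ PPL) per model, from the benchmark table
ppl = {
    "Qwen3.5-0.8B": (12.14, 11.42),
    "Qwen3.5-2B": (9.35, 9.26),
    "Qwen3.5-4B": (6.88, 6.82),
    "Qwen3.5-9B": (6.25, 6.18),
}

for model, (q3km, prism) in ppl.items():
    drop = 100.0 * (q3km - prism) / q3km  # relative PPL reduction in %
    print(f"{model}: {drop:.1f}% lower PPL at equal-or-lower BPW")
```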
## Model Files

Each subfolder contains the quantized model GGUF plus multimodal projection weights:

```
Qwen3.5-0.8B/
    Qwen3.5-0.8B-PRISM-DQ.gguf  (446 MB)
    mmproj-BF16.gguf
    mmproj-F16.gguf
    mmproj-F32.gguf
    chat_template.jinja
Qwen3.5-2B/
    Qwen3.5-2B-PRISM-DQ.gguf  (1.0 GB)
    mmproj-BF16.gguf
    mmproj-F16.gguf
    mmproj-F32.gguf
    chat_template.jinja
Qwen3.5-4B/
    Qwen3.5-4B-PRISM-DQ.gguf  (2.1 GB)
    mmproj-BF16.gguf
    mmproj-F16.gguf
    mmproj-F32.gguf
    chat_template.jinja
Qwen3.5-9B/
    Qwen3.5-9B-PRISM-DQ.gguf  (4.3 GB)
    mmproj-BF16.gguf
    mmproj-F16.gguf
    mmproj-F32.gguf
    chat_template.jinja
```
## Usage

### Text-only (llama.cpp)

```sh
# Download a model
huggingface-cli download Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF \
  Qwen3.5-9B/Qwen3.5-9B-PRISM-DQ.gguf --local-dir .

# Run with llama-cli
llama-cli -m Qwen3.5-9B/Qwen3.5-9B-PRISM-DQ.gguf \
  -p "You are a helpful assistant." \
  --chat-template-file Qwen3.5-9B/chat_template.jinja \
  -cnv
```
### Vision (multimodal)

```sh
# Download model + mmproj
huggingface-cli download Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF \
  Qwen3.5-9B/Qwen3.5-9B-PRISM-DQ.gguf \
  Qwen3.5-9B/mmproj-BF16.gguf --local-dir .

# Run with llama-mtmd-cli
llama-mtmd-cli -m Qwen3.5-9B/Qwen3.5-9B-PRISM-DQ.gguf \
  --mmproj Qwen3.5-9B/mmproj-BF16.gguf \
  --chat-template-file Qwen3.5-9B/chat_template.jinja \
  -cnv
```
### LM Studio / Ollama
These GGUFs work with any llama.cpp-compatible runtime. Simply point your application at the .gguf file.
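For example, once a local llama.cpp-compatible server (llama-server, LM Studio, etc.) is running, any OpenAI-compatible client can drive the model. A minimal standard-library sketch, assuming a server on http://localhost:8080 (llama-server's default port); the request is only defined here, not sent, since no server may be running:

```python
import json
import urllib.request

# OpenAI-compatible chat payload for a local llama.cpp-style server
payload = {
    "model": "Ex0bit/Qwen3.5-PRISM-Dynamic-Quant-GGUF:BF16",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

def chat(base_url="http://localhost:8080/v1"):
    """POST the chat payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# print(chat())  # uncomment with a local server running
```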
## PRISM-DQ Quantization Recipes

### Qwen3.5-0.8B (target 3.5 BPW)

```sh
llama-quantize \
  --tensor-type "attn_gate=Q3_K" \
  --tensor-type "attn_k=Q3_K" \
  --tensor-type "attn_output=IQ4_XS" \
  --tensor-type "attn_q=Q3_K" \
  --tensor-type "attn_qkv=Q3_K" \
  --tensor-type "attn_v=Q4_K" \
  --tensor-type "ffn_down=Q3_K" \
  --tensor-type "ffn_gate=Q3_K" \
  --tensor-type "ffn_up=Q3_K" \
  --tensor-type "ssm_alpha=Q3_K" \
  --tensor-type "ssm_beta=IQ4_XS" \
  --tensor-type "ssm_out=IQ4_XS" \
  --tensor-type "token_embd=Q3_K" \
  --tensor-type "blk\.(4)\.ssm_beta=Q4_K" \
  --tensor-type "blk\.(18)\.ssm_out=Q4_K" \
  input.gguf output.gguf Q3_K
```
### Qwen3.5-2B (target 3.5 BPW)

```sh
llama-quantize \
  --tensor-type "attn_gate=Q3_K" \
  --tensor-type "attn_k=Q4_K" \
  --tensor-type "attn_output=Q4_K" \
  --tensor-type "attn_q=Q4_K" \
  --tensor-type "attn_qkv=Q3_K" \
  --tensor-type "attn_v=Q4_K" \
  --tensor-type "ffn_down=Q3_K" \
  --tensor-type "ffn_gate=Q3_K" \
  --tensor-type "ffn_up=Q3_K" \
  --tensor-type "ssm_alpha=Q4_K" \
  --tensor-type "ssm_beta=Q4_K" \
  --tensor-type "ssm_out=Q3_K" \
  --tensor-type "token_embd=Q3_K" \
  input.gguf output.gguf Q3_K
```
### Qwen3.5-4B (target 3.5 BPW)

```sh
llama-quantize \
  --tensor-type "attn_gate=Q3_K" \
  --tensor-type "attn_k=Q4_K" \
  --tensor-type "attn_output=Q5_K" \
  --tensor-type "attn_q=Q3_K" \
  --tensor-type "attn_qkv=Q3_K" \
  --tensor-type "attn_v=Q4_K" \
  --tensor-type "ffn_down=Q3_K" \
  --tensor-type "ffn_gate=Q3_K" \
  --tensor-type "ffn_up=Q3_K" \
  --tensor-type "ssm_alpha=Q4_K" \
  --tensor-type "ssm_beta=Q4_K" \
  --tensor-type "ssm_out=Q3_K" \
  --tensor-type "token_embd=Q3_K" \
  input.gguf output.gguf Q3_K
```
### Qwen3.5-9B (target 3.5 BPW)

```sh
llama-quantize \
  --tensor-type "attn_gate=Q3_K" \
  --tensor-type "attn_k=Q4_K" \
  --tensor-type "attn_output=IQ4_XS" \
  --tensor-type "attn_q=Q4_K" \
  --tensor-type "attn_qkv=Q3_K" \
  --tensor-type "attn_v=Q4_K" \
  --tensor-type "ffn_down=Q3_K" \
  --tensor-type "ffn_gate=Q3_K" \
  --tensor-type "ffn_up=Q3_K" \
  --tensor-type "output=Q3_K" \
  --tensor-type "ssm_alpha=Q4_K" \
  --tensor-type "ssm_beta=Q4_K" \
  --tensor-type "ssm_out=Q3_K" \
  --tensor-type "token_embd=Q3_K" \
  input.gguf output.gguf Q3_K
```
## How PRISM-DQ Works

PRISM Dynamic Quantization analyzes each weight tensor using seven structural metrics:
- PL-Alpha-Hill — spectral heavy-tail index via eigenvalue analysis
- Spectral Dominance — top singular value ratio (rank-1 approximation quality)
- OSQE — optimal scale quantization error at multiple bit levels (2, 3, 4, 6 bit)
- Matrix Imbalance — max of row/column coefficient of variation
- Fragility — log-ratio of 2-bit vs 4-bit quantization error
- Boundary Density — fraction of values near quantization bin boundaries
- Spectral Position Prior — bidirectional spectral norm product encoding layer position
These metrics are combined into a composite sensitivity score per tensor class. A Lagrangian allocator then distributes bits across classes to minimize total quantization distortion subject to the BPW budget, with per-block refinement for individual tensor overrides.
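A minimal sketch of that allocation step, using a greedy marginal-gain approximation of the Lagrangian trade-off. The sensitivity scores, parameter counts, and exponential distortion model below are invented placeholders, not PRISM-DQ's actual metrics:

```python
# Greedy approximation of budget-constrained bit allocation: repeatedly
# grant one extra bit to the class with the largest marginal drop in
# (sensitivity-weighted) per-weight distortion, until the parameter-
# weighted average BPW would exceed the budget.

# invented sensitivity scores and parameter counts (millions)
classes = {
    "attn_v":   {"sens": 2.0, "params": 30},
    "attn_k":   {"sens": 0.8, "params": 30},
    "ffn_down": {"sens": 1.2, "params": 200},
    "ffn_up":   {"sens": 0.9, "params": 200},
}
BUDGET_BPW = 3.5
MIN_BITS, MAX_BITS = 2, 6

def distortion(sens, bits):
    # simple model: per-weight quantization error halves per extra bit,
    # scaled by the class's sensitivity score
    return sens * 2.0 ** (-bits)

def allocate(classes, budget):
    bits = {c: MIN_BITS for c in classes}
    total_params = sum(v["params"] for v in classes.values())

    def avg_bpw():
        return sum(bits[c] * classes[c]["params"] for c in classes) / total_params

    while True:
        best, best_gain = None, 0.0
        for c, v in classes.items():
            if bits[c] >= MAX_BITS:
                continue
            # marginal per-weight distortion drop from one extra bit
            gain = distortion(v["sens"], bits[c]) - distortion(v["sens"], bits[c] + 1)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break
        bits[best] += 1
        if avg_bpw() > budget:
            bits[best] -= 1  # granting this bit would exceed the budget
            break
    return bits

print(allocate(classes, BUDGET_BPW))
```

In this toy run the high-sensitivity `attn_v` class ends up with more bits than `attn_k` despite identical size, mirroring how PRISM-DQ assigns Q4_K to values but Q3_K to keys in the recipes above.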
## License
This model is released under the Apache 2.0 license, consistent with the base Qwen3.5 models.