# n24q02m/Qwen3-Reranker-0.6B-GGUF

GGUF-quantized version of [Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) for use with `qwen3-embed` and `llama-cpp-python`.
## Available Variants
| Variant | File | Size | Description |
|---|---|---|---|
| Q4_K_M | `qwen3-reranker-0.6b-q4-k-m.gguf` | 378 MB | 4-bit quantization (recommended) |
## Usage

### qwen3-embed
```bash
pip install "qwen3-embed[gguf]"
```
```python
from qwen3_embed import TextCrossEncoder

reranker = TextCrossEncoder("n24q02m/Qwen3-Reranker-0.6B-GGUF")
scores = list(reranker.rerank("What is AI?", ["AI is...", "Pizza is..."]))
```
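The scores come back in the same order as the input documents, so a typical follow-up is to pair each document with its score and sort best-first. A minimal sketch, with a stub list of scores standing in for the model output:

```python
# Stub values standing in for the output of reranker.rerank(query, documents);
# in real use these come from the model.
documents = ["AI is...", "Pizza is..."]
scores = [0.92, 0.03]

# Pair each document with its score and sort by descending relevance.
ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.2f}  {doc}")
```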
### llama-cpp-python (direct)
```python
from llama_cpp import Llama

model = Llama(
    model_path="qwen3-reranker-0.6b-q4-k-m.gguf",
    n_ctx=40960,
    logits_all=False,
)
# Use the chat template for scoring (see the qwen3-embed source for details)
```
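When driving the model through `llama-cpp-python` directly, each (query, document) pair is formatted with the Qwen3 reranker chat template and scored by comparing the model's likelihood of answering "yes" versus "no". The sketch below shows the general shape of that prompt; the literal template strings are an assumption here, so verify them against the upstream Qwen/Qwen3-Reranker-0.6B model card:

```python
# Prompt layout used by the Qwen3 reranker family (assumed; verify against
# the upstream Qwen/Qwen3-Reranker-0.6B model card before relying on it).
PREFIX = (
    '<|im_start|>system\nJudge whether the Document meets the requirements '
    'based on the Query and the Instruct provided. Note that the answer can '
    'only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
)
SUFFIX = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"


def build_prompt(
    query: str,
    document: str,
    instruction: str = "Given a web search query, retrieve relevant passages that answer the query",
) -> str:
    """Format one (query, document) pair for yes/no scoring."""
    return (
        f"{PREFIX}<Instruct>: {instruction}\n"
        f"<Query>: {query}\n"
        f"<Document>: {document}{SUFFIX}"
    )


prompt = build_prompt("What is AI?", "AI is...")
# Feed `prompt` to the Llama instance above and compare the logits of the
# "yes" and "no" tokens to obtain a relevance score.
```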
## Conversion Details
- Source: Qwen/Qwen3-Reranker-0.6B
- Method: `convert_hf_to_gguf.py` (F16) + `llama-quantize` (Q4_K_M)
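Assuming a local llama.cpp checkout and a downloaded copy of the source model, the two-step conversion would look roughly like this (paths and output filenames are illustrative, not the exact commands used for this repo):

```bash
# Convert the HF checkpoint to an F16 GGUF (script ships with llama.cpp)
python convert_hf_to_gguf.py Qwen3-Reranker-0.6B \
    --outtype f16 --outfile qwen3-reranker-0.6b-f16.gguf

# Quantize the F16 GGUF down to Q4_K_M
./llama-quantize qwen3-reranker-0.6b-f16.gguf \
    qwen3-reranker-0.6b-q4-k-m.gguf Q4_K_M
```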
## Related
- ONNX variants: n24q02m/Qwen3-Reranker-0.6B-ONNX