Qwen3-32B-Thinking-speculator.eagle3

Model Overview

  • Verifier: Qwen3-32B
  • Speculative Decoding Algorithm: EAGLE-3
  • Model Architecture: Eagle3Speculator
  • Release Date: 3/12/2026
  • Version: 1.0
  • Model Developers: Red Hat

This model is a speculator model designed for use with Qwen/Qwen3-32B, based on the EAGLE-3 speculative decoding algorithm. It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Pro-300K-Filtered and the HuggingFaceH4/ultrachat_200k datasets.

This model is optimized for reasoning and chain-of-thought workloads and is meant to be used with thinking enabled. It must be used with the Qwen3-32B chat template, specifically through the /chat/completions endpoint.

Use with vLLM

vllm serve Qwen/Qwen3-32B \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-32B-Thinking-speculator.eagle3",
    "num_speculative_tokens": 5,
    "method": "eagle3"
  }'
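Once the server is up, requests go through the standard OpenAI-compatible chat-completions route; the speculator runs entirely server-side, so the client payload is an ordinary chat-completions request. A minimal stdlib-only sketch (the host, port, and prompt are illustrative assumptions):

```python
import json
import urllib.request

# Ordinary chat-completions payload; speculative decoding needs no client-side flags.
payload = {
    "model": "Qwen/Qwen3-32B",
    "messages": [{"role": "user", "content": "Explain speculative decoding briefly."}],
    "temperature": 0.0,
}

def send(base_url: str = "http://localhost:8000/v1") -> dict:
    """POST the payload to the server's /chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# send()  # uncomment with a running vLLM server
```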

Evaluations

Model / run: Qwen3-32B-Thinking-speculator.eagle3

vLLM: 0.17.0rc1.dev122+g26bd43b52.d20260306.precompiled

Training data: Magpie + UltraChat; responses from the Qwen/Qwen3-32B model (with reasoning enabled).

Use cases

Use Case             Dataset          Number of Samples
Coding               HumanEval        164
Math Reasoning       math_reasoning   80
Question Answering   qa               80
MT-Bench             question         80
RAG                  rag              80
Summarization        summarization    80
Translation          translation      80
Writing              writing          80

Acceptance lengths (mean tokens accepted per verifier step, by draft length k)

Dataset k=1 k=2 k=3 k=4 k=5
HumanEval 1.79 2.37 2.79 3.10 3.31
math_reasoning 1.85 2.56 3.12 3.59 3.92
qa 1.77 2.34 2.79 3.02 3.29
question 1.80 2.42 2.84 3.16 3.32
rag 1.76 2.31 2.69 2.98 3.14
summarization 1.70 2.16 2.45 2.63 2.73
translation 1.74 2.26 2.62 2.84 2.98
writing 1.80 2.41 2.85 3.15 3.38
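As a rough way to read these numbers: the mean acceptance length is the average number of tokens emitted per verifier forward pass, so it upper-bounds the step-level speedup over plain autoregressive decoding (which emits exactly one token per pass), ignoring the draft model's own cost. A small sketch using the HumanEval row from the table above:

```python
# Mean acceptance lengths for HumanEval at draft lengths k=1..5 (from the table above).
humaneval = {1: 1.79, 2: 2.37, 3: 2.79, 4: 3.10, 5: 3.31}

def step_speedup_bound(acceptance_length: float) -> float:
    """Tokens per verifier pass vs. exactly 1 for plain decoding; ignores draft cost."""
    return acceptance_length / 1.0

for k, al in humaneval.items():
    print(f"k={k}: at most {step_speedup_bound(al):.2f}x fewer verifier passes")
```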

Details

Configuration

  • Model: Qwen3-32B-Thinking-speculator.eagle3
  • Data: Magpie + UltraChat; responses from the Qwen/Qwen3-32B model (reasoning enabled)
  • temperature: 0.0
  • vLLM version: 0.17.0rc1.dev122+g26bd43b52.d20260306.precompiled
  • backend: vLLM chat_completions
  • rate-type: throughput
  • max-seconds per run: 300
  • hardware: 8× GPU (tensor parallel 8)
  • Benchmark data: RedHatAI/speculator_benchmarks
  • vLLM serve: --no-enable-prefix-caching, --max-num-seqs 64, --enforce-eager
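Putting the configuration bullets together, the benchmark server launch would look roughly as follows (assembled here in Python for clarity; the exact ordering and long-form flag spellings are assumptions based on the bullets above):

```python
import json

# Speculative-decoding config passed to vLLM via --speculative-config.
spec_config = {
    "model": "RedHatAI/Qwen3-32B-Thinking-speculator.eagle3",
    "num_speculative_tokens": 5,
    "method": "eagle3",
}

# Flags from the Configuration section (tensor parallel 8, prefix caching off, eager mode).
serve_cmd = [
    "vllm", "serve", "Qwen/Qwen3-32B",
    "--tensor-parallel-size", "8",
    "--no-enable-prefix-caching",
    "--max-num-seqs", "64",
    "--enforce-eager",
    "--speculative-config", json.dumps(spec_config),
]

print(" ".join(serve_cmd))
```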

Command

GUIDELLM__PREFERRED_ROUTE="chat_completions" \
GUIDELLM__MAX_CONCURRENCY=128 \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type throughput \
  --max-seconds 300 \
  --backend-args '{"extra_body": {"chat_completions": {"temperature": 0.0}}}'
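The command above benchmarks a single dataset file; the other rows in the results table swap in a different data_files value. A small sketch that generates the per-dataset --data-args, assuming each dataset is stored as <name>.jsonl in the benchmark repo, as HumanEval is:

```python
import json

# Dataset names from the use-case table above.
datasets = ["HumanEval", "math_reasoning", "qa", "question",
            "rag", "summarization", "translation", "writing"]

def data_args(name: str) -> str:
    """Build the --data-args JSON for one benchmark dataset file."""
    return json.dumps({"data_files": f"{name}.jsonl"})

for name in datasets:
    print(f"--data-args '{data_args(name)}'")
```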
Safetensors

  • Model size: 2B params
  • Tensor types: I64, BF16, BOOL