Instructions to use fangwu97/DeepSearch-1.5B with libraries, inference providers, notebooks, and local apps.

Transformers

How to use fangwu97/DeepSearch-1.5B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="fangwu97/DeepSearch-1.5B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("fangwu97/DeepSearch-1.5B")
model = AutoModelForCausalLM.from_pretrained("fangwu97/DeepSearch-1.5B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

Local Apps
vLLM

How to use fangwu97/DeepSearch-1.5B with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "fangwu97/DeepSearch-1.5B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "fangwu97/DeepSearch-1.5B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Or use Docker:

```shell
docker model run hf.co/fangwu97/DeepSearch-1.5B
```
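The same chat-completions request can be issued from Python instead of curl. The sketch below only builds and prints the request body; the commented-out send assumes a vLLM server is running on the default port 8000 shown above (the helper name `build_chat_request` is illustrative, not part of any library):

```python
import json

def build_chat_request(model: str, user_content: str) -> dict:
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

body = build_chat_request("fangwu97/DeepSearch-1.5B", "What is the capital of France?")
print(json.dumps(body, indent=2))

# To actually send it once the server is up:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(body).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.load(urllib.request.urlopen(req))
# print(reply["choices"][0]["message"]["content"])
```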
SGLang

How to use fangwu97/DeepSearch-1.5B with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "fangwu97/DeepSearch-1.5B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "fangwu97/DeepSearch-1.5B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Or use the Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "fangwu97/DeepSearch-1.5B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "fangwu97/DeepSearch-1.5B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Docker Model Runner

How to use fangwu97/DeepSearch-1.5B with Docker Model Runner:

```shell
docker model run hf.co/fangwu97/DeepSearch-1.5B
```
DeepSearch-1.5B 🌟 is a 1.5B-parameter reasoning model trained with Reinforcement Learning with Verifiable Rewards (RLVR), enhanced by Monte Carlo Tree Search (MCTS).
Unlike prior approaches that restrict structured search to inference, DeepSearch integrates MCTS into training, enabling systematic exploration, fine-grained credit assignment, and efficient replay buffering.
This model achieves state-of-the-art accuracy among 1.5B reasoning models while being 5.7× more compute-efficient than extended RL training baselines.
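The systematic exploration MCTS provides is typically driven by a UCT-style selection rule that balances a node's mean reward against an exploration bonus. The toy sketch below illustrates that trade-off only; the selection rule, constant, and data layout are generic assumptions, not taken from the DeepSearch implementation:

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.41):
    """Upper Confidence bound for Trees: mean value plus an
    exploration bonus that shrinks as a node is visited more."""
    if visits == 0:
        return float("inf")  # unvisited children are always tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """Pick the action with the highest UCT score.
    `children` maps an action to a (total_value, visits) pair."""
    parent_visits = sum(v for _, v in children.values())
    return max(children, key=lambda a: uct_score(*children[a], parent_visits))

# Toy tree: "b" has the best mean reward, but "c" is still unexplored.
children = {"a": (1.0, 4), "b": (3.0, 4), "c": (0.0, 0)}
print(select_child(children))  # -> c  (infinite bonus for unvisited nodes)
```

Once every child has been visited, the rule shifts toward exploiting the highest-mean branch while still occasionally revisiting under-sampled ones.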
Model Details
- Developed by: Fang Wu*, Weihao Xuan*, Heli Qi*, Ximing Lu, Aaron Tu, Li Erran Li, Yejin Choi
- Institutional affiliations: Stanford University, University of Tokyo, RIKEN AIP, University of Washington, UC Berkeley, Amazon AWS, Columbia University
- Paper: DeepSearch: Overcome the Bottleneck of Reinforcement Learning with Verifiable Rewards via Monte Carlo Tree Search
- Code: GitHub
- Base Model: Nemotron-Research-Reasoning-Qwen-1.5B v2
- Parameters: 1.5B
- Framework: veRL
- License: Apache-2.0
Quickstart
Environment
```shell
pip install vllm          # vllm>=v0.8.5.post1 should work
pip install transformers  # transformers>=4.52.4 should work
```
Using vLLM to generate
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

def convert_question_to_messages(question: str):
    # Append the standard instruction so the model emits a \boxed{} final answer.
    messages = [
        {"role": "user",
         "content": question + " Let's think step by step and output the final answer within \\boxed{}."}
    ]
    return messages

model_id = "fangwu97/DeepSearch-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    max_tokens=32768
)
model = LLM(
    model=model_id,
    tensor_parallel_size=1
)

prompt = tokenizer.apply_chat_template(
    convert_question_to_messages(
        "Find the sum of all integer bases $b>9$ for which $17_{b}$ is a divisor of $97_{b}$."
    ),
    add_generation_prompt=True,
    tokenize=False
)

outputs = model.generate({"prompt": prompt}, sampling_params=sampling_params, use_tqdm=False)
response = outputs[0].outputs[0].text
print(response)
```
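Because the prompt instructs the model to place its final answer inside `\boxed{}`, a small parser is useful for checking the generated response. This brace-balancing helper is an illustrative addition (not part of the official evaluation code):

```python
def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in `text`,
    balancing braces so nested groups like \\boxed{\\frac{1}{2}} work."""
    start = text.rfind(r"\boxed{")
    if start == -1:
        return None
    depth = 0
    for i in range(start + len(r"\boxed{") - 1, len(text)):
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
            if depth == 0:
                return text[start + len(r"\boxed{"):i]
    return None  # unbalanced braces

print(extract_boxed(r"... so the sum of all such bases is \boxed{70}."))  # -> 70
```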
Performance
| Benchmark | Nemotron-RR-Qwen-1.5B v2 | DeepSearch-1.5B |
|---|---|---|
| AIME 2024 | 51.77 | 53.65 |
| AIME 2025 | 32.92 | 35.42 |
| AMC 2023 | 88.83 | 90.39 |
| MATH500 | 92.24 | 92.53 |
| Minerva | 39.75 | 40.00 |
| Olympiad | 64.69 | 65.72 |
| Average | 61.70 | 62.95 |
DeepSearch improves average accuracy by +1.25 points over the best prior 1.5B model while using 5.7× fewer GPU hours than extended RL training.
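The averages in the table are plain means of the six per-benchmark scores and can be checked directly:

```python
# Per-benchmark scores copied from the table above.
baseline = [51.77, 32.92, 88.83, 92.24, 39.75, 64.69]    # Nemotron-RR-Qwen-1.5B v2
deepsearch = [53.65, 35.42, 90.39, 92.53, 40.00, 65.72]  # DeepSearch-1.5B

avg_base = round(sum(baseline) / len(baseline), 2)
avg_ds = round(sum(deepsearch) / len(deepsearch), 2)
print(avg_base, avg_ds, round(avg_ds - avg_base, 2))  # -> 61.7 62.95 1.25
```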
Training
- Dataset: DeepMath-103K (rigorously decontaminated)
- Training steps: 100
- Search strategy:
- Global Frontier Selection
- Entropy-based guidance
- Replay buffer with solution caching
- Hardware: 16× NVIDIA H100 (96GB)
- Compute: ~330 GPU hours
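The replay-buffer-with-solution-caching idea can be sketched as remembering verified solutions so that already-solved problems skip the expensive tree search on later passes. Everything below (class and function names, structure) is an illustrative assumption, not the actual veRL/DeepSearch implementation:

```python
class SolutionCache:
    """Toy replay buffer: remember the first verified solution per problem."""
    def __init__(self):
        self._cache = {}

    def get(self, problem):
        return self._cache.get(problem)

    def add(self, problem, solution, verified):
        # Only verified solutions are worth replaying.
        if verified and problem not in self._cache:
            self._cache[problem] = solution

def solve(problem, cache, search_fn):
    """Reuse a cached verified solution; otherwise run the (expensive) search."""
    hit = cache.get(problem)
    if hit is not None:
        return hit, "cache"
    solution = search_fn(problem)
    cache.add(problem, solution, verified=True)
    return solution, "search"

cache = SolutionCache()
print(solve("2+2", cache, lambda p: "4"))  # -> ('4', 'search')
print(solve("2+2", cache, lambda p: "4"))  # -> ('4', 'cache')
```

The cache hit on the second call is what saves compute: the search function is never invoked again for a problem that already has a verified solution.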
Ethical Considerations
- Positive: Reduces training costs and carbon footprint.
- Risks: Systematic exploration methods could be adapted to sensitive domains (e.g., code synthesis).
- Transparency: Full implementation and training details are released for reproducibility.
Citation
@misc{wu2025deepsearch,
title = {DeepSearch: Overcome the Bottleneck of Reinforcement Learning with Verifiable Rewards via Monte Carlo Tree Search},
author = {Wu, Fang and Xuan, Weihao and Qi, Heli and Lu, Ximing and Tu, Aaron and Li, Li Erran and Choi, Yejin},
year = {2025},
eprint = {2509.25454},
archivePrefix = {arXiv},
primaryClass = {cs.AI},
doi = {10.48550/arXiv.2509.25454},
}
Evaluation results (self-reported)
- avg@32 on AIME 2024: 53.65
- avg@32 on AIME 2025: 35.42
- avg@32 on AMC 2023: 90.39
- avg@32 on MATH500: 92.53
- avg@32 on Minerva: 40.00
- avg@32 on Olympiad: 65.72
