---
library_name: transformers
pipeline_tag: robotics
base_model: physical-intelligence/pi0fast_base
tags:
  - vision-language-action
  - chain-of-thought
  - embodied-ai
---

# DeepThinkVLA: Enhancing Reasoning Capability of Vision-Language-Action Models

DeepThinkVLA is a Vision-Language-Action (VLA) model designed to enhance the reasoning capabilities of robotic agents through explicit deliberation. It refactors the policy into a 2.9B-parameter hybrid decoder that generates a reasoning trace (Chain-of-Thought) before emitting action chunks.

## Model Description

DeepThinkVLA addresses the challenges of integrating Chain-of-Thought (CoT) into VLA models by satisfying two key conditions:

1. **Decoding alignment:** a hybrid-attention decoder pairs causal attention for linguistic reasoning tokens with bidirectional attention for parallel action decoding.
2. **Causal alignment:** the model is trained via a two-stage SFT-then-RL pipeline (using GRPO) to ensure the reasoning chain is causally linked to task success.
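The hybrid-attention idea can be illustrated with a minimal mask sketch. This is an assumption-laden illustration, not the model's actual implementation: reasoning tokens attend causally to earlier reasoning tokens, while action-chunk tokens attend bidirectionally among themselves and to the full reasoning prefix.

```python
import numpy as np

def hybrid_attention_mask(n_reason: int, n_action: int) -> np.ndarray:
    """Illustrative boolean attention mask (True = may attend).

    Hypothetical sketch: causal attention over the reasoning prefix,
    bidirectional attention within the action chunk.
    """
    n = n_reason + n_action
    mask = np.zeros((n, n), dtype=bool)
    # Causal (lower-triangular) attention among reasoning tokens.
    mask[:n_reason, :n_reason] = np.tril(
        np.ones((n_reason, n_reason), dtype=bool)
    )
    # Action tokens condition on the entire reasoning trace...
    mask[n_reason:, :n_reason] = True
    # ...and attend bidirectionally to each other, enabling
    # parallel decoding of the whole action chunk.
    mask[n_reason:, n_reason:] = True
    return mask

mask = hybrid_attention_mask(n_reason=3, n_action=2)
```

Reasoning tokens never see future action tokens, so the language part of the sequence stays autoregressive while the action chunk decodes in one parallel step.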

The model is initialized from the pi0-FAST checkpoint and demonstrates significant performance gains on robotic manipulation benchmarks.
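The RL stage's group-relative objective can be sketched in a few lines. This is a generic illustration of GRPO-style advantage normalization under assumed conventions (binary task-success rewards per group of sampled rollouts), not the repository's training code.

```python
import numpy as np

def grpo_advantages(rewards, eps: float = 1e-8) -> np.ndarray:
    """Group-relative advantages: each rollout's reward is normalized
    against the mean and std of its sampling group, so reasoning traces
    that lead to above-average task success are reinforced."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Hypothetical example: four rollouts of one task prompt,
# reward = binary task success.
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Because advantages are computed relative to the group, no learned value function is needed; successful rollouts get positive weight and failed ones negative weight in the policy-gradient update.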

## Performance

- **LIBERO:** 97.0% average success rate.
- **LIBERO-Plus:** 79.0% zero-shot robustness under distribution shifts.
- **RoboTwin 2.0:** 59.3% success rate, exceeding prior VLA baselines by significant margins.

## Citation

If you find this work helpful, please consider citing:

```bibtex
@article{yin2025deepthinkvla,
  title={DeepThinkVLA: Enhancing Reasoning Capability of Vision-Language-Action Models},
  author={Yin, Cheng and Lin, Yankai and Xu, Wang and Tam, Sikyuen and Zeng, Xiangrui and Liu, Zhiyuan and Yin, Zhouping},
  journal={arXiv preprint arXiv:2511.15669},
  year={2025}
}
```