Pi0-fast official implementation trained on VLABench datasets.

This repository provides the official release of the Pi0-fast model trained on the full set of VLABench's official primitive tasks. Note that this config corresponds to the relative action chunk setting.

Evaluation

To run this checkpoint, clone this repo: https://github.com/Shiduo-zh/openpi, and check out the main branch. Assuming you have downloaded this checkpoint and placed it in the checkpoints directory, start the policy server with:

bash vla_bench_scipts/serve_policy.sh pifast_ft_vlabench_primitive checkpoints/VLABench/pi0-fast-primitive-10task/29999/

After serving the policy, open another terminal and run:

bash vla_bench_scipts/multi_run_vlabench.sh <path to store the evaluation results>
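
For a quick sanity check, you can also query the served policy directly from Python with openpi's websocket client instead of running the full benchmark script. The sketch below is an illustration only: it assumes the server listens on the default localhost:8000, and the observation keys and shapes are placeholders; substitute whatever the pifast_ft_vlabench_primitive config actually expects.

```python
# Minimal sketch of querying the policy server directly (not the official
# evaluation path). Assumes the default openpi websocket client and that the
# server listens on localhost:8000. Observation keys/shapes are placeholders.
import numpy as np
from openpi_client import websocket_client_policy

client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)

example = {
    "observation/image": np.zeros((224, 224, 3), dtype=np.uint8),  # placeholder camera frame
    "observation/state": np.zeros((8,), dtype=np.float32),         # placeholder robot state
    "prompt": "select the apple and place it on the plate",        # example language instruction
}

action_chunk = client.infer(example)["actions"]  # predicted (relative) action chunk
print(action_chunk.shape)
```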

Train

To reproduce the training results, run the training script with the config pifast_ft_vlabench_primitive:

XLA_PYTHON_CLIENT_MEM_FRACTION=0.95 uv run scripts/train.py pifast_ft_vlabench_primitive --exp-name=pi0_ft_vlabench_primitive --overwrite

Our checkpoint was trained on 8 H100 GPUs for 30k iterations, using 5,000 episodes across 10 tasks.
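
If you prefer to run inference from the trained checkpoint directly instead of going through the policy server, openpi also exposes a local policy API. The following is a minimal sketch under the same assumptions as the client example above (placeholder observation keys/shapes; checkpoint path taken from the Evaluation section):

```python
# Minimal sketch of loading the checkpoint locally via openpi's policy_config
# API, as an alternative to the server + client setup. Observation keys and
# shapes are placeholders; use the keys expected by the config.
import numpy as np
from openpi.policies import policy_config
from openpi.training import config as _config

cfg = _config.get_config("pifast_ft_vlabench_primitive")
policy = policy_config.create_trained_policy(
    cfg, "checkpoints/VLABench/pi0-fast-primitive-10task/29999/"
)

example = {
    "observation/image": np.zeros((224, 224, 3), dtype=np.uint8),  # placeholder camera frame
    "observation/state": np.zeros((8,), dtype=np.float32),         # placeholder robot state
    "prompt": "select the apple and place it on the plate",
}
action_chunk = policy.infer(example)["actions"]
```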

Reference Results

The reference success rates of this model are:

| Track | add_condiment | insert_flower | select_book | select_chemistry_tube | select_drink | select_fruit | select_mahjong | select_painting | select_poker | select_toy | Avg_SR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| track_1_in_distribution | 0.42 | 0.04 | 0.28 | 0.16 | 0.082 | 0.38 | 0.24 | 0.48 | 0.58 | 0.24 | 0.291 |
| track_2_cross_category | 0.04 | ? | 0.184 | 0.08 | 0.12 | 0.32 | 0.10 | 0.46 | ? | 0.14 | 0.181 |
| track_3_common_sense | 0.32 | ? | 0.28 | 0.24 | 0.1 | 0.32 | 0.02 | 0.36 | ? | 0.14 | 0.211 |
| track_4_semantic_instruction | 0.24 | ? | 0.17 | 0.14 | 0.12 | 0.32 | 0.12 | 0.44 | ? | 0.1 | 0.199 |
| track_6_unseen_texture | 0.42 | ? | 0.34 | 0.1 | 0.1 | 0.26 | 0.18 | 0.38 | ? | 0.12 | 0.236 |