
# đŸŽ¯ SPAR-Bench-Tiny-RGBD

> A lightweight RGBD version of SPAR-Bench for **fast evaluation** of 3D-aware spatial reasoning in vision-language models (VLMs).

**SPAR-Bench-Tiny-RGBD** is a subset of [SPAR-Bench-RGBD](https://huggingface.co/datasets/jasonzhango/SPAR-Bench-RGBD) containing **1,000 QA samples** (50 per task × 20 tasks), each augmented with depth maps, camera intrinsics, and camera poses. It is well suited to quick evaluation of 3D-aware models while keeping the same structure as the full benchmark.

## đŸ“Ĩ Load with `datasets`

```python
from datasets import load_dataset

spar_rgbd = load_dataset("jasonzhango/SPAR-Bench-RGBD")
```

## đŸ•šī¸ Evaluation

SPAR-Bench-Tiny-RGBD uses the **same evaluation protocol and metrics** as the full [SPAR-Bench](https://huggingface.co/datasets/jasonzhango/SPAR-Bench). We provide an **evaluation pipeline** in our [GitHub repository](https://github.com/hutchinsonian/spar), built on top of [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).

## 📚 Bibtex

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Zhou, Yanpeng and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:2503.22976},
}
```
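## 🔍 Sanity check

After downloading, the stated 50-samples-per-task × 20-tasks balance can be verified with a small sketch like the one below. The `task` field name and the `test` split name are assumptions here, not part of the documented schema; inspect `sample.keys()` and `spar_tiny.keys()` for the real names. The `load_dataset` call needs network access, so it is shown commented out.

```python
from collections import Counter

def task_counts(samples):
    """Count QA samples per task; assumes each sample carries a 'task' field."""
    return Counter(s["task"] for s in samples)

# With the `datasets` library installed and network access available:
# from datasets import load_dataset
# spar_tiny = load_dataset("jasonzhango/SPAR-Bench-RGBD")
# counts = task_counts(spar_tiny["test"])  # split name is an assumption
# print(counts)  # expect 50 samples for each of the 20 tasks
```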