From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D
Paper: arXiv:2503.22976
Each sample contains the following fields:

| Field | Type | Notes |
|---|---|---|
| id | int64 | 0–999 |
| img_type | string | 2 classes |
| format_type | string | 2 classes |
| task | string | 20 classes |
| source | string | 1 class (scannet) |
| image | list of images | 1–3 views per sample |
| depth | list of arrays | 1–3 depth maps, aligned to the views |
| pose | list of arrays | 1–3 camera poses |
| intrinsic_color | list of arrays | 1–3 color-camera intrinsic matrices (4×4) |
| intrinsic_depth | list of arrays | 1–3 depth-camera intrinsic matrices (4×4) |
| question | string | 182–1,360 characters |
| answer | string | 103 classes |
An example row (img_type single_view, format_type fill, task depth_prediction_oc, source scannet) asks: "If the center of the chair (red point) is 3.7 meters deep, what is the depth of chair (blue point)?" with answer 3.5. Its color-camera intrinsics have fx = fy ≈ 1178.35, cx = 647.75, cy = 483.75, and its depth-camera intrinsics have fx ≈ fy ≈ 581.90, cx = 319.5, cy = 239.5. Depth maps hold per-pixel values in millimeters (the ScanNet convention), and poses are 4×4 matrices.
A lightweight RGBD version of SPAR-Bench for fast evaluation of 3D-aware spatial reasoning in vision-language models (VLMs).
SPAR-Bench-Tiny-RGBD is a subset of SPAR-Bench-RGBD containing 1,000 QA samples (50 per task × 20 tasks), each augmented with depth maps, camera intrinsics, and camera poses.
This dataset is well suited to quick evaluation of 3D-aware models while retaining the same structure as the full benchmark.
Load it with the `datasets` library:

```python
from datasets import load_dataset

spar_tiny_rgbd = load_dataset("jasonzhango/SPAR-Bench-Tiny-RGBD")
```
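A quick sanity check of the download: a minimal sketch that avoids guessing the split name and verifies the advertised 50-per-task layout (field names follow the schema above):

```python
from collections import Counter

# Don't guess the split name; take whichever split the repo publishes.
split = next(iter(spar_tiny_rgbd))
ds = spar_tiny_rgbd[split]

sample = ds[0]
print(sample["task"], sample["source"])   # e.g. depth_prediction_oc scannet
print(sample["question"])                 # natural-language spatial query
print(sample["answer"])                   # ground-truth answer string
print(len(sample["image"]))               # 1-3 views per sample

# Verify the advertised layout: 1,000 samples, 50 per task across 20 tasks.
counts = Counter(ds["task"])
print(len(ds), len(counts))               # expected: 1000 20
```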
SPAR-Bench-Tiny-RGBD uses the same evaluation protocol and metrics as the full SPAR-Bench.
We provide an evaluation pipeline, built on top of lmms-eval, in our GitHub repository.
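For intuition only, here is a hedged sketch of how a numeric depth answer like the examples above might be scored with a relative-error threshold. This is an illustrative stand-in, not the official SPAR-Bench metric (see the lmms-eval pipeline for that):

```python
def relative_accuracy(pred: float, gt: float, threshold: float = 0.1) -> bool:
    """Illustrative scorer (an assumption, not the official SPAR-Bench metric):
    a prediction counts as correct when its relative error is within `threshold`."""
    if gt == 0:
        return pred == 0.0
    return abs(pred - gt) / abs(gt) <= threshold

# Predicted 3.5 m against ground truth 3.7 m: relative error ~5.4%, within 10%.
print(relative_accuracy(3.5, 3.7))  # True
```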
If you find this project or dataset helpful, please consider citing our paper:
```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Zhou, Yanpeng and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:2503.22976},
}
```