# Wiki-CoE

Wiki-CoE is a multi-hop visual QA dataset built from Wikipedia article screenshots for the Chain-of-Evidence (CoE) framework. Each example contains a question, a gold answer, an evidence chain of (image, bounding box, sub-query) hops, and a bag of candidate screenshots at their original resolution.
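To make the per-example structure concrete, here is a minimal sketch of how a record might be represented in Python. All field names below are hypothetical illustrations of the structure described above (question, answer, evidence chain, candidate pool); inspect the JSONL files for the actual schema.

```python
import json
from dataclasses import dataclass

# NOTE: every key name below is an assumption for illustration only;
# check train.jsonl for the real schema.

@dataclass
class EvidenceHop:
    image: str        # screenshot filename for this hop
    bbox: list        # assumed [x0, y0, x1, y1] region on the page
    sub_query: str    # sub-question this hop answers

@dataclass
class CoEExample:
    question: str
    answer: str
    evidence_chain: list   # list of EvidenceHop, in hop order
    candidates: list       # candidate screenshot filenames (the "bag")

def parse_example(line: str) -> CoEExample:
    """Parse one JSONL line into a CoEExample (hypothetical key names)."""
    d = json.loads(line)
    hops = [EvidenceHop(**h) for h in d["evidence_chain"]]
    return CoEExample(d["question"], d["answer"], hops, d["candidates"])
```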
## Contents

The dataset is distributed as a single zstd-compressed tarball (`wiki_coe_full.tar.zst`, ~116 GB), split into three parts to stay under the Hugging Face 50 GB per-file limit:

| File | Size |
|---|---|
| `wiki_coe_full.tar.zst.part00` | 40 GB |
| `wiki_coe_full.tar.zst.part01` | 40 GB |
| `wiki_coe_full.tar.zst.part02` | 36 GB |
| `wiki_coe_full.md5` | MD5 of the reassembled tarball |
After extraction you get:

```
wiki_coe/
├── screenshots/         # 151,988 PNG screenshots (~127 GB raw)
├── bbox_annotations/    # per-page bbox JSONs
├── train.jsonl          # CoE training samples
├── val.jsonl            # validation samples
└── test.jsonl           # test samples
```
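The JSONL splits can be read lazily, one example per line, with the standard library alone (a sketch; the path assumes the extracted layout above):

```python
import json

def iter_examples(path: str):
    """Yield one decoded example dict per non-empty JSONL line, lazily,
    so the full split never has to fit in memory at once."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Usage:
# for ex in iter_examples("wiki_coe/train.jsonl"):
#     ...
```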
## Reassemble & extract

```bash
# 1. Download all three parts (and the md5 file)

# 2. Concatenate them in order:
cat wiki_coe_full.tar.zst.part* > wiki_coe_full.tar.zst

# 3. (Optional) verify integrity:
md5sum -c wiki_coe_full.md5

# 4. Extract (requires zstd):
tar -I 'zstd -d' -xf wiki_coe_full.tar.zst
```
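If you want to check integrity without first writing the reassembled ~116 GB tarball to disk, the concatenation and checksum steps can be combined into one streaming pass with the Python standard library (a sketch; filenames as in the table above):

```python
import glob
import hashlib

def md5_of_parts(pattern: str = "wiki_coe_full.tar.zst.part*") -> str:
    """Compute the MD5 of the reassembled tarball by streaming the parts
    in lexical order (part00, part01, part02), without materializing the
    concatenated file on disk."""
    h = hashlib.md5()
    for part in sorted(glob.glob(pattern)):
        with open(part, "rb") as f:
            # Read in 1 MiB chunks to keep memory usage flat.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
    return h.hexdigest()

# Compare against the digest stored in wiki_coe_full.md5, e.g.:
# expected = open("wiki_coe_full.md5").read().split()[0]
# assert md5_of_parts() == expected
```

Extraction still needs the concatenated stream; with GNU tar you can pipe it directly (`cat wiki_coe_full.tar.zst.part* | tar -I 'zstd -d' -xf -`) instead of writing the intermediate file.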
## Citation

If you use this dataset, please cite the Chain-of-Evidence paper.