
RealPDEBench

RealPDEBench is a benchmark of paired real-world measurements and matched numerical simulations for complex physical systems. It is designed for spatiotemporal forecasting and sim-to-real transfer evaluation on real data.

This Hub repository (AI4Science-WestlakeU/RealPDEBench) is the release repo for RealPDEBench. The full dataset upload is planned alongside the paper posting (mid‑Jan 2026, arXiv). If you are reading this before the release, you may only see a small sample and metadata while the upload is in progress.

What makes RealPDEBench different?

  • Paired real + simulated data: each scenario provides experimental measurements and corresponding CFD/LES simulations.
  • Real-world evaluation: models are evaluated on real trajectories to quantify the sim-to-real gap.
  • Multi-modal mismatch: simulations include additional unmeasured modalities (e.g., pressure, species fields), enabling modality-masking and transfer strategies.

Data sources (high level)

  • Fluid systems (cylinder, controlled_cylinder, fsi, foil):
    • Real: Particle Image Velocimetry (PIV) in a circulating water tunnel
    • Sim: CFD (2D finite-volume + immersed-boundary; 3D GPU solvers depending on scenario)
  • Combustion (combustion):
    • Real: OH* chemiluminescence imaging (high-speed)
    • Sim: Large Eddy Simulation (LES) with detailed chemistry (NH₃/CH₄/air co-firing)

Scenarios (5)

| Scenario | Real data (measured) | Numerical data (simulated) | Frames / trajectory | Spatial grid (after sub-sampling) | HDF5 trajectories (real / numerical) |
|---|---|---|---|---|---|
| cylinder | velocity (u, v) | (u, v, p) | 3990 | 64×128 | 92 / 92 |
| controlled_cylinder | (u, v) | (u, v, p) (+ control params in filenames) | 3990 | 64×128 | 96 / 96 |
| fsi | (u, v) | (u, v, p) | 2173 | 64×64 | 51 / 51 |
| foil | (u, v) | (u, v, p) | 3990 | 64×128 | 98 / 99 |
| combustion | OH* chemiluminescence intensity (1 channel) | intensity surrogate (1) + 15 simulated fields | 2001 | 128×128 | 30 / 30 |

Total trajectories (HDF5 files): ~735 (≈367 real + ≈368 numerical).

Physical parameter ranges (real experiments)

| Scenario | Key parameters (real) |
|---|---|
| cylinder | Reynolds number (Re): 1800–12000 |
| controlled_cylinder | Re: 1781–9843; control frequency f: 0.5–1.4 Hz |
| fsi | Re: 3272–9068; mass ratio m*: 18.2–20.8 |
| foil | angle of attack α: 0°–20°; Re: 2968–17031 |
| combustion | CH₄ ratio: 20–100%; equivalence ratio φ: 0.75–1.3 |

Data format on the Hub (HF Arrow via save_to_disk)

Each split is stored as a Hugging Face datasets.Dataset saved with Dataset.save_to_disk(). Concretely, each split is a directory containing:

  • data-*.arrow (sharded Arrow files, float32 payloads stored as bytes)
  • dataset_info.json
  • state.json

test_mode metadata (JSON)

RealPDEBench supports test_mode evaluation splits (in_dist, out_dist, seen, unseen). The group definitions are shipped as JSON dicts per scenario:

  • in_dist_test_params_{type}.json
  • out_dist_test_params_{type}.json
  • remain_params_{type}.json

where {type} is real or numerical.
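
A minimal sketch of how these group definitions might be consumed. The file names follow the pattern above; the structure *inside* each JSON file is an assumption here, so inspect the shipped files for the exact schema:

```python
import json
from pathlib import Path

def load_test_mode_groups(scenario_dir, source="real"):
    """Load the three test_mode group-definition JSON dicts for one scenario.

    `source` is "real" or "numerical", matching the {type} suffix in the
    file names. The content of each dict is an assumption; see the released
    files for the actual schema.
    """
    scenario_dir = Path(scenario_dir)
    groups = {}
    for name in ("in_dist_test_params", "out_dist_test_params", "remain_params"):
        with open(scenario_dir / f"{name}_{source}.json") as f:
            groups[name] = json.load(f)
    return groups
```

For example, `load_test_mode_groups("cylinder", "real")` would return a dict keyed by the three group names.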

Temporal windowing (what an “example” means)

RealPDEBench is stored as sliding windows cut from longer trajectories. Each row corresponds to (sim_id, time_id):

  • sim_id: which trajectory (HDF5 file)
  • time_id: start index of the window

Typical window lengths (T):

  • 40 frames for cylinder, fsi, foil, combustion (often used as 20‑step input + 20‑step output)
  • 20 frames for controlled_cylinder (often 10 + 10)
  • 20 frames for combustion/surrogate_train (surrogate model training data)
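
One trajectory therefore yields many (sim_id, time_id) rows. Assuming windows are cut at stride 1 (an assumption; the release code may step differently), the per-trajectory row count works out as:

```python
def num_windows(n_frames, window_len, stride=1):
    """Number of sliding windows of length `window_len` in a trajectory of
    `n_frames` frames. The stride between window starts is an assumption."""
    if n_frames < window_len:
        return 0
    return (n_frames - window_len) // stride + 1

# e.g. one cylinder trajectory: 3990 frames, T = 40
print(num_windows(3990, 40))  # 3951 windows at stride 1
```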

Intended layout for the full release (mirrors the on-disk structure used by RealPDEBench loaders):

{repo_root}/
  cylinder/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      real_train/  real_val/  real_test/
      numerical_train/  numerical_val/  numerical_test/
  fsi/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      ...
  combustion/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      real_train/  real_val/  real_test/
      numerical_train/              # (val/test intentionally empty)
      surrogate_train/              # combustion-only (surrogate model training)
      surrogate_train_sim_ids.txt
      surrogate_train_meta.json
  ...

How to download only what you need

For large data, use snapshot_download(..., allow_patterns=...) to avoid pulling the full repository.

import os
from huggingface_hub import snapshot_download
from datasets import load_from_disk

repo_id = "AI4Science-WestlakeU/RealPDEBench"
# Tip (recommended behind the Great Firewall or when using hf-mirror): disable Xet to avoid extra domains.
os.environ["HF_HUB_DISABLE_XET"] = "1"
local_dir = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns=["fsi/**"],  # example: download only the FSI folder
    endpoint="https://hf-mirror.com",
)

ds = load_from_disk(os.path.join(local_dir, "fsi", "hf_dataset", "numerical_val"))
row = ds[0]
print(row.keys())

Schema (columns)

Fluid datasets (cylinder, controlled_cylinder, fsi, foil)

  • Keys:
    • sim_id (string): trajectory file name (e.g., 10031.h5)
    • time_id (int): start frame index of the window
    • u, v (bytes): float32 arrays of shape (T, H, W)
    • p (bytes): float32 array (T, H, W) (numerical splits only)
    • shape_t, shape_h, shape_w (int): shapes for decoding

Combustion dataset (combustion)

  • Keys:
    • sim_id (string): e.g., 40NH3_1.1.h5
    • time_id (int): start frame index of the window
    • observed (bytes): float32 array (T, H, W) (real: measured intensity; numerical: surrogate intensity)
    • numerical (bytes): float32 array (T, H, W, 15) (numerical splits only)
    • numerical_channels (int): number of numerical channels (15)
    • shape_t, shape_h, shape_w (int): shapes for decoding

Combustion surrogate-train (combustion/surrogate_train)

Used to train a surrogate model mapping simulated modalities → real modality (combustion only).

  • Keys:
    • real (bytes): float32 array (T, H, W) (target intensity)
    • numerical (bytes): float32 array (T, H, W, C) (input fields)
    • plus shapes (*_shape_*) and numerical_channels

Decoding example

import numpy as np

# `row` is one example from a fluid split, e.g. row = ds[0] from the
# download snippet above.
u = np.frombuffer(row["u"], dtype=np.float32).reshape(
    row["shape_t"], row["shape_h"], row["shape_w"]
)
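
Combustion rows decode the same way, with the extra channel axis on `numerical`. A self-contained sketch using synthetic bytes in place of a real row (field names follow the schema above; the shapes here are illustrative):

```python
import numpy as np

# Synthetic stand-in for one combustion row; real rows come from load_from_disk.
T, H, W, C = 4, 128, 128, 15
row = {
    "observed": np.zeros((T, H, W), dtype=np.float32).tobytes(),
    "numerical": np.zeros((T, H, W, C), dtype=np.float32).tobytes(),
    "shape_t": T, "shape_h": H, "shape_w": W, "numerical_channels": C,
}

observed = np.frombuffer(row["observed"], dtype=np.float32).reshape(
    row["shape_t"], row["shape_h"], row["shape_w"]
)
numerical = np.frombuffer(row["numerical"], dtype=np.float32).reshape(
    row["shape_t"], row["shape_h"], row["shape_w"], row["numerical_channels"]
)
print(observed.shape, numerical.shape)  # (4, 128, 128) (4, 128, 128, 15)
```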

Current converted data size (local conversion; full release target)

These numbers refer to our current HF Arrow conversion outputs (not all uploaded to this test repo yet):

  • Total: ~954GB across all scenarios
  • Largest shard file: ~0.47GB (well below the Hub’s recommended <50GB per file)
  • Total file count: ~2.1k files (well below the Hub’s recommended <100k files per repo)

Per-scenario totals (HF Arrow):

| Scenario | Total size |
|---|---|
| combustion | 622GB |
| cylinder | 116GB |
| fsi | 34GB |
| controlled_cylinder | 61GB |
| foil | 124GB |

Recommended benchmark protocols

RealPDEBench supports three standard training paradigms (all evaluated on real-world data):

  • Simulated training (numerical only)
  • Real-world training (real only)
  • Simulated pretraining + real finetuning
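
In terms of the on-disk layout above, each paradigm maps onto split directories. A hypothetical path helper (the `local_dir` value and the commented `load_from_disk` calls are illustrative, not part of any released API):

```python
import os

# Hypothetical local directory produced by snapshot_download
# (see the download section above).
local_dir = "./RealPDEBench"
scenario = "fsi"

def split_path(split):
    """Path to one saved split under a scenario's hf_dataset directory."""
    return os.path.join(local_dir, scenario, "hf_dataset", split)

# Simulated pretraining + real finetuning would then read, e.g.:
#   pretrain_ds = load_from_disk(split_path("numerical_train"))
#   finetune_ds = load_from_disk(split_path("real_train"))
#   test_ds     = load_from_disk(split_path("real_test"))  # always real
print(split_path("real_train"))
```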

License

This dataset is released under CC BY‑NC 4.0 (non‑commercial). Please credit the authors and the benchmark paper when using the dataset.

Citation (paper release mid‑Jan 2026)

ArXiv link and BibTeX will be added here at release time.

Contact

AI for Scientific Simulation and Discovery Lab, Westlake University
Maintainer: westlake-ai4s (Hugging Face)
Org: AI4Science-WestlakeU
