# Ketchup Pick and Place Dataset

This dataset contains robot demonstrations for a pick-and-place task using a Universal Robots UR5e.
## Dataset Statistics
- Total Episodes: 1
- Total Frames: 154
- FPS: 10
- Robot DOF: 7 (6-DOF pose + gripper)
- Cameras: ['top', 'wrist']
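As a quick sanity check on the numbers above (assuming frames are evenly spaced at the stated FPS), the single episode lasts about 15 seconds:

```python
# Episode duration implied by the statistics above.
total_frames = 154
fps = 10

duration_s = total_frames / fps
print(f"Episode duration: {duration_s:.1f} s")  # 15.4 s
```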
## Task Description

A pick-and-place manipulation task in which the robot grasps a ketchup bottle and places it at a different location.
## Data Format
This dataset follows the LeRobot format with:
- Robot states in `observation.state` (7 DOF: x, y, z, roll, pitch, yaw, gripper)
- Actions as next robot states
- Video recordings from multiple camera angles
- Videos encoded with H.264 codec using FFmpeg for maximum compatibility
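The "actions as next robot states" convention means the action stored at frame t is simply the state recorded at frame t+1. A minimal sketch of that relationship (plain Python with made-up state values; the real data is loaded through `LeRobotDataset`):

```python
# Hypothetical 7-DOF states (x, y, z, roll, pitch, yaw, gripper) for 3 frames.
states = [
    [0.40, 0.10, 0.20, 0.0, 3.14, 0.0, 1.0],
    [0.42, 0.10, 0.18, 0.0, 3.14, 0.0, 1.0],
    [0.42, 0.10, 0.15, 0.0, 3.14, 0.0, 0.0],  # gripper closes
]

# Under the "actions as next states" convention, action[t] = state[t + 1];
# the final frame has no successor, so it is typically dropped or repeated.
actions = states[1:]

assert actions[0] == states[1]
print(f"{len(states)} states -> {len(actions)} actions")
```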
## Camera Setup
- top: RGB camera feed
- wrist: RGB camera feed
## Usage
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load dataset
dataset = LeRobotDataset("sakehaosdfadfasf/ketchup_pick_place")
```

```bash
# Visualize data
python -m lerobot.scripts.visualize_dataset \
    --repo-id sakehaosdfadfasf/ketchup_pick_place \
    --episode-index 0
```
## Video Format
Videos are encoded using FFmpeg with H.264 codec, YUV420P pixel format, and CRF 18 for high quality. This ensures maximum compatibility with LeRobot's video decoding pipeline.
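The encoding settings described above correspond to an FFmpeg invocation along these lines (a sketch only: the input and output paths are placeholders, and the actual recording pipeline may pass additional flags not listed on this card):

```python
# FFmpeg settings matching the card: H.264 (libx264), yuv420p, CRF 18.
# Paths are placeholders, not part of the dataset.
cmd = [
    "ffmpeg",
    "-framerate", "10",        # matches the dataset FPS
    "-i", "frames/%06d.png",   # placeholder input frame pattern
    "-c:v", "libx264",         # H.264 codec
    "-pix_fmt", "yuv420p",     # broad decoder/player compatibility
    "-crf", "18",              # high quality, near-lossless
    "video.mp4",               # placeholder output
]
print(" ".join(cmd))
```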