YOGURT-ORDER-PREPARATION-sample
Overview
This dataset provides a high-fidelity, egocentric capture of professional order-preparation tasks in a cold-chain logistics environment. It focuses on the rapid, repetitive, high-precision picking and packing of multi-unit yogurt packs. The resource is designed to train robotic agents in bimanual coordination, spatial reachability, and contact-point optimization for fragile rigid goods.
Key Technical Features
Temporal Protocol (T1-T4): Every micro-action is frame-accurately annotated using our proprietary four-point system (a parsing sketch follows the list):
- T1 (Contact): Initial physical engagement.
- T2 (Lift-off): Static-to-dynamic transition.
- T3 (Placement): Target container engagement.
- T4 (Release): Total tactile disengagement.
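The label entries exposed per example (see the usage code below) each carry a time_ms and a label string. A minimal sketch for pairing T1-T4 events into grasp phases, assuming the label strings contain the tags "T1" through "T4"; the exact label vocabulary should be verified against the real annotations:

def phase_durations(labels):
    """Return per-phase durations (ms) from a clip's T1-T4 events, or None if incomplete."""
    times = {}
    for entry in sorted(labels, key=lambda e: e["time_ms"]):
        for tag in ("T1", "T2", "T3", "T4"):
            if tag in entry["label"] and tag not in times:
                times[tag] = entry["time_ms"]
    if len(times) < 4:
        return None  # clip lacks a complete T1-T4 annotation
    return {
        "grasp_ms": times["T2"] - times["T1"],      # contact -> lift-off
        "transport_ms": times["T3"] - times["T2"],  # lift-off -> placement
        "release_ms": times["T4"] - times["T3"],    # placement -> release
    }

# Invented timestamps, purely for illustration:
print(phase_durations([
    {"time_ms": 1800, "label": "T1"},
    {"time_ms": 2350, "label": "T2"},
    {"time_ms": 4100, "label": "T3"},
    {"time_ms": 4500, "label": "T4"},
]))  # {'grasp_ms': 550, 'transport_ms': 1750, 'release_ms': 400}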
Use Cases for Research
- Foundation Models & World Models: Training models to understand the Newtonian physics of rigid payloads and container boundaries.
- Bimanual Policy Learning: Developing algorithms for coordinated two-hand tasks, such as stabilizing a box while inserting a pack.
- End-to-End Picking Pipelines: Optimizing the "approach-to-grasp" phase in unstructured industrial environments where lighting and clutter vary.
About Origine AI
We build real-world manipulation datasets from professional environments across France: industrial kitchens, bakeries, butcheries, and workshops.
Our network of 100+ partner sites gives us direct, recurring access to expert practitioners doing their actual jobs. We deploy synchronized multi-modal capture stacks (ego-view, wrist cameras, IMU) on-site and adapt our setup to the specific requirements of each collection.
We are currently working with robotics labs on custom pilots focused on dexterous manipulation and deformable object handling. GDPR-compliant. EU-based.
Commercial Licensing and Contact
- The complete dataset and our custom collection services are available for commercial licensing and large-scale R&D. Whether you need existing data or a custom setup in a specific professional environment, reach out to discuss your requirements.
- 📩 hello@origineai.com
License
- This dataset is licensed under CC BY-NC-ND 4.0 (cc-by-nc-nd-4.0).
Dataset Statistics
This section provides detailed statistics extracted from dataset_metadata.json:
Overall Statistics
- Dataset Name: YOGURT-ORDER-PREPARATION-sample
- Batch ID: example
- Total Clips: 60
- Number of Sequences: 57
- Number of Streams: 1
- Stream Types: ego
Duration Statistics
- Total Duration: 7.97 minutes (478.49 seconds)
- Average Clip Duration: 7.97 seconds (7974.83 ms)
- Min Clip Duration: 5.10 seconds (5100 ms)
- Max Clip Duration: 15.12 seconds (15125 ms)
Clip Configuration
- Padding: 1500 ms
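Label timestamps are annotated without padding (see the labels printout in the usage example below). A minimal sketch for mapping such a timestamp into the padded clip's timeline, assuming padding_ms is prepended to each clip as described under Dataset Features; verify this against your data:

def to_clip_time_ms(time_without_padding_ms: int, padding_ms: int = 1500) -> int:
    # Shift an unpadded label time by the leading padding to get the
    # position within the padded clip.
    return time_without_padding_ms + padding_ms

print(to_clip_time_ms(2300))  # -> 3800 ms into the padded clip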
Statistics by Stream Type
Ego
- Number of clips: 60
- Total duration: 7.97 minutes (478.49 seconds)
- Average clip duration: 7.97 seconds (7974.83 ms)
- Min clip duration: 5.10 seconds (5100 ms)
- Max clip duration: 15.12 seconds (15125 ms)
Note: Complete metadata is available in dataset_metadata.json in the dataset root directory.
Dataset Structure
The dataset uses a unified structure where each example contains all synchronized video streams:
dataset/
├── data-*.arrow            # Dataset files (Arrow format)
├── dataset_info.json       # Dataset metadata
├── dataset_metadata.json   # Complete dataset statistics
├── state.json              # Dataset state
├── README.md               # This file
├── medias/                 # Media files (mosaics, previews, etc.)
│   └── mosaic.mp4          # Mosaic preview video
└── videos/                 # All video clips
    └── ego/                # Ego video clips
Dataset Format
The dataset contains 60 synchronized scenes in a single train split. Each example includes:
- Synchronized video columns: one column per flux type (e.g., ego)
- Scene metadata: scene_id, sync_id, duration_ms, padding_ms, fps
- Rich metadata dictionary: task, environment, audio info, and synchronization details
All videos in a single example are synchronized and correspond to the same moment in time.
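For orientation, a single row might look roughly like this (all values below are invented for illustration; the authoritative column list is under Dataset Features):

example = {
    "scene_id": "01_0000",           # unique scene identifier
    "sync_id": "01_0000",            # links synchronized clips
    "duration_ms": 7975,             # includes padding
    "padding_ms": 1500,
    "fps": 30.0,
    "ego": "videos/ego/01_0000.mp4", # one column per flux type
    "labels": [{"time_ms": 1800, "label": "T1"}],
    "metadata": {"task": "...", "flux_names": ["ego"]},
}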
Usage
Load and Access Dataset
import json
import random
from pathlib import Path
import cv2
from huggingface_hub import snapshot_download
from datasets import load_from_disk
repo = "orgn3ai/YOGURT-ORDER-PREPARATION-sample"
# 1) Download snapshot locally
local_path = snapshot_download(repo_id=repo, repo_type="dataset")
base_dir = Path(local_path)
print("Snapshot path:", base_dir)
# 2) Load dataset saved with save_to_disk()
ds = load_from_disk(str(base_dir))
train = ds["train"] if isinstance(ds, dict) and "train" in ds else ds
print("Train rows:", len(train))
print("Train columns:", train.column_names)
# 3) Read dataset_metadata.json from the repo root and extract "flux"
metadata_path = base_dir / "dataset_metadata.json"
if not metadata_path.exists():
    raise FileNotFoundError(
        f"dataset_metadata.json not found at repo root: {metadata_path}\n"
        "Check the repository file tree for the metadata file."
    )
with metadata_path.open("r", encoding="utf-8") as f:
    root_meta = json.load(f)
flux = root_meta.get("flux")
if not isinstance(flux, list) or not flux:
    raise ValueError(f'Expected dataset_metadata.json["flux"] to be a non-empty list, got: {flux}')
print("Flux entries:", flux)
# 4) Pick a random dataset entry
idx = random.randrange(len(train))
ex = train[idx]
print("\nRandom example index:", idx)
print("Example keys:", list(ex.keys()))
def resolve_video_path(video_value) -> Path:
    """
    video_value can be:
    - string path (most common case)
    - dict like {"path": "...", "bytes": ...} (for backward compatibility)
    """
    if isinstance(video_value, dict) and "path" in video_value:
        rel = video_value["path"]
    elif isinstance(video_value, str):
        rel = video_value
    else:
        raise TypeError(f"Unsupported video value type: {type(video_value)}; value={video_value}")
    # Strip any leading "/" so the path stays relative
    rel = str(rel).lstrip("/")
    # The dataset stores relative paths like "videos/ego/xxx.mp4";
    # resolve them inside the snapshot folder.
    return base_dir / rel
def inspect_video(path: Path):
    print(f"  Local path: {path}")
    print(f"  Exists: {path.exists()}")
    if not path.exists():
        return {"ok": False, "reason": "file_not_found"}
    cap = cv2.VideoCapture(str(path))
    if not cap.isOpened():
        return {"ok": False, "reason": "cannot_open"}
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = float(cap.get(cv2.CAP_PROP_FPS))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # Some codecs report fps=0; guard against division by zero
    duration = (frame_count / fps) if fps and fps > 0 else None
    # Try to read the first frame
    ret, frame0 = cap.read()
    cap.release()
    info = {
        "ok": True,
        "width": width,
        "height": height,
        "fps": fps,
        "frame_count": frame_count,
        "duration_sec": duration,
        "first_frame_ok": bool(ret),
        "first_frame_shape": tuple(frame0.shape) if ret and frame0 is not None else None,
        "first_frame_dtype": str(frame0.dtype) if ret and frame0 is not None else None,
    }
    return info
# 5) For each flux key, inspect the associated video
print("\n=== VIDEO CHECK ===")
for key in flux:
    print(f"\nFlux key: {key}")
    if key not in ex:
        print(f"  ERROR: key '{key}' not in example. Available keys: {list(ex.keys())}")
        continue
    try:
        video_path = resolve_video_path(ex[key])
    except Exception as e:
        print(f"  ERROR resolving path: {e}")
        continue
    info = inspect_video(video_path)
    if not info["ok"]:
        print(f"  ERROR: {info['reason']}")
        continue
    print("  Video properties:")
    print(f"    - Resolution: {info['width']}x{info['height']}")
    print(f"    - FPS: {info['fps']:.3f}")
    print(f"    - Frames: {info['frame_count']}")
    if info["duration_sec"] is not None:
        print(f"    - Duration: {info['duration_sec']:.3f}s")
    else:
        print("    - Duration: (fps unavailable)")
    print(f"    - First frame decoded: {info['first_frame_ok']}")
    if info["first_frame_ok"]:
        print(f"    - Frame0 shape: {info['first_frame_shape']}")
        print(f"    - Frame0 dtype: {info['first_frame_dtype']}")
print("\n=== LABELS ===")
print(f"Number of labels: {len(ex['labels'])}")
for label in ex["labels"]:
    print(f"  - {label['time_ms']} ms (without padding): {label['label']}")

print("\nDONE.")
Dataset Features
Each example contains:
- scene_id: Unique scene identifier (e.g., "01_0000")
- sync_id: Synchronization ID linking synchronized clips
- duration_ms: Duration of the synchronized clip in milliseconds (includes padding)
- padding_ms: Padding applied to clips (added at beginning and end, total padding = padding_ms × 2)
- fps: Frames per second (extracted from video)
- batch_id: Batch identifier
- dataset_name: Dataset name from config
- One column per flux: each flux name from metadata['flux_names'] has its own column (e.g., ego) holding the string path to the video file (relative to the dataset root)
- metadata: Dictionary containing:
  - task: Task identifier
  - environment: Environment description
  - has_audio: Whether videos contain audio
  - num_fluxes: Number of synchronized flux types
  - flux_names: List of flux names present
  - sequence_ids: List of original sequence IDs
  - sync_offsets_ms: List of synchronization offsets
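A minimal access sketch, assuming ex is an example loaded as in the usage section above (key names are the ones listed here):

# Inspect the per-example metadata dictionary.
meta = ex["metadata"]
print("Task:", meta["task"])
print("Environment:", meta["environment"])
print("Flux names:", meta["flux_names"])
print("Sync offsets (ms):", meta["sync_offsets_ms"])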
Additional Notes
Important: This dataset uses a unified structure where each example contains all synchronized video streams in separate columns. All examples are in the train split.
Synchronization: Videos in the same example (same index in the train split) are automatically synchronized. They share the same sync_id and correspond to the same moment in time.
Flux Keys: The available flux keys are listed in dataset_metadata.json under the "flux" key. Use these keys to programmatically access video columns in each example.
Video Paths: Video paths are stored as strings (relative to the dataset root directory). Paths can be resolved using the resolve_video_path function shown in the usage example above.