Add some more source and collection clarification #2
by egrace479 - opened

README.md CHANGED

@@ -2,7 +2,7 @@
 license: cc0-1.0
 language:
 - en
-pretty_name:
+pretty_name: MMLA Mpala
 task_categories:
 - image-classification
 tags:

@@ -46,7 +46,7 @@ The annotations indicate the presence of animals in the images in YOLO format. T
 ### Dataset Description

 - **Curated by:** Jenna Kline
-- **Homepage:** [
+- **Homepage:** [MMLA project](https://github.com/Imageomics/mmla)
 - **Repository:** [Imageomics/mmla](https://github.com/Imageomics/mmla)
 - **Paper:** [MMLA: Multi-Environment, Multi-Species, Low-Altitude Aerial Footage Dataset](https://arxiv.org/abs/2504.07744)

@@ -58,10 +58,9 @@ project at the [Mpala Research Center](https://mpala.org/) in Kenya in January 2
 Sessions 1 and 2 are derived from the [full-length videos](https://huggingface.co/datasets/imageomics/KABR-mini-scene-raw-videos)
 used in the [original KABR mini-scene release](https://huggingface.co/datasets/imageomics/KABR), now available in YOLO format.
 Sessions 3, 4 and 5 are part of the [kabr-tools release](https://huggingface.co/datasets/imageomics/kabr-worked-examples),
-derived from these [full-length videos](https://huggingface.co/datasets/imageomics/KABR-raw-videos).
+derived from these [full-length videos](https://huggingface.co/datasets/imageomics/KABR-raw-videos) (specifically, Revision [fda789c](https://huggingface.co/datasets/imageomics/KABR-raw-videos/tree/fda789ce0b0f73f936964a08c2492e814d30dca4)).
 The dataset is intended for use in training and evaluating computer vision models for animal detection and classification from drone imagery.
-
-The dataset is designed to facilitate research in wildlife monitoring and conservation using advanced imaging technologies.
+It includes frames from various sessions, with annotations indicating the presence of zebras in the images in YOLO format, and is designed to facilitate research in wildlife monitoring and conservation using advanced imaging technologies.

 The dataset consists of 104,062 frames. Each frame is accompanied by annotations in YOLO format, indicating the presence of zebras and giraffes and their bounding boxes within the images. The annotations were completed manually by the dataset curator using [CVAT](https://www.cvat.ai/) and [kabr-tools](https://github.com/Imageomics/kabr-tools).

@@ -284,9 +283,9 @@ The dataset includes frames extracted from drone videos captured during five dis
 ```

 ### Data Instances
-All images are named `<
+All images are named `<video_id>_<frame_number>.jpg` under the particular session and full video to which they belong; these can be matched to dates based on the table above. The annotations are in YOLO format and are stored in a corresponding .txt file with the same name as the image.

-Note on data partitions
+**Note on data partitions:** DJI saves video files into 3GB chunks, so each session is divided into multiple video files. HuggingFace limits folders to 10,000 files, so each video file is further divided into partitions of 10,000 files. The partition folders are named `partition_1`, `partition_2`, etc. The original video files are not included in the dataset.


 ### Data Fields

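The naming convention and partition layout added in this hunk are easiest to see in code. The sketch below is a minimal, hypothetical example of pairing one frame with its YOLO annotation file: the session, video, and partition folder names are placeholders following the convention described above, and it assumes the `.txt` label sits alongside the image.

```python
from pathlib import Path

# Hypothetical local copy of the dataset; the folder and file names below only
# illustrate the `<video_id>_<frame_number>.jpg` convention and the
# `partition_N` folders described in the card -- they are not real paths.
root = Path("mpala-mmla")
partition = root / "session_1" / "DJI_0001" / "partition_1"

image_path = partition / "DJI_0001_000123.jpg"   # <video_id>_<frame_number>.jpg
label_path = image_path.with_suffix(".txt")      # matching YOLO annotation file

if label_path.exists():
    boxes = label_path.read_text().strip().splitlines()
    print(f"{image_path.name}: {len(boxes)} annotated animals")
else:
    print(f"{image_path.name}: no annotation file found")
```
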
@@ -307,11 +306,9 @@ see the MMLA datasets from [Ol Pejeta Conservancy](https://huggingface.co/datase
|
|
| 307 |
- `width`: Width of the bounding box (normalized to `[0, 1]`)
|
| 308 |
- `height`: Height of the bounding box (normalized to `[0, 1]`)
|
| 309 |
|
| 310 |
-
|
| 311 |
-
|
| 312 |
-
|
| 313 |
-
Give your train-test splits for benchmarking; could be as simple as "split is indicated by the `split` column in the metadata file: `train`, `val`, or `test`." Or perhaps this is just the training dataset and other datasets were used for testing (you may indicate which were used).
|
| 314 |
-
-->
|
| 315 |
|
| 316 |
## Dataset Creation
|
| 317 |
|
|
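For readers unfamiliar with the normalized fields listed in this hunk (`width`, `height`, and the rest of the bounding-box entries), here is a small sketch of converting one YOLO annotation line to pixel coordinates. It assumes the usual YOLO ordering of `class x_center y_center width height`; the example line and image size are made up, not taken from the dataset.

```python
# Each YOLO annotation line: <class> <x_center> <y_center> <width> <height>,
# with the last four values normalized to [0, 1] relative to the image size.
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    cls, x, y, w, h = line.split()
    x, y, w, h = float(x), float(y), float(w), float(h)
    # Convert normalized center/size to top-left and bottom-right pixel corners.
    x1, y1 = (x - w / 2) * img_w, (y - h / 2) * img_h
    x2, y2 = (x + w / 2) * img_w, (y + h / 2) * img_h
    return int(cls), (round(x1), round(y1), round(x2), round(y2))

# Hypothetical annotation line and 4K frame size (not from the dataset):
print(yolo_to_pixels("0 0.512 0.430 0.061 0.118", img_w=3840, img_h=2160))
```
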
@@ -325,7 +322,10 @@ The dataset was created to facilitate research in wildlife monitoring and conser
 ### Source Data

 <!-- This section describes the source data (e.g., news text and headlines, social media posts, translated sentences, ...). As well as an original source it was created from (e.g., sampling from Zenodo records, compiling images from different aggregators, etc.) -->
-
+
+Please see the original KABR
+[full-length mini-scene videos](https://huggingface.co/datasets/imageomics/KABR-mini-scene-raw-videos)
+(from the [KABR mini-scene release](https://huggingface.co/datasets/imageomics/KABR)), and the remaining KABR [full-length videos](https://huggingface.co/datasets/imageomics/KABR-raw-videos) for more information on the source data.

 #### Data Collection and Processing