---
license: cc0-1.0
language:
- en
pretty_name: MMLA The Wilds
task_categories:
- image-classification
tags:
- biology
- image
- animals
- CV
- drone
- zebra
- African Painted Dog
- Persian Onager
- giraffe
- Grevy's
size_categories:
- 10K<n<100K
description: >-
  Annotated video frames of giraffes, Grevy's zebras, Persian onagers, and
  African Painted Dogs collected at The Wilds in Ohio. This dataset is intended
  for use in training and evaluating computer vision models for animal detection
  and classification from drone imagery. It includes frames from various
  sessions, with annotations indicating the presence of animals in the images in
  YOLO format, and is designed to facilitate research in wildlife monitoring and
  conservation using autonomous drones.
configs:
- config_name: African-painted-dogs
  data_files: session_1/*/*.jpg
- config_name: Persion-onangers
  data_files: session_2/*/*.jpg
- config_name: Giraffes
  data_files: session_3/*/*.jpg
- config_name: Grevys-zebras
  data_files: session_4/*/*.jpg
---


# Dataset Card for MMLA The Wilds

<!-- Provide a quick summary of what the dataset is or can be used for. --> 

## Dataset Details
This dataset contains annotated video frames of giraffes, Grevy's zebras, Persian onagers, and African Painted Dogs, collected at [The Wilds](https://www.thewilds.org/) in Ohio. The dataset is intended for use in training and evaluating computer vision models for animal detection and classification from drone imagery. It includes frames from various sessions, with annotations indicating the presence of animals in the images in YOLO format, and is designed to facilitate research in wildlife monitoring and conservation using autonomous drones.


### Dataset Description

- **Curated by:** Jenna Kline
- **Homepage:** [MMLA website](https://imageomics.github.io/mmla/)
- **Repository:** [imageomics/mmla](https://github.com/imageomics/mmla)
- **Paper:** [MMLA: Multi-Environment, Multi-Species, Low-Altitude Drone Dataset](https://arxiv.org/abs/2504.07744)

<!-- Provide a longer summary of what this dataset is. -->
This dataset contains video frames collected by drone at [The Wilds Conservation Center](https://www.thewilds.org/) in Ohio, USA. The Wilds is a 10,000-acre safari park and conservation center that is home to a variety of endangered species. The dataset includes video frames of African Painted Dogs, Giraffes, Persian Onagers, and Grevy's Zebras, captured during different sessions.


The dataset consists of individual video frames. Each frame is accompanied by annotations in YOLO format, indicating the presence of animals and their bounding boxes within the image. The annotations were completed manually by the dataset curator using [CVAT](https://www.cvat.ai/) and [kabr-tools](https://github.com/Imageomics/kabr-tools).

| Session | Date Collected | Size (pixels) | Total Frames | Species | Drone Model | Video IDs |
|---------|----------------|---------------|--------------|---------|-------------|-----------|
| `session_1` | 2024-06-14 | 2720 x 1530 | 13,749 | African Painted Dog | DJI Mavic Mini | DJI_0034, DJI_0035 |
| `session_2` | 2024-04-18 | 4096 x 2160 | 4,053 | Persian Onager | Parrot Anafi | P0100010, P0110011, P0080008, P0090009, P0070007, P0160016, P0120012 |
| `session_3` | 2024-07-31 | 3840 x 2160 | 3,436 | Giraffe | Parrot Anafi | P0140018, P0150019 |
| `session_4` | 2024-07-31 | 4096 x 2160 | 506 | Grevy's Zebra | Parrot Anafi | P0070010 |
| **Total Frames:** | | | **21,744** | | | |


This table shows the data collected at [The Wilds Conservation Center](https://www.thewilds.org/) in Ohio, USA, with session information, dates, frame counts, and primary species observed.


See the [fine-tuned YOLO11m model](https://huggingface.co/imageomics/mmla) that was trained using this dataset.



## Dataset Structure
```
/dataset/
    classes.txt
    session_1/
        DJI_0034/
            DJI_0034_000000.jpg
            DJI_0034_000000.txt
            ...
            DJI_0034_000013.txt
        DJI_0035/
            partition_1.zip
            partition_2.zip
            partition_3.zip
    session_2/
        P0070007/
          P0070007_000000.jpg
          P0070007_000000.txt
          ...
        P0080008/
          ...
        P0090009/
          ...
        P0100010/
          ...
        P0110011/
          ...
        P0120012/
          ...
        P0160016/
          ...
          P0160016_000598.txt
    session_3/
        P0140018/
          P0140018_000000.jpg
          P0140018_000000.txt
          ...
        P0150019/
          ...
          P0150019_000326.txt
    session_4/
        P0070010/
          P0070010_000000.jpg
          P0070010_000000.txt
          ...
          P0070010_000505.txt
```

### Data Instances
All images are named `<video_id>_<frame_number>.jpg`, under the particular session and full video to which they belong; these can be matched to dates based on the table above. The annotations are in YOLO format and are stored in a corresponding `.txt` file with the same name as the image.  
Note: the `DJI_0035` frames are stored in `.zip` archives due to issues uploading the large directory to Hugging Face.
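Given the naming scheme above, a frame's annotation file can be located mechanically. A minimal sketch in Python (the helper names are illustrative, not part of the dataset):

```python
from pathlib import Path

def parse_frame_name(image_path: str) -> tuple[str, int]:
    """Split '<video_id>_<frame_number>.jpg' into (video_id, frame_number)."""
    stem = Path(image_path).stem            # e.g. 'DJI_0034_000000'
    video_id, frame = stem.rsplit("_", 1)   # split on the last underscore
    return video_id, int(frame)

def label_path(image_path: str) -> str:
    """Return the matching YOLO annotation file for a frame image."""
    return str(Path(image_path).with_suffix(".txt"))

video_id, frame = parse_frame_name("session_1/DJI_0034/DJI_0034_000000.jpg")
print(video_id, frame)                                        # DJI_0034 0
print(label_path("session_1/DJI_0034/DJI_0034_000000.jpg"))
# session_1/DJI_0034/DJI_0034_000000.txt
```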

### Data Fields

**classes.txt**:
  - `0`: zebra
  - `1`: giraffe
  - `2`: onager
  - `3`: dog

**frame_id.txt**:
  - `class`: Integer class ID of the animal in the image (see `classes.txt`: 0 = zebra, 1 = giraffe, 2 = onager, 3 = dog)
  - `x_center`: X coordinate of the center of the bounding box (normalized to [0, 1])
  - `y_center`: Y coordinate of the center of the bounding box (normalized to [0, 1])
  - `width`: Width of the bounding box (normalized to [0, 1])
  - `height`: Height of the bounding box (normalized to [0, 1])
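Each annotation line holds these five whitespace-separated fields; the normalized coordinates can be converted back to pixels using the frame resolution from the session table. A minimal sketch (the function name is illustrative):

```python
# Class IDs from classes.txt
CLASS_NAMES = {0: "zebra", 1: "giraffe", 2: "onager", 3: "dog"}

def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert one YOLO label line 'class x_center y_center width height'
    (normalized to [0, 1]) into (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, xc, yc, w, h = line.split()
    cls = int(cls)
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return cls, x_min, y_min, x_max, y_max

# Example: a box centered in a 4096 x 2160 frame (session_4 resolution)
cls, x0, y0, x1, y1 = yolo_to_pixels("0 0.5 0.5 0.1 0.2", 4096, 2160)
print(CLASS_NAMES[cls], round(x0, 1), round(y0, 1), round(x1, 1), round(y1, 1))
# zebra 1843.2 864.0 2252.8 1296.0
```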

### Data Splits

This dataset was used in conjunction with the other two [MMLA datasets](https://huggingface.co/collections/imageomics/mmla) for both training and testing the [MMLA YOLO model](https://huggingface.co/imageomics/mmla#training-details).

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. For instance, what you intended to study and why that required curation of a new dataset (or if it's newly collected data and why the data was collected (intended use)), etc. -->

The dataset was created to facilitate research in wildlife monitoring and conservation using advanced imaging technologies. The goal is to develop and evaluate computer vision models that can accurately detect and classify animals from drone imagery, and their generalizability across different species and environments.


### Source Data

<!-- This section describes the source data (e.g., news text and headlines, social media posts, translated sentences, ...). As well as an original source it was created from (e.g., sampling from Zenodo records, compiling images from different aggregators, etc.) -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, re-sizing of images, tools and libraries used, etc. 
This is what _you_ did to it following collection from the original source; it will be overall processing if you collected the data initially.
-->

The African Painted Dog missions were flown manually using a [DJI Mavic Mini drone](https://www.dji.com/support/product/mavic-mini), while the Giraffe, Persian Onager, and Grevy's Zebra missions used a [Parrot Anafi drone](https://www.parrot.com/us/drones/anafi). The Grevy's Zebra and Giraffe missions were conducted semi-autonomously using the [WildWing system](https://imageomics.github.io/wildwing/), while the Persian Onager data was collected manually. The drones were flown over [The Wilds Conservation Center](https://www.thewilds.org/) in Ohio, capturing video footage of the animals in their natural habitat.


The videos were annotated manually using the Computer Vision Annotation Tool ([CVAT](https://www.cvat.ai/)) and the [kabr-tools](https://github.com/Imageomics/kabr-tools) library. These detection annotations and the original video files were then processed to extract individual frames, which were saved as JPEG images. The annotations were converted to YOLO format, with bounding boxes indicating the presence of animals in each frame.

<!-- #### Who are the source data producers?
[More Information Needed] -->
<!-- This section describes the people or systems who originally created the data.

Ex: This dataset is a collection of images taken of the butterfly collection housed at the Ohio State University Museum of Biological Diversity. The associated labels and metadata are the information provided with the collection from biologists that study butterflies and supplied the specimens to the museum.
 -->


### Annotations
<!-- 
If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. 

Ex: We standardized the taxonomic labels provided by the various data sources to conform to a uniform 7-rank Linnean structure. (Then, under annotation process, describe how this was done: Our sources used different names for the same kingdom (both _Animalia_ and _Metazoa_), so we chose one for all (_Animalia_). -->


#### Annotation process
[CVAT](https://www.cvat.ai/) and [kabr-tools](https://github.com/Imageomics/kabr-tools) were used to annotate the video frames. The annotation process involved manually labeling the presence of animals in each frame, drawing bounding boxes around them, and converting the annotations to YOLO format.
<!-- This section describes the annotation process such as annotation tools used, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->
Jenna Kline (The Ohio State University) - ORCID: 0009-0006-7301-5774 \
Alison Zhong (The Ohio State University) \
Jake Yablok (The Ohio State University)

### Personal and Sensitive Information
The dataset was cleaned to remove any personal or sensitive information. 

## Licensing Information

This dataset is dedicated to the public domain (by applying the [CC0-1.0 Public Domain Waiver](https://creativecommons.org/publicdomain/zero/1.0/)) for the benefit of scientific pursuits. We ask that you cite the dataset and journal paper using the citations below if you make use of it in your research.

## Citation


**BibTeX:**

**Data**
```
@misc{mmla_wilds,
  author = {Jenna Kline and Alison Zhong and Jake Yablok},
  title = {MMLA The Wilds Dataset (Revision e61014d)},
  year = {2025},
  url = {https://huggingface.co/datasets/imageomics/mmla-wilds},
  doi = {10.57967/hf/7379},
  publisher = {Hugging Face}
}
```

**Paper**
```
@misc{kline2025mmla,
      title={MMLA: Multi-Environment, Multi-Species, Low-Altitude Drone Dataset}, 
      author={Jenna Kline and Samuel Stevens and Guy Maalouf and Camille Rondeau Saint-Jean and Dat Nguyen Ngoc and Majid Mirmehdi and David Guerin and Tilo Burghardt and Elzbieta Pastucha and Blair Costelloe and Matthew Watson and Thomas Richardson and Ulrik Pagh Schultz Lundquist},
      year={2025},
      eprint={2504.07744},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.07744}, 
}
```

If you use this dataset, please also cite the [WildWing video dataset](https://huggingface.co/datasets/imageomics/wildwingdeployment) used to generate data for sessions 2-4.

**Related Papers**
```
@article{kline2025wildwing,
  title={WildWing: An open-source, autonomous and affordable UAS for animal behaviour video monitoring},
  author={Kline, Jenna and Zhong, Alison and Irizarry, Kevyn and Stewart, Charles V and Stewart, Christopher and Rubenstein, Daniel I and Berger-Wolf, Tanya},
  journal={Methods in Ecology and Evolution},
  year={2025},
  doi={10.1111/2041-210X.70018},
  publisher={Wiley Online Library}
}
```




## Acknowledgements

This work was supported by the [Imageomics Institute](https://imageomics.org), which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

This work was supported by the AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment [ICICLE](https://icicle.osu.edu/), which is funded by the US National Science Foundation under grant number OAC-2112606.

<!-- You may also want to credit the source of your data, i.e., if you went to a museum or nature preserve to collect it. -->

<!-- ## Glossary  -->

<!-- [optional] If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

## More Information 
The data was gathered at [The Wilds](https://www.thewilds.org/) with permission from The Wilds Science Committee to take field observations and fly drones in the pastures.

<!-- [optional] Any other relevant information that doesn't fit elsewhere. -->

## Dataset Card Authors 

Jenna Kline

## Dataset Card Contact

kline.377 at osu.edu
<!-- Could include who to contact with questions, but this is also what the "Discussions" tab is for. -->