Datasets:
Tasks: Text Generation
Modalities: Text
Formats: parquet
Languages: English
Size: 10M - 100M
License:
update readme
README.md
data_files:
- split: validation
  path: "stage2/valid_stage2.parquet"
---

### Dataset organization

The OpenGenome dataset is organized into two stages: stage 1 has a context length of 8k and stage 2 has a context length of 131k. Each stage has its own data splits.

```
- stage1
  - train
  - validation
  - test

- stage2
  - train
  - validation
  - test
```
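
Each stage (and the small sample set described below) is exposed as its own configuration on the Hub, so you can check which configurations and splits exist before downloading anything. A minimal sketch using the standard `datasets` helpers:

```
from datasets import get_dataset_config_names, get_dataset_split_names

# list the available configurations, e.g. stage1 / stage2 / sample
print(get_dataset_config_names("LongSafari/open-genome"))

# list the splits of stage 1 without downloading the data
print(get_dataset_split_names("LongSafari/open-genome", "stage1"))
```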

### Instructions to download

You can load the dataset with the Hugging Face `datasets` API, as shown in the example below.

```
from datasets import load_dataset

stage1_data = load_dataset("LongSafari/open-genome", 'stage1')

# access just the train data
stage_1_train_data = stage1_data['train']
```
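
If you only need one split, you can also pass the standard `split` argument to `load_dataset` instead of indexing into the returned dictionary of splits:

```
# request only the training split of stage 1
stage1_train_only = load_dataset("LongSafari/open-genome", 'stage1', split='train')
```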

Note: the stage 1 training dataset is sharded into separate files due to its large size.
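
If you would rather not download every shard up front, one option is to stream the training split with the standard `streaming=True` flag; a minimal sketch:

```
from datasets import load_dataset

# stream the large stage 1 training split instead of downloading all shards first
stage1_train_stream = load_dataset(
    "LongSafari/open-genome", 'stage1', split='train', streaming=True
)

# peek at the first record to see which fields it exposes
first_example = next(iter(stage1_train_stream))
print(first_example.keys())
```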

We also provide a small dataset sample for testing out the pipeline, if you prefer.

```
sample_data = load_dataset("LongSafari/open-genome", 'sample')['validation']
```
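
Once loaded, the sample behaves like any other split, so a few quick checks are enough to confirm the pipeline works end to end (generic `datasets` usage rather than anything OpenGenome-specific):

```
# quick sanity checks on the sample split
print(len(sample_data))      # number of records
print(sample_data.features)  # column names and types
print(sample_data[0])        # first record
```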