Update README.md
download_size: 0
dataset_size: 786835439
---

# 🩺 MedMultiPoints: A Multimodal Dataset for Object Detection, Localization, and Counting in Medical Imaging

[arXiv:2505.16647](https://arxiv.org/abs/2505.16647)
[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
📫 For queries, contact: [[email protected]](mailto:[email protected])

---

## Dataset Summary

**MedMultiPoints** is a curated, multimodal medical imaging dataset designed for **multi-task learning** in the medical domain—spanning **object detection**, **localization**, and **counting** tasks. It integrates data from **endoscopic** and **microscopic** modalities, reflecting real-world clinical diversity.

The dataset is introduced in the paper:
**"Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models"**
Presented at **IEEE CBMS 2025, Madrid, Spain.**
→ [Project Page & Code](https://github.com/Simula/PointDetectCount)

---

## Features

- **10,600 images** from diverse modalities: endoscopy (HyperKvasir) and microscopy (VISEM-Tracking)
- Rich **multi-type annotations**:
  - **Bounding Boxes** (`bbox_2d`) for object detection
  - **Point Annotations** (`point_2d`) for localization
  - **Count Labels** (`counts`) for counting tasks
- Compatible with **Vision-Language Models (VLMs)** and **instruction-tuned pipelines**
- JSON-formatted annotations designed for seamless integration with multimodal training

---

## Data Schema

Each sample in the dataset contains:

| Field               | Type       | Description                                          |
|---------------------|------------|------------------------------------------------------|
| `image`             | Image      | Raw medical image                                    |
| `image_sha256`      | string     | SHA-256 hash of the image for integrity              |
| `img_size`          | [int, int] | Original image width and height                      |
| `points`            | list       | List of `[x, y]` point annotations                   |
| `bbox`              | list       | List of `[x1, y1, x2, y2]` bounding boxes            |
| `count`             | int        | Object count in the image                            |
| `label`             | string     | Class label (e.g., `polyps`, `sperm`, etc.)          |
| `collection_method` | string     | Task type: `counting`, `detection`, etc.             |
| `classification`    | string     | Annotation category (e.g., `pathological-findings`)  |
| `organ`             | string     | Target organ or modality: `Lower GI`, `Microscopy`, etc. |

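The `bbox` and `points` annotations are given in absolute pixel coordinates of the original image (see the example record below), while `img_size` stores the width and height. The snippet below is a minimal sketch for converting them to relative coordinates in `[0, 1]`; the `normalize_annotations` helper is our own placeholder, not part of the dataset tooling.

```python
# Minimal sketch: convert pixel-space annotations to relative coordinates.
# `normalize_annotations` is an illustrative helper, not part of this dataset.
def normalize_annotations(sample):
    width, height = sample["img_size"]  # per the schema: [width, height]
    points = [[x / width, y / height] for x, y in sample["points"]]
    bboxes = [
        [x1 / width, y1 / height, x2 / width, y2 / height]
        for x1, y1, x2, y2 in sample["bbox"]
    ]
    return points, bboxes
```
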
---

## Supported Tasks

This dataset supports the following **multi-task** settings (a minimal formatting sketch follows the list):

- 🔲 **Object Detection** (bounding box prediction)
- 📍 **Localization** (point prediction)
- 🔢 **Counting** (object count regression)
- 🧠 **Multimodal Instruction-Based Learning**

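The exact instruction templates used in the paper live in the [project repository](https://github.com/Simula/PointDetectCount). The sketch below only illustrates how a single record can feed all three tasks, using the annotation keys (`bbox_2d`, `point_2d`, `counts`) listed under Features; the prompt wording and the `to_instruction_examples` helper are our own placeholders.

```python
# Illustrative only: turn one dataset record into three instruction-style
# training examples. Prompt wording and helper name are placeholders; the
# templates actually used in the paper are in the project repository.
def to_instruction_examples(sample):
    label = sample["label"]
    return [
        {"task": "detection",
         "prompt": f"Detect every {label} in the image and return bounding boxes.",
         "target": {"bbox_2d": sample["bbox"]}},
        {"task": "localization",
         "prompt": f"Mark the location of each {label} with a point.",
         "target": {"point_2d": sample["points"]}},
        {"task": "counting",
         "prompt": f"How many {label} are visible in the image?",
         "target": {"counts": sample["count"]}},
    ]
```
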
---

## How to Load

```python
from datasets import load_dataset

ds = load_dataset("SushantGautam/MedMultiPoints")['train']
sample = ds[0]

# Access image and annotations
image = sample['image']
bbox = sample['bbox']
points = sample['points']
count = sample['count']
```

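Continuing from the snippet above, a quick way to sanity-check the annotations is to draw them on the decoded image with Pillow (already used by `datasets` for the `Image` feature). This is just an illustrative sketch:

```python
from PIL import ImageDraw

# Overlay the annotations on the decoded PIL image from the previous snippet.
img = image.convert("RGB")
draw = ImageDraw.Draw(img)
for x1, y1, x2, y2 in bbox:
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
for x, y in points:
    draw.ellipse([x - 4, y - 4, x + 4, y + 4], fill="blue")
img.save("sample_annotated.png")
```
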
---

## Example

```json
{
  "image_sha256": "71179abc4b011cc99bddb3344e3e114765b32bdf77e78892f046026d785a4bdb",
  "img_size": [622, 529],
  "points": [[234, 171.5]],
  "bbox": [[38, 5, 430, 338]],
  "count": 1,
  "label": "polyps",
  "collection_method": "counting",
  "classification": "pathological-findings",
  "organ": "Lower GI"
}
```

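The `image_sha256` field can be used to check that the image you loaded matches the one that was annotated. A minimal sketch, assuming the hash was computed over the encoded image file bytes (load with `decode=False` to access them); whether the digests match depends on how the hashes were originally computed:

```python
import hashlib
from datasets import load_dataset, Image

# Keep the raw encoded bytes instead of decoding to a PIL image.
ds = load_dataset("SushantGautam/MedMultiPoints")['train']
ds = ds.cast_column("image", Image(decode=False))

sample = ds[0]
raw = sample["image"]["bytes"]  # may be None if the image is stored by path
if raw is not None:
    print(hashlib.sha256(raw).hexdigest() == sample["image_sha256"])
```
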
---

## Citation

If you use this dataset, please cite:

```bibtex
@article{Gautam2025May,
  author  = {Gautam, Sushant and Riegler, Michael A. and Halvorsen, P{\aa}l},
  title   = {{Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models}},
  journal = {arXiv},
  year    = {2025},
  month   = may,
  eprint  = {2505.16647},
  doi     = {10.48550/arXiv.2505.16647}
}
```