HVU_VIC
HVU_VIC is an open-source Vietnamese image–caption corpus created to support image captioning and vision–language research, especially in low-resource language settings. The dataset was developed by a research team at Hung Vuong University, Phu Tho, Vietnam, led by Dr. Ha Nguyen, Deputy Head of the Department of Engineering Technology.
The corpus was built through an automated pipeline combining public web crawling, image–text extraction, and AI/heuristic-assisted filtering to improve overall quality, consistency, and usability for downstream multimodal tasks.
📋 Dataset Description
- Language: Vietnamese
- Task: Image Captioning (Image → Caption)
- Repository type: Dataset
- Annotation format: CSV with delimiter `|`
- Schema: image|caption
- Number of images: 29,970
- Number of captions: 29,970
- Captions per image: 1
⚙️ Creation Pipeline
The dataset was built using a 4-stage automated process:
- Selecting relevant public websites containing images paired with Vietnamese descriptions or captions.
- Automated web crawling to collect raw webpages, associated images, and basic metadata.
- Structure-based extraction to obtain clean image–caption pairs, followed by filename normalization and caption cleaning before exporting annotations to CSV format (image|caption).
- AI/heuristic-assisted filtering to remove noisy samples such as broken or duplicate images, empty or extremely short captions, wrong-language text, and corrupted characters.
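The filtering stage described above can be sketched as a set of simple heuristics. The function below is an illustrative assumption, not the authors' actual pipeline code: it drops empty captions, captions outside a plausible length range, and text containing encoding-replacement characters.

```python
import re

def is_clean_pair(caption: str, min_words: int = 4, max_words: int = 50) -> bool:
    """Heuristic caption filter: drop empty, too-short/long, or corrupted text.

    Illustrative sketch only; thresholds and rules are assumptions.
    """
    caption = caption.strip()
    if not caption:
        return False
    words = caption.split()
    if not (min_words <= len(words) <= max_words):
        return False
    # Reject captions containing the Unicode replacement character,
    # a common sign of a bad encoding during crawling.
    if "\ufffd" in caption:
        return False
    # Require at least one letter (Vietnamese uses Latin script with diacritics).
    if not re.search(r"[a-zA-ZÀ-ỹ]", caption):
        return False
    return True

rows = [
    ("img_001.jpg", "Một người đàn ông đang đạp xe trên phố"),
    ("img_002.jpg", ""),            # empty caption -> dropped
    ("img_003.jpg", "xe \ufffd"),   # corrupted / too short -> dropped
]
kept = [(img, cap) for img, cap in rows if is_clean_pair(cap)]
print(kept)
```

Duplicate-image detection (e.g., by file hash) and language identification would be additional passes on top of this per-caption check.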
📊 Dataset Statistics and Evaluation
Test Set
To ensure fair and unbiased evaluation, we constructed an independent test set of 500 images from MSCOCO, provided in this repository as Test_500.zip, and fully separated from all training corpora. For each image, one reference caption was selected, translated into Vietnamese, and manually reviewed by native Vietnamese speakers to ensure translation accuracy and semantic consistency.
This test set serves as a neutral benchmark for evaluating the generalization ability of Vietnamese image captioning models.
Corpus Statistics
After the full data collection and cleaning pipeline, the final corpus contains 29,970 image–caption pairs.
Key statistics are as follows:
- Number of images: 29,970
- Number of captions: 29,970
- Average caption length: 14.12 syllables
- Minimum caption length: 4 syllables
- Maximum caption length: 45 syllables
- Vocabulary size: 6,839
Most captions fall within the 10–20 syllable range, indicating that the dataset mainly consists of concise yet visually descriptive captions. The corpus covers diverse visual domains, including daily activities, objects, human interactions, landscapes, and public events.
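The statistics above can be reproduced with a short script. In Vietnamese, whitespace-separated tokens approximate syllables, so caption length in syllables reduces to a token count. The three sample captions below are illustrative, not taken from the corpus:

```python
# Sketch of how the corpus statistics could be computed
# (whitespace tokens approximate Vietnamese syllables).
captions = [
    "Một người đàn ông đang đạp xe trên phố",
    "Trẻ em chơi bóng đá trong công viên",
    "Cảnh hoàng hôn trên bãi biển",
]
lengths = [len(c.split()) for c in captions]
vocab = {tok.lower() for c in captions for tok in c.split()}
print("average length:", sum(lengths) / len(lengths))
print("min/max length:", min(lengths), max(lengths))
print("vocabulary size:", len(vocab))
```

Run over the full 29,970-caption CSV, the same loop would yield the reported averages and vocabulary size (modulo any tokenization details not specified here).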
Scaling Analysis
We conducted controlled training experiments using progressively larger subsets of the corpus: 5k, 15k, 25k, and 29,970 samples. The results show that model performance improves consistently as the dataset grows:
| Training Size | SacreBLEU | Cosine Similarity |
|---|---|---|
| 5,000 | 11.52 | 0.50 |
| 15,000 | 14.24 | 0.59 |
| 25,000 | 15.83 | 0.65 |
| 29,970 | 19.86 | 0.67 |
Compared with the 5k setting, SacreBLEU improves by approximately 72.4% when using the full dataset. This stable upward trend suggests that the corpus can be scaled effectively without noticeable instability or performance degradation.
Cross-Dataset Comparison
We further compared the proposed corpus with two Vietnamese image caption datasets: UIT-ViIC and KTVIC, under identical training settings. Each dataset was evaluated at two scales: 1,000 and 3,600 image–caption pairs.
| Metric | 1K_UIT-ViIC | 3.6K_UIT-ViIC | 1K_KTVIC | 3.6K_KTVIC | 1K_Ours | 3.6K_Ours |
|---|---|---|---|---|---|---|
| BLEU | 7.19 | 10.36 | 3.75 | 4.72 | 8.86 | 9.37 |
| Cosine | 0.18 | 0.21 | 0.36 | 0.29 | 0.34 | 0.37 |
The results show that:
- With 1,000 training samples, our dataset achieves the highest BLEU score (8.86).
- With 3,600 training samples, our dataset achieves the highest cosine similarity (0.37), indicating stronger semantic alignment between generated and reference captions.
Overall, these findings suggest that the proposed corpus provides strong training signals and is effective for Vietnamese image captioning, even when trained on relatively small subsets.
📁 Dataset Structure
```
GeneratingCaptions
├── README.md
└── HVU_VIC
    ├── Image-Caption
    │   ├── 30K_IMG_1.zip
    │   └── Captions_30k.csv
    └── Test_500.zip
```
📥 Load Dataset from Hugging Face Hub
```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="HVU_VIC/Image-Caption/Captions_30k.csv",
    delimiter="|",
    column_names=["image", "caption"],
)
print(ds["train"][0])
```
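If you prefer the standard library over `datasets`, the pipe-delimited annotation format parses directly with the built-in `csv` module. The sample rows below are illustrative, not actual corpus entries:

```python
import csv
import io

# Minimal sketch of parsing the image|caption annotation format.
sample = (
    "img_0001.jpg|Một người phụ nữ đang bán hoa quả ở chợ\n"
    "img_0002.jpg|Hai đứa trẻ chơi đùa trên bãi cỏ\n"
)
reader = csv.reader(io.StringIO(sample), delimiter="|")
pairs = [(image, caption) for image, caption in reader]
print(pairs[0])
```

To pair captions with pixels, extract `30K_IMG_1.zip` and join each `image` filename against the extraction directory.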
📚 Usage
- Train and evaluate Vietnamese image captioning models.
- Benchmark vision–language systems on Vietnamese captions (e.g., using BLEU and embedding-based cosine similarity).
- Support manual evaluation tasks such as descriptive/non-descriptive judgment and relevance scoring.
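For the embedding-based cosine similarity mentioned above, the paper's exact embedding model is not specified here; the stdlib sketch below substitutes bag-of-words count vectors to show the metric's shape (a real evaluation would use sentence embeddings, and BLEU via a library such as sacrebleu):

```python
import math
from collections import Counter

def cosine_similarity(hyp: str, ref: str) -> float:
    """Cosine similarity between bag-of-words count vectors.

    Stdlib sketch; the dataset's reported metric uses embeddings instead.
    """
    a, b = Counter(hyp.lower().split()), Counter(ref.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

hyp = "một con chó đang chạy trên bãi cỏ"
ref = "một con chó chạy trên cỏ xanh"
print(round(cosine_similarity(hyp, ref), 3))
```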
📌 Citation
If you use HVU_VIC in your research, please cite:
```bibtex
@inproceedings{nguyen2026method,
  author    = {Ha Nguyen and Quyen Nguyen and Dang Do and Ngoc Hoang and Quoc Le and Chung Mai},
  title     = {A Method for Building an Image Caption Corpus for Low-Resource Languages},
  booktitle = {Proceedings of the 2026 International Symposium on Information and Communication Technology},
  year      = {2026},
  publisher = {...},
  series    = {...},
  address   = {...},
  note      = {To appear}
}
```
❤️ Support / Funding
If you find HVU_VIC useful, please consider supporting our work.
Your contributions help us maintain the dataset, improve quality, and release new versions (cleaning, expansion, benchmarks, and tools).
🇻🇳 Donate via VietQR (scan to support)
This VietQR / NAPAS 247 code can be scanned by Vietnamese banking apps and some international payment apps that support QR bank transfers.
Bank: VietinBank (Vietnam)
Account name: NGUYEN TIEN HA
Account number: 103004492490
Branch: VietinBank CN PHU THO - HOI SO
🌍 International Support (Quick card payment)
If you are outside Vietnam, you can support this project via Buy Me a Coffee
(no PayPal account needed — pay directly with a credit/debit card):
- BuyMeACoffee: https://buymeacoffee.com/hanguyen0408
🌍 International Support (PayPal)
If you prefer PayPal, you can also support us here:
- PayPal.me: https://paypal.me/HaNguyen0408
✨ Other ways to support
- ⭐ Star this repository / dataset on Hugging Face
- 📌 Cite our paper if you use it in your research
- 🐛 Open issues / pull requests to improve the dataset and tools
📬 Contact / Maintainers
For questions, feedback, collaborations, or issue reports related to HVU_VIC, please contact:
Dr. Ha Nguyen (Project Lead)
Hung Vuong University, Phu Tho, Vietnam
Email: nguyentienha@hvu.edu.vn