PRIM: Towards Practical In-Image Multilingual Machine Translation (EMNLP 2025 Main)
📄 Paper (arXiv) | 💻 Code (GitHub) | 🤗 Training set (HuggingFace) | 🤗 Model (HuggingFace)
Introduction
This repository provides the PRIM benchmark, which is introduced in our paper PRIM: Towards Practical In-Image Multilingual Machine Translation.
PRIM (Practical In-Image Multilingual Machine Translation) is the first publicly available benchmark built from real-world captured images for In-Image Machine Translation.
The source images are collected from [1] and [2]. We sincerely thank the authors of these datasets for making their data available.
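If you use the 🤗 `datasets` library, loading the benchmark might look like the minimal sketch below. The repository ID and split name are assumptions for illustration only; please substitute the actual identifiers from the HuggingFace links above.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# NOTE: the repository ID and split name below are hypothetical placeholders;
# replace them with the identifiers linked in this card.
from datasets import load_dataset

dataset = load_dataset("BITHLP/PRIM", split="test")  # hypothetical repo ID / split

# Inspect one example; the field names depend on the released schema.
print(dataset[0])
```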
Citation
If you find our work helpful, we would greatly appreciate it if you could cite our paper:
@inproceedings{tian-etal-2025-prim,
title = "{PRIM}: Towards Practical In-Image Multilingual Machine Translation",
author = "Tian, Yanzhi and
Liu, Zeming and
Liu, Zhengyang and
Feng, Chong and
Li, Xin and
Huang, Heyan and
Guo, Yuhang",
editor = "Christodoulopoulos, Christos and
Chakraborty, Tanmoy and
Rose, Carolyn and
Peng, Violet",
booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.emnlp-main.691/",
pages = "13693--13708",
ISBN = "979-8-89176-332-6",
abstract = "In-Image Machine Translation (IIMT) aims to translate images containing texts from one language to another. Current research of end-to-end IIMT mainly conducts on synthetic data, with simple background, single font, fixed text position, and bilingual translation, which can not fully reflect real world, causing a significant gap between the research and practical conditions. To facilitate research of IIMT in real-world scenarios, we explore Practical In-Image Multilingual Machine Translation (IIMMT). In order to convince the lack of publicly available data, we annotate the PRIM dataset, which contains real-world captured one-line text images with complex background, various fonts, diverse text positions, and supports multilingual translation directions. We propose an end-to-end model VisTrans to handle the challenge of practical conditions in PRIM, which processes visual text and background information in the image separately, ensuring the capability of multilingual translation while improving the visual quality. Experimental results indicate the VisTrans achieves a better translation quality and visual effect compared to other models. The code and dataset are available at: https://github.com/BITHLP/PRIM."
}
[1] Modal Contrastive Learning Based End-to-End Text Image Machine Translation
[2] MIT-10M: A Large Scale Parallel Corpus of Multilingual Image Translation