VISQAM (Visual Question-Answering for Thematic Maps)

Dataset Summary

VISQAM (Visual Question-Answering for Thematic Maps) is a dataset of 400 geographic thematic maps annotated for visual question-answering (VQA) and legend detection tasks. Each map includes 3-5 question-answer (QA) pairs about its geographic content, along with bounding box annotations for map legends. The dataset combines maps from multiple sources, covering land use, climate variables, choropleth maps, and diverse cartographic layouts.

This dataset is designed for tasks at the intersection of computer vision, natural language processing, and geographic information science (GIS), including map understanding, VQA on specialized scientific visualizations, and legend detection.

Dataset Structure

Data Instances

The dataset contains 400 images organized into 4 categories. Each image is accompanied by:

  • 3-5 QA pairs related to geographic information in the map
  • Bounding box annotations for map legends
  • Original source attribution and licensing information

All annotations are provided in annotations.json, formatted following the COCO annotation standard.

Data Structure

  • images: Directory containing the thematic map images, divided into categories (one directory for each category)
  • annotations.json: COCO-formatted file (a hypothetical entry is sketched after this list) containing:
    • QA pairs for each image
    • Bounding box coordinates for legend regions
    • Source URLs and license information for each image
    • Image metadata (width, height)
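
For illustration, here is a minimal sketch of how an entry in annotations.json might look. It assumes standard COCO conventions plus the custom fields used elsewhere in this card (qa_annotations, source_url, license); all keys beyond those and all values are placeholders, not guaranteed contents.

import json

# Hypothetical structure, shown as a Python dict; the COCO-standard
# "annotations" and "licenses" sections are assumed, and all values
# below are placeholders.
example = {
    "images": [{
        "id": 1,
        "file_name": "Climate_variables/example_map.png",  # hypothetical path
        "width": 1024,
        "height": 768,
        "source_url": "https://example.org/source-map",  # placeholder URL
        "license": 1,
        "qa_annotations": [
            {"question": "Which region has the highest temperature?",
             "answer": "..."},  # placeholder QA pair
        ],
    }],
    "annotations": [
        # Legend bounding boxes in COCO [x, y, width, height] format
        {"id": 1, "image_id": 1, "bbox": [800.0, 600.0, 180.0, 120.0]},
    ],
    "licenses": [
        {"id": 1, "name": "CC-BY-4.0"},
    ],
}
print(json.dumps(example, indent=2))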

Image Categories

The dataset includes 4 categories of thematic maps:

  1. LULC (Land Use-Land Cover): Maps depicting land cover classifications and land use patterns

  2. Climate_variables: Maps showing atmospheric and meteorological data including temperature, precipitation, air quality, and other climate indicators

    • Sources: Copernicus Atmosphere Monitoring Service (CAMS), European Centre for Medium-Range Weather Forecasts (ECMWF)
  3. OWID (Our World in Data): Choropleth maps representing socioeconomic, demographic, and development indicators by country, with a focus on environment-related indicators (e.g., energy consumption, CO2 emissions).

  4. misc_layouts: Maps with diverse cartographic layouts and design patterns

Data Splits

Currently, the dataset provides a single train split containing all 400 examples.

Annotations

Each map in the dataset has been annotated with:

  • QA Pairs: 3-5 questions per image focusing on geographic content, spatial relationships, data interpretation, and map-specific information. The questions were developed following the GQA taxonomy, adapted to the needs of this dataset; every question carries a structural and a semantic label.
  • Legend Bounding Boxes: Rectangular regions (x, y, width, height) marking the location of map legends.

All annotations follow the COCO format for consistency with existing computer vision pipelines and include complete attribution to original sources.
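
As a quick example of using these boxes: assuming COCO-style [x, y, width, height] lists and image paths that resolve within the cloned repository, a legend region could be cropped with Pillow as follows (the file path and box values here are hypothetical):

from PIL import Image

# A minimal sketch, assuming a COCO-style [x, y, width, height] box;
# the image path and coordinates below are hypothetical.
def crop_legend(image_path, bbox):
    x, y, w, h = bbox
    img = Image.open(image_path)
    return img.crop((x, y, x + w, y + h))  # PIL expects (left, upper, right, lower)

legend = crop_legend("visqam/images/LULC/example_map.png", [800, 600, 180, 120])
legend.save("legend.png")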

Personal and sensitive information

The dataset consists of publicly available thematic maps depicting geographic and statistical data. No personal or sensitive information about individuals is included.

Usage

Downloading the dataset

To preserve the dataset's directory structure and naming conventions, download the repository using git clone:

git clone https://huggingface.co/datasets/koukeft/visqam
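
Alternatively, the Hub's auto-converted Parquet view can be loaded with the datasets library. Note that this view appears to expose only the images and their category labels, so annotations.json should still be taken from the cloned repository. A sketch, assuming the default conversion:

from datasets import load_dataset

# Loads the auto-converted view (images + category labels only);
# QA pairs and legend boxes are not included in this view.
ds = load_dataset("koukeft/visqam", split="train")
print(ds[0]["label"])  # class label, e.g. 0 = Climate_variables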

Working with the annotations

The annotations.json file contains detailed annotations in COCO format. You can load it separately:

import json

# Change the path to annotations.json if necessary
with open('visqam/annotations.json', 'r') as f:
    annotations = json.load(f)

# Access QA pairs: each entry in "images" carries its own QA
# annotations, source URL, and license id
for image in annotations['images']:
    qa_pairs = image['qa_annotations']
    source_url = image['source_url']
    license_id = image['license']
    source_license = annotations['licenses'][license_id - 1]['name']
    print(f"\nFor the image found at {source_url}, "
          f"licensed under {source_license}, "
          f"there are {len(qa_pairs)} QA annotations.")
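
Legend bounding boxes can be paired with their images in a similar way. This is a sketch, assuming the boxes sit in the standard COCO top-level annotations list with image_id and bbox fields:

from collections import defaultdict

# Group legend boxes by image id (assumes standard COCO fields;
# reuses the "annotations" dict loaded above).
legend_boxes = defaultdict(list)
for ann in annotations.get('annotations', []):
    legend_boxes[ann['image_id']].append(ann['bbox'])

for image in annotations['images']:
    boxes = legend_boxes.get(image['id'], [])
    print(f"Image {image['id']}: {len(boxes)} legend box(es)")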

Potential Use Cases

  • VQA on scientific geovisualizations
  • Geographic information extraction from maps
  • Legend detection
  • Cross-modal learning between cartographic visualizations and natural language
  • Automated map understanding and interpretation
  • Training vision-language models on geographic domain content

Licensing and attribution

Dataset license

This dataset is released under CC-BY-4.0 (Creative Commons Attribution 4.0 International).

Source attribution

The images in the dataset retain the original license and source attribution as documented in annotations.json. Users must:

  • Consult the original licenses for each image category
  • Provide appropriate attribution when using or redistributing images
  • Respect the terms of the original data sources

Source Licenses by Category:

Source licenses vary by category; please refer to annotations.json for the specific license of each individual image.

Citation

If you use this dataset in your research, please cite:

@dataset{visqam2025,
  author = {Koukouraki, Eftychia and Ajay, Ajay and Abubakar, Ahmad},
  title = {VISQAM: Visual Question Answering for Thematic Maps},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/koukeft/visqam}
}

Contact

For questions, feedback, or issues related to this dataset, please open a discussion on the dataset page.

Acknowledgments

The development of this dataset was funded by the NFDI4Earth Incubator Lab programme.

Version History

v1.0.0 (January 2025) - Initial Release

  • 400 images across 4 categories
  • 3-5 QA pairs per image
  • Legend bounding box annotations
  • Categories: LULC (100), Climate_variables (100), OWID (100), misc_layouts (100)

Planned Updates

  • v1.1.0: Additional images for each category (target: 800 total)