---
license: cc-by-4.0
language:
  - en
configs:
  - config_name: Temporal
    data_files:
      - split: default
        path: Temporal/*.json
  - config_name: Invariant
    data_files:
      - split: default
        path: Invariant/*.json
task_categories:
  - question-answering
  - text-generation
tags:
  - circuit
  - temporal
  - knowledge
  - triplet
size_categories:
  - 10K<n<100K
---

# [ACL 2025] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information

This repository contains two separate data subsets (configs):

- **Temporal**: JSON files in `Temporal/` that contain time-dependent (temporal) knowledge.
- **Invariant**: JSON files in `Invariant/` that describe time-invariant knowledge, based on LRE (Linear Relational Embeddings).

Each subset has its own schema. Defining them as two configs in the YAML header above makes Hugging Face's Dataset Viewer show "Temporal" and "Invariant" as separate options in the configuration dropdown, so you can explore each schema independently without a schema-mismatch error.


## Dataset Overview

**Motivation:** Large language models (LLMs) often struggle to answer questions whose answers change over time. We investigated whether there exist specialized attention heads, which we call Temporal Heads, that are triggered by explicit dates (e.g., "In 2004, …") or by implicit textual cues (e.g., "In the year …") and that help the model recall or update time-specific facts.

- **Method**: Using Knowledge Circuit analysis, we identified attention heads in LLMs that activate strongly on temporal signals (timestamps, years, etc.).
- **Findings**: These Temporal Heads are crucial for time-sensitive recall. Ablating (disabling) them significantly degrades the model's performance on time-dependent questions, while its performance on static (time-invariant) knowledge remains almost unchanged; see the sketch after this list.
- **Implications**: By manipulating the outputs of these specific heads, one can potentially edit or correct a model's temporal knowledge directly (e.g., if its internal knowledge about "Who was president in 1999?" is outdated).
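
To make the ablation step concrete, here is a minimal sketch using TransformerLens (the library that the acknowledged eap-ig repository builds on). It zeroes out a single attention head's output during the forward pass and compares next-token predictions with and without that head. The model name and the layer/head indices are illustrative placeholders, not the actual Temporal Heads identified in the paper.

```python
# Minimal head-ablation sketch with TransformerLens.
# "gpt2" and (LAYER, HEAD) are placeholders, NOT the paper's Temporal Heads.
import transformer_lens.utils as utils
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
LAYER, HEAD = 5, 1  # hypothetical head to knock out

def ablate_head(z, hook):
    # z: [batch, pos, n_heads, d_head] -- zero out one head's output
    z[:, :, HEAD, :] = 0.0
    return z

tokens = model.to_tokens("In 2004, the president of the United States was")

clean_logits = model(tokens)
ablated_logits = model.run_with_hooks(
    tokens,
    fwd_hooks=[(utils.get_act_name("z", LAYER), ablate_head)],
)

# Compare the top next-token prediction with and without the head.
print("clean:  ", model.to_string(clean_logits[0, -1].argmax().item()))
print("ablated:", model.to_string(ablated_logits[0, -1].argmax().item()))
```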

## Usage

```python
from datasets import load_dataset

# 1. Load the "Temporal" config.
#    Each example has fields like:
#    {"name": ..., "prompt_templates": [...],
#     "samples": [{"subject": ..., "object": ..., "time": ...}, ...], ...}
temporal = load_dataset("dmis-lab/TemporalHead", "Temporal")["default"]

# 2. Load the "Invariant" config.
#    Each example has fields like:
#    {"name": ..., "prompt_templates": [...], "properties": {"relation_type": ..., ...},
#     "samples": [{"subject": ..., "object": ...}, ...], ...}
invariant = load_dataset("dmis-lab/TemporalHead", "Invariant")["default"]
```
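
As a quick sanity check, you can inspect a single record from each subset; the snippet below assumes the field names listed in the comments above.

```python
# Peek at the first record of each subset (field names as described above).
example = temporal[0]
print(example["name"])
print(example["prompt_templates"])
print(example["samples"][0])       # e.g. {"subject": ..., "object": ..., "time": ...}

print(invariant[0]["properties"])  # e.g. {"relation_type": ..., ...}
```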

## Citation and Acknowledgements

If you find our work useful in your research, please consider citing our paper:

```bibtex
@article{park2025does,
  title={Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information},
  author={Park, Yein and Yoon, Chanwoong and Park, Jungwoo and Jeong, Minbyul and Kang, Jaewoo},
  journal={arXiv preprint arXiv:2502.14258},
  year={2025}
}
```

We also gratefully acknowledge the following open-source repositories and kindly ask that you cite their accompanying papers as well.

[1] https://github.com/zjunlp/KnowledgeCircuits
[2] https://github.com/hannamw/eap-ig
[3] https://github.com/evandez/relations

## Contact

For any questions or issues, feel free to reach out to 522yein (at) korea.ac.kr.