---
license: cc-by-4.0
language:
- en
configs:
- config_name: Temporal
data_files:
- split: default
path: Temporal/*.json
- config_name: Invariant
data_files:
- split: default
path: Invariant/*.json
task_categories:
- question-answering
- text-generation
tags:
- circuit
- temporal
- knowledge
- triplet
size_categories:
- 10K<n<100K
---
# \[ACL 2025\] Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information
<center><img src = "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5efbdc4ac3896117eab961a9%2FyzXDiuGZUHCaVkERZFSIO.png%26quot%3B%3C%2Fspan%3E width="1000" height="1000"></center>
**This repository contains two separate subsets of data (configs):**
- **Temporal**: JSON files in `Temporal` that include temporal knowledge.
- **Invariant**: JSON files in `Invariant` that describe time-invariant knowledge based on [LRE](https://arxiv.org/abs/2308.09124).
Each subset has its own schema. Because they are defined as two configs in the YAML header above, the Hugging Face Dataset Viewer shows **“Temporal”** and **“Invariant”** as separate options in the configuration dropdown, so you can explore each schema independently without a schema-mismatch error.
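As a rough illustration of the two schemas, here is what one record from each config might look like. The field names are taken from the usage example further below; the values are placeholders, not actual dataset entries:

```python
# Illustrative, hypothetical records: only the field names follow the dataset
# card; the values below are placeholders.
temporal_record = {
    "name": "example temporal relation",
    "prompt_templates": ["In {}, ..."],
    "samples": [
        {"subject": "Example Subject", "object": "Example Object", "time": "2004"},
    ],
}

invariant_record = {
    "name": "example invariant relation",
    "prompt_templates": ["..."],
    "properties": {"relation_type": "..."},
    "samples": [
        {"subject": "Example Subject", "object": "Example Object"},
    ],
}
```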
---
## Dataset Overview
**Motivation:**
Large language models (LLMs) often struggle to answer questions whose answers change over time. We investigated whether there exist specialized attention heads—**Temporal Heads**—that are triggered by explicit dates (e.g., “In 2004, …”) or by implicit textual cues (e.g., “In the year …”) and that help the model recall or update time-specific facts.
- **Method:** Using [Knowledge Circuit](https://arxiv.org/abs/2405.17969) analysis, we identified attention heads in LLMs that strongly activate on temporal signals (timestamps, years, etc.).
- **Findings:** These Temporal Heads are crucial for time-sensitive recall. When they are ablated (disabled), the model’s performance on time-dependent questions degrades significantly, whereas its performance on static (time-invariant) knowledge remains almost unchanged (see the ablation sketch after this list).
- **Implications:** By manipulating the outputs of these specific heads, one can potentially edit or correct a model’s temporal knowledge directly (e.g., if its internal knowledge about “Who was president in 1999?” is outdated).
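For intuition, the sketch below shows what ablating a single attention head can look like mechanically. It is a generic example using Hugging Face `transformers` hooks on GPT-2, not the paper’s circuit-analysis code, and the layer/head indices are placeholders rather than Temporal Heads identified in the paper:

```python
# Minimal sketch of ablating one attention head in a GPT-2-style model.
# The (LAYER, HEAD) pair is a placeholder, not a head found by the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

LAYER, HEAD = 10, 7                      # hypothetical indices
head_dim = model.config.n_embd // model.config.n_head

def zero_head(module, inputs):
    # c_proj receives the concatenated per-head outputs [batch, seq, n_embd];
    # zeroing one head's slice removes its contribution to the residual stream.
    hidden = inputs[0].clone()
    hidden[..., HEAD * head_dim:(HEAD + 1) * head_dim] = 0.0
    return (hidden,)

hook = model.transformer.h[LAYER].attn.c_proj.register_forward_pre_hook(zero_head)

prompt = "In 2004, the president of the United States was"
with torch.no_grad():
    out = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=5)
print(tokenizer.decode(out[0]))

hook.remove()                            # remove the ablation hook
```

Comparing the model’s answers on time-dependent vs. time-invariant prompts with and without such a hook is the basic shape of the ablation comparison described above.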
---
## Usage
```python
from datasets import load_dataset

# 1. Load the "Temporal" config.
#    Each example has fields like:
#    { "name": ..., "prompt_templates": [...],
#      "samples": [ { "subject": ..., "object": ..., "time": ... }, ... ], ... }
temporal = load_dataset("dmis-lab/TemporalHead", "Temporal")["default"]

# 2. Load the "Invariant" config.
#    Each example has fields like:
#    { "name": ..., "prompt_templates": [...], "properties": { "relation_type": ..., ... },
#      "samples": [ { "subject": ..., "object": ... }, ... ], ... }
invariant = load_dataset("dmis-lab/TemporalHead", "Invariant")["default"]
```
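Once loaded, you can inspect individual relations. This is an inspection sketch: the field names follow the comments above, but the exact nesting that `datasets` infers for `samples` from the JSON files may vary, so adjust indexing as needed.

```python
# Quick look at one relation from each config (field names as in the comments above).
record = temporal[0]
print(record["name"])
print(record["prompt_templates"])
print(record["samples"])      # (subject, object, time) triples

record = invariant[0]
print(record["name"])
print(record["properties"])
print(record["samples"])      # (subject, object) pairs
```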
---
## Citation and Acknowledgements
If you find our work useful in your research, please consider citing our [paper](https://arxiv.org/abs/2502.14258):
```
@article{park2025does,
title={Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information},
author={Park, Yein and Yoon, Chanwoong and Park, Jungwoo and Jeong, Minbyul and Kang, Jaewoo},
journal={arXiv preprint arXiv:2502.14258},
year={2025}
}
```
We also gratefully acknowledge the following open-source repositories and kindly ask that you cite their accompanying papers as well.
[1] https://github.com/zjunlp/KnowledgeCircuits
[2] https://github.com/hannamw/eap-ig
[3] https://github.com/evandez/relations
---
## Contact
For any questions or issues, feel free to reach out to [522yein (at) korea.ac.kr].