---
license: mit
language:
- en
pipeline_tag: text-generation
arxiv:
- https://arxiv.org/abs/2508.06595
library_name: transformers
---
## Model Details
The best-performing [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) checkpoint unlearned with [RMU](https://arxiv.org/abs/2403.03218) on the Keyword-Bio forget set. For more details, please see [our paper](https://arxiv.org/abs/2508.06595).
### Sources
- Base model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Repository: [https://github.com/xyzhu123/Synthetic_Textbook](https://github.com/xyzhu123/Synthetic_Textbook)
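### Usage
A minimal inference sketch with 🤗 Transformers is shown below. The repository id is a placeholder (the card does not state the Hub id of this checkpoint); substitute the actual one.
```python
# Minimal inference sketch. The repo id below is a placeholder (assumption);
# substitute the actual Hub id of this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WhyTheMoon/Llama-3-8B-Instruct_RMU_Keyword-Bio"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama-3-Instruct checkpoints expect the chat template.
messages = [{"role": "user", "content": "Summarize what machine unlearning is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```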
### Performance
| Model                               | WMDP-Bio | tinyMMLU | GSM8k | TriviaQA |
|-------------------------------------|:--------:|:--------:|:-----:|:--------:|
| Llama-3-8B-Instruct                 |  71.01   |  59.21   | 75.28 |  51.09   |
| Llama-3-8B-Instruct_RMU_Keyword-Bio |  70.30   |  60.42   | 75.97 |  51.31   |

All values are accuracy (%). WMDP-Bio is the forget benchmark (lower is better); the remaining columns measure retained general capability (higher is better).
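The card does not state the evaluation setup behind these numbers. A hypothetical sketch using EleutherAI's lm-evaluation-harness, whose registered task names only roughly match the columns above, might look like:
```python
# Hypothetical evaluation sketch using EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The repo id is a placeholder, and this is an
# approximation of the benchmarks above, not a reproduction of the table.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=WhyTheMoon/Llama-3-8B-Instruct_RMU_Keyword-Bio,dtype=bfloat16",  # placeholder id
    tasks=["wmdp_bio", "tinyMMLU", "gsm8k", "triviaqa"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```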
## Citation
If you find this useful in your research, please consider citing our paper:
```bibtex
@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset},
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595},
}
```