---
license: mit
language:
- en
pipeline_tag: text-generation
arxiv:
- https://arxiv.org/abs/2508.06595
library_name: transformers
---

## Model Details

The best-performing [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) checkpoint unlearned with [RMU](https://arxiv.org/abs/2403.03218) on the Filter-Cyber forget set. For more details, please see [our paper](https://arxiv.org/abs/2508.06595).

### Sources

- Base model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Repository: [https://github.com/xyzhu123/Synthetic_Textbook](https://github.com/xyzhu123/Synthetic_Textbook)

### Performance

| Model | WMDP-Cyber | tinyMMLU | GSM8k | TriviaQA |
|-------------------------------------------|:----------:|:--------:|:-----:|:--------:|
| Mistral-7B-Instruct-v0.3 | 41.52 | 64.20 | 50.19 | 56.81 |
| Mistral-7B-Instruct-v0.3_RMU_Filter-Cyber | 27.77 | 55.71 | 36.09 | 53.44 |

## Citation

If you find this work useful in your research, please consider citing our paper:

```
@misc{zhu2025llmunlearningexpertcurated,
  title={LLM Unlearning Without an Expert Curated Dataset},
  author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
  year={2025},
  eprint={2508.06595},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.06595},
}
```
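
## How to Use

Since the card lists `library_name: transformers` and `pipeline_tag: text-generation`, the checkpoint can presumably be loaded like any Mistral-Instruct model. The sketch below is an assumption, not an official recipe from the authors: the `model_id` placeholder must be replaced with this model's actual Hub repo id, and the helper function name `generate_response` is ours.

```python
# Minimal sketch of querying the unlearned checkpoint with transformers.
# Assumptions: the repo id below is a placeholder (substitute this model's
# actual Hub id), and the checkpoint uses Mistral's standard chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_response(model_id: str, prompt: str, max_new_tokens: int = 256) -> str:
    """Load the checkpoint and return a greedy completion for a single user turn."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Format the prompt with the model's chat template before generating.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


# Example usage (downloads the model weights; not executed here):
# print(generate_response("<this-model's-hub-id>", "Explain RMU unlearning."))
```

Loading with `device_map="auto"` additionally requires the `accelerate` package; on CPU-only machines, drop that argument.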