This model is based on the merging method from the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) (arXiv:2311.03099).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with mistralai/Mistral-7B-v0.1 as the base model.
The following models were included in the merge:
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
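For intuition, here is a minimal, illustrative PyTorch sketch of the per-tensor operation behind DARE TIES (this is not mergekit's actual implementation): each fine-tuned model's task vector (its delta from the base weights) is randomly sparsified to the configured `density` and rescaled, the weighted contributions are combined, and a simplified sign election keeps only the entries that agree with the majority sign before adding the result back to the base. The function names and the toy tensors are hypothetical.

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: keep each entry of the task vector with probability
    `density`, then rescale survivors by 1/density so the expected update
    is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base: torch.Tensor, deltas, densities, weights) -> torch.Tensor:
    """Toy single-tensor DARE-TIES merge: sparsify each model's delta with
    DARE, apply its merge weight, elect a per-parameter majority sign, and
    keep only contributions that agree with the elected sign."""
    contribs = torch.stack([w * dare(d, rho)
                            for d, rho, w in zip(deltas, densities, weights)])
    elected = torch.sign(contribs.sum(dim=0))  # simplified sign election
    agree = torch.sign(contribs) == elected
    return base + torch.where(agree, contribs, torch.zeros_like(contribs)).sum(dim=0)

# Toy usage on a single 4-element weight tensor
base = torch.zeros(4)
deltas = [torch.tensor([0.2, -0.1, 0.3, 0.0]),
          torch.tensor([0.1, 0.4, -0.2, 0.1])]
merged = dare_ties(base, deltas, densities=[0.53, 0.53], weights=[0.15, 0.3])
print(merged)
```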
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: mlabonne/Monarch-7B
    parameters:
      density: 0.53
      weight: 0.15
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      density: 0.53
      weight: 0.3
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      density: 0.53
      weight: 0.3
  - model: Intel/neural-chat-7b-v3-1
    parameters:
      density: 0.53
      weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
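A configuration like the one above can be passed to mergekit's CLI (e.g. `mergekit-yaml config.yml ./output-model`) to produce the merged checkpoint. The result loads like any Mistral-7B model; below is a minimal usage sketch with the `transformers` library, where the repo id is a placeholder rather than this model's actual id (`device_map="auto"` assumes `accelerate` is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; substitute the actual id of this merged model.
model_id = "your-namespace/your-merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```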
Detailed results can be found on the Open LLM Leaderboard; a summary is given below.
| Metric | Value |
|---|---|
| Avg. | 69.28 |
| AI2 Reasoning Challenge (25-shot) | 67.66 |
| HellaSwag (10-shot) | 85.94 |
| MMLU (5-shot) | 65.02 |
| TruthfulQA (0-shot) | 56.39 |
| Winogrande (5-shot) | 79.32 |
| GSM8k (5-shot) | 61.33 |
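The reported average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Unweighted mean of the six Open LLM Leaderboard scores
scores = [67.66, 85.94, 65.02, 56.39, 79.32, 61.33]
print(f"{sum(scores) / len(scores):.2f}")  # 69.28
```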