Paper: [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) (arXiv:2311.03099)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base.
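As a rough intuition for what DARE does during the merge, here is a minimal sketch of its drop-and-rescale step for a single tensor. This is not mergekit's actual implementation, and the function name `dare_sparsify` is made up for illustration; the full DARE TIES method additionally applies TIES-style sign election when combining the sparsified deltas.

```python
import torch

def dare_sparsify(base: torch.Tensor, finetuned: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: randomly drop delta parameters, rescale the survivors."""
    delta = finetuned - base  # the fine-tune's "task vector" relative to the base
    if density >= 1.0:
        return delta          # density 1 keeps the entire delta (no dropping)
    keep = torch.bernoulli(torch.full_like(delta, density))  # keep each param with prob = density
    return keep * delta / density  # rescale so the expected delta stays unchanged
```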
The following models were included in the merge:

* NousResearch/Meta-Llama-3-8B-Instruct
* Dampfinchen/Llama-3-8B-Ultra-Instruct
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 1
      weight: 1
  - model: Dampfinchen/Llama-3-8B-Ultra-Instruct
    parameters:
      density: 0.5
      weight: 0.2
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
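Assuming mergekit is installed (e.g. via `pip install mergekit`), a merge like this can typically be reproduced by saving the configuration above to `config.yaml` and running mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory`. Note that DARE's random dropping makes the result stochastic, so reruns may not be bit-identical.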
This is a test of the "salt sprinkle" method. The goal is to retain all of Llama 3 Instruct's capabilities while adding better roleplay, RAG, German, and story-writing capabilities in the form of Ultra Instruct. The model may generate harmful responses; I am not responsible for what you do with it.
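Concretely, the configuration above keeps the full Instruct delta (density 1, weight 1) and sprinkles in a sparse, down-weighted Ultra Instruct delta (density 0.5, weight 0.2). The snippet below illustrates this per tensor, reusing the hypothetical `dare_sparsify` sketch from earlier and ignoring TIES sign election for brevity; the tensors are dummy stand-ins, not real model weights.

```python
import torch

# Dummy tensors standing in for one weight matrix from each checkpoint.
base = torch.randn(4, 4)                   # stand-in for Meta-Llama-3-8B
instruct = base + 0.1 * torch.randn(4, 4)  # stand-in for Meta-Llama-3-8B-Instruct
ultra = base + 0.1 * torch.randn(4, 4)     # stand-in for Llama-3-8B-Ultra-Instruct

merged = (
    base
    + 1.0 * dare_sparsify(base, instruct, density=1.0)  # weight 1, density 1: full Instruct delta
    + 0.2 * dare_sparsify(base, ultra, density=0.5)     # weight 0.2, density 0.5: the "sprinkle"
)
```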
Detailed Open LLM Leaderboard results can be found here.
| Metric | Value |
|---|---|
| Avg. | 67.61 |
| AI2 Reasoning Challenge (25-shot) | 61.35 |
| HellaSwag (10-shot) | 77.76 |
| MMLU (5-shot) | 67.88 |
| TruthfulQA (0-shot) | 52.82 |
| Winogrande (5-shot) | 74.98 |
| GSM8k (5-shot) | 70.89 |