---
license: llama3.2
library_name: transformers
tags:
- mergekit
- merge
base_model:
- KingNish/Reasoning-Llama-1b-v0.1
- zztheaven/Llama-3.2-1B-Instruct-Open-R1-GRPO
- prithivMLmods/Bellatrix-Tiny-1B-R1
model-index:
- name: Llama_3.2_1b_OpenTree_R1_0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 53.66
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 6.21
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 4.76
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 0.34
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.6
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 7.5
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1
      name: Open LLM Leaderboard
---
# Llama_3.2_1b_OpenTree_R1_0.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [zztheaven/Llama-3.2-1B-Instruct-Open-R1-GRPO](https://huggingface.co/zztheaven/Llama-3.2-1B-Instruct-Open-R1-GRPO) as a base.
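For intuition, here is a minimal per-tensor sketch of the Model Stock idea: average the fine-tuned weights, then interpolate toward the base weights with a ratio derived from the angle between the fine-tuned models' deltas from the base. This illustrates the method described in the paper, not mergekit's actual implementation; the function name and the use of PyTorch are illustrative choices.

```python
# Illustrative per-tensor sketch of Model Stock (Jang et al., 2024) --
# not mergekit's implementation.
import torch

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Interpolate between the base weights and the mean of the fine-tuned
    weights, with the ratio set by the angle between the fine-tuned models'
    task vectors (their deltas from the base)."""
    deltas = [ft - base for ft in finetuned]
    n = len(deltas)
    # Average pairwise cosine similarity between task vectors.
    cos_sims = [
        torch.nn.functional.cosine_similarity(
            deltas[i].flatten(), deltas[j].flatten(), dim=0
        )
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(cos_sims).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = n*cos(theta) / (1 + (n-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    avg = torch.stack(finetuned).mean(dim=0)
    return t * avg + (1 - t) * base
```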
### Models Merged
The following models were included in the merge:
* [KingNish/Reasoning-Llama-1b-v0.1](https://huggingface.co/KingNish/Reasoning-Llama-1b-v0.1)
* [prithivMLmods/Bellatrix-Tiny-1B-R1](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
  - model: KingNish/Reasoning-Llama-1b-v0.1
    parameters:
      weight: 1.0
  - model: prithivMLmods/Bellatrix-Tiny-1B-R1
    parameters:
      weight: 1.0
base_model: zztheaven/Llama-3.2-1B-Instruct-Open-R1-GRPO
dtype: bfloat16
normalize: true
```
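To reproduce the merge, the configuration above can be saved to a file and passed to mergekit, either through its `mergekit-yaml` CLI or its Python API. A sketch using the Python API (interface as shown in mergekit's README; `config.yaml` and the output path are placeholder names):

```python
# Sketch: reproducing the merge via mergekit's Python API.
# Equivalent CLI: mergekit-yaml config.yaml ./Llama_3.2_1b_OpenTree_R1_0.1
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:  # the YAML above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama_3.2_1b_OpenTree_R1_0.1",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```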
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Nexesenex__Llama_3.2_1b_OpenTree_R1_0.1-details); the headline numbers are summarized below.
| Metric |Value|
|-------------------|----:|
|Avg. |12.34|
|IFEval (0-Shot) |53.66|
|BBH (3-Shot) | 6.21|
|MATH Lvl 5 (4-Shot)| 4.76|
|GPQA (0-shot) | 0.34|
|MuSR (0-shot) | 1.60|
|MMLU-PRO (5-shot) | 7.50|
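For completeness, a minimal sketch of running the merged model with `transformers` (the repo id is taken from the leaderboard links above; the prompt and generation settings are illustrative):

```python
# Minimal sketch: loading and prompting the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nexesenex/Llama_3.2_1b_OpenTree_R1_0.1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Briefly explain model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```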