Full finetuning and LoRA checkpoints for Llama-2-7B, trained on Magicoder-Evol-Instruct-110K, StarCoder-Python, and OpenWebMath
Organization Card
These are the model weights associated with the TMLR 2024 publication "LoRA Learns Less and Forgets Less" (Biderman et al., 2024). This work was done in collaboration with Databricks Mosaic AI Research.
Models (17 total; 10 listed below)

Magicoder-Evol-Instruct-110K (code instruction finetuning):
- LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05 (full finetuning, 7B, lr 5e-05)
- LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32 (LoRA, rank 16, alpha 32)
- LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128 (LoRA, rank 64, alpha 128)
- LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512 (LoRA, rank 256, alpha 512)

StarCoder-Python (code continued pretraining, 20B tokens):
- LoRA-TMLR-2024/starcoder-full-finetuning-lr-1e-05-20B-token (full finetuning, 7B, lr 1e-05)
- LoRA-TMLR-2024/starcoder-lora-rank-16-20B-tokens (LoRA, rank 16)
- LoRA-TMLR-2024/starcoder-lora-rank-256-20B-tokens (LoRA, rank 256)

OpenWebMath (math continued pretraining, 20B tokens):
- LoRA-TMLR-2024/openwebmath-full-finetuning-lr-1e-05-20B-tokens (full finetuning, 7B, lr 1e-05)
- LoRA-TMLR-2024/openwebmath-lora-rank-16-20B-tokens (LoRA, rank 16)
- LoRA-TMLR-2024/openwebmath-lora-rank-256-20B-tokens (LoRA, rank 256)
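A minimal loading sketch with the Hugging Face transformers and peft libraries. It assumes the LoRA checkpoints are stored in PEFT adapter format and target the meta-llama/Llama-2-7b-hf base model; both the adapter format and the exact base-model ID are assumptions here, though the paper finetunes Llama-2-7B.

```python
# Sketch: load one of the LoRA adapters on top of the Llama-2-7B base model.
# Assumptions (not confirmed by this page): adapters are in PEFT format, and
# the base model is meta-llama/Llama-2-7b-hf.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"  # assumed base-model ID

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# Attach one of the LoRA adapters listed above.
model = PeftModel.from_pretrained(
    base_model, "LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32"
)
model = model.merge_and_unload()  # optional: fold the adapter into the base weights

# The full-finetuning runs are standalone checkpoints and should load directly:
# model = AutoModelForCausalLM.from_pretrained(
#     "LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05"
# )
```

Merging via merge_and_unload is optional; keeping the adapter separate lets you swap between the rank-16, rank-64, and rank-256 variants without reloading the 7B base weights.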
Datasets (0)

None public yet.