# archit11/qwen2.5-coder-3b-verl-track-a-lora
LoRA adapter trained for repository-specific extended pretraining on verl Python source code.
## Model details
- Base model: Qwen/Qwen2.5-Coder-3B
- Fine-tuning method: LoRA (`r=16`); see the configuration sketch after this list
- Training corpus: https://huggingface.co/datasets/archit11/verl-code-corpus-track-a-file-split
- Split strategy: file-level train/validation/test split
- Sequence length curriculum: [768, 1024]
- Effective learning rate: 0.0001
- Batch size: 8
- Gradient accumulation steps: 1
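A minimal sketch of what the PEFT configuration may have looked like. Only `r=16` is stated on this card; `lora_alpha`, `lora_dropout`, and `target_modules` below are illustrative assumptions, not the recorded training settings.

```python
from peft import LoraConfig

# Sketch only: r=16 comes from the card above; the remaining values
# are common defaults chosen for illustration, not the actual config.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,        # assumption: the common 2*r choice
    lora_dropout=0.05,    # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)
```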
## Evaluation summary
- Baseline perplexity (validation): 3.1820
- Baseline perplexity (test): 2.7764
- Post-training perplexity (validation): 2.7844
- Post-training perplexity (test): 2.2379
- Test perplexity reduction: 0.5385 (19.40%; see the sanity check below)
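Perplexity is the exponential of the mean token-level cross-entropy loss, so the reported reduction follows directly from the test numbers above:

```python
import math

baseline, post = 2.7764, 2.2379

# Absolute and relative reduction in test perplexity.
print(baseline - post)                        # 0.5385
print(f"{(baseline - post) / baseline:.2%}")  # 19.40%

# Equivalently, since perplexity = exp(mean loss), the gap in
# mean cross-entropy loss is:
print(math.log(baseline) - math.log(post))    # ~0.216 nats per token
```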
## Usage
This repository stores only the adapter weights and tokenizer artifacts; load them with PEFT on top of the base model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "Qwen/Qwen2.5-Coder-3B"
adapter = "archit11/qwen2.5-coder-3b-verl-track-a-lora"

# The adapter repo ships the tokenizer; the weights come from the base model.
tokenizer = AutoTokenizer.from_pretrained(adapter)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter)
```
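Once loaded, the adapted model behaves like any causal LM. A quick smoke test (the prompt below is an arbitrary example, not drawn from the training corpus):

```python
# Generate a short completion; the prompt is an arbitrary illustration.
inputs = tokenizer("def compute_advantage(", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```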