# Progressive Cognitive Architecture - Monolithic Math+Logic LoRA (English)

Single-adapter baseline that mixes arithmetic and logic capabilities in one Qwen2.5-1.5B LoRA.
## Summary
This repository contains the monolithic comparison model used in the Socratic Routing study. Unlike the routed setup, this model keeps math and logic adaptation in one adapter rather than distributing them across specialized components.
## Observed Behavior
Across the two completed seeds currently available for the mixed Socratic benchmark, this monolithic model achieved:
- 2-seed overall mean: 56.9%
- 2-seed logic composite mean: 55.8%
- 2-seed math composite mean: 58.1%
This makes it a balanced baseline: clearly stronger than the raw 1.5B base model, but weaker than the specialist-routed setup on the strongest completed routed run.
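As a quick sanity check on these numbers: assuming the overall score is a simple equal-weight average of the logic and math composites (an assumption — the benchmark's exact weighting is not stated here), the reported figures line up to within rounding:

```python
# Reported 2-seed composite means from this card.
logic_mean = 55.8
math_mean = 58.1

# Hypothetical equal-weight average of the two composites.
overall = (logic_mean + math_mean) / 2  # ~56.95

# Consistent with the reported 56.9% overall mean after rounding.
assert abs(overall - 56.9) < 0.1
```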
## Intended Use
- Compact mixed-reasoning baseline
- Comparison point against the specialist and routed systems
- Research on trade-offs between monolithic and distributed adaptation
## Limitations
- Does not match the math specialist on arithmetic-heavy tasks
- Does not match the logic specialist on logic-focused tasks
- Offers a balanced compromise rather than a best-in-class result in either subdomain
## Loading

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Qwen2.5-1.5B base model and its tokenizer.
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

# Attach the monolithic math+logic LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(
    base_model,
    "dexmac/progressive-cognitive-logic-dream-lora-en",
    subfolder="lora_adapters",
)
```
## Related Repositories
- Logic specialist: https://huggingface.co/dexmac/progressive-cognitive-logic-specialist-en
- Math specialist: https://huggingface.co/dexmac/progressive-cognitive-dream-lora-en
- Router model: https://huggingface.co/dexmac/progressive-cognitive-router-en
- Results dataset: https://huggingface.co/datasets/dexmac/progressive-cognitive-results
## License

Apache 2.0