Progressive Cognitive Architecture - Monolithic Math+Logic LoRA (English)

Single-adapter baseline that mixes arithmetic and logic capabilities in one Qwen2.5-1.5B LoRA.

Summary

This repository contains the monolithic comparison model used in the Socratic Routing study. Unlike the routed setup, this model keeps math and logic adaptation in one adapter rather than distributing them across specialized components.

Observed Behavior

Across the two completed seeds currently available for the mixed Socratic benchmark, this monolithic model achieved:

  • 2-seed overall mean: 56.9%
  • 2-seed logic composite mean: 55.8%
  • 2-seed math composite mean: 58.1%

This makes it a balanced baseline: clearly stronger than the raw Qwen2.5-1.5B base model, but weaker than the specialist-routed setup in its strongest completed run.

Intended Use

  • compact mixed reasoning baseline
  • comparison point against specialist and routed systems
  • research on tradeoffs between monolithic and distributed adaptation

Limitations

  • does not match the math specialist on arithmetic-heavy tasks
  • does not match the logic specialist on logic-focused tasks
  • provides a balanced compromise rather than a best-in-class result on either subdomain

Loading

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Qwen2.5-1.5B base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

# Attach the monolithic math+logic LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    base_model,
    "dexmac/progressive-cognitive-logic-dream-lora-en",
    subfolder="lora_adapters",
)

Related Repositories

License

Apache 2.0
