This is a Luna-7B-A4B fine-tune, produced with p-e-w's Heretic (v1.2.0) abliteration engine, with Magnitude-Preserving Orthogonal Ablation enabled.

Note: This version applies KaraKaraWitch's MLP-preservation method, so that model intelligence and capabilities are preserved, as in Golddiamondgold-Paperbliteration-L33-70b.
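Abliteration of this kind removes a learned "refusal direction" from selected weight matrices. The sketch below is a generic, hedged illustration of orthogonal ablation, not Heretic's actual implementation; in particular, the magnitude-preserving step here is simply a per-row renormalization, which is an assumption about how norms might be restored.

```python
import numpy as np

def orthogonal_ablation(W, r, preserve_magnitude=True):
    """Project the refusal direction r out of a weight matrix W (d_out x d_in).

    Generic sketch, not Heretic's exact algorithm. With preserve_magnitude,
    each row of the ablated matrix is rescaled back to its original norm.
    """
    r = r / np.linalg.norm(r)                # unit refusal direction
    W_abl = W - np.outer(r, r) @ W           # (I - r r^T) W: remove r from output space
    if preserve_magnitude:
        old = np.linalg.norm(W, axis=1, keepdims=True)
        new = np.linalg.norm(W_abl, axis=1, keepdims=True)
        W_abl = W_abl * (old / np.maximum(new, 1e-12))
    return W_abl
```

With `preserve_magnitude=False`, the result is exactly orthogonal to `r`; the rescaling trades a little of that orthogonality for unchanged row norms, which is the intuition behind magnitude preservation.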


Heretication Results

| Score Metric | Value |
|---|---|
| Refusals | 7/100 |
| KL Divergence | 0.0383 |
| Initial Refusals | 100/100 |

| Parameter | Value |
|---|---|
| direction_index | 19.33 |
| attn.o_proj.max_weight | 3.85 |
| attn.o_proj.max_weight_position | 21.62 |
| attn.o_proj.min_weight | 2.34 |
| attn.o_proj.min_weight_distance | 6.96 |
| mlp.down_proj.max_weight | 0.11 |
| mlp.down_proj.max_weight_position | 26.47 |
| mlp.down_proj.min_weight | 0.00 |
| mlp.down_proj.min_weight_distance | 16.30 |

Appendix

PaCMAP projection (figure not included)

Pareto front of optimization trials (» marks the selected trial):
 » [Trial 443] Refusals:  7/100, KL divergence: 0.0383
   [Trial 433] Refusals:  9/100, KL divergence: 0.0332
   [Trial 209] Refusals: 10/100, KL divergence: 0.0305
   [Trial 370] Refusals: 12/100, KL divergence: 0.0243
   [Trial 201] Refusals: 15/100, KL divergence: 0.0209
   [Trial 349] Refusals: 19/100, KL divergence: 0.0192
   [Trial 257] Refusals: 27/100, KL divergence: 0.0167
   [Trial 302] Refusals: 31/100, KL divergence: 0.0166
   [Trial 297] Refusals: 34/100, KL divergence: 0.0133
   [Trial 392] Refusals: 39/100, KL divergence: 0.0132
   [Trial 277] Refusals: 45/100, KL divergence: 0.0111
   [Trial 427] Refusals: 48/100, KL divergence: 0.0111
   [Trial 272] Refusals: 55/100, KL divergence: 0.0101
   [Trial 273] Refusals: 62/100, KL divergence: 0.0097
   [Trial 424] Refusals: 68/100, KL divergence: 0.0096
   [Trial 391] Refusals: 74/100, KL divergence: 0.0095
   [Trial 395] Refusals: 75/100, KL divergence: 0.0082
   [Trial 448] Refusals: 83/100, KL divergence: 0.0074
   [Trial 225] Refusals: 88/100, KL divergence: 0.0064
   [Trial 446] Refusals: 89/100, KL divergence: 0.0063
   [Trial 129] Refusals: 90/100, KL divergence: 0.0060
   [Trial  21] Refusals: 93/100, KL divergence: 0.0050
   [Trial  82] Refusals: 94/100, KL divergence: 0.0045
   [Trial 471] Refusals: 95/100, KL divergence: 0.0038
   [Trial  96] Refusals: 97/100, KL divergence: 0.0037
   [Trial 103] Refusals: 98/100, KL divergence: 0.0026
   [Trial 430] Refusals: 99/100, KL divergence: 0.0015
   [Trial   2] Refusals: 100/100, KL divergence: 0.0006
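Each trial above trades refusal suppression against KL divergence, i.e. how far the ablated model's next-token distributions drift from the original model's (lower means closer; the selected trial keeps divergence at 0.0383 while cutting refusals to 7/100). A minimal sketch of the metric over a single token distribution:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) in nats between two next-token probability distributions.

    0.0 means the distributions are identical; larger values mean the
    modified model's outputs drift further from the original model's.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0                              # 0 * log(0/...) contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))
```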

This is a preview MoE version.

🌙 Luna-7B-A4B – Roleplay Chat Model

Luna is a conversational AI model designed for immersive roleplay (RP) and natural chatting.
It is fine-tuned to respond in a more engaging, character-driven style compared to standard instruction-tuned models.

Notes:

  • Optimized for roleplay-style conversations
  • Flexible: can be used for creative writing, storytelling, or character interactions
  • For best performance, describe your character in the system prompt.
  • The model was also trained on various tasks such as math, code, and tool calling (agentic use), aiming for better overall performance.
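One way to follow the system-prompt advice above is to fold a character card into the system message. This is a hypothetical sketch: the field names, template wording, and example character are assumptions, not a documented format for this model.

```python
def build_messages(name, persona, scenario, user_turn):
    """Assemble a chat message list with the character card as the system prompt.

    Hypothetical helper; adapt the wording to your own character and frontend.
    """
    system = (
        f"You are {name}. {persona}\n"
        f"Scenario: {scenario}\n"
        "Stay in character and write vivid, engaging replies."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_turn},
    ]

# Example character card (invented for illustration):
messages = build_messages(
    "Luna",
    "A witty, warm starship navigator.",
    "The crew is charting an unexplored nebula.",
    "Luna, what do the sensors show?",
)
```

The resulting `messages` list can then be passed to any chat-template-aware runtime (e.g. `tokenizer.apply_chat_template(messages, ...)` in Transformers).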

Support me at:

Buy Me A Coffee

Cite:

@misc{Luna,
  title        = {Luna-7B-A4B – Roleplay Chat Model},
  author       = {Beyoru},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/beyoru/Luna}}
}
Model size: 7B params · Tensor type: BF16 · Format: Safetensors

Model tree for MuXodious/Luna-7B-A4B-PaperWitch-heresy

  • Base model: beyoru/EvolLLM
  • Finetuned: beyoru/Luna-7B-A4B
  • Finetuned (2 models): this model
  • Quantizations: 2 models
