# rnj-1-instruct-GGUF
**rnj-1-instruct** from EssentialAI is the instruction-tuned variant of an 8.3B-parameter dense language model family trained from scratch on 8.4T tokens. The architecture is Gemma 3-like with global attention, YaRN for 32K context extension, a 128K vocabulary, and the Muon optimizer, and the weights are released under the Apache 2.0 license.

The model delivers state-of-the-art open-weight performance across code generation (HumanEval+, MBPP+, BigCodeBench, LiveCodeBench v6), agentic coding (20.8% on SWE-bench Verified bash-only, outperforming Gemini 2.0 Flash and Qwen2.5-Coder 32B), tool calling (Berkeley Function Calling Leaderboard), multilingual code (MultiPL-E across C++, Java, JavaScript, and more), code infilling (86.21% HE-FIM-Python), math (GSM8k, Minerva-MATH, AIME '24/'25), and scientific reasoning (GPQA-Diamond, SuperGPQA).

Post-trained with a deliberately limited SFT stage (150B tokens) to leave room for community extension, it supports pass@N scaling, fill-in-the-middle (FIM) via special tokens, and the Hermes tool parser, and integrates with vLLM/SGLang (tool-choice enabled), Transformers (4.51.2+), llama.cpp quantization, the Cline IDE agent, the Claude Code router, and mini-SWE-agent for PR fixes, security patches, performance profiling (Enamel leader), and data visualization.

Recommended sampling temperatures lie in [0, 0.6], with a system prompt to mitigate code bias. The model excels at real-world SWE trajectories, but known limitations include weak factual recall and identity hallucinations stemming from web training data.
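The pass@N scaling mentioned above is conventionally measured with the standard unbiased pass@k estimator: given n generations per task of which c pass the tests, it estimates the probability that at least one of k samples is correct. This sketch is illustrative and not part of the model card:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n generations are correct."""
    if n - c < k:
        # fewer than k incorrect samples exist, so any draw of k
        # must contain at least one correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples per task, 3 correct: pass@1 reduces to 3/10
print(round(pass_at_k(10, 3, 1), 4))  # 0.3
```

Averaging this quantity over all tasks in a benchmark gives the reported pass@k score.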
## rnj-1-instruct [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| rnj-1-instruct.BF16.gguf | BF16 | 16.6 GB | Download |
| rnj-1-instruct.F16.gguf | F16 | 16.6 GB | Download |
| rnj-1-instruct.Q8_0.gguf | Q8_0 | 8.84 GB | Download |
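The file sizes above line up with a back-of-envelope bits-per-weight calculation for an ~8.3B-parameter model. This is a rough sketch (it ignores GGUF metadata and tensors kept at higher precision); GGUF's Q8_0 format stores weights in 32-element blocks of int8 values plus one fp16 scale per block, i.e. about 8.5 bits per weight:

```python
PARAMS = 8.3e9  # approximate parameter count

def size_gb(bits_per_weight: float) -> float:
    """Approximate weight-storage size in decimal GB for a given width."""
    return PARAMS * bits_per_weight / 8 / 1e9

# BF16/F16: 16 bits per weight
print(round(size_gb(16), 1))          # ~16.6 GB, matching the 16-bit files

# Q8_0: (32 * 8 + 16) bits per 32-weight block = 8.5 bits per weight
print(round(size_gb(272 / 32), 1))    # ~8.8 GB, close to the 8.84 GB file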
## Quants Usage

The table above is sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants. For a comparison of the lower-quality quant types, see ikawrakow's graph (lower is better).
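Since the table is sorted by size, a simple way to choose a quant is by memory budget. The sketch below uses the file names and sizes from the table above; the headroom factor is a heuristic assumption (real usage also needs room for the KV cache and runtime overhead):

```python
# (file name, size in GB), taken from the table above
QUANTS = [
    ("rnj-1-instruct.BF16.gguf", 16.6),
    ("rnj-1-instruct.F16.gguf", 16.6),
    ("rnj-1-instruct.Q8_0.gguf", 8.84),
]

def pick_quant(budget_gb: float, headroom: float = 1.2):
    """Return the largest file whose size, scaled by a headroom factor
    for KV cache and overhead, still fits the memory budget."""
    fitting = [(name, size) for name, size in QUANTS
               if size * headroom <= budget_gb]
    if not fitting:
        return None
    return max(fitting, key=lambda pair: pair[1])[0]

print(pick_quant(12))  # the Q8_0 file is the only one that fits 12 GB
print(pick_quant(24))  # a 16-bit file fits comfortably in 24 GB
```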
## Model tree for prithivMLmods/rnj-1-instruct-GGUF

Base model: EssentialAI/rnj-1-instruct