LightOnOCR-2-1B-f32-GGUF
LightOnOCR-2-1B from lightonai is LightOn's flagship 1B-parameter end-to-end vision-language OCR model and the recommended variant for most tasks. It was refined via RLVR training on a 2.5× scaled 43M-page corpus with enhanced French, arXiv, scan, and LaTeX coverage, and converts PDFs, scans, and document images into clean, naturally ordered text across tables, receipts, forms, multi-column layouts, and math without brittle pipelines. It runs at 3.3× the speed of Chandra OCR, 1.7× OlmOCR, and 5× dots.ocr, reaching 5.71 pages/s on an H100 (under $0.01 per 1k pages) while achieving a state-of-the-art 83.2 ± 0.9 on OlmOCR-Bench, outperforming Chandra-9B by more than 1.5 points. Part of the fully differentiable LightOnOCR-2 family (which includes bbox variants for image localization, base models for fine-tuning, and soup merges), it pairs a native-resolution ViT encoder (from Mistral-Small-3.1) and an MLP projector with a Qwen3 decoder, using 1540 px longest-edge preprocessing (200 DPI for PDFs) for strong accuracy on degraded scans, scientific documents, and European languages. It is released under Apache 2.0 and supports LoRA/PEFT fine-tuning via Transformers for domain adaptation.
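The 1540 px longest-edge preprocessing mentioned above can be sketched as a simple aspect-ratio-preserving resize. This is an illustrative sketch, not the official pipeline: the exact rounding, resampling filter, and upscaling behavior of LightOnOCR-2's preprocessor may differ, and `longest_edge_resize` is a hypothetical helper name.

```python
# Sketch (assumption): scale an image so its longest edge is 1540 px,
# preserving aspect ratio. The official preprocessing may round or
# resample differently; this only illustrates the stated 1540 px target.

def longest_edge_resize(width: int, height: int, target: int = 1540) -> tuple[int, int]:
    """Return (width, height) scaled so max(width, height) == target."""
    longest = max(width, height)
    if longest <= target:
        return width, height  # this sketch never upscales
    scale = target / longest
    return round(width * scale), round(height * scale)

# A US Letter page scanned at 200 DPI is 1700 x 2200 px:
print(longest_edge_resize(1700, 2200))  # -> (1190, 1540)
```

At 200 DPI, a full page lands close to the 1540 px budget after resizing, which is consistent with the card's recommendation of 200 DPI for PDF rendering.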
LightOnOCR-2-1B [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| LightOnOCR-2-1B-BF16.gguf | BF16 | 1.2 GB | Download |
| LightOnOCR-2-1B-F16.gguf | F16 | 1.2 GB | Download |
| LightOnOCR-2-1B-F32.gguf | F32 | 2.39 GB | Download |
| LightOnOCR-2-1B-Q8_0.gguf | Q8_0 | 639 MB | Download |
| LightOnOCR-2-1B.mmproj-BF16.gguf | mmproj-BF16 | 829 MB | Download |
| LightOnOCR-2-1B.mmproj-F16.gguf | mmproj-F16 | 819 MB | Download |
| LightOnOCR-2-1B.mmproj-F32.gguf | mmproj-F32 | 1.64 GB | Download |
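Vision GGUF models are split into a decoder file and an mmproj (vision projector) file, and both must be supplied at run time. A minimal usage sketch with llama.cpp's multimodal CLI, assuming a recent llama.cpp build, that the files above are in the working directory, and that `page.png` is a placeholder for your document image:

```shell
# Run the quantized decoder together with its vision projector (mmproj).
# Adjust the model and mmproj paths to whichever quants you downloaded;
# the prompt below is an illustrative OCR instruction, not a required one.
llama-mtmd-cli \
  -m LightOnOCR-2-1B-Q8_0.gguf \
  --mmproj LightOnOCR-2-1B.mmproj-F16.gguf \
  --image page.png \
  -p "Convert this document image to text."
```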
Quants Usage
(Sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
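When choosing a quant, a rough rule of thumb is that file size scales with parameter count times bits per weight. The sketch below uses approximate bits-per-weight values (quant metadata and non-quantized tensors add overhead, so results are ballpark only); `est_size_gb` is a hypothetical helper, and the ~0.6B figure is inferred from the ~1.2 GB F16 decoder file in the table above (the vision encoder lives in the separate mmproj files).

```python
# Ballpark GGUF size estimate: bytes ~= n_params * bits_per_weight / 8.
# BPW values are approximate; real files carry extra metadata/overhead.

BPW = {"F32": 32.0, "F16": 16.0, "BF16": 16.0, "Q8_0": 8.5}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimate GGUF file size in GB for a given quant type."""
    return n_params * BPW[quant] / 8 / 1e9

# ~0.6B decoder weights at F16 gives roughly the 1.2 GB seen in the table:
print(f"{est_size_gb(0.6e9, 'F16'):.2f} GB")  # -> 1.20 GB
```

The same arithmetic predicts roughly 0.64 GB for Q8_0, close to the 639 MB file listed above.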
Model tree for prithivMLmods/LightOnOCR-2-1B-f32-GGUF
Base model: lightonai/LightOnOCR-2-1B