GabForge Mini v1 — Vision + Coding GGUF

The first GabForge model: a fine-tuned Qwen3.5-9B with built-in vision capability, optimized for screenshot→code generation.

Model Details

Property       Value
--------       -----
Base Model     Qwen3.5-9B (multimodal)
Fine-tuning    QLoRA (rank 32), 7,419 WebSight screenshot→code examples
Training       2.15 epochs, final loss 0.15
Quantization   Q4_K_M
File Size      5.3 GB
Min VRAM       8 GB
License        Apache 2.0
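The 8 GB minimum is roughly the quantized weights plus KV cache and runtime overhead. A back-of-envelope check — the cache and overhead figures below are illustrative assumptions, not official measurements:

```python
# Rough VRAM budget for running the Q4_K_M file fully on GPU.
weights_gb = 5.3    # quantized model file, from the table above
kv_cache_gb = 1.5   # assumed: a few thousand tokens of FP16 KV cache for a 9B model
overhead_gb = 0.8   # assumed: compute buffers, vision encoder, runtime context

total_gb = weights_gb + kv_cache_gb + overhead_gb
print(f"~{total_gb:.1f} GB")  # comfortably under the 8 GB minimum
```

Longer contexts grow the KV cache, so an 8 GB card may need a reduced context size or partial CPU offload.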

Capabilities

  • Screenshot→Code: Give it a UI screenshot, get HTML/CSS/JS back
  • General Coding: Inherits Qwen3.5-9B's strong coding ability
  • Vision Understanding: Reads UI layouts, diagrams, charts, error screenshots
  • Chat: Standard instruction-following conversational model

Usage

Works with any llama.cpp-compatible inference engine:

# With llama-server
llama-server -m GabForge-Mini-v1-Q4_K_M.gguf --port 8766

# With GabForge AI Studio (automatic)
# Download via Settings → AI Models → Local tab
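Once llama-server is running, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint that accepts images as base64 `image_url` content parts. A minimal client sketch — the port matches the command above, but the prompt, file names, and `max_tokens` value are illustrative, and vision input in llama.cpp typically also requires loading the model's companion mmproj file (`--mmproj`):

```python
import base64
import json
from urllib import request

SERVER = "http://localhost:8766"  # llama-server port from the command above

def build_payload(image_path: str, prompt: str) -> dict:
    """Build an OpenAI-style multimodal chat request for llama-server."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        "max_tokens": 2048,
    }

def generate_code(image_path: str) -> str:
    """Send a screenshot and return the model's generated code."""
    payload = build_payload(image_path, "Reproduce this UI as a single HTML file.")
    req = request.Request(
        f"{SERVER}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client library pointed at the same base URL works the same way.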

Made for GabForge AI Studio

This model is the default local model in GabForge AI Studio — the privacy-first AI coding IDE. Vision runs entirely on your machine.

Training Data

  • HuggingFaceM4/WebSight — screenshot→HTML/CSS pairs
  • Additional coding data from Qwen3.5-9B's base knowledge