# Whisper Medium Romanian (GGUF)

This repository contains GGUF-optimized versions of the fine-tuned Romanian Whisper Medium model [IonGrozea/whisper-medium_ro-80mel](https://huggingface.co/IonGrozea/whisper-medium_ro-80mel), for use with whisper.cpp.

## Available Quantizations

| File | Type | Size | Use Case |
|------|------|------|----------|
| whisper-medium-ro-80mel.gguf | FP16 | ~1.5 GB | Maximum accuracy |
| whisper-medium-ro-q8_0.gguf | Q8_0 | ~800 MB | High-end CPUs |
| whisper-medium-ro-q5_1.gguf | Q5_1 | ~600 MB | Balanced performance |
| whisper-medium-ro-q4_1.gguf | Q4_1 | ~480 MB | Low-resource / fast |

## Usage with whisper.cpp

whisper.cpp expects 16 kHz, 16-bit mono WAV input. With a converted file in hand:

```sh
./main -m whisper-medium-ro-q5_1.gguf -f audio.wav -l ro
```
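If your audio is not already in that format, ffmpeg can convert it. A minimal sketch (hypothetical helper; file names are placeholders) that builds the conversion command:

```python
def ffmpeg_to_wav16k(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command converting src to the 16 kHz mono
    16-bit PCM WAV format that whisper.cpp expects."""
    return [
        "ffmpeg", "-i", src,
        "-ar", "16000",       # 16 kHz sample rate
        "-ac", "1",           # mono
        "-c:a", "pcm_s16le",  # 16-bit signed PCM
        dst,
    ]

# e.g. run with subprocess.run(ffmpeg_to_wav16k("input.mp3", "audio.wav"))
print(" ".join(ffmpeg_to_wav16k("input.mp3", "audio.wav")))
```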