# Whisper Tiny Romanian (GGUF)

This repository contains GGUF-quantized versions of the fine-tuned Romanian Whisper Tiny model IonGrozea/whisper-tiny_ro-80mel, for use with whisper.cpp and other GGUF-compatible runtimes.

## Available Quantizations

| File | Type | Size | Use case |
|------|------|------|----------|
| `whisper-tiny-ro-80mel.gguf` | FP16 | 488 MB | Maximum accuracy |
| `whisper-tiny-ro-q8_0.gguf` | Q8_0 | 264 MB | High-end CPUs |
| `whisper-tiny-ro-q5_1.gguf` | Q5_1 | 190 MB | Balanced performance |
| `whisper-tiny-ro-q4_1.gguf` | Q4_1 | 160 MB | Low-resource / fast |
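The table is a straightforward size/accuracy trade-off: larger files preserve more of the FP16 weights. As an illustrative sketch (this helper is not part of the repository; the sizes are copied from the table above), picking the most accurate file that fits a memory budget could look like:

```python
# Sizes in MB, taken from the quantization table above.
QUANTS = {
    "whisper-tiny-ro-80mel.gguf": 488,  # FP16
    "whisper-tiny-ro-q8_0.gguf": 264,   # Q8_0
    "whisper-tiny-ro-q5_1.gguf": 190,   # Q5_1
    "whisper-tiny-ro-q4_1.gguf": 160,   # Q4_1
}

def pick_quant(budget_mb: int) -> str:
    """Return the largest (most accurate) file that fits the budget."""
    fitting = {name: size for name, size in QUANTS.items() if size <= budget_mb}
    if not fitting:
        raise ValueError(f"no quantization fits in {budget_mb} MB")
    # Bigger file -> less aggressive quantization -> higher accuracy.
    return max(fitting, key=fitting.get)
```

For example, `pick_quant(256)` selects `whisper-tiny-ro-q5_1.gguf`, since the Q8_0 and FP16 files exceed a 256 MB budget.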

## Usage with whisper.cpp

```sh
./main -m whisper-tiny-ro-q5_1.gguf -f audio.wav -l ro
```

Note that whisper.cpp expects 16 kHz WAV input, and recent builds name the example binary `whisper-cli` rather than `main`.
