An 8-bit quantized Whisper v3 model for running via https://github.com/ggerganov/whisper.cpp

Recently converted and quantized directly from the full float32 model, to avoid conversion errors and unnecessary precision loss.
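The conversion described above can be reproduced with whisper.cpp's own tooling. This is a sketch run from a whisper.cpp checkout; the input/output paths and file names are illustrative, not the exact ones used for this model.

```shell
# 1. Convert the float32 Hugging Face checkpoint to GGML format
#    (args: HF model dir, path to the OpenAI whisper repo, output dir)
python models/convert-h5-to-ggml.py /path/to/whisper-large-v3 /path/to/whisper ./models

# 2. Quantize the float32 GGML model to 8-bit
./quantize ./models/ggml-model.bin ./models/ggml-model-q8_0.bin q8_0
```

The same `quantize` tool accepts the other type names (e.g. `q5_1`) to produce the additional quantizations mentioned below.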

Sample command for transcribing a podcast `.wav` file:

```shell
./main -m wispher-v3-8bit-ggml.bin -t 4 -f podcast69episode.wav -of podcast69transcript -otxt -nt -np
```

Here `-t 4` runs 4 threads, `-of`/`-otxt` write the transcript to `podcast69transcript.txt`, `-nt` omits timestamps, and `-np` suppresses progress printouts.

Other quantizations are included; the k-quants are experimental.

q5_1 and q8 have been tested and work well.

