# typeof/Baguettotron-gguf
This model was converted to GGUF format from [PleIAs/Baguettotron](https://huggingface.co/PleIAs/Baguettotron) using llama.cpp.
Refer to the [original model card](https://huggingface.co/PleIAs/Baguettotron) for more details on the model.
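For reference, a conversion of this kind can be reproduced with llama.cpp's standard tooling. The commands and file names below are an illustrative sketch, not a record of exactly how this repo was built.

```bash
# Illustrative sketch: download the original checkpoint, convert it to GGUF,
# then quantize to Q4_K_M with llama.cpp's conversion script and llama-quantize.
huggingface-cli download PleIAs/Baguettotron --local-dir Baguettotron
python convert_hf_to_gguf.py Baguettotron --outfile Baguettotron-F16.gguf --outtype f16
llama-quantize Baguettotron-F16.gguf Baguettotron-Q4_K_M.gguf Q4_K_M
```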
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
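```bash
# Installs the llama-cli and llama-server binaries used below.
brew install llama.cpp
```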
Invoke the llama.cpp server or the CLI.
### CLI:
llama-cli --hf-repo typeof/Baguettotron-gguf --hf-file Baguettotron-Q4_K_M.gguf -p "The meaning to life and the universe is"
### Server:
llama-server --hf-repo typeof/Baguettotron-gguf --hf-file Baguettotron-Q4_K_M.gguf -c 2048
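Once the server is up (it listens on http://localhost:8080 by default), you can query it over HTTP. A minimal sketch using the OpenAI-compatible chat endpoint, with an illustrative prompt:

```bash
# Example request to a running llama-server; adjust host/port if you changed the defaults.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Summarize what the GGUF format is."}],
        "max_tokens": 128
      }'
```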