Llama-2-7B-Chat-GGUF
Original model: meta-llama/Llama-2-7b-chat-hf, converted for use with llama.cpp.
GGUF is the model file format used by llama.cpp.
This repo contains quantized GGUF versions (q4_0, q4_1, q5_0, q5_1, q8_0) of the Llama-2-7B-Chat model.
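The quantized files above can be used directly with llama.cpp. A minimal sketch, assuming llama.cpp is already built locally; the repo id placeholder and the file name `llama-2-7b-chat.q4_0.gguf` are assumptions, so check this repo's file listing for the exact names:

```shell
# Download one quantized file from this repo
# (replace <repo-id> with this repo's id; the file name is an assumption).
huggingface-cli download <repo-id> llama-2-7b-chat.q4_0.gguf --local-dir .

# Run it with llama.cpp's CLI (recent builds name the binary llama-cli;
# older builds call it ./main).
./llama-cli -m llama-2-7b-chat.q4_0.gguf -p "Hello" -n 128
```

Smaller quantizations (q4_0) trade some output quality for lower memory use; q8_0 is closest to the original weights but largest on disk.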
Hardware compatibility
- 4-bit
- 5-bit
- 8-bit