Llama-2-7B-Chat-GGUF

For use with llama.cpp. Original model: meta-llama/Llama-2-7b-chat-hf

GGUF: the model file format used by llama.cpp (successor to the earlier GGML format)

This repo contains quantized GGUF versions (q4_0, q4_1, q5_0, q5_1, q8_0) of the Llama-2-7B-Chat model.
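To pick a file, it helps to know roughly how large each quantization makes a 7B-parameter model. The sketch below estimates GGUF file sizes from the bits-per-weight of each block format (e.g. q4_0 packs 32 4-bit weights plus an fp16 scale per block, i.e. 4.5 bits/weight). The bits-per-weight figures follow llama.cpp's block layouts, but treat the results as ballpark numbers: real files also carry metadata, and some tensors are kept at higher precision.

```python
# Approximate bits per weight for each GGUF quantization type,
# derived from llama.cpp's block layouts.
BITS_PER_WEIGHT = {
    "q4_0": 4.5,   # 32 x 4-bit quants + fp16 scale per block
    "q4_1": 5.0,   # q4_0 plus an fp16 minimum per block
    "q5_0": 5.5,   # 32 x 5-bit quants + fp16 scale
    "q5_1": 6.0,   # q5_0 plus an fp16 minimum
    "q8_0": 8.5,   # 32 x 8-bit quants + fp16 scale
}

def estimate_gb(n_params: float, quant: str) -> float:
    """Rough GGUF file size in gigabytes (10^9 bytes)."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

# Estimated sizes for a 7B-parameter model:
for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{estimate_gb(7e9, q):.1f} GB")
```

For 7B parameters this gives roughly 3.9 GB at q4_0 up to about 7.4 GB at q8_0, which matches the usual trade-off: smaller files and lower memory use at 4-bit, higher fidelity at 8-bit.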

Format: GGUF
Model size: 7B params
Architecture: llama