Requires building llama.cpp yourself from the `pr-17592` branch:

```sh
git clone https://github.com/ggml-org/llama.cpp --branch pr-17592
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```
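
After the build, the binaries are in `build/bin/`. A minimal sketch of running one of this repo's GGUF files with the freshly built `llama-cli`; the GGUF filename below is a placeholder, not the actual file name in the repo:

```sh
# Sketch: interactive chat with the newly built llama-cli.
# Replace the placeholder filename with the quant you actually downloaded.
# -ngl 99 offloads all layers to the GPU (requires the CUDA build above).
./build/bin/llama-cli \
  -m Kimi-Linear-REAP-35B-A3B-Instruct-Q2_K.gguf \
  -ngl 99

# Or serve an HTTP API instead:
./build/bin/llama-server \
  -m Kimi-Linear-REAP-35B-A3B-Instruct-Q2_K.gguf \
  -ngl 99 --port 8080
```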

Format: GGUF
Model size: 35B params
Architecture: kimi-linear

Available quantizations: 1-bit, 2-bit, and 16-bit.
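
To fetch a single quantization without cloning the whole repo, a sketch using the Hugging Face CLI; the exact GGUF filename is an assumption, so check the repo's file listing first:

```sh
# Sketch: download one quant file from this repo with the Hugging Face CLI.
# The filename is a guess based on the quant labels above; adjust as needed.
pip install -U "huggingface_hub[cli]"
huggingface-cli download lovedheart/Kimi-Linear-REAP-35B-A3B-Instruct-GGUF \
  Kimi-Linear-REAP-35B-A3B-Instruct-Q2_K.gguf \
  --local-dir .
```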
