altomek/Luxe_4B-GGUF
Tags: Transformers · GGUF · English · imatrix · conversational
License: apache-2.0
Luxe_4B
Selected GGUF quants of https://huggingface.co/FourOhFour/Luxe_4B
Downloads last month: 42
Format: GGUF
Model size: 5B params
Architecture: llama
Available quants:
- 1-bit: IQ1_S — 1.21 GB
- 4-bit: Q4_0 — 2.66 GB
- 4-bit: Q4_0 — 2.65 GB
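These GGUF files can be loaded with llama.cpp. A minimal usage sketch follows; the quant filename `Luxe_4B-Q4_0.gguf` is an assumption, so check the repo's file listing for the exact name before downloading.

```shell
# Download one quant from this repo (filename is an assumption --
# list the repo's files first to confirm the exact name).
huggingface-cli download altomek/Luxe_4B-GGUF Luxe_4B-Q4_0.gguf --local-dir .

# Run an interactive chat with llama.cpp's llama-cli (recent build),
# using the model's embedded chat template via conversation mode.
llama-cli -m Luxe_4B-Q4_0.gguf -cnv -p "You are a helpful assistant."
```

The smaller IQ1_S file trades noticeable quality for a much lower memory footprint; the Q4_0 variants are the usual starting point on CPU hardware.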
Model tree for altomek/Luxe_4B-GGUF:
- Base model: FourOhFour/Luxe_4B
- Quantized: 4 models (including this one)
Collection including altomek/Luxe_4B-GGUF:
- Quants for ARM — 8 items · Updated Mar 2