# litmudoc/Solar-Open-100B-MXFP4-Q8

This model [litmudoc/Solar-Open-100B-MXFP4-Q8](https://huggingface.co/litmudoc/Solar-Open-100B-MXFP4-Q8) was converted to MLX format from [upstage/Solar-Open-100B](https://huggingface.co/upstage/Solar-Open-100B) using mlx-lm version **0.30.2**.

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the weights from the Hugging Face Hub and load the model.
model, tokenizer = load("litmudoc/Solar-Open-100B-MXFP4-Q8")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the
# model sees the conversation format it was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
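
For long generations it can be convenient to print tokens as they are produced rather than waiting for the full response. A minimal sketch using mlx-lm's `stream_generate` helper (available in recent mlx-lm releases; the `max_tokens` value here is illustrative):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("litmudoc/Solar-Open-100B-MXFP4-Q8")

# Wrap the prompt in the chat template, as above.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# stream_generate yields partial responses; print each text chunk as it arrives.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```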