Can you make A-mix model, please?

#1
by PoomPuum - opened

Hi. Can you make an A-mix model, please? https://civitai.com/models/1692152?modelVersionId=1915059
It is a good and popular model. For my taste it makes environments look less plastic, and I like it.
I've tried some of your FP8 models - they run 1.5 times faster than the originals on my machine. So it would be nice to have this model too.

I would like to test the GGUF version of this model too... but I don't know how to launch your GGUF models without CLIPs.
Thank you!

Already available, together with the CLIPs and VAE.

BigDannyPt changed discussion status to closed

Thank you very much! Let me ask you another question... I read that _All models include the CLIPs and VAE. So they should be heavier? But why is the _All model more than 1 GB smaller than the model without _All? And the smaller the file, the better for low-VRAM cards (so _All models are always better in this case)? Sorry if it is a stupid question.

I'm not sure why the size is smaller; that's just how the model comes out, and I only noticed because another user pointed it out.
The only thing I can think of is that the script is also converting the CLIPs to FP8.

Hm. But still, if a file is X and another file is the same X plus additional stuff Y (Y = CLIPs + VAE in our case, so X+Y is the _All model), then X+Y should still be > just X.
I've tested both FP8 aMix models. BOTH of them work without additional CLIPs and VAE, so it seems like both of them contain the CLIPs and VAE.
Judging by the model sizes, I would say the model without _All contains the CLIPs and VAE twice but, as you said, in FP8 format (two CLIP-g, two CLIP-l, two VAEs), as if somewhere in the process of creating the FP8 file the CLIPs and VAE were added instead of subtracted. Just thoughts, I'm a newbie at this.

P.S. I have noticed that if I use your aMixIllustrious_aMix_clip_l.safetensors, I get an error "clip missing: ['text_projection.weight']", but the process goes smoothly regardless.
I googled that I can extract the CLIPs myself from models. I did it in ComfyUI with the original aMixIllustrious_aMix.safetensors and got a clip_l that works without that error. And btw, if it matters, your clip_l is 234 MB (246,144,524 bytes) and the one I extracted is 235 MB (247,325,596 bytes).
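For anyone hitting the same warning, the check behind a "clip missing: [...]" message can be reproduced in a few lines; a minimal sketch, assuming the key names are available as plain strings (the helper name and example key names are illustrative, not ComfyUI's actual loader code):

```python
# Hypothetical sketch of a missing-key check like the one behind ComfyUI's
# "clip missing: ['text_projection.weight']" warning. Function name and the
# example key names are assumptions for illustration.

def missing_clip_keys(present_keys, expected_keys=("text_projection.weight",)):
    """Return the expected CLIP keys that are absent from the checkpoint."""
    present = set(present_keys)
    return [k for k in expected_keys if k not in present]

# A clip_l exported without the projection weight triggers the warning:
broken = ["text_model.embeddings.token_embedding.weight"]
fixed = broken + ["text_projection.weight"]

print(missing_clip_keys(broken))  # → ['text_projection.weight']
print(missing_clip_keys(fixed))   # → []
```

For a real .safetensors file, the key names can be listed without loading the tensors, e.g. with `safetensors.safe_open(path, framework="pt")` and its `keys()` method.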

Download the new clip_l; I've tested it with the non-_All model and it worked correctly.
Not sure when you downloaded, but there was an issue in the script that I updated later, so the latest version is the one that works.

And I just went to check, and it seems that there is something wrong with the script:

1. FP8_E5M2 (Standard / Partial)

This option targets only the main Transformer or UNet (the heavy calculation blocks).

    What it does: It looks for specific keywords in the tensor names (like diffusion_model, transformer_blocks, or double_blocks). It converts these massive layers to FP8.

    What it ignores: It leaves the VAE (image decoder) and CLIP/T5 (text encoders) in their original precision (usually FP16 or BF16).

    Why use it: This is generally considered the "Best Quality" option. Because the VAE and Text Encoders are relatively small but very sensitive to precision, keeping them at FP16 prevents "color bleeding," "crushed blacks," or loss of prompt understanding, while still saving 50% VRAM on the main model.

2. FP8_E5M2 (All)

This option performs a global conversion on every single compatible tensor in the file.

    What it does: It ignores the filter and converts every floating-point weight it finds (Model + VAE + CLIP/Text Encoders) into FP8.

    Why use it: This is the "Maximum Efficiency" option. It results in the smallest possible file size and the lowest possible VRAM usage.

    The Risk: Quantizing the VAE to FP8 can sometimes cause subtle visual artifacts (like slight noise in dark areas or less accurate skin textures). Quantizing the CLIP can occasionally result in slightly worse prompt following.
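The partial-vs-all split described above boils down to a filter on tensor names. A minimal sketch, assuming the keyword list and the example key names (the real conversion script may filter differently):

```python
# Hypothetical sketch of the Standard/Partial vs All filter described above.
# The keyword list and the example tensor names are assumptions, not the
# actual script. In a real converter, matching tensors would then be cast
# to FP8, e.g. tensor.to(torch.float8_e5m2) in PyTorch.

MODEL_KEYWORDS = ("diffusion_model", "transformer_blocks", "double_blocks")

def should_convert(tensor_name: str, convert_all: bool) -> bool:
    """Decide whether a tensor gets converted to FP8_E5M2.

    convert_all=True  -> every floating-point tensor is converted (_All mode).
    convert_all=False -> only tensors whose name matches a model keyword are
                         converted; VAE and CLIP/T5 keep their precision.
    """
    if convert_all:
        return True
    return any(kw in tensor_name for kw in MODEL_KEYWORDS)

# Example key names as they might appear in a merged checkpoint (assumed):
keys = [
    "model.diffusion_model.input_blocks.0.weight",                # UNet
    "first_stage_model.decoder.conv_in.weight",                   # VAE
    "cond_stage_model.transformer.text_model.embeddings.weight",  # CLIP
]

partial = [k for k in keys if should_convert(k, convert_all=False)]
everything = [k for k in keys if should_convert(k, convert_all=True)]

print(partial)     # only the UNet key
print(everything)  # all three keys
```

This also shows why the two modes alone cannot explain the _All file being smaller: the filter only changes which tensors get cast, not which tensors are written out.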

I need to update this, since it seems that the _All is converting everything to FP8 while the non-_All is converting only the model but still keeping the VAE / CLIP in the file, which isn't intended.

I think I got it. I've done it with one checkpoint that I had, the Hassaku, and you can see that the files are now:

28/02/2026  10:04     2 567 716 796 hassakuXLIllustrious_v32-FP8_E5M2.safetensors
28/02/2026  10:04     3 469 224 292 hassakuXLIllustrious_v32-FP8_E5M2_All.safetensors

So the non-_All is now even smaller, while the _All has the CLIP and VAE baked in.
And I tried to use the non-_All alone, and it gives the clip error.
Now it's time to redo everything once again... -.-"
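The two file sizes line up with the fix: subtracting the numbers from the directory listing above gives roughly 0.9 GB of baked-in CLIPs and VAE in the _All file.

```python
# Sizes in bytes, taken from the directory listing above.
non_all = 2_567_716_796   # hassakuXLIllustrious_v32-FP8_E5M2.safetensors
with_all = 3_469_224_292  # hassakuXLIllustrious_v32-FP8_E5M2_All.safetensors

extras = with_all - non_all  # bytes occupied by the baked-in CLIPs + VAE
print(f"{extras:,} bytes ≈ {extras / 1024**3:.2f} GiB")
# → 901,507,496 bytes ≈ 0.84 GiB
```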

Download the new clip_l; I've tested it with the non-_All model and it worked correctly.

I've tried your aMix clip_l and now tried the Hassaku clip_l. Both give an error (or a warning). I mean not some kind of pop-up error, but an error in ComfyUI's console: "clip missing: ['text_projection.weight']":
your clip_l
And this is when I used the clip_l I extracted myself from the original aMix model, no error:
extracted clip_l
Not sure how this error affects results, but it doesn't stop the process; it continues smoothly.

And I tried to use the non-_All alone, and it gives the clip error.

Yeah, I tried that Hassaku model of yours - it gives me the clip error too, and ComfyUI says that the model doesn't have CLIPs or a VAE. Now it is as it should be) Good job!

At this point I'm trying to remake and upload all the models that I have in the repo. It may take some days, since I'm facing some issues with uploads to Hugging Face, which get stuck at around 2 GB for some reason and take a while to progress.
I'll let you know when I get to A-Mix and fix it.
If you see a model that was updated after the Hassaku one, it should be fixed; if you want, you can test them to see if they work correctly.
I did try with Hassaku and it worked as intended.

I did try with Hassaku and it worked as intended.

Yes, I understand.
Thank you! I'll wait) It's not that urgent.

I think that everything is done now.
The non-_All files are all 2.57 GB, while the _All ones are over 3 GB.
And all CLIPs and VAEs are also included.
