Small experiment on top of RouWei-0.8-16ch-v0.1alpha
- Original RouWei-0.8-16ch-v0.1alpha by Minthy
- This model is just an experiment; check out Minthy's originals for more.
Purpose:
- Intended to improve the vibrance and color of RouWei-0.8-16ch-v0.1alpha.
Usage:
- Same as RouWei-0.8-16ch-v0.1alpha, but without the multiply latents node.
Training Details:
- One epoch over 133,992 images, mainly from deepghs/danbooru2024
- Training time was 11 hours on an A100 80GB (Google Colab)
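The per-epoch step count implied by these numbers can be sanity-checked; the figure is approximate, since aspect-ratio bucketing batches each bucket separately and can add a few extra partial batches:

```python
import math

# Figures from the training details above (gradient accumulation is 1,
# so one batch equals one optimizer step).
images = 133992
batch_size = 16
steps_per_epoch = math.ceil(images / batch_size)
print(steps_per_epoch)  # → 8375
```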
Training Config (Modified SD-Scripts):
```shell
accelerate launch sdxl_train.py \
  --pretrained_model_name_or_path "./rouwei_0.8_16ch_v0.1alpha_fp16.safetensors" \
  --vae "./flux_vae.safetensors" \
  --train_data_dir "" \
  --max_train_epochs 1 \
  --output_name "" --output_dir "" \
  --save_precision "bf16" --save_model_as "safetensors" \
  --save_every_n_epochs 1 --save_last_n_epochs 1 \
  --train_batch_size 16 --min_snr_gamma 5 --console_log_simple \
  --noise_offset 0.05 --optimizer_type "AdamW8bit" \
  --lr_scheduler "cosine" --loss_type "l2" --learning_rate 0.00001 \
  --caption_dropout_rate 0.01 --max_grad_norm 1.0 \
  --caption_extension ".txt" \
  --max_data_loader_n_workers 4 --persistent_data_loader_workers \
  --no_half_vae --mixed_precision "bf16" --full_bf16 \
  --gradient_accumulation_steps 1 --seed 23 \
  --max_token_length 225 --resolution 1024 \
  --optimizer_args weight_decay=0.0001 eps=1e-8 betas=0.9,0.999 \
  --enable_bucket --min_bucket_reso 256 --max_bucket_reso 2048 --bucket_reso_steps 64 \
  --gradient_checkpointing --use_flux_vae --xformers \
  --color_aug --random_crop --flip_aug --save_state \
  --block_lr 1e-5,1e-4,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-4
```
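For readers curious about `--block_lr`, a small sketch that parses the list exactly as passed above; which U-Net block each position maps to is defined by the (modified) sd-scripts and is not restated here:

```python
# Parse the per-block learning rates from the --block_lr flag above.
# The position-to-block mapping is determined by sd-scripts.
block_lr_arg = ("1e-5,1e-4,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,"
                "1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,1e-5,"
                "1e-5,1e-5,1e-5,1e-5,1e-4")
rates = [float(r) for r in block_lr_arg.split(",")]
boosted = [i for i, r in enumerate(rates) if r == 1e-4]
print(len(rates), boosted)  # → 25 [1, 24]
```

Most blocks train at the base 1e-5, with a higher 1e-4 on two positions near the edges of the list.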
Training Code:
- (Will release soon)
Model tree for TheRemixer/rouwei_0.8_16ch_v0.1alpha_bf16_remixTest:
- Base model: KBlueLeaf/kohaku-xl-beta5
- Finetuned: Minthy/RouWei-0.6
- Finetuned: Minthy/RouWei-0.7
- Finetuned: Minthy/RouWei-0.8
- Finetuned: Minthy/RouWei-0.8-16ch-v0.1alpha