- What’s a suitable dataset size and number of epochs?
- Can I train several tasks in one training run, distinguishing them by dataset and captions?
maybe
Dataset size + epochs (for Qwen-Image-Edit LoRA)
Practical starting range (per task):
- 20–50 pairs for a narrow thing (one product / character / angle set).
- 80–200 pairs if you want a more general edit style that works on many scenes.
Training length (rough):
- Aim for about 1k–5k training steps total. With 40–100 images and batch size 2–4, this usually means 100–200 epochs.
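The steps-vs-epochs arithmetic above can be sketched as a tiny helper (the function name is illustrative, not from any trainer):

```python
import math

def epochs_for_target_steps(num_images: int, batch_size: int, target_steps: int) -> int:
    """How many epochs are needed to reach a total training-step budget."""
    steps_per_epoch = math.ceil(num_images / batch_size)
    return math.ceil(target_steps / steps_per_epoch)

# 40 images, batch size 4 -> 10 steps/epoch -> 2000 steps means 200 epochs
print(epochs_for_target_steps(40, 4, 2000))   # → 200
# 100 images, batch size 2 -> 50 steps/epoch -> 5000 steps means 100 epochs
print(epochs_for_target_steps(100, 2, 5000))  # → 100
```

This is just bookkeeping, but it makes clear why small datasets end up needing triple-digit epoch counts to hit a few thousand steps.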
Keep images and captions high-quality; adding more low-quality images usually makes the LoRA worse, not better.
A good Qwen-specific tutorial that uses numbers in this range:
“Qwen Image Models Training – 0 to Hero (LoRA & Fine-Tuning, includes Image-Edit-2509)”
- Hugging Face post: https://huggingface.co/posts/MonsterMMORPG/482948371636786
The FlyMyAI Qwen LoRA trainer (Qwen-Image & Qwen-Image-Edit) recommends a minimum of 10–50 image–text pairs.
Several tasks in one training run?
Yes, you can train one LoRA on several tasks at once. This is common in “multi-concept” LoRA training.
How to do it in practice:
- Mix all examples into one dataset.
- Use different trigger words / phrases in the captions to distinguish tasks (e.g., <angle_style>, <skin_fix>, <logo_style>).
- Keep each task roughly balanced in number of samples (don’t let one task have 5× more images than the others).
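A minimal sketch of assembling such a mixed multi-task dataset — trigger tokens prefixed into captions, with smaller tasks oversampled so every task contributes equally (all names and the dict layout here are hypothetical, not any trainer's actual format):

```python
import random

def build_multitask_dataset(tasks: dict[str, list[str]], seed: int = 0) -> list[dict]:
    """tasks maps a trigger word (e.g. "<angle_style>") to that task's captions.
    Smaller tasks are oversampled up to the size of the largest task, then
    all examples are shuffled together into one mixed dataset."""
    target = max(len(captions) for captions in tasks.values())
    rng = random.Random(seed)
    mixed = []
    for trigger, captions in tasks.items():
        # repeat-and-trim so each task ends up with exactly `target` samples
        pool = (captions * ((target // len(captions)) + 1))[:target]
        mixed += [{"caption": f"{trigger} {c}", "task": trigger} for c in pool]
    rng.shuffle(mixed)
    return mixed

data = build_multitask_dataset({
    "<angle_style>": ["rotate product to 3/4 view"] * 30,
    "<skin_fix>": ["clean up skin texture"] * 90,
})
# both tasks now contribute 90 samples each (180 total)
```

Oversampling to the largest task is one simple balancing choice; capping the largest task instead also works if you'd rather not repeat images.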
Trade-offs:
- One multi-task LoRA is convenient but can cause interference between tasks.
- For very different tasks, many people prefer separate LoRAs per task and then combine them at inference time (load multiple LoRAs with different strengths).
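The effect of loading multiple LoRAs at different strengths can be sketched as plain weight arithmetic — each LoRA contributes an independently scaled low-rank delta to the same frozen base weight (dimensions and names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # model dim, LoRA rank

W = rng.standard_normal((d, d))  # frozen base weight
# two independently trained LoRAs: each delta is B @ A, rank r
A1, B1 = rng.standard_normal((r, d)), rng.standard_normal((d, r))
A2, B2 = rng.standard_normal((r, d)), rng.standard_normal((d, r))

def combine(strength1: float, strength2: float) -> np.ndarray:
    """Effective weight with both LoRAs active at the given strengths."""
    return W + strength1 * (B1 @ A1) + strength2 * (B2 @ A2)

W_eff = combine(0.8, 0.5)        # e.g. 0.8 for task A, 0.5 for task B
```

Because the deltas simply add, interference between very different LoRAs shows up directly: both deltas pull on the same `W`, which is why per-LoRA strengths are useful for dialing one task back at inference time.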
Oh wow, that’s very little data actually! Has anyone tried with only 40–100 samples? (I’ve only trained LLMs; I have no experience with image gen.)
e.g. https://note.com/tori29umai/n/n256f30d51669 (in Japanese)