Question about LoRA fine-tuning Qwen-Image-Edit

  1. What’s a suitable dataset size and number of epochs?
  2. Can I train several tasks in one training run, distinguishing them by dataset and captions?

Maybe something like this:


  1. Dataset size + epochs (for Qwen-Image-Edit LoRA)

    • Practical starting range (per task):

      • 20–50 pairs for a narrow thing (one product / character / angle set).
      • 80–200 pairs if you want a more general edit style that works on many scenes.
    • Training length (rough):

      • Aim for about 1k–5k training steps total. With 40–100 images and batch size 2–4, this usually means 100–200 epochs.
    • Keep images and captions high-quality; more low-quality images usually makes the LoRA worse, not better.

    A good Qwen-specific tutorial that uses these kinds of numbers is linked in a reply below.
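To sanity-check the step/epoch arithmetic above, here is a minimal sketch (the function name and example values are my own, not from a training library) that converts dataset size, batch size, and epochs into total optimizer steps:

```python
import math

def training_steps(n_images, batch_size, epochs):
    """Total optimizer steps for one LoRA run (assumes no gradient accumulation)."""
    steps_per_epoch = math.ceil(n_images / batch_size)
    return steps_per_epoch * epochs

# Illustrative values from the ranges above, not a recipe:
# 60 image pairs, batch size 2, 150 epochs
print(training_steps(60, 2, 150))  # 4500 -> inside the 1k-5k target range
```

Running it the other way also works: with 40 images at batch size 4, about 100 epochs already gives 1,000 steps, which is why such small datasets land in the suggested range.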

  2. Several tasks in one training run?

    • Yes, you can train one LoRA on several tasks at once. This is common in “multi-concept” LoRA training.

    • How to do it in practice:

      • Mix all examples into one dataset.
      • Use different trigger words / phrases in the captions to distinguish tasks (e.g., <angle_style>, <skin_fix>, <logo_style>).
      • Keep each task roughly balanced in number of samples (don’t let one task have 5× more images than the others).
    • Trade-offs:

      • One multi-task LoRA is convenient but can cause interference between tasks.
      • For very different tasks, many people prefer separate LoRAs per task and then combine them at inference time (load multiple LoRAs with different strengths).
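The mix-trigger-balance steps above can be sketched as follows. Everything here is an assumption for illustration: the trigger tokens, file names, and the oversampling strategy (repeating smaller tasks until every task contributes the same number of examples) are mine, not part of any Qwen tooling:

```python
import random

# Hypothetical per-task example lists; each example is (image_path, caption).
tasks = {
    "<angle_style>": [("angle_001.png", "rotate the product 45 degrees")],
    "<skin_fix>":   [("skin_001.png", "smooth the skin, keep texture")],
    "<logo_style>": [("logo_001.png", "restyle the logo in flat colors")],
}

def build_balanced_dataset(tasks, seed=0):
    """Prefix each caption with its task's trigger word and oversample
    smaller tasks so every task contributes the same number of examples."""
    rng = random.Random(seed)
    target = max(len(examples) for examples in tasks.values())
    dataset = []
    for trigger, examples in tasks.items():
        tagged = [(img, f"{trigger} {cap}") for img, cap in examples]
        # Repeat whole copies, then top up with random picks to reach `target`.
        balanced = tagged * (target // len(tagged))
        balanced += rng.sample(tagged, target % len(tagged))
        dataset.extend(balanced)
    rng.shuffle(dataset)
    return dataset
```

The balancing matters because a 5:1 sample imbalance tends to make the dominant task's style bleed into the others; equalizing counts (or weighting the sampler) is the usual workaround in multi-concept LoRA training.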

Oh wow, that’s actually very little data! Has anyone tried with only 40–100 samples? (I’ve only trained LLMs; I have no experience with image generation.)


e.g. https://note.com/tori29umai/n/n256f30d51669 (in Japanese)

