---
tags:
- text-to-image
- stable-diffusion
- garment generation
- multi-modality
license: apache-2.0
language:
- en
library_name: diffusers
---

# IMAGGarment-1: Fine-Grained Garment Generation for Controllable Fashion Design
[**Project Page**](https://revive234.github.io/imaggarment.github.io/) **|** [**Paper**](https://arxiv.org/pdf/2504.13176) **|** [**Code**](https://github.com/muzishen/IMAGGarment-1)
---

## Introduction

IMAGGarment-1 addresses the challenge of multi-conditional controllability in personalized fashion design and digital apparel applications. Specifically, IMAGGarment-1 employs a two-stage training strategy to separately model global appearance and local details, while enabling unified and controllable generation through end-to-end inference. In the first stage, we propose a global appearance model that jointly encodes silhouette and color using a mixed attention module and a color adapter. In the second stage, we present a local enhancement model with an adaptive appearance-aware module to inject user-defined logos and spatial constraints, enabling accurate placement and visual consistency.

![framework](architecture.png)
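The two-stage flow above can be sketched as a simple pipeline contract. This is a minimal illustrative sketch only: the class names, function names, and condition fields below are hypothetical placeholders (the actual implementation and interfaces live in the linked GitHub repository); it shows how global conditions (silhouette, color) and local conditions (logo, position) pass through the two stages in one end-to-end inference call.

```python
from dataclasses import dataclass

# Hypothetical condition containers for illustration; the real pipeline
# in the IMAGGarment-1 repo defines its own interfaces.
@dataclass
class GlobalConditions:
    silhouette: str        # e.g. path to a silhouette/sketch image
    color: str             # e.g. a reference color swatch or palette

@dataclass
class LocalConditions:
    logo: str              # logo image to inject
    position: tuple        # user-defined spatial constraint, e.g. (x, y)

def global_appearance_stage(cond: GlobalConditions) -> dict:
    """Stage 1 (sketch): jointly encode silhouette and color
    (mixed attention module + color adapter) into a coarse garment."""
    return {"latent": f"coarse({cond.silhouette},{cond.color})"}

def local_enhancement_stage(coarse: dict, cond: LocalConditions) -> dict:
    """Stage 2 (sketch): the adaptive appearance-aware module injects
    the logo at the requested position into the coarse result."""
    return {"image": f"{coarse['latent']}+logo({cond.logo}@{cond.position})"}

def generate(g: GlobalConditions, l: LocalConditions) -> dict:
    # End-to-end inference chains the two separately trained stages.
    return local_enhancement_stage(global_appearance_stage(g), l)

result = generate(
    GlobalConditions(silhouette="dress_sketch.png", color="navy_swatch.png"),
    LocalConditions(logo="brand_logo.png", position=(128, 64)),
)
print(result["image"])
```

The key design point mirrored here is the separation of concerns: stage 1 fixes the global appearance before stage 2 touches local details, so local edits cannot disturb the overall garment shape or color.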