Diffusion models have achieved great success in text-to-image generation. However, alleviating the misalignment between text prompts and generated images remains challenging, and the root cause of this misalignment has not been extensively investigated. We observe that the misalignment stems from inadequate token attention activation, and we further attribute this phenomenon to the diffusion model's insufficient utilization of the text condition, which originates in its training paradigm.
To address this issue, we propose CoMat, an end-to-end diffusion model fine-tuning strategy with an image-to-text concept matching mechanism. We leverage an image captioning model to measure image-to-text alignment and guide the diffusion model to revisit ignored tokens. We also propose a novel attribute concentration module to address the attribute binding problem.
Without any image or human preference data, we fine-tune SDXL with only 20K text prompts to obtain CoMat-SDXL. Extensive experiments show that CoMat-SDXL significantly outperforms the baseline SDXL on two text-to-image alignment benchmarks and achieves state-of-the-art performance.
The text-to-image diffusion model (T2I-Model) first generates an image according to the text prompt. The image is then passed to the Concept Matching, Attribute Concentration, and Fidelity Preservation modules to compute the losses for fine-tuning the online T2I-Model.
Specifically, in the Concept Matching module, we leverage an image captioning model to supervise the diffusion model so that it sufficiently attends to each concept in the text prompt. In the Attribute Concentration module, we promote consistency between the attention maps of each entity's noun and its attributes. Finally, in the Fidelity Preservation module, we introduce a novel adversarial loss to preserve the generation quality of the online fine-tuned model.
Overview of CoMat.
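To make the interplay of the three modules concrete, the following is a minimal PyTorch-style sketch of one fine-tuning step. It is an illustration only: the interfaces `t2i_model.generate`, `t2i_model.cross_attention_maps`, `captioner`, and `discriminator` are hypothetical placeholders, and the exact loss formulations may differ from the actual CoMat implementation.

```python
import torch
import torch.nn.functional as F

def comat_step(t2i_model, captioner, discriminator, prompt_ids, prompt_text):
    """Hypothetical sketch of one CoMat fine-tuning step (interfaces are illustrative)."""
    # 1) The online T2I model generates an image from the text prompt.
    image = t2i_model.generate(prompt_text)

    # 2) Concept Matching: an image captioning model scores how well the generated
    #    image covers every prompt token; maximizing the caption log-likelihood
    #    pushes the diffusion model to attend to tokens it previously ignored.
    caption_logits = captioner(image, prompt_ids)            # (num_tokens, vocab_size)
    concept_loss = F.cross_entropy(caption_logits, prompt_ids)

    # 3) Attribute Concentration: encourage the cross-attention map of an entity's
    #    attribute tokens to align with that of the entity's noun (sketched here
    #    as a simple MSE between the two maps).
    attn_noun, attn_attr = t2i_model.cross_attention_maps(prompt_ids)
    attribute_loss = F.mse_loss(attn_attr, attn_noun.detach())

    # 4) Fidelity Preservation: an adversarial loss (discriminator assumed to output
    #    a realness probability) keeps the fine-tuned model's image quality close
    #    to that of the original model.
    fidelity_loss = -torch.log(discriminator(image) + 1e-8).mean()

    return concept_loss + attribute_loss + fidelity_loss
```

In this sketch, the combined loss is backpropagated through the online T2I-Model only; the captioning model and discriminator act as frozen or separately trained critics.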