Existing text-to-image generation approaches have set high standards for photorealism and text-image correspondence,
largely benefiting from web-scale text-image datasets, which can include up to 5 billion pairs. However,
text-to-image generation models trained on domain-specific datasets, such as urban scenes, medical images, and
faces, still suffer from low text-image correspondence due to the scarcity of text-image pairs in these domains. Additionally,
collecting billions of text-image pairs for a specific domain is time-consuming and costly.
Thus, ensuring high text-image correspondence without relying on web-scale text-image datasets remains a challenging
task. In this paper, we present a novel approach for enhancing text-image correspondence by leveraging available
semantic layouts. Specifically, we propose a Gaussian-categorical diffusion process that simultaneously generates
images and their corresponding semantic layouts. Our experiments reveal that we can guide text-to-image generation models
to be aware of the semantics of different image regions by training the model to generate semantic labels for each
pixel. We demonstrate that our approach achieves higher text-image correspondence than existing text-to-image
generation approaches on the Multi-Modal CelebA-HQ and Cityscapes datasets, where text-image pairs are scarce.
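To make the idea concrete, below is a minimal sketch of what a joint forward (noising) step over an image and its semantic layout might look like: Gaussian noise is applied to image pixels while a uniform categorical transition corrupts the per-pixel labels. The function and argument names (q_sample_joint, alphas_cumprod, num_classes) are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def q_sample_joint(x0, y0, t, alphas_cumprod, num_classes):
    # x0: images, shape (B, C, H, W), values roughly in [-1, 1]
    # y0: per-pixel semantic labels, shape (B, H, W), integer class ids
    # t:  integer timesteps, shape (B,)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)            # cumulative noise schedule, (B, 1, 1, 1)

    # Gaussian branch: standard DDPM-style corruption of the image.
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # Categorical branch: keep each pixel's label with probability a_bar,
    # otherwise resample it uniformly over the label set.
    y_onehot = F.one_hot(y0, num_classes).float()          # (B, H, W, K)
    probs = a_bar * y_onehot + (1.0 - a_bar) / num_classes
    y_t = torch.distributions.Categorical(probs=probs).sample()  # (B, H, W)

    return x_t, y_t, noise

In this sketch, a single denoising network conditioned on the text prompt and timestep would then be trained to recover both the image noise and the per-pixel label distribution from (x_t, y_t).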
Minho Park, Jooyeol Yun, Seunghwan Choi, and Jaegul Choo.
"Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis"
ICCV, 2023.
@inproceedings{park2023learning,
title={Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis},
author={Park, Minho and Yun, Jooyeol and Choi, Seunghwan and Choo, Jaegul},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={7591--7600},
year={2023}
}