Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains

Bibliographic Details
Main Author: Ackerman, Wesley
Format: Others
Published: BYU ScholarsArchive 2020
Online Access: https://scholarsarchive.byu.edu/etd/8684
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=9684&context=etd
Description
Summary: We expand the scope of image-to-image translation to include more distinct image domains, where the image sets have analogous structures, but may not share object types between them. Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains (SUNIT) is built to more successfully translate images in this setting, where content from one domain is not found in the other. Our method trains an image translation model by learning encodings for semantic segmentations of images. These segmentations are translated between image domains to learn meaningful mappings between the structures in the two domains. The translated segmentations are then used as the basis for image generation. Beginning image generation with encoded segmentation information helps maintain the original structure of the image. We qualitatively and quantitatively show that SUNIT improves image translation outcomes, especially for image translation tasks where the image domains are very distinct.
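
The abstract describes a three-stage pipeline: encode a semantic segmentation of the source image, translate that encoding into the target domain, and generate the target-domain image from the translated encoding. The sketch below illustrates only that general flow; it is not the thesis implementation, and the PyTorch modules, layer sizes, class count of 8, and 128x128 resolution are all assumptions made for illustration.

```python
# Illustrative sketch of a segmentation-driven translation pipeline.
# All module names, layer shapes, and the 8-class / 128x128 setup are assumptions,
# not the SUNIT code from the thesis.
import torch
import torch.nn as nn

class SegmentationEncoder(nn.Module):
    """Encodes a semantic segmentation map into a latent structure code."""
    def __init__(self, num_classes=8, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, seg):
        return self.net(seg)

class SegmentationTranslator(nn.Module):
    """Maps a structure code from the source domain to the target domain."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
    def forward(self, code):
        return self.net(code)

class ImageGenerator(nn.Module):
    """Decodes a translated structure code into a target-domain RGB image."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, code):
        return self.net(code)

if __name__ == "__main__":
    # Random tensor standing in for an 8-class segmentation of a 128x128 source image.
    seg_a = torch.randn(1, 8, 128, 128)
    encoder, translator, generator = SegmentationEncoder(), SegmentationTranslator(), ImageGenerator()
    code_a = encoder(seg_a)      # structure code in the source domain
    code_b = translator(code_a)  # mapped to the analogous structure in the target domain
    fake_b = generator(code_b)   # generated target-domain image, shape (1, 3, 128, 128)
    print(fake_b.shape)
```

In this reading, starting generation from the translated segmentation code is what lets the structure of the source image carry over even when the two domains share no object types.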