Summary: | Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 107 === Multi-domain image-to-image translation has gained increasing attention recently. Previous methods take an image and a set of target attributes as inputs and generate an output image with the desired attributes. However, this formulation has a limitation: it requires specifying the entire set of attributes, even when most of them are left unchanged. To address this limitation, we propose RA-GAN, a novel and practical formulation for multi-domain image-to-image translation. The key idea is the use of relative attributes, which describe the desired change on selected attributes. To this end, we propose an adversarial framework that learns a single generator to translate images that not only match the relative attributes but also exhibit better quality. Moreover, our generator can modify images by changing particular attributes of interest in a continuous manner while preserving the others. Experimental results demonstrate the effectiveness of our approach, both qualitatively and quantitatively, on the tasks of facial attribute transfer and interpolation.
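The distinction between target attributes and relative attributes can be sketched numerically. The snippet below is a minimal illustration, assuming binary attribute vectors; the attribute names and values are hypothetical examples, not the thesis's actual data or model.

```python
import numpy as np

# Absolute target-attribute formulation: the user must specify EVERY
# attribute of the output image, even those that should stay unchanged.
source_attrs = np.array([1, 0, 1])   # hypothetical: [smiling, blond, young]
target_attrs = np.array([1, 1, 1])   # full vector must be given

# Relative-attribute formulation: only the desired CHANGE on selected
# attributes is given; zeros mean "leave this attribute alone".
relative_attrs = target_attrs - source_attrs   # -> [0, 1, 0]

# Scaling the relative vector yields a continuous edit of the chosen
# attribute while the zero entries preserve the others (interpolation).
alpha = 0.5
interpolated = source_attrs + alpha * relative_attrs   # -> [1.0, 0.5, 1.0]
```

Here the zero entries make explicit which attributes are preserved, which is why the relative formulation does not require enumerating the full attribute set.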