CSSNet: Image-Based Clothing Style Switch
Main Authors:
Other Authors:
Format: Others
Language: en_US
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/qpxz4f
Summary: Master's thesis, National Chiao Tung University, Institute of Multimedia Engineering, academic year 107. We propose a framework, CSSNet, to exchange upper clothing between people with different poses, body shapes, and clothing. Our approach consists of three stages: (1) disentangling features such as clothing, body pose, and semantic segmentation from the source and target persons; (2) synthesizing realistic, high-resolution images of the target person in the new dressing style; and (3) transferring complex logos from the source clothing to the target's. Our proposed end-to-end neural network architecture can generate an image of a specific person wearing the target clothing. In addition, we propose a post-processing step to recover complex logos that are missing or blurred in the network outputs. Our results are more realistic and of higher quality than those of previous methods, and our method preserves both clothing shape and texture.
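As a rough illustration of the three-stage pipeline the abstract describes, the data flow can be sketched as follows. All function names and shapes here are hypothetical stand-ins invented for illustration; the thesis's actual networks, inputs, and interfaces are not specified in this record.

```python
import numpy as np

# Hypothetical sketch of the three-stage clothing-swap pipeline.
# Each "stage" below is a trivial stub standing in for a trained
# neural network or post-processing routine; only the data flow
# between stages reflects the description in the abstract.

def extract_cloth(img):
    # Stage 1 (stand-in): encode the source person's upper-clothing appearance.
    return img

def estimate_pose(img):
    # Stage 1 (stand-in): estimate target body pose as dummy keypoints.
    return np.zeros((18, 2))

def semantic_segmentation(img):
    # Stage 1 (stand-in): per-pixel body-part labels for the target person.
    return np.zeros(img.shape[:2], dtype=np.int64)

def synthesize(cloth_feat, pose_feat, seg_map):
    # Stage 2 (stand-in): generate an RGB image of the target person
    # wearing the source clothing, sized like the segmentation map.
    h, w = seg_map.shape
    return np.zeros((h, w, 3), dtype=np.uint8)

def transfer_logo(source_img, generated_img):
    # Stage 3 (stand-in): recover complex logos that the generator
    # may have lost or blurred.
    return generated_img

def swap_upper_clothes(source_img, target_img):
    """Dress the target person in the source person's upper clothing."""
    cloth = extract_cloth(source_img)        # Stage 1: clothing features
    pose = estimate_pose(target_img)         # Stage 1: target pose
    seg = semantic_segmentation(target_img)  # Stage 1: target parsing
    out = synthesize(cloth, pose, seg)       # Stage 2: synthesis
    return transfer_logo(source_img, out)    # Stage 3: logo recovery
```

A caller would pass two person images and receive an image of the target person wearing the source clothing; the point of the sketch is only that stage 1's three disentangled features feed stage 2, whose output feeds stage 3.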