Generating Cross-domain Visual Description via Adversarial Learning
Degree: | Master's |
University: | National Tsing Hua University |
Department: | Department of Electrical Engineering |
Academic Year: | 105 |
Abstract: | Impressive image captioning results are achieved in domains with plenty of training image and sentence pairs (e.g., MSCOCO). However, transferring to a target domain with significant domain shifts but no paired training data (referred to as cross-domain image captioning)... |
Main Authors: | Chen, Tseng-Hung, 陳增鴻 |
Other Authors: | Sun, Min |
Format: | Others |
Language: | en_US |
Published: | 2017 |
Online Access: | http://ndltd.ncl.edu.tw/handle/r8k45f |
Similar Items
- Generative Adversarial Guided Learning for Domain Adaptation
  by: Wei, Kai-Ya, et al.
  Published: (2018)
- Learning cross-modal visual-tactile representation using ensembled generative adversarial networks
  by: Xinwu Li, et al.
  Published: (2019-03-01)
- Domain Adaptation for Imitation Learning Using Generative Adversarial Network
  by: Tho Nguyen Duc, et al.
  Published: (2021-07-01)
- Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-Identification
  by: Jialun Liu, et al.
  Published: (2019-01-01)
- Knowledge Distillation via Generative Adversarial Networks
  by: Chen, Wei-Chun, et al.
  Published: (2018)