Improving Complex Scene Generation by Enhancing Multi-scale Representations of GAN Discriminators

Bibliographic Details
Main Authors: Lee, H. (Author), Lee, S. (Author), Park, J. (Author), Shim, J. (Author)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2023
Subjects:
Online Access: View Fulltext in Publisher
View in Scopus
LEADER 02944nam a2200457Ia 4500
001 10.1109-ACCESS.2023.3270561
008 230529s2023 CNT 000 0 und d
020 |a 21693536 (ISSN) 
245 1 0 |a Improving Complex Scene Generation by Enhancing Multi-scale Representations of GAN Discriminators 
260 0 |b Institute of Electrical and Electronics Engineers Inc.  |c 2023 
300 |a 1 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1109/ACCESS.2023.3270561 
856 |z View in Scopus  |u https://www.scopus.com/inward/record.uri?eid=2-s2.0-85159688904&doi=10.1109%2fACCESS.2023.3270561&partnerID=40&md5=6fa96c6d39955474ed305ab85ac907e7 
520 3 |a While recent advances in GAN models have enabled photo-realistic synthesis of various object images, challenges remain in modeling more complex image distributions, such as scenes containing multiple objects. The difficulty lies in the high structural complexity of scene images, which places a heavy burden on the discriminator to distinguish the complex structural differences between real and fake scene images. Enhancing the discriminative capability of the discriminator can therefore be an effective strategy for improving the generation performance of GAN models. In this paper, we explore ways to boost this discriminative capability by leveraging two recent paradigms in visual representation learning: self-supervised learning and transfer learning. As the first approach, we propose a self-supervised auxiliary task tailored to enhance the multi-scale representations of the discriminator. In the second approach, we further strengthen the discriminator by utilizing pretrained representations from various scene understanding models. To fully exploit the knowledge of multiple expert models, we propose a multi-scale feature ensemble that mixes their multi-scale representations. Empirical results on challenging scene datasets demonstrate that the proposed strategies significantly improve generation performance, enabling diverse and photo-realistic synthesis of complex scene images. 
650 0 4 |a Complex networks 
650 0 4 |a Complex scenes 
650 0 4 |a Discriminators 
650 0 4 |a Feature extraction 
650 0 4 |a Features extraction 
650 0 4 |a Generative adversarial networks 
650 0 4 |a Generative Adversarial Networks 
650 0 4 |a Generator 
650 0 4 |a Generators 
650 0 4 |a Image enhancement 
650 0 4 |a Job analysis 
650 0 4 |a Personnel training 
650 0 4 |a Scene generation 
650 0 4 |a Scene Generation 
650 0 4 |a Scene image 
650 0 4 |a Self-supervised learning 
650 0 4 |a Self-Supervised Learning 
650 0 4 |a Supervised learning 
650 0 4 |a Task analysis 
650 0 4 |a Training 
650 0 4 |a Transfer learning 
650 0 4 |a Transfer Learning 
700 1 0 |a Lee, H.  |e author 
700 1 0 |a Lee, S.  |e author 
700 1 0 |a Park, J.  |e author 
700 1 0 |a Shim, J.  |e author 
773 |t IEEE Access
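
The second approach described in the abstract, a multi-scale feature ensemble over pretrained expert features, can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration of the general idea only, not the architecture from the paper: frozen "expert" backbones stand in for pretrained scene-understanding models, per-scale 1x1 projections mix expert features into a common width, and lightweight trainable heads score each scale with patch-wise real/fake logits. The class name, channel widths, and averaging-based fusion are all assumptions made for illustration.

```python
# Hypothetical sketch only: the expert backbones, channel widths, and fusion
# scheme are illustrative assumptions, not the architecture from the paper.
import torch
import torch.nn as nn


class MultiScaleFeatureEnsembleDiscriminator(nn.Module):
    """Fuses frozen multi-scale expert features and scores them per scale."""

    def __init__(self, expert_backbones, expert_channels, fused_channels=256):
        # expert_backbones: frozen feature extractors, each returning a list of
        #   feature maps at several scales (assumed: same number of scales each).
        # expert_channels: per-expert list of channel counts for those scales.
        super().__init__()
        self.experts = nn.ModuleList(expert_backbones)
        for expert in self.experts:
            for p in expert.parameters():
                p.requires_grad_(False)  # pretrained experts stay frozen

        # One 1x1 projection per (expert, scale) pair to a common width so that
        # features from different experts can be mixed at each scale.
        self.projections = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(c, fused_channels, kernel_size=1) for c in chans)
            for chans in expert_channels
        )
        # A lightweight trainable head per scale producing patch-wise logits.
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(fused_channels, fused_channels, 3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(fused_channels, 1, kernel_size=1),
            )
            for _ in range(len(expert_channels[0]))
        )

    def forward(self, x):
        # Extract multi-scale features from every expert without tracking grads.
        with torch.no_grad():
            expert_feats = [expert(x) for expert in self.experts]

        logits = []
        for s in range(len(self.heads)):
            # Mix experts at scale s: project to a common width, then average.
            fused = torch.stack(
                [proj[s](feats[s]) for proj, feats in zip(self.projections, expert_feats)]
            ).mean(dim=0)
            logits.append(self.heads[s](fused))
        # One patch-logit map per scale; a GAN loss can simply average them.
        return logits


if __name__ == "__main__":
    # Toy stand-ins for pretrained scene-understanding experts (hypothetical).
    class ToyExpert(nn.Module):
        def __init__(self, widths):
            super().__init__()
            chans = (3,) + tuple(widths)
            self.stages = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                    nn.ReLU(inplace=True),
                )
                for i in range(len(widths))
            )

        def forward(self, x):
            feats = []
            for stage in self.stages:
                x = stage(x)
                feats.append(x)
            return feats

    experts = [ToyExpert((64, 128, 256)), ToyExpert((32, 64, 128))]
    disc = MultiScaleFeatureEnsembleDiscriminator(
        experts, expert_channels=[[64, 128, 256], [32, 64, 128]]
    )
    per_scale_logits = disc(torch.randn(2, 3, 128, 128))
    print([tuple(t.shape) for t in per_scale_logits])
    # Expected: [(2, 1, 64, 64), (2, 1, 32, 32), (2, 1, 16, 16)]
```

Averaging the projected features is only the simplest way to mix experts; under the same assumptions, concatenation followed by a 1x1 convolution or learned per-expert attention weights would be equally plausible fusion choices.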