LEADER |
02474 am a22002293u 4500 |
001 |
129446 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Li, Muyang
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
|e contributor
|
700 |
1 |
0 |
|a Lin, Ji
|e author
|
700 |
1 |
0 |
|a Ding, Yaoyao
|e author
|
700 |
1 |
0 |
|a Liu, Zhijian
|e author
|
700 |
1 |
0 |
|a Han, Song
|e author
|
245 |
0 |
0 |
|a GAN Compression: Efficient Architectures for Interactive Conditional GANs
|
260 |
|
|
|b Institute of Electrical and Electronics Engineers (IEEE),
|c 2021-01-19T17:04:47Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/129446
|
520 |
|
|
|a Conditional Generative Adversarial Networks (cGANs) have enabled controllable image synthesis for many computer vision and graphics applications. However, recent cGANs are 1-2 orders of magnitude more computationally intensive than modern recognition CNNs. For example, GauGAN consumes 281G MACs per image, compared to 0.44G MACs for MobileNet-v3, making interactive deployment difficult. In this work, we propose a general-purpose compression framework for reducing the inference time and model size of the generator in cGANs. Directly applying existing CNN compression methods yields poor performance due to the difficulty of GAN training and the differences in generator architectures. We address these challenges in two ways. First, to stabilize the GAN training, we transfer knowledge of multiple intermediate representations of the original model to its compressed model, and unify unpaired and paired learning. Second, instead of reusing existing CNN designs, our method automatically finds efficient architectures via neural architecture search (NAS). To accelerate the search process, we decouple the model training and architecture search via weight sharing. Experiments demonstrate the effectiveness of our method across different supervision settings (paired and unpaired), model architectures, and learning methods (e.g., pix2pix, GauGAN, CycleGAN). Without losing image quality, we reduce the computation of CycleGAN by more than 20x and GauGAN by 9x, paving the way for interactive image synthesis. The code and demo are publicly available.
|
520 |
|
|
|a National Science Foundation (U.S.). CAREER (Award 1943349)
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|o 10.1109/CVPR42600.2020.00533
|
773 |
|
|
|t Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
|