LEADER 01771 am a22002053u 4500
001 137596
042    |a dc
100 10 |a Bau, D |e author
700 10 |a Liu, S |e author
700 10 |a Wang, T |e author
700 10 |a Zhu, JY |e author
700 10 |a Torralba, A |e author
245 00 |a Rewriting a Deep Generative Model
260    |b Springer International Publishing, |c 2021-11-05T19:21:45Z.
856    |z Get fulltext |u https://hdl.handle.net/1721.1/137596
520    |a © 2020, Springer Nature Switzerland AG. A deep generative model such as a GAN learns to model a rich set of semantic and physical rules about the target distribution, but up to now, it has been obscure how such rules are encoded in the network, or how a rule could be changed. In this paper, we introduce a new problem setting: manipulation of specific rules encoded by a deep generative model. To address the problem, we propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory. We derive an algorithm for modifying one entry of the associative memory, and we demonstrate that several interesting structural rules can be located and modified within the layers of state-of-the-art generative models. We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects, and we show several proof-of-concept applications. Finally, results on multiple datasets demonstrate the advantage of our method against standard fine-tuning methods and edit transfer algorithms.
546    |a en
655 7  |a Article
773    |t 10.1007/978-3-030-58452-8_21
773    |t Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)