Summary:
The generation of data has traditionally been specified using hand-crafted
algorithms. However, the exact generative process is often unknown, and
only a limited number of samples are observed. One such case is generating
images that look visually similar to an exemplar image or as if drawn
from a distribution of images. We investigate learning the generative
process by constructing a similarity function that measures how close the
generated image is to the target image. We discuss a framework in which
the similarity function is specified by a pre-trained neural network without
fine-tuning, as is the case for neural texture synthesis, and a framework
in which the similarity function is learned jointly with the generative process
in an adversarial setting, as is the case for generative adversarial networks.
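
To make the first framework concrete, neural texture synthesis typically
measures similarity through Gram matrices of fixed, pre-trained network
activations. The sketch below is our own illustration of that loss; the use
of PyTorch and the function names are assumptions, not code from the thesis:

    import torch

    def gram_matrix(feats):
        # feats: (c, h, w) activations of one layer of a pre-trained network
        c, h, w = feats.shape
        f = feats.reshape(c, h * w)
        # Channel co-occurrence statistics, normalized by spatial size
        return (f @ f.t()) / (h * w)

    def texture_loss(gen_feats, target_feats):
        # Sum of squared Frobenius distances between Gram matrices over layers
        return sum(((gram_matrix(g) - gram_matrix(t)) ** 2).sum()
                   for g, t in zip(gen_feats, target_feats))
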
The main point of discussion is the combined use of neural networks and
maximum mean discrepancy (MMD) as a versatile similarity function.
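
As one concrete reading of this combination (a minimal sketch under our own
assumptions: a Gaussian kernel of bandwidth sigma and the biased V-statistic
estimator), the squared MMD between two sets of feature vectors can be
computed as:

    import torch

    def gaussian_kernel(a, b, sigma=1.0):
        # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * sigma^2)) for all pairs
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    def mmd2(x, y, sigma=1.0):
        # Biased estimator of squared MMD; x: (n, d), y: (m, d) feature samples
        return (gaussian_kernel(x, x, sigma).mean()
                + gaussian_kernel(y, y, sigma).mean()
                - 2 * gaussian_kernel(x, y, sigma).mean())

Taking x and y to be activations of a pre-trained network on the generated
and target images recovers a texture-synthesis-style objective; learning the
kernel or features adversarially moves toward the GAN setting.
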
Additionally, we describe an improvement to state-of-the-art style transfer
that allows faster computation while maintaining the generality of the
generating process. The proposed objective has desirable properties such as
a simpler optimization landscape, intuitive parameter tuning, and consistent
frame-by-frame performance on video. We use 80,000 natural images and
80,000 paintings to train a procedure for artistic style transfer that is
efficient but
also allows arbitrary content and style images.
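
As an illustration of how such a procedure might be trained (a hypothetical
sketch, not the thesis's exact method: stylize, phi, the loss weight, and
the reuse of mmd2 from the sketch above are all assumptions):

    def feats_as_samples(f):
        # (1, c, h, w) activations -> (h*w, c): each location is one sample
        _, c, h, w = f.shape
        return f.reshape(c, h * w).t()

    def train_step(stylize, phi, optimizer, content_img, style_img,
                   w_style=10.0):
        out = stylize(content_img, style_img)
        # Content term: match deep features of the content image
        loss_c = ((phi(out) - phi(content_img)) ** 2).mean()
        # Style term: match feature statistics of the style image via MMD
        loss_s = mmd2(feats_as_samples(phi(out)),
                      feats_as_samples(phi(style_img)))
        loss = loss_c + w_style * loss_s
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Iterating such a step over pairs drawn from the natural-image and painting
collections would yield a single network that stylizes arbitrary
content/style pairs in one forward pass.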