Deep joint demosaicking and denoising


Bibliographic Details
Main Authors: Gharbi, Michaël (Author), Chaurasia, Gaurav (Author), Paris, Sylvain (Author), Durand, Frédo (Author)
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2021-10-27T20:06:07Z.
Subjects:
Online Access: Get fulltext
LEADER 01719 am a22001933u 4500
001 134672
042 |a dc 
100 1 0 |a Gharbi, Michaël  |e author 
700 1 0 |a Chaurasia, Gaurav  |e author 
700 1 0 |a Paris, Sylvain  |e author 
700 1 0 |a Durand, Frédo  |e author 
245 0 0 |a Deep joint demosaicking and denoising 
260 |b Association for Computing Machinery (ACM),   |c 2021-10-27T20:06:07Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/134672 
520 |a © 2016 ACM. SA '16 Technical Papers, December 05-08, 2016, Macao. Demosaicking and denoising are the key first stages of the digital imaging pipeline, but they are also a severely ill-posed problem that infers three color values per pixel from a single noisy measurement. Earlier methods rely on hand-crafted filters or priors and still exhibit disturbing visual artifacts in hard cases such as moiré or thin edges. We introduce a new data-driven approach for these challenges: we train a deep neural network on a large corpus of images instead of using hand-tuned filters. While deep learning has shown great success, its naive application using existing training datasets does not give satisfactory results for our problem because these datasets lack hard cases. To create a better training set, we present metrics to identify difficult patches and techniques for mining community photographs for such patches. Our experiments show that this network and training procedure outperform the state of the art on both noisy and noise-free data. Furthermore, our algorithm is an order of magnitude faster than the previous best performing techniques. 
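The ill-posed forward model the abstract refers to (one noisy measurement per pixel, three unknown color values) can be sketched as a Bayer-mosaic simulation. This is an illustrative assumption, not the paper's exact setup: the RGGB layout, Gaussian read noise, and function name are choices made here for the sketch.

```python
import numpy as np

def bayer_mosaic(rgb, noise_sigma=0.0, rng=None):
    """Simulate a color-filter-array sensor: keep exactly one of the
    three color values at each pixel (RGGB Bayer layout, assumed here),
    then add Gaussian read noise. `rgb` is an (H, W, 3) float array."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w, 3))
    mask[0::2, 0::2, 0] = 1  # R at even rows, even cols
    mask[0::2, 1::2, 1] = 1  # G at even rows, odd cols
    mask[1::2, 0::2, 1] = 1  # G at odd rows, even cols
    mask[1::2, 1::2, 2] = 1  # B at odd rows, odd cols
    mosaic = (rgb * mask).sum(axis=2)  # one scalar measurement per pixel
    if noise_sigma > 0:
        rng = rng if rng is not None else np.random.default_rng(0)
        mosaic = mosaic + rng.normal(0.0, noise_sigma, mosaic.shape)
    return mosaic, mask

rgb = np.random.default_rng(1).random((4, 4, 3))
mosaic, mask = bayer_mosaic(rgb, noise_sigma=0.01)
# Exactly one channel is observed at every pixel, so a joint
# demosaicking/denoising method must infer the other two from noisy data.
assert mosaic.shape == (4, 4)
assert (mask.sum(axis=2) == 1).all()
```

A learned approach such as the one described above would train a network to invert this map, i.e. to recover the full `rgb` array from `mosaic` alone.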
546 |a en 
655 7 |a Article 
773 |t 10.1145/2980179.2982399 
773 |t ACM Transactions on Graphics