Motion Deblurring in Image Color Enhancement by WGAN
Motion deblurring and image enhancement have been active research areas for years. Although CNN-based models have advanced the state of the art in both motion deblurring and image enhancement, they fail to produce multitask results on images captured under challenging illumination conditions. The key idea of this paper is a novel multitask learning algorithm for joint image motion deblurring and color enhancement, which enhances the color of an image while eliminating motion blur. To achieve this, we explore, for the first time, processing the two tasks jointly within the framework of generative adversarial networks (GANs). We add an L1 loss to the generator loss to encourage the model to match the target image at the pixel level. To bring the generated image closer to the target at the visual level, we also integrate a perceptual style loss into the generator loss. Extensive experiments yielded an effective configuration scheme. The best model, trained for about one week, achieves state-of-the-art performance in both deblurring and enhancement, and processes images approximately 1.75 times faster than the best competitor.
Main Authors: | Jiangfan Feng, Shuang Qi |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi Limited, 2020-01-01 |
Series: | International Journal of Optics |
Online Access: | http://dx.doi.org/10.1155/2020/1295028 |
id |
doaj-30f799b5847242df878cbbcf1a47d7b2 |
record_format |
Article |
spelling |
doaj-30f799b5847242df878cbbcf1a47d7b2 | 2020-11-25T03:36:33Z | eng | Hindawi Limited | International Journal of Optics | 1687-9384, 1687-9392 | 2020-01-01 | 2020 | 10.1155/2020/1295028 | 1295028 | Motion Deblurring in Image Color Enhancement by WGAN | Jiangfan Feng; Shuang Qi (Chongqing University of Posts and Telecommunications, College of Computer Science and Technology, Chongqing, China) | Motion deblurring and image enhancement have been active research areas for years. Although CNN-based models have advanced the state of the art in both motion deblurring and image enhancement, they fail to produce multitask results on images captured under challenging illumination conditions. The key idea of this paper is a novel multitask learning algorithm for joint image motion deblurring and color enhancement, which enhances the color of an image while eliminating motion blur. To achieve this, we explore, for the first time, processing the two tasks jointly within the framework of generative adversarial networks (GANs). We add an L1 loss to the generator loss to encourage the model to match the target image at the pixel level. To bring the generated image closer to the target at the visual level, we also integrate a perceptual style loss into the generator loss. Extensive experiments yielded an effective configuration scheme. The best model, trained for about one week, achieves state-of-the-art performance in both deblurring and enhancement, and processes images approximately 1.75 times faster than the best competitor. | http://dx.doi.org/10.1155/2020/1295028 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Jiangfan Feng; Shuang Qi |
spellingShingle |
Jiangfan Feng; Shuang Qi; Motion Deblurring in Image Color Enhancement by WGAN; International Journal of Optics |
author_facet |
Jiangfan Feng; Shuang Qi |
author_sort |
Jiangfan Feng |
title |
Motion Deblurring in Image Color Enhancement by WGAN |
title_short |
Motion Deblurring in Image Color Enhancement by WGAN |
title_full |
Motion Deblurring in Image Color Enhancement by WGAN |
title_fullStr |
Motion Deblurring in Image Color Enhancement by WGAN |
title_full_unstemmed |
Motion Deblurring in Image Color Enhancement by WGAN |
title_sort |
motion deblurring in image color enhancement by wgan |
publisher |
Hindawi Limited |
series |
International Journal of Optics |
issn |
1687-9384, 1687-9392 |
publishDate |
2020-01-01 |
description |
Motion deblurring and image enhancement have been active research areas for years. Although CNN-based models have advanced the state of the art in both motion deblurring and image enhancement, they fail to produce multitask results on images captured under challenging illumination conditions. The key idea of this paper is a novel multitask learning algorithm for joint image motion deblurring and color enhancement, which enhances the color of an image while eliminating motion blur. To achieve this, we explore, for the first time, processing the two tasks jointly within the framework of generative adversarial networks (GANs). We add an L1 loss to the generator loss to encourage the model to match the target image at the pixel level. To bring the generated image closer to the target at the visual level, we also integrate a perceptual style loss into the generator loss. Extensive experiments yielded an effective configuration scheme. The best model, trained for about one week, achieves state-of-the-art performance in both deblurring and enhancement, and processes images approximately 1.75 times faster than the best competitor. |
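The description above combines a WGAN adversarial term with an L1 pixel loss and a perceptual style loss in the generator objective. A minimal NumPy sketch of such a combined loss is shown below; the weights `lambda_l1` and `lambda_perc` and the feature extractor `feat_fn` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def generator_loss(critic_scores, generated, target, feat_fn,
                   lambda_l1=100.0, lambda_perc=10.0):
    """Combined generator loss: WGAN adversarial + L1 + perceptual terms.

    critic_scores: critic outputs on generated images (WGAN term is their
                   negated mean, since the generator maximizes the score).
    feat_fn:       a stand-in for a pretrained feature extractor (e.g. VGG
                   activations) used for the perceptual/style comparison.
    Loss weights are illustrative placeholders.
    """
    adv = -np.mean(critic_scores)                         # adversarial term
    l1 = np.mean(np.abs(generated - target))              # pixel-level match
    perc = np.mean((feat_fn(generated) - feat_fn(target)) ** 2)  # visual-level match
    return adv + lambda_l1 * l1 + lambda_perc * perc
```

In practice the perceptual term would compare deep-network activations rather than raw pixels; the identity function stands in for that extractor here.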
url |
http://dx.doi.org/10.1155/2020/1295028 |
work_keys_str_mv |
AT jiangfanfeng motiondeblurringinimagecolorenhancementbywgan AT shuangqi motiondeblurringinimagecolorenhancementbywgan |
_version_ |
1715166217830924288 |