A Combined Full-Reference Image Quality Assessment Method Based on Convolutional Activation Maps
The goal of full-reference image quality assessment (FR-IQA) is to predict the perceptual quality of an image as perceived by human observers using its pristine (distortion-free) reference counterpart. In this study, we explore a novel, combined approach that predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps...
Main Author: | Domonkos Varga
---|---
Format: | Article
Language: | English
Published: | MDPI AG, 2020-11-01
Series: | Algorithms
Subjects: | full-reference image quality assessment; deep learning; convolutional neural networks
Online Access: | https://www.mdpi.com/1999-4893/13/12/313
id
doaj-fe1cfd2293ef48ef8a6276f9cfea043c
record_format
Article
spelling
Domonkos Varga (Department of Networked Systems and Services, Budapest University of Technology and Economics, 1111 Budapest, Hungary). A Combined Full-Reference Image Quality Assessment Method Based on Convolutional Activation Maps. Algorithms, vol. 13, no. 12, article 313. MDPI AG, 2020-11-01. ISSN 1999-4893. DOI: 10.3390/a13120313. https://www.mdpi.com/1999-4893/13/12/313. Keywords: full-reference image quality assessment; deep learning; convolutional neural networks.
collection
DOAJ
language
English
format
Article
sources
DOAJ
author
Domonkos Varga
title
A Combined Full-Reference Image Quality Assessment Method Based on Convolutional Activation Maps
publisher
MDPI AG
series
Algorithms
issn
1999-4893
publishDate
2020-11-01
description
The goal of full-reference image quality assessment (FR-IQA) is to predict the perceptual quality of an image as perceived by human observers using its pristine (distortion-free) reference counterpart. In this study, we explore a novel, combined approach that predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps. More specifically, a reference-distorted image pair is run through a pretrained convolutional neural network and the activation maps are compared with a traditional image similarity metric. Subsequently, the resulting feature vector is mapped onto perceptual quality scores with the help of a trained support vector regressor. A detailed parameter study is also presented in which the design choices of the proposed method are explained. Furthermore, we study the relationship between the number of training images and the prediction performance. In particular, it is demonstrated that the proposed method can be trained with a small amount of data to reach high prediction performance. Our best proposal, called ActMapFeat, is compared to the state of the art on six publicly available benchmark IQA databases, namely KADID-10k, TID2013, TID2008, MDID, CSIQ, and VCL-FER. Our method significantly outperforms the state of the art on these benchmark databases.
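The description above outlines a three-stage pipeline: run the reference-distorted pair through a pretrained CNN, compare the corresponding activation maps with a traditional image similarity metric to form a feature vector, and regress that vector onto perceptual quality scores with a support vector regressor. The Python sketch below illustrates this pipeline under stated assumptions; it is not the authors' ActMapFeat implementation. The VGG16 backbone, the layer indices, the SSIM-like similarity index `map_similarity`, and the RBF-kernel SVR are assumptions chosen for illustration; the paper's actual design choices are given at the DOI and URL above.

```python
# Minimal sketch of the pipeline described in the abstract, NOT the authors'
# ActMapFeat implementation. Assumptions: VGG16 backbone, the selected layer
# indices, and the simple similarity index used to compare activation maps.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVR

# Pretrained CNN used as a fixed feature extractor (assumed backbone: VGG16).
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def activation_maps(img, layer_ids=(3, 8, 15, 22, 29)):
    """Collect activation maps at selected layers (indices are illustrative)."""
    x = preprocess(img).unsqueeze(0)
    maps = []
    with torch.no_grad():
        for i, layer in enumerate(cnn):
            x = layer(x)
            if i in layer_ids:
                maps.append(x.squeeze(0))  # shape: (channels, height, width)
    return maps

def map_similarity(a, b, c=1e-6):
    """Simple similarity index between two activation tensors; a stand-in for
    the traditional image similarity metric mentioned in the abstract."""
    return float(((2 * a * b + c).sum()) / ((a ** 2 + b ** 2 + c).sum()))

def feature_vector(ref_img, dist_img):
    """Compare reference/distorted activation maps layer by layer."""
    ref_maps = activation_maps(ref_img)
    dist_maps = activation_maps(dist_img)
    return np.array([map_similarity(r, d) for r, d in zip(ref_maps, dist_maps)])

def train_quality_model(image_pairs, mos):
    """Map feature vectors onto subjective quality scores with an SVR.
    `image_pairs` is a list of (reference_path, distorted_path) tuples and
    `mos` the corresponding mean opinion scores from an IQA database such as
    KADID-10k (database loading is not shown here)."""
    X = np.stack([
        feature_vector(Image.open(r).convert("RGB"), Image.open(d).convert("RGB"))
        for r, d in image_pairs
    ])
    regressor = SVR(kernel="rbf")  # kernel choice is an assumption
    regressor.fit(X, np.asarray(mos))
    return regressor
```

At inference time, `feature_vector` is computed for a new reference-distorted pair and passed to `regressor.predict` to obtain a quality estimate.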
topic
full-reference image quality assessment; deep learning; convolutional neural networks
url
https://www.mdpi.com/1999-4893/13/12/313