Spatial-Temporal Visual Attention Model for Video Quality Assessment
(Chinese title: 利用時空域的視覺注視模型所做的影片品質評估)
Master's Thesis === National Chung Hsing University === Department of Electrical Engineering === Academic Year 107 === Objective video quality assessment is a mature field, but challenges remain, such as mimicking how human viewers behave when they watch a video. In this thesis, we introduce a full-reference (FR) video quality assessment (VQA) model based on visual attention, optical flow, spatio-temporal slice (STS) images, and a center-bias map. The model has three parts. First, we use IW-SSIM to obtain a base score and weight it with a visual attention map to form the spatial part. Second, we use optical flow to estimate the dominant motion direction and combine it with the center-bias map to form the temporal part. Third, we use spatio-temporal slices to capture spatio-temporal detail and form the spatio-temporal part. We evaluate the model on the well-known video quality databases from the Laboratory for Image & Video Engineering (LIVE), Computational and Subjective Image Quality (CSIQ), and the Image & Video Processing Laboratory (IVPL). The experimental results show that the proposed model outperforms other methods in several respects, but falls short on compression distortions.
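The abstract describes the pipeline only at a high level; this record does not include the thesis's formulas, attention model, or pooling weights. The Python sketch below is therefore only a minimal illustration of the three-part structure under explicit stand-in assumptions: plain SSIM from scikit-image replaces IW-SSIM, the center-bias map doubles as the visual attention map, agreement between the reference and distorted optical-flow fields stands in for the dominant-direction comparison, and the pooling weights `w_s`, `w_t`, `w_st` are placeholders rather than values from the thesis. It assumes NumPy, OpenCV, and scikit-image are available.

```python
# Minimal sketch of the three-part score described in the abstract.
# Stand-ins (not from the thesis): plain SSIM instead of IW-SSIM, the
# center-bias map reused as the attention map, flow-field agreement
# instead of a dominant-direction comparison, and placeholder weights.
import numpy as np
import cv2
from skimage.metrics import structural_similarity as ssim


def center_bias_map(h, w, sigma_ratio=0.25):
    """Gaussian weight map peaking at the frame center (sums to 1)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_ratio * min(h, w)
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()


def spatial_score(ref_frame, dist_frame, attention):
    """Attention-weighted SSIM map (stand-in for attention-weighted IW-SSIM)."""
    _, ssim_map = ssim(ref_frame, dist_frame, full=True, data_range=255)
    return float((ssim_map * attention).sum() / attention.sum())


def temporal_score(ref_prev, ref_cur, dist_prev, dist_cur, bias):
    """Center-bias-weighted agreement between reference and distorted motion."""
    flow_ref = cv2.calcOpticalFlowFarneback(ref_prev, ref_cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    flow_dst = cv2.calcOpticalFlowFarneback(dist_prev, dist_cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    deviation = np.linalg.norm(flow_ref - flow_dst, axis=2)  # per-pixel flow difference
    return float(np.exp(-(deviation * bias).sum()))          # 1.0 means identical motion


def sts_score(ref_video, dist_video, row=None):
    """SSIM between one horizontal spatio-temporal slice (time x width) of each video."""
    row = ref_video.shape[1] // 2 if row is None else row
    return float(ssim(ref_video[:, row, :], dist_video[:, row, :], data_range=255))


def vqa_score(ref_video, dist_video, w_s=0.5, w_t=0.25, w_st=0.25):
    """ref_video / dist_video: (T, H, W) uint8 grayscale clips of identical shape."""
    t, h, w = ref_video.shape
    bias = center_bias_map(h, w)
    s_spatial = np.mean([spatial_score(ref_video[i], dist_video[i], bias)
                         for i in range(t)])
    s_temporal = np.mean([temporal_score(ref_video[i - 1], ref_video[i],
                                         dist_video[i - 1], dist_video[i], bias)
                          for i in range(1, t)])
    s_sts = sts_score(ref_video, dist_video)
    return w_s * s_spatial + w_t * s_temporal + w_st * s_sts


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(8, 144, 176), dtype=np.uint8)  # tiny synthetic clip
    noise = rng.integers(-20, 21, size=ref.shape)
    dist = np.clip(ref.astype(np.int32) + noise, 0, 255).astype(np.uint8)
    print("quality score:", vqa_score(ref, dist))
```

The `__main__` block only demonstrates the call pattern on a tiny synthetic clip; actual use would load matching reference and distorted videos as grayscale frame stacks of equal shape.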
Main Authors: | Wei-Jyun Sun (孫維駿) |
Other Authors: | Tsung-Jung Liu (劉宗榮) |
Format: | Thesis; 48 pages |
Language: | Chinese (zh-TW) |
Published: | 2019 |
Online Access: | http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5441104%22.&searchmode=basic |
id | ndltd-TW-107NCHU5441104 |
collection | NDLTD |