Visual Attention-based Small Screen Adaptation for H.264 Videos
We develop a framework that uses visual attention analysis combined with temporal coherence to detect the attended region in an H.264 video bitstream and display it on a small screen. A visual attention module based on Walther and Koch's model gives us the attended region in I-frames. We propose a temporal coherence matching framework that uses the motion information in P-frames to extend the attended region over the H.264 video sequence. Evaluations show encouraging results, with a successful detection rate of over 80% for objects of interest and 85% of respondents reporting satisfactory output.
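The abstract describes the approach only at a high level; the following minimal Python sketch illustrates the general idea under stated assumptions. It assumes a precomputed Walther-Koch-style saliency map for each I-frame and decoded per-macroblock motion vectors for each P-frame (both treated as given; bitstream parsing and the thesis's actual temporal coherence matching are not reproduced), and it propagates a fixed-size crop window by the mean motion vector of the macroblocks currently inside the window. The frame and window sizes are illustrative, not taken from the thesis.

```python
import numpy as np

# Illustrative sizes only: CIF source (352x288) cropped to a QCIF window (176x144).
FRAME_H, FRAME_W = 288, 352
WIN_H, WIN_W = 144, 176
BLOCK = 16  # H.264 macroblock size


def attended_window(saliency):
    """Centre the crop window on the saliency peak of an I-frame.

    `saliency` is a 2-D map assumed to come from a Walther-Koch-style
    attention model; computing it is outside the scope of this sketch.
    """
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(cy - WIN_H // 2, 0, FRAME_H - WIN_H))
    left = int(np.clip(cx - WIN_W // 2, 0, FRAME_W - WIN_W))
    return top, left


def propagate_window(window, motion_vectors):
    """Shift the crop window for a P-frame by the mean motion vector of the
    macroblocks currently inside it (a simple stand-in for the thesis's
    temporal coherence matching).

    `motion_vectors` has shape (FRAME_H // BLOCK, FRAME_W // BLOCK, 2),
    holding (dy, dx) per macroblock, assumed already decoded from the P-frame.
    """
    top, left = window
    r0, r1 = top // BLOCK, (top + WIN_H) // BLOCK
    c0, c1 = left // BLOCK, (left + WIN_W) // BLOCK
    dy, dx = motion_vectors[r0:r1, c0:c1].reshape(-1, 2).mean(axis=0)
    top = int(np.clip(top + dy, 0, FRAME_H - WIN_H))
    left = int(np.clip(left + dx, 0, FRAME_W - WIN_W))
    return top, left


def crop(frame, window):
    """Extract the small-screen view from a decoded frame."""
    top, left = window
    return frame[top:top + WIN_H, left:left + WIN_W]
```

In a decoding loop, attended_window would be called on each I-frame's saliency map and propagate_window on each subsequent P-frame's motion field, with crop producing the small-screen output for every decoded frame.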
Main Author: Mukherjee, Abir
Language: en
Published: 2008
Subjects: visual attention; video adaptation; H.264; Electrical and Computer Engineering
Format: Thesis or Dissertation (Master of Applied Science, Electrical and Computer Engineering)
Online Access: http://hdl.handle.net/10012/3929