Learning Gaze Transitions from Depth to Improve Video Saliency Estimation
© 2017 IEEE. In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easil...
Main Authors: Leifman, George (Author); Rudoy, Dmitry (Author); Swedish, Tristan (Author); Bayro-Corrochano, Eduardo (Author); Raskar, Ramesh (Author)
Other Authors: Massachusetts Institute of Technology. Media Laboratory (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021-11-09T21:59:21Z
Online Access: Get fulltext
Similar Items
- Leveraging the crowd for annotation of retinal images
  by: Leifman, George, et al.
  Published: (2017)
- Visual attention: saliency detection and gaze estimation
  by: Peng, Qinmu
  Published: (2015)
- Modeling Spatiotemporal Correlations between Video Saliency and Gaze Dynamics
  by: Yonetani, Ryo
  Published: (2014)
- Deep Visual Teach and Repeat on Path Networks
  by: Swedish, Tristan, et al.
  Published: (2021)
- Effect of sound in videos on gaze: contribution to audio-visual saliency modelling
  by: Song, Guanghan
  Published: (2013)