Summary: | In this paper, we propose a novel method to detect saliency in face images. In our method, the face and facial features are extracted as two top-down feature channels, which are linearly integrated with three traditional bottom-up feature channels (color, intensity, and orientation) to yield the final saliency map of a face image. Through an eye-tracking experiment, we obtain a database of human fixations on 510 face images for analyzing the fixation distribution over face regions. We find that fixations on face regions are well modeled by a Gaussian mixture model (GMM) whose components correspond to the face and facial features. Accordingly, we model face saliency with a GMM learned from the training data of our database. In addition, we find that the weights of the face feature channels depend on the face size in an image, and we therefore estimate the relationship between the weights and the face size by learning from the training data of our eye-tracking database. The experimental results validate that our learning-based method dramatically improves the accuracy of saliency detection on face images over ten other state-of-the-art methods. Finally, we apply our saliency detection method to compress face images, achieving an improvement in visual quality or a saving in bit-rate over an existing image encoder.
|
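As a rough illustration of the fusion scheme summarized above, the sketch below fits a GMM to fixation points inside a face region and linearly combines top-down and bottom-up channels into a saliency map. It uses scikit-learn's `GaussianMixture` as a stand-in for the paper's learned model; the function names, the choice of `n_components=4`, and the even split of the remaining weight over the bottom-up channels are illustrative assumptions, not the learned weights described in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def face_channel_from_gmm(fixations, height, width, n_components=4):
    """Fit a GMM to fixation points (x, y) inside the face region and
    evaluate its density on the image grid as a top-down face channel."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(fixations)  # fixations: (N, 2) array of fixation coordinates
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    density = np.exp(gmm.score_samples(grid)).reshape(height, width)
    return density / density.max()

def combine_channels(bottom_up, top_down, top_down_weights):
    """Linearly combine bottom-up and top-down feature channels
    (all arrays share the same HxW shape) into one saliency map."""
    w_td = np.asarray(top_down_weights, dtype=float)
    # Assumption: the weight not assigned to the top-down channels is
    # split evenly over the bottom-up channels.
    w_bu = (1.0 - w_td.sum()) / len(bottom_up)
    s = w_bu * sum(bottom_up) + sum(w * c for w, c in zip(w_td, top_down))
    return s / (s.max() + 1e-12)  # normalize to [0, 1]
```

In the paper, the top-down weights are learned as a function of the face size from the eye-tracking database; in this sketch they are simply passed in as free parameters.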