Summary: | The automatic recognition of emotions in still images is inherently more challenging than other visual recognition tasks, such as scene recognition, object classification, and semantic image classification, as it involves a higher level of abstraction from the human-cognition perspective. Symmetry can be found in many natural objects and can be exploited for purposes such as object detection and recognition; accordingly, symmetry-based rotation and flipping of the training images is employed so that the classifier achieves more accurate classification. Visual sentiment recognition must therefore handle large intra-class variance, scalability, and subjectivity effectively, and it is inherently ambiguous, since a single image can evoke multiple emotions. To address these issues, many existing works focus on improving image representations, motivated by the observation that both global distributions and local image regions carry substantial sentiment information. In this research, three different pre-trained architectural models are implemented, and their binary sentiment classification performance is examined on five widely used affective datasets. Moreover, features from the pre-trained models are selected optimally using the proposed Teaching Gaining Sharing Learning (TGSL) algorithm, which is the major contribution of this research. Extensive experimental results on the five datasets demonstrate that the proposed visual sentiment analysis framework, based on the TGSL algorithm with data augmentation, outperforms all the conventional techniques considered. The proposed framework relies on pre-trained models and uses no hand-crafted features, boosting the mean accuracy, sensitivity, and specificity to 99.11%, 99.31%, and 99.22%, respectively, on the Abstract dataset.
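The summary mentions symmetry-based rotation/flipping for augmentation and feature extraction from pre-trained backbones, but gives no code. Below is a minimal PyTorch/torchvision sketch of that pipeline; the choice of ResNet-50 as one of the three backbones, the rotation range, and the input size are all assumptions, not details taken from the paper.

```python
# Hypothetical sketch: symmetry-based augmentation (flips/rotations) feeding a
# pre-trained backbone used as a fixed feature extractor. ResNet-50 and the
# exact rotation range are assumptions, not taken from the paper.
import torch
import torchvision.models as models
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),   # mirror symmetry
    T.RandomVerticalFlip(p=0.5),     # mirror symmetry
    T.RandomRotation(degrees=90),    # rotational symmetry; range assumed
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()    # drop the classifier head, keep 2048-d features
backbone.eval()

@torch.no_grad()
def extract_features(pil_image):
    """Return a 2048-dimensional feature vector for one augmented image."""
    x = augment(pil_image).unsqueeze(0)  # add a batch dimension
    return backbone(x).squeeze(0)
```

The resulting feature vectors would then be the input to the feature-selection stage, with a binary (positive/negative) sentiment classifier trained on the selected dimensions.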
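The TGSL update rules themselves are the paper's contribution and are not specified in this summary, so the sketch below shows only a generic population-based wrapper for binary feature selection; the teacher-guided bit-adoption move and its rates stand in for TGSL's actual operators and are assumptions.

```python
# Generic population-based binary feature selection, a stand-in only: the
# actual TGSL operators are not given in this summary, so the teacher-guided
# move and the adoption/mutation rates below are assumptions.
import numpy as np

def select_features(fitness, n_features, pop_size=20, iters=50, seed=0):
    """fitness(mask) -> score to maximize, e.g. validation accuracy of a
    classifier trained only on the masked feature columns."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_features)) < 0.5           # random binary masks
    scores = np.array([fitness(m) for m in pop])
    for _ in range(iters):
        teacher = pop[scores.argmax()]                        # best mask so far
        for i in range(pop_size):
            # move each learner toward the teacher on a random subset of bits
            adopt = rng.random(n_features) < 0.3              # adoption rate assumed
            cand = np.where(adopt, teacher, pop[i])
            flip = rng.random(n_features) < 1.0 / n_features  # small mutation
            cand = np.logical_xor(cand, flip)
            s = fitness(cand)
            if s > scores[i]:                                 # greedy acceptance
                pop[i], scores[i] = cand, s
    return pop[scores.argmax()], scores.max()
```

In practice the fitness callback would train and validate a lightweight classifier (e.g. SVM or k-NN) on the pre-trained features restricted to the candidate mask, so the selected subset directly optimizes the binary sentiment classification accuracy reported above.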