Multimodal Feature Learning for Video Captioning

Bibliographic Details
Main Authors: Sujin Lee, Incheol Kim
Format: Article
Language: English
Published: Hindawi Limited, 2018-01-01
Series: Mathematical Problems in Engineering
Online Access: http://dx.doi.org/10.1155/2018/3125879
Description
Summary: Video captioning is the task of generating a natural language sentence that describes the content of an input video clip. This study proposes a deep neural network model for effective video captioning. In addition to visual features, the proposed model learns semantic features that effectively describe the video content. In our model, visual features of the input video are extracted with convolutional neural networks such as C3D and ResNet, while semantic features are obtained with recurrent neural networks such as LSTM. The model also includes an attention-based caption generation network that produces natural language captions from the multimodal video feature sequences. Experiments conducted on two large benchmark datasets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), demonstrate the performance of the proposed model.
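To make the attention-based generation step concrete, the following is a minimal PyTorch sketch of a decoder that attends over a fused visual/semantic feature sequence at every word step. The feature dimensions, vocabulary size, fusion by simple concatenation, and the Bahdanau-style additive attention are illustrative assumptions, not the authors' exact architecture.

    # Minimal sketch: attention-based caption decoder over multimodal video
    # features. All dimensions and names here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class AttentionCaptionDecoder(nn.Module):
        def __init__(self, vis_dim=2048, sem_dim=512, hid_dim=512, vocab_size=10000):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hid_dim)
            # Project visual (e.g., C3D/ResNet) and semantic (e.g., LSTM-derived)
            # features into a shared space, then fuse by concatenation in time.
            self.vis_proj = nn.Linear(vis_dim, hid_dim)
            self.sem_proj = nn.Linear(sem_dim, hid_dim)
            # Additive (Bahdanau-style) attention over the fused sequence.
            self.att_w = nn.Linear(hid_dim, hid_dim)
            self.att_u = nn.Linear(hid_dim, hid_dim)
            self.att_v = nn.Linear(hid_dim, 1)
            self.lstm = nn.LSTMCell(hid_dim * 2, hid_dim)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, vis_feats, sem_feats, captions):
            # vis_feats: (B, Tv, vis_dim); sem_feats: (B, Ts, sem_dim);
            # captions: (B, L) token ids, used here for teacher forcing.
            feats = torch.cat([self.vis_proj(vis_feats),
                               self.sem_proj(sem_feats)], dim=1)  # (B, Tv+Ts, H)
            B, L = captions.shape
            h = feats.new_zeros(B, self.lstm.hidden_size)
            c = feats.new_zeros(B, self.lstm.hidden_size)
            logits = []
            for t in range(L):
                # Attention weights over all feature steps given decoder state h.
                scores = self.att_v(torch.tanh(self.att_w(feats)
                                               + self.att_u(h).unsqueeze(1)))
                alpha = torch.softmax(scores, dim=1)          # (B, Tv+Ts, 1)
                context = (alpha * feats).sum(dim=1)          # (B, H)
                x = torch.cat([self.embed(captions[:, t]), context], dim=1)
                h, c = self.lstm(x, (h, c))
                logits.append(self.out(h))
            return torch.stack(logits, dim=1)                 # (B, L, vocab)

Given, for example, C3D/ResNet features of shape (batch, frames, 2048) and semantic features of shape (batch, steps, 512), the decoder returns per-word vocabulary logits that can be trained with cross-entropy under teacher forcing.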
ISSN: 1024-123X, 1563-5147