Context Aware Video Caption Generation with Consecutive Differentiable Neural Computer
Recent video captioning models aim to describe all events in a long video. However, their event descriptions do not fully exploit the contextual information in the video because they lack the ability to remember how information changes over time. To address this problem, we propose a novel cont...
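The abstract points to a differentiable neural computer (DNC), whose external memory lets the captioner carry context across events. As a rough illustration only (not the authors' implementation; all names and parameters here are hypothetical), the sketch below shows the two core DNC memory operations the idea rests on: content-based read addressing and an erase-then-add write.

```python
import numpy as np

# Hypothetical sketch of DNC-style external memory operations.
# A memory matrix M of N slots, each a W-dim word, is read by
# content similarity and updated with an erase/add write, which is
# how a DNC can retain context across video events.

def cosine_similarity(key, memory):
    # memory: (N, W) slots; key: (W,) query emitted by the controller
    num = memory @ key
    den = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return num / den

def read(memory, key, beta=5.0):
    # Content-based addressing: softmax over similarities,
    # sharpened by the key strength beta.
    sim = beta * cosine_similarity(key, memory)
    w = np.exp(sim - sim.max())
    w /= w.sum()
    return w @ memory, w  # read vector (W,) and read weights (N,)

def write(memory, w, erase, add):
    # DNC-style write: erase a fraction of each slot, then add
    # new content, both weighted by the write weights w.
    memory = memory * (1.0 - np.outer(w, erase))
    return memory + np.outer(w, add)

if __name__ == "__main__":
    N, W = 8, 16  # number of slots, word size (illustrative values)
    rng = np.random.default_rng(0)
    M = rng.normal(size=(N, W))
    key = rng.normal(size=W)
    r, w = read(M, key)
    M = write(M, w, erase=np.full(W, 0.5), add=key)
    print(r.shape, M.shape)  # (16,) (8, 16)
```

In a full model, the key, erase, and add vectors would be produced by a learned controller at each timestep; this sketch only fixes them to random values to show the memory mechanics.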
| Main Authors: | Jonghong Kim, Inchul Choi, Minho Lee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-07-01 |
| Series: | Electronics |
| Online Access: | https://www.mdpi.com/2079-9292/9/7/1162 |
Similar Items
- CaptionNet: Automatic End-to-End Siamese Difference Captioning Model With Attention
  by: Ariyo Oluwasanmi, et al.
  Published: (2019-01-01)
- Action Recognition in Video Sequences using Deep Bi-Directional LSTM With CNN Features
  by: Amin Ullah, et al.
  Published: (2018-01-01)
- Fully Convolutional CaptionNet: Siamese Difference Captioning Attention Model
  by: Ariyo Oluwasanmi, et al.
  Published: (2019-01-01)
- Deep Recurrent Neural Networks for Human Activity Recognition
  by: Abdulmajid Murad, et al.
  Published: (2017-11-01)
- Speech Emotion Recognition Using Deep Learning Techniques: A Review
  by: Ruhul Amin Khalil, et al.
  Published: (2019-01-01)