Multi-Modal Learning over User-Contributed Content from Cross-Domain Social Media
PhD === National Taiwan University === Graduate Institute of Networking and Multimedia === 104 === Social media have changed the world and our lives. Every day, millions of media items are uploaded to social-sharing websites. The goal of this research is to discover and summarize large amounts of media data from emerging social media into information...
Main Authors: | Wen-Yu Lee, 李文瑜 |
---|---|
Other Authors: | 徐宏民 |
Format: | Others |
Language: | en_US |
Published: | 2016 |
Online Access: | http://ndltd.ncl.edu.tw/handle/98114821102121233466 |
Similar Items
- Location Classification on Social Media by Multi-Modality Engagement
  by: Wen-Feng Cheng, et al.
  Published: (2016)
- From Vision to Content: Construction of Domain-Specific Multi-Modal Knowledge Graph
  by: Xiaoming Zhang, et al.
  Published: (2019-01-01)
- The Effective Multi-user and Multi-media Interaction
  by: Lee Chun Yen, et al.
  Published: (2004)
- Multi-modal Dialogue User Interface for A Personal Photo Retrieval System
  by: Hsiu-Wen Hsueh, et al.
  Published: (2012)
- Adaptive cross-fusion learning for multi-modal gesture recognition
  by: Benjia Zhou, et al.
  Published: (2021-06-01)