Summary: | The high speed and low latency of 5G mobile networks have accelerated the speed and volume of information transmission. Web video is likely to become the main medium of news production and dissemination in the future, owing to its richer information content and more convenient dissemination, which will disrupt the traditional mode of event mining. Event mining based on web videos has therefore become a new research hotspot. However, web videos are susceptible to video editing, lighting, shooting perspective, shooting angle, and other factors, which makes visual similarity detection inaccurate. In general, effectively integrating large volumes of cross-modal information would help greatly. However, web videos are typically described with only a few terms, so sparse textual information poses a challenge for cross-modal information fusion. To address this issue, this paper proposes a new collaborative optimization framework that combines inaccurate visual similarity detection information with sparse textual information. The framework consists of three steps. First, after computing the distribution statistics of each word across all Near-Duplicate Keyframes (NDKs), the high-level semantic correlations between NDKs are mined with the help of textual features, forming a new set of semantically related NDKs with different visual expressions. Next, textual distribution features are enriched by finding more semantically related words through this new NDK set with its varied visual expressions, alleviating the sparse distribution problem of each word across all NDKs. Finally, Multiple Correspondence Analysis (MCA) is applied to mine the events. Experimental results on a large amount of real-world data demonstrate that the proposed model outperforms existing methods for web video event mining.
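The summary only names the three steps, so a compact sketch may help make them concrete. The Python sketch below is an illustrative assumption, not the authors' implementation: it groups NDKs by cosine similarity of their word distributions (standing in for step 1's semantic cross-correlation mining), pools word counts within each group to enrich sparse text features (step 2), and applies a standard correspondence-analysis embedding followed by k-means as a stand-in for the MCA-based event mining (step 3). The similarity threshold, feature definitions, and clustering choice are all hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def semantic_ndk_groups(X, sim_threshold=0.5):
    """Group NDKs whose textual word distributions are strongly correlated.

    X: (n_ndks, n_words) matrix of word occurrences per NDK.
    Returns, for each NDK, the indices of its semantically related NDKs.
    Threshold 0.5 is an assumed value, not from the paper.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Xn = X / norms
    sim = Xn @ Xn.T                          # cosine similarity between NDKs
    return [np.where(sim[i] >= sim_threshold)[0] for i in range(len(X))]

def enrich_text_features(X, groups):
    """Enrich each NDK's sparse word distribution with words from its group."""
    X_enriched = np.zeros_like(X, dtype=float)
    for i, idx in enumerate(groups):
        X_enriched[i] = X[idx].sum(axis=0)   # pool word counts over the group
    return X_enriched

def mca_event_mining(X, n_components=10, n_events=5, seed=0):
    """Correspondence-analysis embedding of the word-NDK matrix, then k-means.

    Follows the standard CA/MCA recipe: SVD of the standardized residuals
    of the normalized contingency table; k-means over the row coordinates
    is an assumed stand-in for the paper's event-assignment step.
    """
    P = X / X.sum()
    r = P.sum(axis=1)                        # row (NDK) masses
    c = P.sum(axis=0)                        # column (word) masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c) + 1e-12)
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    coords = (U[:, :n_components] * sigma[:n_components]) / np.sqrt(r + 1e-12)[:, None]
    return KMeans(n_clusters=n_events, n_init=10, random_state=seed).fit_predict(coords)

# Toy usage on a synthetic sparse word-NDK matrix.
rng = np.random.default_rng(0)
X = (rng.random((100, 300)) < 0.03).astype(float)
labels = mca_event_mining(enrich_text_features(X, semantic_ndk_groups(X)))
```

The design choice here is that enrichment happens before the MCA embedding, so the low-rank event structure is computed from the densified word distributions rather than the original sparse ones, which mirrors the ordering of the three steps described in the summary.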