LEADER 01780 am a22001813u 4500
001 143615
042    |a dc
100 10 |a Lin, Ji |e author
700 10 |a Gan, Chuang |e author
700 10 |a Han, Song |e author
245 00 |a TSM: Temporal Shift Module for Efficient Video Understanding
260    |b IEEE, |c 2022-06-30T17:26:01Z.
856    |z Get fulltext |u https://hdl.handle.net/1721.1/143615
520    |a © 2019 IEEE. The explosive growth in video streaming gives rise to challenges in performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNNs while maintaining a 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time, low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves low latencies of 13 ms and 35 ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module.
546    |a en
655 7  |a Article
773    |t 10.1109/ICCV.2019.00718
773    |t Proceedings of the IEEE International Conference on Computer Vision
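
Note: the 520 abstract above describes TSM's core operation, shifting a fraction of the feature channels forward or backward along the temporal dimension at zero parameter and computation cost. The following is a minimal, hypothetical PyTorch sketch of that channel-shift idea, not the authors' released implementation; the tensor layout [batch, time, channels, height, width], the function name temporal_shift, and the shifted fraction (1/8 of the channels in each direction) are illustrative assumptions.

import torch

def temporal_shift(x, fold_div=8):
    # x: video features of (assumed) shape [batch, time, channels, height, width]
    n, t, c, h, w = x.size()
    fold = c // fold_div                               # fraction of channels shifted each way (assumption)
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]               # first fold: pull features from the next frame
    out[:, 1:, fold:2*fold] = x[:, :-1, fold:2*fold]   # second fold: pull features from the previous frame
    out[:, :, 2*fold:] = x[:, :, 2*fold:]              # remaining channels are left in place
    return out                                         # no learnable parameters, only memory movement

# Usage example: shift an 8-frame clip of per-frame features, keeping the shape unchanged
# so a 2D CNN layer can follow as usual.
x = torch.randn(2, 8, 64, 56, 56)
y = temporal_shift(x)
assert y.shape == x.shape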