Video Compression with Advanced Motion Estimation and Residue Encoding Methods

Bibliographic Details
Main Authors: Hung-Yi Chen, 陳宏毅
Other Authors: Jian-Jiun Ding
Format: Others
Language: en_US
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/yp3dcd
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Communication Engineering === 105 ===

Nowadays, image and video services play essential roles in human life. Whether in online video streaming services such as YouTube or video storage media such as Blu-ray Discs, image and video compression techniques have become extremely important. With the rapid growth of video applications, the demand for novel video applications keeps increasing. For instance, many movies and online games aim to provide better visual enjoyment. Japan, the host country of the 2020 Olympics, has set 8K UHD (4320p) as the resolution of its live broadcast services. Based on these trends, we can anticipate that people will have better visual experiences in the future, but in return, larger images and video sequences must be processed within limited time. Providing applications with high efficiency in data storage and processing is therefore urgent.

Temporal prediction coding is a crucial part of video compression standards, and motion estimation is the essential function within temporal prediction coding. Widely adopted video coding standards such as MPEG-4 and H.264/AVC do not define the detailed implementation of motion estimation, so many methods obtain the best block-matching results with the full search algorithm. In this thesis, we first propose an efficient search algorithm that provides near-optimal motion estimation results at extremely low search cost. It has two key features: expanding the region of support and introducing a decimation lattice to realize the low-cost search. Comparisons with previous methods show that the proposed algorithm outperforms other algorithms in both matching accuracy and search cost.

Some efficient motion estimation methods first transform the image frames into binary planes with the one-bit transform and then perform feature-based motion estimation to cut down the computational complexity. In the second part of the thesis, we propose a weighted block-matching criterion and combine it with the proposed fast search algorithm to pursue matching results with higher accuracy. Based on the comparisons, the proposed algorithm has the highest matching accuracy and the lowest search cost on average.

Entropy coders with higher coding efficiency are also an important subject in image and video compression. The H.264/AVC baseline profile adopts context-based adaptive variable length coding (CAVLC) and exponential-Golomb coding as its entropy coders for residual coding. In this thesis, we follow the architecture of CAVLC but improve some critical parts to achieve higher coding performance. In addition, we propose an improved adaptive arithmetic coding scheme to encode data such as motion-compensation residuals and motion vector differences.

Last but not least, video compression standards from H.263 onward (including the later H.264/AVC and HEVC) all introduce sub-pixel techniques to capture fine motion. However, to implement sub-pixel motion estimation, the motion estimation accuracy and the corresponding interpolation filter must be defined in advance, which lacks flexibility. To overcome this limitation, we introduce optical flow to carry out the motion estimation procedure, which provides two benefits: the interpolation filter can be discarded, and the sub-pixel accuracy can be tuned flexibly across different scales.
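
The abstract does not spell out the block-matching procedure, so the following is a minimal sketch, not the thesis implementation, of full-search block matching with the sum of absolute differences (SAD) criterion; the `step` parameter only illustrates how a decimated search lattice reduces the number of candidate positions examined. All names and parameters here are illustrative.

```python
# Illustrative sketch (not the thesis algorithm): full-search block matching
# with SAD, plus a decimated search lattice (step=2) that visits roughly a
# quarter of the candidate positions.
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum()

def block_match(cur, ref, top, left, block=16, radius=8, step=1):
    """Return the motion vector (dy, dx) minimizing SAD inside a search window.

    step=1 is the exhaustive full search; step=2 searches a decimated lattice.
    """
    h, w = ref.shape
    target = cur[top:top + block, left:left + block]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the reference frame
            cost = sad(target, ref[y:y + block, x:x + block])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost

# Toy usage: a frame shifted by (2, 4) pixels yields the motion vector (-2, -4).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 4), axis=(0, 1))
print(block_match(cur, ref, top=16, left=16, step=1))  # full search
print(block_match(cur, ref, top=16, left=16, step=2))  # decimated lattice
```

The sketch only conveys the cost trade-off the abstract refers to: the lattice search evaluates far fewer candidates while still finding a good (here, the same) match.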
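
As a hedged illustration of the one-bit-transform matching mentioned in the second part, the sketch below binarizes frames against a local mean and scores candidate blocks by the number of non-matching points; the optional weighting is only a placeholder and is not the weighted criterion proposed in the thesis.

```python
# Illustrative sketch of one-bit-transform (1BT) based matching: frames are
# binarized against a local mean, and candidates are scored by the number of
# non-matching points (NNMP) via XOR. The weighting below is a placeholder.
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame, k=8):
    """Binary plane: 1 where the pixel exceeds its k-by-k local mean."""
    local_mean = uniform_filter(frame.astype(np.float64), size=k)
    return (frame > local_mean).astype(np.uint8)

def weighted_nnmp(bin_cur, bin_ref, weights=None):
    """Count mismatching bits between two binary blocks, optionally weighted."""
    mismatch = np.bitwise_xor(bin_cur, bin_ref)
    if weights is None:
        return int(mismatch.sum())
    return float((mismatch * weights).sum())

# Toy usage: binarize two frames and score an aligned candidate block (score 0).
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(0, 4), axis=(0, 1))
b_ref, b_cur = one_bit_transform(ref), one_bit_transform(cur)
print(weighted_nnmp(b_cur[16:32, 16:32], b_ref[16:32, 12:28]))
```

Because the cost is a bit count rather than a pixel-wise difference, the matching step reduces to XOR and popcount-style operations, which is where the complexity saving comes from.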
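
Exponential-Golomb coding, mentioned alongside CAVLC above, can be sketched compactly. The code below implements the standard order-0 code and the usual signed mapping applied to values such as motion vector differences; it is not the improved entropy coder proposed in the thesis.

```python
# Illustrative sketch of order-0 exponential-Golomb coding as used for H.264/AVC
# syntax elements; the signed mapping follows the zig-zag order 0, 1, -1, 2, -2, ...
def exp_golomb_unsigned(n):
    """Encode a non-negative integer as an order-0 exp-Golomb bit string."""
    binary = bin(n + 1)[2:]                 # binary of n+1, without '0b'
    return "0" * (len(binary) - 1) + binary # leading zeros, then the value

def exp_golomb_signed(v):
    """Map a signed value to a code number, then encode it."""
    code_num = 2 * abs(v) - (1 if v > 0 else 0)
    return exp_golomb_unsigned(code_num)

# Example: encode a few motion vector differences.
for mvd in (0, 1, -1, 3, -7):
    print(mvd, exp_golomb_signed(mvd))
# 0 -> '1', 1 -> '010', -1 -> '011', 3 -> '00110', -7 -> '0001111'
```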
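
Finally, as a rough sketch of how optical flow can yield sub-pixel motion without a predefined interpolation filter, the function below performs a single Lucas-Kanade least-squares step for one block. This is a generic stand-in under my own assumptions, not the multi-scale procedure developed in the thesis.

```python
# Illustrative sketch: one Lucas-Kanade least-squares step gives a fractional
# (sub-pixel) displacement for a block directly from image gradients, with no
# interpolated reference frame. Assumes the block is textured enough that the
# 2x2 system is non-singular.
import numpy as np

def lucas_kanade_block(prev, cur, top, left, block=16):
    """Solve the 2x2 optical-flow normal equations for one block."""
    p = prev.astype(np.float64)
    c = cur.astype(np.float64)
    ys = slice(top, top + block)
    xs = slice(left, left + block)
    ix = np.gradient(p, axis=1)[ys, xs]   # horizontal spatial gradient
    iy = np.gradient(p, axis=0)[ys, xs]   # vertical spatial gradient
    it = (c - p)[ys, xs]                  # temporal difference
    a = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    u, v = np.linalg.solve(a, b)          # sub-pixel displacement (x, y)
    return u, v
```

For textured blocks this returns a fractional displacement directly, which is the flexibility the abstract attributes to flow-based motion estimation: no interpolation filter is fixed in advance, and the accuracy is not tied to a predefined sub-pixel grid.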