Summary: | Ph.D. === National Yunlin University of Science and Technology === Doctoral Program, Graduate School of Engineering Science and Technology === 97 === The current NTSC system uses interlaced scanning to display video sequences. This technique creates undesirable visual artifacts such as line flicker, twitter, and crawl. It is also unsuitable for devices such as LCD displays, personal computer monitors, and HDTV sets, which require a progressive scan format. Video de-interlacing techniques, which convert interlaced images into progressive images, are therefore important today for improving display quality. In this dissertation, we propose two video de-interlacing methods that reduce jagged edges, blurring, and other artifacts and thereby improve picture quality. First, a motion-adaptive de-interlacing method with horizontal and vertical motion detection is presented. In this method, de-interlacing begins with object motion detection, which ensures that inter-field information is used only where it is reliable and thus improves the visual quality of the results. Second, an efficient video de-interlacing technique with scene change detection, together with its VLSI architecture, is proposed. Most de-interlacing techniques can improve visual quality; nevertheless, their performance is seriously degraded by scene changes. To improve de-interlacing quality, scene changes are taken into account when the de-interlacing technique is applied. Based on this approach, a high-performance VLSI architecture has been designed, verified on an FPGA, and implemented with a UMC 0.18 μm CMOS standard-cell library.
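The following is a minimal software sketch of the motion-adaptive, scene-change-aware de-interlacing idea described above, not the dissertation's actual detector or VLSI architecture. It simplifies the horizontal and vertical motion detection to a per-pixel difference between the neighbouring opposite-parity fields, and the scene change detector to a mean-absolute-difference threshold over the whole field; the names and thresholds (deinterlace_top_field, MOTION_THRESH, SCENE_CUT_THRESH) are illustrative assumptions.

```python
import numpy as np

MOTION_THRESH = 10        # assumed per-pixel motion threshold (8-bit intensities)
SCENE_CUT_THRESH = 30.0   # assumed mean field-difference threshold for a scene cut

def deinterlace_top_field(curr, prev, nxt):
    """Rebuild a full frame from a top field `curr` (shape h x w, uint8).

    `prev` and `nxt` are the neighbouring bottom fields: they hold the missing
    lines and drive the motion and scene-change decisions.
    """
    h, w = curr.shape
    frame = np.empty((2 * h, w), dtype=np.uint8)
    frame[0::2, :] = curr                                    # copy the lines we already have

    diff = np.abs(nxt.astype(np.int16) - prev.astype(np.int16))
    scene_cut = diff.mean() > SCENE_CUT_THRESH               # inter-field data unreliable across a cut

    for y in range(h):
        below = curr[min(y + 1, h - 1), :].astype(np.int16)
        spatial = ((curr[y, :].astype(np.int16) + below) // 2).astype(np.uint8)   # intra-field line average
        temporal = ((prev[y, :].astype(np.int16) + nxt[y, :].astype(np.int16)) // 2).astype(np.uint8)
        if scene_cut:
            interp = spatial                                  # discard inter-field information entirely
        else:
            moving = diff[y, :] > MOTION_THRESH
            interp = np.where(moving, spatial, temporal)      # motion-adaptive selection per pixel
        frame[2 * y + 1, :] = interp
    return frame
```

The sketch reflects the core design choice in the abstract: inter-field (temporal) pixels are used only where no motion is detected, and they are abandoned altogether for a field flagged as a scene change, since the neighbouring fields then belong to different shots.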
As progressive video displays become more popular, techniques for improving the resolution of digital displays become increasingly important. Converting images to different resolutions while maintaining high quality and low operation cost therefore becomes a significant issue. This dissertation presents two image interpolation approaches for image scaling. First, a high-speed architecture for bi-cubic convolution interpolation is introduced to reduce both the computational complexity of generating the weighting coefficients and the number of memory accesses. It further minimizes the error propagation that results from fraction truncation when pixel coordinates are calculated with fixed-point operations; such error propagation significantly degrades the output image quality of hardware interpolators. To avoid this accumulation of inaccuracy, a simple periodic compensation technique is presented that significantly improves the average Root-Mean-Square Error (RMSE). In terms of hardware cost, the presented architecture saves about 50% compared with the latest bi-cubic hardware design. The architecture was implemented on a Virtex-II FPGA, and the high-speed VLSI design was successfully implemented with a TSMC 0.13 μm standard-cell library. Second, a novel image interpolation method, extended linear interpolation, is presented. It is an efficient method whose interpolation quality is comparable to that of bi-cubic convolution interpolation. Based on this approach, an efficient hardware architecture was designed to meet real-time requirements. Compared with the latest bi-cubic hardware design, this architecture saves about 60% of the hardware cost. It was also implemented on a Virtex-II FPGA, and the high-speed VLSI design was successfully implemented with a TSMC 0.13 μm standard-cell library.
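The following is a minimal software sketch of the first idea only: bi-cubic convolution interpolation of one image row, with the source coordinate accumulated in fixed point and periodically reloaded from an exact computation so that truncation error cannot propagate along the line. The fraction width, compensation period, Keys kernel parameter (a = -0.5), and function names (cubic_weights, scale_row, FRAC_BITS, COMP_PERIOD) are assumptions for illustration; the dissertation's actual hardware architecture and its extended linear interpolation method are not reproduced here.

```python
import numpy as np

FRAC_BITS = 8                 # assumed fixed-point fraction width
ONE = 1 << FRAC_BITS
COMP_PERIOD = 16              # assumed compensation period, in output pixels

def cubic_weights(t):
    """Bi-cubic convolution weights (Keys kernel, a = -0.5) for fraction t in [0, 1)."""
    a = -0.5
    d = np.array([1.0 + t, t, 1.0 - t, 2.0 - t])          # distances to the 4 taps
    return np.where(d <= 1.0,
                    (a + 2) * d**3 - (a + 3) * d**2 + 1,
                    a * d**3 - 5 * a * d**2 + 8 * a * d - 4 * a)

def scale_row(row, out_len):
    """Scale one uint8 image row to out_len samples with bi-cubic interpolation,
    accumulating the source coordinate in fixed point and periodically reloading
    the exact value so truncation error does not accumulate across the line."""
    in_len = len(row)
    step = (in_len << FRAC_BITS) // out_len               # truncated fixed-point step
    padded = np.pad(row.astype(np.float64), (1, 2), mode='edge')   # taps at x-1 .. x+2
    out = np.empty(out_len, dtype=np.uint8)
    coord = 0
    for i in range(out_len):
        if i % COMP_PERIOD == 0:
            coord = (i * (in_len << FRAC_BITS)) // out_len          # periodic compensation
        x, t = coord >> FRAC_BITS, (coord & (ONE - 1)) / ONE
        val = float(np.dot(cubic_weights(t), padded[x:x + 4]))
        out[i] = int(np.clip(round(val), 0, 255))
        coord += step
    return out
```

Without the reload inside the loop, the per-pixel truncation of `step` would bias the source coordinate further and further as the row is traversed, which is exactly the error-propagation problem the periodic compensation in the abstract addresses.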
|