PERCEPTUALLY TUNED DPCM CODING OF COLOR IMAGES



Bibliographic Details
Main Authors: Chien-Ming Lei, 雷建明
Other Authors: Prof. Chun-Hsien Chou
Format: Others
Language: zh-TW
Published: 2003
Online Access: http://ndltd.ncl.edu.tw/handle/65258429386652024164
Description
Summary: Master's thesis === Tatung University === Graduate Institute of Electrical Engineering === Academic year 91 === Driven by a growing demand for the transmission of visual data over media with limited capacity, increasing effort has been devoted to strengthening compression techniques while maintaining good visual quality of the compressed image through a human visual model. JPEG-LS is the new ISO/ITU standard for lossless and near-lossless still-image compression, but it does not exploit the characteristics of the human visual model to achieve perceptual image compression. To address this problem, we propose a codec based on a human visual model, DPCM (Differential Pulse Code Modulation), and JPEG-LS. The focus of this thesis is on developing an image codec that maintains good visual quality of the compressed image and does not require any quantization-related overhead information to be transmitted to the decoder. First, the image is transformed from the RGB color space to the YCbCr color space; the JND (Just-Noticeable Distortion) is then obtained causally from the human visual model, the predicted pixel, and the previously reconstructed pixels. The coding mode is determined according to the JND: when the JND is smaller than or equal to a preset threshold, the target pixel is reconstructed by copying the previous reconstructed pixel; otherwise, the codec enters the standard DPCM coding mode, in which the JND serves as the quantization step size and the quantization index is coded with the Golomb-Rice coding algorithm. Because the JND is estimated from the surroundings of each target pixel, locally adaptive perceptual quantization is achieved and no quantization-related overhead information needs to be transmitted to the decoder.
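The abstract outlines the coding pipeline (RGB-to-YCbCr transform, causal JND estimation, JND-driven mode switching, DPCM quantization with the JND as step size, Golomb-Rice coding of the indices) but not its details. The following is a minimal Python sketch of those ideas under stated assumptions: the function names, the one-bit mode flag, the copy_threshold value, and the fixed Rice parameter k are illustrative choices, not the thesis's actual implementation, and the JND map is supplied as an input array rather than computed from the visual model described in the thesis.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # One common BT.601-style RGB -> YCbCr conversion (full range, offset chroma)
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0
    return ycbcr

def golomb_rice_encode(n, k):
    # Unary-coded quotient followed by k remainder bits, for a non-negative integer n
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, f'0{k}b') if k > 0 else '')

def encode_channel(chan, jnd_map, copy_threshold=1.0, k=2):
    # Toy JND-guided DPCM encoder for one channel with previous-pixel prediction
    h, w = chan.shape
    recon = np.zeros((h, w))
    bits = []
    for i in range(h):
        for j in range(w):
            # Prediction from the left neighbour, falling back to the pixel
            # above on the first column and to 128 at the very first pixel
            if j > 0:
                pred = recon[i, j - 1]
            elif i > 0:
                pred = recon[i - 1, j]
            else:
                pred = 128.0
            jnd = jnd_map[i, j]
            if jnd <= copy_threshold:
                # Copy mode, as stated in the abstract: reuse the previous
                # reconstructed pixel; one flag bit marks the mode (assumption)
                recon[i, j] = pred
                bits.append('0')
            else:
                # DPCM mode: the JND acts as the quantization step size
                q_index = int(round((chan[i, j] - pred) / jnd))
                recon[i, j] = pred + q_index * jnd
                # Fold the signed index into a non-negative integer and
                # Golomb-Rice code it (k fixed here for simplicity)
                mapped = 2 * q_index if q_index >= 0 else -2 * q_index - 1
                bits.append('1' + golomb_rice_encode(mapped, k))
    return ''.join(bits), recon

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    rgb = rng.integers(0, 256, size=(8, 8, 3)).astype(float)
    y = rgb_to_ycbcr(rgb)[..., 0]
    jnd = np.full(y.shape, 3.0)   # placeholder; the thesis estimates this causally per pixel
    bitstream, recon = encode_channel(y, jnd)
    print(len(bitstream), float(np.max(np.abs(y - recon))))
```

Because the JND map here is an explicit input, a matching decoder would need the same array; in the design described above the JND is recomputed causally from already reconstructed pixels at both encoder and decoder, which is precisely what removes the need to transmit any quantization-related side information.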