Adaptive Image Watermarking Based on Human Visual System in Frequency Domain

Master's === Tunghai University === Department of Information Science === 89 === Due to the prevalence of the Internet, information of all kinds is being digitized rapidly and can be accessed easily. People can reproduce and manipulate these digital data without giving appropriate credit to the owner. Therefore, how to protect data on the Internet is one of the...

Full description

Bibliographic Details
Main Author: 柯朝輝
Other Authors: 蔡清欉
Format: Others
Language: zh-TW
Published: 2001
Online Access: http://ndltd.ncl.edu.tw/handle/56318367008984243354
id ndltd-TW-089THU00394010
record_format oai_dc
spelling ndltd-TW-089THU003940102015-10-13T12:10:00Z http://ndltd.ncl.edu.tw/handle/56318367008984243354 Adaptive Image Watermarking Based on Human Visual System in Frequency Domain 頻域上基於人類視覺系統之適應性影像浮水印技術 柯朝輝 Master's, Tunghai University, Department of Information Science, 89. The abstract is given in the description field below. 蔡清欉 2001 Thesis 134 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === Tunghai University === Department of Information Science === 89 === Due to the prevalence of the Internet, information of all kinds is being digitized rapidly and can be accessed easily. People can reproduce and manipulate these digital data without giving appropriate credit to the owner. Therefore, how to protect data on the Internet is one of the important issues owners must face. One promising solution for the copyright protection of digital images is the so-called watermarking technique, which hides an invisible signature or code in a digital image to identify the owner or recipient. Current watermarking schemes can be classified into two categories: spatial-domain approaches and frequency-domain approaches. Although frequency-domain techniques are robust against various signal-processing attacks, it is difficult for them to evaluate visual imperceptibility. To overcome this problem, a simulated attack is used to model signal-processing operations that modify the grayscale values of the image in the spatial domain. First, a just-noticeable-distortion (JND) threshold derived from a human visual model determines the maximal intensity of the simulated attack. The attacked image is transformed to the frequency domain with the discrete cosine transform (DCT), and the resulting change in each frequency component is measured. This change gives the maximal intensity, that is, the capacity of watermark information that each DCT coefficient can carry. The choice of DCT coefficients for embedding depends on three factors: the frequency position, the magnitude of the DCT coefficient, and the amount of information to be embedded. Secondly, a dynamic watermarking technique is considered. The original image is divided into non-overlapping blocks and a content feature is computed for each block; according to each block's content feature, the maximal amount of watermark information for that block is embedded into the original image. A multi-watermarking technique is another main contribution of this thesis. We propose a rule of complementary correction that overcomes a drawback of single-watermark techniques, in which the similarity of the extracted watermark depends on how much of the image has been cropped. The reliability of the extracted watermark is analyzed from the characteristic changes of the coefficients caused by signal-processing operations, and a damaged watermark group is then corrected with the help of the other groups, so the embedded watermark can still be extracted even after a cropping attack removes part of the image. In summary, our watermarking scheme applies a human visual system model in the frequency domain and ensures that the watermark embedded in an image is invisible in the spatial domain. In addition, we propose a location strength model and the rule of complementary correction to make the embedded watermark highly robust against intentional attacks. Experimental results show that the proposed scheme is robust against signal-processing attacks such as JPEG compression, cropping, noise addition, and blurring. Illustrative sketches of the embedding and correction steps follow this record.
author2 蔡清欉
author_facet 蔡清欉
柯朝輝
author 柯朝輝
spellingShingle 柯朝輝
Adaptive Image Watermarking Based on Human Visual System in Frequency Domain
author_sort 柯朝輝
title Adaptive Image Watermarking Based on Human Visual System in Frequency Domain
title_short Adaptive Image Watermarking Based on Human Visual System in Frequency Domain
title_full Adaptive Image Watermarking Based on Human Visual System in Frequency Domain
title_fullStr Adaptive Image Watermarking Based on Human Visual System in Frequency Domain
title_full_unstemmed Adaptive Image Watermarking Based on Human Visual System in Frequency Domain
title_sort adaptive image watermarking based on human visual system in frequency domain
publishDate 2001
url http://ndltd.ncl.edu.tw/handle/56318367008984243354
work_keys_str_mv AT kēcháohuī adaptiveimagewatermarkingbasedonhumanvisualsysteminfrequencydomain
AT kēcháohuī pínyùshàngjīyúrénlèishìjuéxìtǒngzhīshìyīngxìngyǐngxiàngfúshuǐyìnjìshù
_version_ 1716854478352678912
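The description above outlines the embedding side of the scheme: a just-noticeable-distortion (JND) threshold from a human visual model bounds how strongly each DCT coefficient may be changed, and the watermark is placed in coefficients chosen by frequency position, coefficient magnitude, and payload size. The thesis's actual JND model, simulated-attack procedure, and coefficient-selection rule are not reproduced in this record, so the following is only a minimal Python sketch of the general idea under assumed choices: a crude brightness-based step size stands in for the HVS model, a fixed list of mid-frequency positions stands in for the selection rule, and `jnd_step`, `embed_block`, and `extract_block` are hypothetical helper names.

```python
# Illustrative sketch only, not the thesis's implementation: embed watermark
# bits into mid-frequency DCT coefficients of an 8x8 block, with the change
# bounded by a crude JND-like step size.
import numpy as np
from scipy.fft import dctn, idctn

# Assumed mid-frequency positions, one embedded bit per coefficient.
MID_FREQ = [(2, 1), (1, 2), (3, 0), (0, 3), (2, 2)]

def jnd_step(block):
    """Crude JND-like step from block brightness, standing in for the HVS
    model. It depends only on the block mean (the DC term), which the
    mid-frequency embedding below leaves untouched, so the extractor can
    recompute the same step."""
    return 2.0 + 6.0 * (block.mean() / 255.0)

def embed_block(block, bits):
    """Embed up to len(MID_FREQ) bits by forcing the parity of the quantized
    mid-frequency DCT coefficients (quantization step = JND step)."""
    step = jnd_step(block)
    coeffs = dctn(block.astype(float), norm="ortho")
    for (u, v), bit in zip(MID_FREQ, bits):
        q = int(np.round(coeffs[u, v] / step))
        if q % 2 != bit:                      # flip parity to encode the bit
            q += 1 if coeffs[u, v] > q * step else -1
        coeffs[u, v] = q * step
    # Clipping back to the pixel range can perturb the block slightly; a real
    # scheme must keep the step robust to such perturbations.
    return np.clip(idctn(coeffs, norm="ortho"), 0, 255)

def extract_block(block, n_bits):
    """Read bits back from the parity of the quantized coefficients."""
    step = jnd_step(block)
    coeffs = dctn(block.astype(float), norm="ortho")
    return [int(np.round(coeffs[u, v] / step)) % 2
            for (u, v) in MID_FREQ[:n_bits]]

# Toy usage on a random 8x8 block.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
marked = embed_block(block, [1, 0, 1, 1, 0])
print(extract_block(marked, 5))   # usually recovers [1, 0, 1, 1, 0]
```

Parity-based quantization is used here only because it makes the per-coefficient capacity bound explicit; the thesis's location strength model may select and modulate the coefficients quite differently.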
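The abstract also describes a rule of complementary correction for the multi-watermarking part: the same watermark is embedded into several block groups, and a group damaged by cropping is corrected with the help of the surviving groups. The exact correction rule, which the thesis derives from how the coefficients change under signal-processing operations, is not given in this record; the sketch below therefore substitutes a plain bitwise majority vote across redundant copies, and `complementary_correction`, `copies`, and `valid` are assumed names used only for illustration.

```python
# Illustrative stand-in for the "rule of complementary correction": combine
# several redundantly embedded watermark copies, ignoring groups lost to
# cropping, by a bitwise majority vote (an assumed rule, not the thesis's).
import numpy as np

def complementary_correction(copies, valid):
    """copies: (n_groups, n_bits) 0/1 array, one extracted watermark per
    redundant group; valid: (n_groups,) bool mask of groups whose blocks
    survived cropping. Returns a single corrected n_bits watermark."""
    copies = np.asarray(copies)
    valid = np.asarray(valid, dtype=bool)
    votes = copies[valid].mean(axis=0)   # fraction of surviving copies voting 1
    return (votes >= 0.5).astype(int)

# Toy usage: three redundant copies; the third group was damaged by cropping.
copies = [[1, 0, 1, 1],
          [1, 0, 0, 1],
          [0, 1, 1, 0]]
valid = [True, True, False]
print(complementary_correction(copies, valid))   # -> [1 0 1 1]
```

A simple vote ignores which coefficients were disturbed and by how much, so it is weaker than the correction rule the abstract claims; it is shown only to make the redundancy-plus-correction idea concrete.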