VLSI System Design and Optimization for Image Coding and Video Processing
Doctoral dissertation === National Taiwan University === Graduate Institute of Electronics Engineering === 95 === Multimedia applications, such as radio, audio, camera phones, digital cameras, camcorders, and mobile broadcast TV, are becoming increasingly popular as image sensor, communication, and VLSI manufacturing technologies and video coding standards make great progress...
Main Authors: | Yu-Wei Chang 張育瑋 |
Other Authors: | 陳良基 |
Format: | Others |
Language: | en_US |
Published: | 2007 |
Online Access: | http://ndltd.ncl.edu.tw/handle/64341007171531617450 |
id |
ndltd-TW-095NTU05428107 |
record_format |
oai_dc |
collection |
NDLTD |
language |
en_US |
format |
Others |
sources |
NDLTD |
description |
Doctoral dissertation === National Taiwan University === Graduate Institute of Electronics Engineering === 95 === Multimedia applications, such as radio, audio, camera phones, digital cameras, camcorders, and mobile broadcast TV, are becoming increasingly popular as image sensor, communication, and VLSI manufacturing technologies and video coding standards make great progress. Efficient system platform design is more important than module design, since system-level improvements have a greater impact on performance, power, and memory bandwidth than module-level improvements. In future embedded platforms, the computing engines will converge into three cores, the central processing unit (CPU), the graphics processing unit (GPU), and the video processing unit (VPU), for processing various kinds of content. Among the many multimedia applications, image and video coding are always attractive. We discuss research topics for the video processing unit in three implementation aspects: dedicated hardware, scalable hardware, and reconfigurable hardware. The design issues for dedicated hardware are how to maximize system throughput, minimize data lifetime, and optimize dataflow. In addition, system-level considerations are important, since the video processing unit is a sub-system within an SoC. Scalable hardware allows design-time adaptation of architecture designs with unified building blocks and design methodologies. Scalability is important for rapid system extension when specifications change. A reconfigurable platform for the video processing unit is required to allow run-time adaptation of the dataflow and of the computational behavior of processing elements and memory systems. In this dissertation, we present three efficient system implementations to demonstrate the improvements and novelties in these three aspects of image and video system implementation.
The first two system implementations are for JPEG 2000 image coding. First, a 124 MSamples/s JPEG 2000 codec is implemented on a 20.1 mm2 die in 0.18 μm CMOS technology, dissipating 385 mW at 1.8 V and 42 MHz. This chip is capable of processing 1920 × 1080 HD video at 30 fps. Previous designs use tile-level pipeline scheduling between the discrete wavelet transform (DWT) and the embedded block coding (EBC). For a tile of size 256×256, this costs 175 KB of on-chip SRAM for architectures using on-chip tile memory, or 310 MBytes/s (MB/s) of SDRAM bandwidth for architectures using off-chip tile memory. We propose level-switched scheduling to minimize the data lifetime in the tile memory. The proposed scheduling eliminates the 175 KB SRAM tile memory for architectures using on-chip tile memory and saves the 310 MB/s memory bandwidth for architectures using off-chip tile memory. With this scheduling, the coefficients between the DWT and the EBC are transferred in a pixel-pipelined dataflow because the tile memory is eliminated. In this dataflow, no buffer is required between the DWT and the EBC: the coefficients generated by the DWT are encoded by the EBC immediately in the encoding flow, and the coefficients decoded by the EBC are inverse-transformed immediately by the DWT in the decoding flow. To enable this scheduling, a level-switched DWT (LS-DWT) and a code-block-switched EBC (CS-EBC) are developed. The LS-DWT and the CS-EBC process multiple code-blocks in multiple subbands in an interleaved manner to eliminate the tile memory. The encoding and decoding functions are implemented on unified hardware with little control-circuit overhead. Hardware sharing between the encoder and the decoder reduces silicon cost by 40%.
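
As a quick sanity check on the figures quoted above (not taken from the dissertation; the 4:2:2 chroma sampling and the derived samples-per-cycle value are assumptions of this sketch), the 1080p30 specification roughly matches the stated 124 MSamples/s throughput:

```python
# Back-of-envelope check of the 124 MSamples/s and 42 MHz figures quoted above.
# Assumption (not stated in the abstract): 4:2:2 chroma subsampling,
# i.e. two samples per pixel on average.
width, height, fps = 1920, 1080, 30
samples_per_pixel = 2            # assumed 4:2:2 format
clock_hz = 42e6                  # operating frequency from the abstract

samples_per_sec = width * height * fps * samples_per_pixel
print(f"required throughput ~ {samples_per_sec / 1e6:.1f} MSamples/s")  # ~124.4
print(f"samples per cycle   ~ {samples_per_sec / clock_hz:.1f}")        # ~3.0
```

Under these assumptions the codec must sustain roughly three samples per clock cycle at 42 MHz.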
The other JPEG 2000 system is a JPEG 2000 codec with a bit-plane scalable EBC architecture. It is implemented on 6.1 mm2 in 0.18 μm CMOS technology, dissipating 180 mW at 1.8 V and 60 MHz. It is capable of processing 78 MSamples/s for lossy coding at 1 bpp and 50 MSamples/s for lossless coding. Four techniques are used to implement this chip. The pre-compression rate-distortion optimization (pre-RDO) determines truncation points before coding to reduce the computation of the embedded block coding (EBC). The dataflow conversion converts the discrete wavelet transform (DWT) coefficients into separate bit-planes, and the embedded compression compresses the data in each bit-plane. These two algorithms reduce the bandwidth of the tile memory, which stores the DWT coefficients, by 40% and 60% at 1 bpp for the encoder and decoder, respectively. The bit-plane parallel context formation algorithm enables the EBC to encode or decode an arbitrary number of bit-planes in parallel. The bit-plane parallel EBC is a scalable architecture, and the number of bit-plane coders in the EBC can be chosen according to the target specification.
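
To illustrate what the dataflow conversion does conceptually (a minimal sketch only, assuming a sign-magnitude representation and an illustrative coefficient width; it is not the chip's actual datapath):

```python
def to_bitplanes(coeffs, num_planes=12):
    """Split quantized DWT coefficients into a sign plane and magnitude
    bit-planes, most significant plane first (conceptual model of the
    dataflow conversion feeding the bit-plane parallel EBC)."""
    signs = [1 if c < 0 else 0 for c in coeffs]
    mags = [abs(c) for c in coeffs]
    # One list per bit-plane; each bit-plane can then be compressed and
    # coded independently, which is what enables bit-plane parallelism.
    planes = [[(m >> p) & 1 for m in mags] for p in range(num_planes - 1, -1, -1)]
    return signs, planes

# Example with four coefficients from one code-block row:
signs, planes = to_bitplanes([5, -3, 0, 12], num_planes=4)
# signs  -> [0, 1, 0, 0]
# planes -> [[0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 0, 0], [1, 1, 0, 0]]
```

Since each plane becomes an independent stream, the number of bit-plane coders instantiated in the EBC can be scaled to the target throughput, as described above.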
We propose an architecture platform design with a reconfigurable memory system. This work also provides an initial solution for the upcoming MPEG reconfigurable video coding (RVC) standard. It allows various reconfigurable engines and dedicated accelerators with different access patterns to access data through a run-time configurable memory system. The reconfigurable memory system contains three hierarchies: the block translation cache, the reconfigurable datapath, and the physical memories. Increasing the number of physical memory banks provides higher internal bandwidth to the reconfigurable datapath. The reconfigurable datapath allows arbitrary parallel 2D access patterns, including row, column, block, and subsample, through run-time reconfiguration. The block translation cache uses one tag entry to represent a block of pixels in a frame. Based on this platform, we implement an H.264 encoder in a 90 nm process capable of processing 1280×720 video at 60 fps at 250 MHz and 1 V. With the reconfigurable memory system, the off-chip bandwidth for motion estimation (ME) can be reduced by 42% and the on-chip buffer for reference pixels can be reduced by 29% compared with Level-C data reuse. Experimental results show that power efficiency is maintained, without a significant increase compared with dedicated hardwired solutions, when the reconfigurable memory system is adopted. This reconfigurable architecture platform is therefore a promising solution for video processing, since power efficiency and performance are maintained even when the reconfigurable approach is adopted.
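
The following toy model (our illustration, not the dissertation's actual bank mapping) shows why a banked physical memory plus a run-time configurable address datapath can serve row, column, block, and subsample patterns in parallel: each 16-pixel access is steered to 16 distinct banks and can therefore complete in a single cycle.

```python
# Toy model of conflict-free parallel 2D access over 16 physical banks.
# The skewed bank mapping and the 4-pixel alignment assumption are
# illustrative choices, not the dissertation's design.
NUM_BANKS = 16

def bank(x, y):
    # Skewed interleaving chosen so that 16-pixel rows, columns, 4x4 blocks
    # and 4x4 stride-2 subsample grids (y aligned to 4) hit distinct banks.
    return (x + 4 * y + y // 4) % NUM_BANKS

def coords(pattern, x0, y0):
    """Pixel coordinates of one 16-pixel parallel access of a given pattern."""
    return {
        "row":       [(x0 + i, y0) for i in range(16)],
        "column":    [(x0, y0 + i) for i in range(16)],
        "block":     [(x0 + i % 4, y0 + i // 4) for i in range(16)],
        "subsample": [(x0 + 2 * (i % 4), y0 + 2 * (i // 4)) for i in range(16)],
    }[pattern]

# Every reconfigured pattern touches all 16 banks, so the 16 pixels can be
# fetched in one cycle instead of 16 serialized single-bank accesses.
for pattern in ("row", "column", "block", "subsample"):
    banks = {bank(x, y) for x, y in coords(pattern, x0=8, y0=4)}
    assert len(banks) == NUM_BANKS, pattern
```

In the platform described above, the block translation cache sits in front of such a datapath and resolves one tag per block of pixels rather than per pixel.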
|
author2 |
陳良基 |
author_facet |
陳良基 Yu-Wei Chang 張育瑋 |
author |
Yu-Wei Chang 張育瑋 |
spellingShingle |
Yu-Wei Chang 張育瑋 VLSI System Design and Optimization for Image Coding and Video Processing |
author_sort |
Yu-Wei Chang |
title |
VLSI System Design and Optimization for Image Coding and Video Processing |
title_short |
VLSI System Design and Optimization for Image Coding and Video Processing |
title_full |
VLSI System Design and Optimization for Image Coding and Video Processing |
title_fullStr |
VLSI System Design and Optimization for Image Coding and Video Processing |
title_full_unstemmed |
VLSI System Design and Optimization for Image Coding and Video Processing |
title_sort |
vlsi system design and optimization for image coding and video processing |
publishDate |
2007 |
url |
http://ndltd.ncl.edu.tw/handle/64341007171531617450 |
work_keys_str_mv |
AT yuweichang vlsisystemdesignandoptimizationforimagecodingandvideoprocessing AT zhāngyùwěi vlsisystemdesignandoptimizationforimagecodingandvideoprocessing AT yuweichang yǐngxiàngbiānmǎyǔshìxùnchùlǐzhījītǐdiànlùxìtǒngshèjìzuìjiāhuà AT zhāngyùwěi yǐngxiàngbiānmǎyǔshìxùnchùlǐzhījītǐdiànlùxìtǒngshèjìzuìjiāhuà |
_version_ |
1718146494658248704 |
spelling |
ndltd-TW-095NTU054281072015-12-07T04:04:29Z http://ndltd.ncl.edu.tw/handle/64341007171531617450 VLSI System Design and Optimization for Image Coding and Video Processing 影像編碼與視訊處理之積體電路系統設計最佳化 Yu-Wei Chang 張育瑋 陳良基 2007 學位論文 ; thesis 187 en_US |