Improved Survivor Memory Unit (SMU) Designs of Viterbi Decoder for Convolutional Coding
碩士 (Master's) === 國立交通大學 (National Chiao Tung University) === 電信工程系所 (Institute of Communication Engineering) === 93
Main Authors: | Guan-Henry Lin 林冠亨 |
---|---|
Other Authors: | Tsern-Huei Lee 李程輝 |
Format: | Others |
Language: | en_US |
Published: | 2005 |
Online Access: | http://ndltd.ncl.edu.tw/handle/23131374462736974366 |
id | ndltd-TW-093NCTU5435006
---|---|
record_format | oai_dc
spelling | ndltd-TW-093NCTU5435006 2016-06-06T04:11:37Z http://ndltd.ncl.edu.tw/handle/23131374462736974366 Improved Survivor Memory Unit (SMU) Designs of Viterbi Decoder for Convolutional Coding 針對迴旋編碼的維特比解碼機中倖存路徑器之改良 (Improvement of the Survivor Path Unit in the Viterbi Decoder for Convolutional Coding) Guan-Henry Lin 林冠亨 Tsern-Huei Lee 李程輝 2005 學位論文 ; thesis 40 en_US
collection | NDLTD
language | en_US
format | Others
sources | NDLTD
description |
碩士 (Master's) === 國立交通大學 (National Chiao Tung University) === 電信工程系所 (Institute of Communication Engineering) === 93 === This thesis proposes three new approaches that remedy the drawbacks of three methods conventionally used to realize the survivor memory unit (SMU) of Viterbi algorithm (VA) convolutional decoders.
The trace-back management (TBM) method has the virtues of low power consumption and small circuit area, but it suffers from long decoding latency. To address this, Stage-Hopping TBM (SH-TBM) is developed: as the constraint length increases, its decoding efficiency approaches that of the register exchange algorithm (REA). In addition, the required memory length can be reduced to as little as about 45% of that originally required by TBM.
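To make the trade-off concrete, the following minimal C sketch shows how a conventional trace-back SMU recovers decoded bits from stored path decisions. It models only the baseline TBM described above, not SH-TBM, and the names used (`NUM_STATES`, `decisions`, `predecessor`, `traceback`) are illustrative assumptions, not identifiers from the thesis.

```c
/*
 * Minimal sketch of a conventional trace-back SMU, assuming a
 * constraint length K = 3 (4-state trellis) and a state convention
 * in which the newest input bit is the MSB of the state. The ACS
 * unit is assumed to have filled decisions[][] already.
 */
#include <stdio.h>

#define K          3
#define NUM_STATES (1 << (K - 1))   /* 4 trellis states */
#define DEPTH      16               /* trace-back depth */

/* decisions[t][s]: the bit shifted out of the survivor path when it
 * entered state s at stage t; it identifies which of the two
 * possible predecessor states survived. */
static unsigned char decisions[DEPTH][NUM_STATES];

/* Step one stage backwards: restore the shifted-out bit d as the
 * oldest bit of the predecessor state. */
static int predecessor(int s, int d)
{
    return ((s << 1) | d) & (NUM_STATES - 1);
}

/* Trace DEPTH stages back from the end state with the best path
 * metric, writing decoded bits oldest-first into out[]. */
static void traceback(int best_state, unsigned char out[DEPTH])
{
    int s = best_state;
    for (int t = DEPTH - 1; t >= 0; --t) {
        out[t] = (unsigned char)(s >> (K - 2)); /* newest bit of s */
        s = predecessor(s, decisions[t][s]);
    }
}

int main(void)
{
    unsigned char out[DEPTH] = {0};
    /* With all-zero decisions and best state 0, the recovered path
     * is the all-zero path through the trellis. */
    traceback(0, out);
    for (int t = 0; t < DEPTH; ++t)
        printf("%u", out[t]);
    printf("\n");
    return 0;
}
```

The decision memory is cheap RAM, but a decoded bit emerges only after a full DEPTH-stage backward pass; this backward-pass latency is what SH-TBM is designed to cut.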
REA achieves the shortest decoding latency, but the numbers of registers and multiplexers it requires cause large power consumption and circuit area. To ameliorate this disadvantage, Facilitated REA (FREA) is proposed, based on the observation that multiple multiplexers can be replaced by a single one without affecting decoding performance.
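For contrast, here is a hedged C sketch of one baseline register exchange step, using the same 4-state trellis convention as the trace-back sketch: every state keeps a register holding the decoded bits of its survivor path, and each stage copies registers between states according to the path decisions. FREA's multiplexer sharing is a circuit-level optimization that is not modeled here; `reg`, `rea_step`, and the decision encoding are illustrative assumptions.

```c
/*
 * Minimal sketch of the register exchange algorithm (REA). Each
 * state owns a DEPTH-bit register holding the decoded bits of its
 * survivor path; one step copies and extends these registers.
 */
#include <stdio.h>
#include <string.h>

#define K          3
#define NUM_STATES (1 << (K - 1))
#define DEPTH      16  /* survivor depth = register length in bits */

static unsigned int reg[NUM_STATES]; /* survivor-path registers */

/* One trellis stage: state s extends the register of its surviving
 * predecessor (selected by decision[s]) with its own newest bit. */
static void rea_step(const unsigned char decision[NUM_STATES])
{
    unsigned int next[NUM_STATES];
    for (int s = 0; s < NUM_STATES; ++s) {
        int pred = ((s << 1) | decision[s]) & (NUM_STATES - 1);
        int bit  = s >> (K - 2);          /* newest input bit of s */
        next[s]  = (reg[pred] << 1) | (unsigned int)bit;
    }
    memcpy(reg, next, sizeof reg);
}

int main(void)
{
    unsigned char decision[NUM_STATES] = {0};
    for (int t = 0; t < DEPTH; ++t)
        rea_step(decision);
    /* The oldest decoded bit of the best state's survivor is read
     * off the MSB end of its register with no trace-back pass. */
    int best = 0; /* a real decoder picks the best-metric state */
    printf("oldest bit = %u\n", (reg[best] >> (DEPTH - 1)) & 1u);
    return 0;
}
```

In hardware, every bit of each `next[s]` update passes through a multiplexer selecting between two predecessor registers; that per-bit multiplexer cost is exactly what FREA attacks by letting several multiplexers be replaced by a single one.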
The Hybrid method was originally designed to balance the trade-off between decoding efficiency and hardware cost by combining TBM and REA. The Improved Hybrid method (IHY) therefore naturally inherits the techniques used in FREA and SH-TBM. As expected, fewer multiplexers are needed (in some cases only one column of multiplexers), and the trace-back operation can be carried out faster.
In all three newly proposed methods, the decoding unit (DU) is eliminated, which yields a further slight increase in speed and reduction in hardware.
In summary, simulation results obtained from a C program show that the proposed methods do not degrade decoding performance. Graphical analyses and comparisons demonstrate the reduction in hardware and the acceleration in decoding efficiency.
author2 | Tsern-Huei Lee
author | Guan-Henry Lin 林冠亨
title | Improved Survivor Memory Unit (SMU) Designs of Viterbi Decoder for Convolutional Coding
publishDate | 2005
url | http://ndltd.ncl.edu.tw/handle/23131374462736974366