A Study of Program Locality for Power-Savings of Memory and Space Reducing of Trace File Compression

Bibliographic Details
Main Author: 顧長榮
Other Authors: 陳青文
Format: Others
Language: en_US
Published: 2014
Online Access: http://ndltd.ncl.edu.tw/handle/43263477718622274139
Description
Summary: Ph.D. === Feng Chia University === Department of Information Engineering === 102 === When an embedded system is designed, power consumption has to be taken carefully into consideration. According to previous works, the memory system accounts for about 45% of the total power consumed by an embedded system; thus, reducing the power consumed by the memory system significantly reduces the power consumption of the whole system. In this dissertation, we present three methods for designing power-aware memory systems based on the locality of running programs. First, we focus on reducing the number of memory accesses. We use shorter code words to encode frequently executed instructions and pack consecutive code words into a single pseudo instruction. Once the decompression engine fetches one pseudo instruction, it can extract multiple instructions, so the number of memory accesses is efficiently reduced. Second, increasing the cache hit rate effectively reduces the power consumption of the memory system and improves system performance. We therefore increase the cache hit rate and reduce cache-access power by developing a new cache architecture, called the Linked Cache, that stores frequently executed instructions. The Linked Cache offers the low power consumption and low access delay of a direct-mapped cache together with a hit rate close to that of a two-way set-associative cache. Third, a conventional cache spends a large amount of power on tag comparisons, so designing a cache that avoids this cost while maintaining a high hit ratio is an important challenge. We propose a novel cache that performs no tag comparisons in order to save power. In addition, whenever a new architecture is proposed, numerous simulations must be performed to evaluate its performance. Trace-driven simulation is a simple, fast, and convenient approach to simulating computer architectures, but it requires massive storage space to store the trace files of benchmark programs. We therefore also propose a novel compression method that achieves a high trace file compression ratio and provides on-the-fly decompression.
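
To make the first method more concrete, the following is a minimal sketch of one way such instruction packing could work, assuming a dictionary of the 256 most frequently executed instructions, 8-bit code words, and four code words packed into each 32-bit pseudo instruction. The dictionary size, code-word width, and sample instruction encodings are illustrative assumptions, not the dissertation's actual format.

```c
/* Sketch only: dictionary-based code words packed into pseudo instructions.
 * Fetching one pseudo instruction yields several original instructions,
 * which is how the number of memory accesses is reduced. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define DICT_BITS      8                     /* assumed code-word width           */
#define DICT_SIZE      (1u << DICT_BITS)     /* 256 most frequent instructions    */
#define CODES_PER_WORD (32 / DICT_BITS)      /* 4 code words per pseudo instruction */

static uint32_t dictionary[DICT_SIZE];       /* index -> original 32-bit instruction */

/* Pack CODES_PER_WORD dictionary indices into one 32-bit pseudo instruction. */
static uint32_t pack(const uint8_t idx[CODES_PER_WORD])
{
    uint32_t pseudo = 0;
    for (int i = 0; i < CODES_PER_WORD; i++)
        pseudo |= (uint32_t)idx[i] << (i * DICT_BITS);
    return pseudo;
}

/* Decompression engine: one fetched pseudo instruction expands into
 * CODES_PER_WORD original instructions via dictionary lookups. */
static void unpack(uint32_t pseudo, uint32_t out[CODES_PER_WORD])
{
    for (int i = 0; i < CODES_PER_WORD; i++) {
        uint8_t idx = (pseudo >> (i * DICT_BITS)) & (DICT_SIZE - 1);
        out[i] = dictionary[idx];
    }
}

int main(void)
{
    /* Toy dictionary entries standing in for frequently executed opcodes. */
    dictionary[0] = 0xE1A00000u;             /* hypothetical: MOV r0, r0 (NOP)    */
    dictionary[1] = 0xE2800001u;             /* hypothetical: ADD r0, r0, #1      */

    uint8_t  seq[CODES_PER_WORD] = { 1, 0, 1, 0 };
    uint32_t pseudo = pack(seq);

    uint32_t instrs[CODES_PER_WORD];
    unpack(pseudo, instrs);                  /* one fetch, four instructions      */
    for (int i = 0; i < CODES_PER_WORD; i++)
        printf("instr %d: 0x%08" PRIX32 "\n", i, instrs[i]);
    return 0;
}
```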
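
The trace-compression part of the work can likewise be illustrated with a generic locality-based scheme. The sketch below stores each traced address as a variable-length delta from the previous address and decompresses one record at a time, so a simulator can consume the trace on the fly. The zig-zag/varint format, file name, and sample addresses are assumptions for illustration and are not the compression method proposed in the dissertation.

```c
/* Sketch only: delta + variable-length encoding of an address trace,
 * with record-at-a-time (on-the-fly) decompression. */
#include <stdint.h>
#include <stdio.h>

/* Write a signed delta as a zig-zag, LEB128-style variable-length integer. */
static void write_delta(FILE *out, int64_t delta)
{
    uint64_t zz = (delta < 0) ? ((~(uint64_t)delta << 1) | 1)   /* zig-zag map */
                              : ((uint64_t)delta << 1);
    do {
        uint8_t byte = zz & 0x7F;
        zz >>= 7;
        if (zz) byte |= 0x80;                /* continuation bit */
        fputc(byte, out);
    } while (zz);
}

/* Read the next delta; returns 1 on success, 0 at end of file. */
static int read_delta(FILE *in, int64_t *delta)
{
    uint64_t zz = 0;
    int shift = 0, c;
    while ((c = fgetc(in)) != EOF) {
        zz |= (uint64_t)(c & 0x7F) << shift;
        if (!(c & 0x80)) {
            *delta = (int64_t)(zz >> 1) ^ -(int64_t)(zz & 1);   /* un-zig-zag */
            return 1;
        }
        shift += 7;
    }
    return 0;
}

int main(void)
{
    /* Hypothetical instruction-address trace of a benchmark program. */
    uint64_t trace[] = { 0x400000, 0x400004, 0x400008, 0x400120, 0x400124 };
    int n = (int)(sizeof trace / sizeof trace[0]);

    FILE *f = fopen("trace.cmp", "wb+");
    if (!f) return 1;

    uint64_t prev = 0;
    for (int i = 0; i < n; i++) {            /* compress: store deltas only */
        write_delta(f, (int64_t)(trace[i] - prev));
        prev = trace[i];
    }

    rewind(f);                               /* decompress on the fly        */
    int64_t d; prev = 0;
    while (read_delta(f, &d)) {
        prev += (uint64_t)d;
        printf("0x%llX\n", (unsigned long long)prev);
    }
    fclose(f);
    return 0;
}
```

Because running programs exhibit spatial and temporal locality, most consecutive-address deltas fit in one or two bytes, which is what gives delta-based schemes of this kind their high compression ratios.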