Exploring Adaptive Cache for Reconfigurable VLIW Processor
In this paper, we focus on a very long instruction word (VLIW) processor design that “shares” its cache blocks when switching between performance modes to alleviate the resulting cold starts. Mode switching triggers cache resizing operations, and improper use can lead to less efficient cache performance. We note that our investigation pertains to the local temporal effects of cache resizing and to how we counteract the negative impact of cache misses in such resizing instances. We propose a novel reconfigurable d-cache framework that can dynamically adapt its least recently used (LRU) replacement policy without much hardware overhead. We demonstrate that our adaptive d-cache ensures smooth cache performance when moving from one cache size to another. This approach is orthogonal to future research in cache resizing for such architectures that takes into account the energy consumption and performance of the overall application. Our results show that, compared with a straightforward cache resizing approach, we achieve a 10%–63% reduction in cache misses during the switching period. Furthermore, our approach also increases the average cache hit rate from 56% to 90% in the cache part that remains live after downsizing.
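The abstract's core idea — keeping the most recently used blocks live when the cache is downsized, rather than flushing everything and paying a cold start — can be sketched in a toy software model. This is illustrative only: the class name, API, and fully associative LRU structure below are assumptions for exposition, not the authors' set-associative hardware design.

```python
from collections import OrderedDict

class ResizableLRUCache:
    """Toy model of a resizable cache with LRU replacement.

    Illustrative sketch only; the paper's d-cache is set-associative
    hardware, not a fully associative software dictionary.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # tag -> data; rightmost = most recent
        self.hits = 0
        self.misses = 0

    def access(self, tag):
        """Return True on hit, False on miss (miss allocates the block)."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)  # refresh LRU position
            self.hits += 1
            return True
        self.misses += 1
        self.blocks[tag] = None
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return False

    def resize_naive(self, new_capacity):
        # "Straightforward" downsizing: flush all blocks, forcing a cold start.
        self.capacity = new_capacity
        self.blocks.clear()

    def resize_adaptive(self, new_capacity):
        # Adaptive downsizing: evict only LRU blocks, so the most recently
        # used blocks stay live in the part of the cache that remains.
        self.capacity = new_capacity
        while len(self.blocks) > new_capacity:
            self.blocks.popitem(last=False)
```

Replaying a reuse-heavy access trace through `resize_adaptive` versus `resize_naive` shows the effect the abstract measures: the adaptive variant turns the would-be cold-start misses on hot blocks into hits immediately after downsizing.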
Main Authors: | Sensen Hu, Jing Haung |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Adaptive cache; cache resize; reconfiguration |
Online Access: | https://ieeexplore.ieee.org/document/8725889/ |
id |
doaj-f10f76594d1c42d1ac8dff176aa301e4 |
---|---|
record_format |
Article |
spelling |
doaj-f10f76594d1c42d1ac8dff176aa301e4 | 2021-03-30T00:06:53Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2019-01-01 | vol. 7, pp. 72634–72646 | DOI 10.1109/ACCESS.2019.2919589 | IEEE document 8725889 | Exploring Adaptive Cache for Reconfigurable VLIW Processor | Sensen Hu (https://orcid.org/0000-0002-0238-7363), National Research Base of Intelligent Manufacturing Service, Chongqing Technology and Business University, Chongqing, China | Jing Haung, School of Foreign Studies, Yangtze University, Jingzhou, China | [abstract repeated; see description field] | https://ieeexplore.ieee.org/document/8725889/ | Adaptive cache; cache resize; reconfiguration |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Sensen Hu Jing Haung |
title |
Exploring Adaptive Cache for Reconfigurable VLIW Processor |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
In this paper, we focus on a very long instruction word (VLIW) processor design that “shares” its cache blocks when switching between performance modes to alleviate the resulting cold starts. Mode switching triggers cache resizing operations, and improper use can lead to less efficient cache performance. We note that our investigation pertains to the local temporal effects of cache resizing and to how we counteract the negative impact of cache misses in such resizing instances. We propose a novel reconfigurable d-cache framework that can dynamically adapt its least recently used (LRU) replacement policy without much hardware overhead. We demonstrate that our adaptive d-cache ensures smooth cache performance when moving from one cache size to another. This approach is orthogonal to future research in cache resizing for such architectures that takes into account the energy consumption and performance of the overall application. Our results show that, compared with a straightforward cache resizing approach, we achieve a 10%–63% reduction in cache misses during the switching period. Furthermore, our approach also increases the average cache hit rate from 56% to 90% in the cache part that remains live after downsizing. |
topic |
Adaptive cache; cache resize; reconfiguration |
url |
https://ieeexplore.ieee.org/document/8725889/ |