Competitive cache replacement strategies for a shared cache

We consider cache replacement algorithms at a shared cache in a multicore system which receives an arbitrary interleaving of requests from processes that have full knowledge about their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA which is optimum up to a constant factor when processes share memory blocks. Our framework is a generalization of the application controlled caching framework in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates the experimental findings in the literature. Our results could be applied to shared caches in multicore systems in which processes work together on multithreaded computations like Gaussian elimination paradigm, fast Fourier transform, matrix multiplication, etc. In these computations, processes have full knowledge about their individual request sequences and can share memory blocks.
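
For context, the competitive ratio referred to in the abstract is the standard worst-case measure from the analysis of online algorithms; the statement below is textbook background rather than a definition taken from this thesis. A replacement algorithm ALG is c-competitive if, for every request sequence sigma,

    cost_ALG(sigma) <= c * cost_OPT(sigma) + b

where OPT denotes the optimal offline algorithm and b is a constant independent of sigma; the competitive ratio of ALG is the smallest such c.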

Bibliographic Details
Main Author: Katti, Anil Kumar
Format: Others
Language: English
Published: 2011
Subjects: Cache replacement algorithms; Competitive analysis; Multicore systems; Shared cache; Disjoint memory; Hierarchical cache structure; Access cost model; Cache memory
Online Access: http://hdl.handle.net/2152/ETD-UT-2011-05-3584
id ndltd-UTEXAS-oai-repositories.lib.utexas.edu-2152-ETD-UT-2011-05-3584
record_format oai_dc
spelling ndltd-UTEXAS-oai-repositories.lib.utexas.edu-2152-ETD-UT-2011-05-3584 2015-09-20T17:01:07Z
Competitive cache replacement strategies for a shared cache
Katti, Anil Kumar
Cache replacement algorithms; Competitive analysis; Multicore systems; Shared cache; Disjoint memory; Hierarchical cache structure; Access cost model; Cache memory
We consider cache replacement algorithms at a shared cache in a multicore system which receives an arbitrary interleaving of requests from processes that have full knowledge about their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA which is optimum up to a constant factor when processes share memory blocks. Our framework is a generalization of the application controlled caching framework in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates the experimental findings in the literature. Our results could be applied to shared caches in multicore systems in which processes work together on multithreaded computations like Gaussian elimination paradigm, fast Fourier transform, matrix multiplication, etc. In these computations, processes have full knowledge about their individual request sequences and can share memory blocks.
text
2011-07-08T18:11:08Z; 2011-07-08T18:11:08Z; 2011-05; 2011-07-08; May 2011; 2011-07-08T18:11:13Z
thesis
application/pdf
http://hdl.handle.net/2152/ETD-UT-2011-05-3584; 2152/ETD-UT-2011-05-3584
eng
collection NDLTD
language English
format Others
sources NDLTD
topic Cache replacement algorithms
Competitive analysis
Multicore systems
Shared cache
Disjoint memory
Hierarchical cache structure
Access cost model
Cache memory
spellingShingle Cache replacement algorithms
Competitive analysis
Multicore systems
Shared cache
Disjoint memory
Hierarchical cache structure
Access cost model
Cache memory
Katti, Anil Kumar
Competitive cache replacement strategies for a shared cache
description We consider cache replacement algorithms at a shared cache in a multicore system which receives an arbitrary interleaving of requests from processes that have full knowledge about their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA which is optimum up to a constant factor when processes share memory blocks. Our framework is a generalization of the application controlled caching framework in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates the experimental findings in the literature. Our results could be applied to shared caches in multicore systems in which processes work together on multithreaded computations like Gaussian elimination paradigm, fast Fourier transform, matrix multiplication, etc. In these computations, processes have full knowledge about their individual request sequences and can share memory blocks.
author Katti, Anil Kumar
author_facet Katti, Anil Kumar
author_sort Katti, Anil Kumar
title Competitive cache replacement strategies for a shared cache
title_short Competitive cache replacement strategies for a shared cache
title_full Competitive cache replacement strategies for a shared cache
title_fullStr Competitive cache replacement strategies for a shared cache
title_full_unstemmed Competitive cache replacement strategies for a shared cache
title_sort competitive cache replacement strategies for a shared cache
publishDate 2011
url http://hdl.handle.net/2152/ETD-UT-2011-05-3584
work_keys_str_mv AT kattianilkumar competitivecachereplacementstrategiesforasharedcache
_version_ 1716822043845984256