Cache Utilization Aware Scheduling for Multi-core Systems

Bibliographic Details
Main Authors: Wen-wei Lu, 呂文瑋
Other Authors: Edward Chu
Format: Others
Language: en_US
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/73849918468142695946
Description
Summary: Master's === National Yunlin University of Science and Technology === Master's Program, Department of Computer Science and Information Engineering === 100 === A chip multiprocessor (CMP) consists of several cores that can execute tasks independently. Due to budget and chip-area limits, the last-level cache is usually shared among cores. If tasks running on different cores access the shared cache intensively and concurrently, the result may be a high cache miss rate and significant performance degradation. A commonly used method is to co-schedule a task with good anti-interference ability with a task with poor anti-interference ability. However, if tasks have similar anti-interference abilities, it becomes difficult to generate a proper task assignment. In this paper, we identify two additional indexes, intra-core cache contention and task interference ability, that primarily determine the utilization of the shared cache. Based on these indexes, we develop a novel task scheduling algorithm, named cache utilization aware scheduling (CUAS), to reduce shared cache contention. CUAS classifies tasks according to their anti-interference ability and interference ability, and then distributes tasks to cores based on the combined effect of inter-core and intra-core cache contention. We conducted our experiments on an Intel Core 2 Quad processor and adopted the SPEC CPU2006 benchmark suite for evaluation. According to our experimental results, CUAS significantly reduces shared cache contention and reduces total execution time by up to 46% compared with existing methods.
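
The abstract describes CUAS only at a high level: classify tasks by their interference and anti-interference abilities, then spread them across cores to balance inter-core and intra-core contention on the shared last-level cache. The Python sketch below illustrates one plausible reading of that two-step structure. The metrics, thresholds, benchmark characterizations, and greedy placement policy are assumptions made for illustration, not the algorithm published in the thesis.

    # Hypothetical sketch of the CUAS idea described in the abstract.
    # Task metrics, thresholds, and the greedy placement policy are
    # illustrative assumptions, not the thesis's actual algorithm.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        interference: float        # shared-cache pressure the task generates
        anti_interference: float   # how well the task tolerates pressure

    def classify(tasks, cutoff=0.5):
        """Split tasks into aggressive (high interference) and sensitive
        (low anti-interference) groups; the cutoff is a placeholder."""
        aggressive = [t for t in tasks if t.interference >= cutoff]
        sensitive = [t for t in tasks if t.anti_interference < cutoff]
        return aggressive, sensitive

    def distribute(tasks, num_cores):
        """Greedy placement: assign the most aggressive tasks first, each to
        the core with the lowest accumulated interference, so inter-core
        pressure on the shared cache stays balanced and each core's queue
        mixes aggressive with tolerant tasks."""
        cores = [[] for _ in range(num_cores)]
        load = [0.0] * num_cores
        for t in sorted(tasks, key=lambda t: t.interference, reverse=True):
            c = min(range(num_cores), key=lambda i: load[i])
            cores[c].append(t)
            load[c] += t.interference
        return cores

    if __name__ == "__main__":
        # Example metric values are invented; mcf/lbm stand in for
        # memory-intensive SPEC CPU2006 workloads, povray/namd for
        # compute-bound ones.
        tasks = [Task("mcf", 0.9, 0.3), Task("lbm", 0.8, 0.4),
                 Task("povray", 0.2, 0.9), Task("namd", 0.1, 0.8)]
        aggressive, sensitive = classify(tasks)
        print("aggressive:", [t.name for t in aggressive])
        print("sensitive:", [t.name for t in sensitive])
        for core_id, assigned in enumerate(distribute(tasks, num_cores=2)):
            print("core", core_id, [t.name for t in assigned])

In this reading, the classification step would drive how pairs are formed, while the load-balancing loop keeps any single core from holding all the cache-aggressive tasks; the real CUAS policy may weigh intra-core contention differently.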