Workload Interleaving with Performance Guarantees in Data Centers
In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources to reduce energy and operating costs and to improve availability and reliability. Along with these benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and delay individual workloads.
Main Author: | Yan, Feng |
---|---|
Format: | Others |
Language: | English |
Published: | W&M ScholarWorks, 2016 |
Subjects: | Computer Sciences |
Online Access: | https://scholarworks.wm.edu/etd/1477068022 https://scholarworks.wm.edu/cgi/viewcontent.cgi?article=1040&context=etd |
id |
ndltd-wm.edu-oai-scholarworks.wm.edu-etd-1040 |
record_format |
oai_dc |
collection |
NDLTD |
language |
English |
format |
Others |
sources |
NDLTD |
topic |
Computer Sciences |
description |
In the era of global, large-scale data centers residing in clouds, many applications and users share the same pool of resources to reduce energy and operating costs and to improve availability and reliability. Along with these benefits, resource sharing also introduces performance challenges: when multiple workloads access the same resources concurrently, contention may occur and delay individual workloads. Providing performance isolation to individual workloads requires effective management methodologies. The challenge in deriving such methodologies lies in finding accurate, robust, and compact metrics and models to drive algorithms that can meet different performance objectives while using resources efficiently. This dissertation proposes a set of methodologies for solving the performance isolation problem when interleaving workloads in data centers, focusing on both storage and computing components. At the storage node level, we focus on methodologies for better interleaving user traffic with background workloads, such as tasks for improving reliability, availability, and power savings. More specifically, we develop a scheduling policy for background workloads based on the statistical characteristics of system busy periods, and a methodology that quantitatively estimates the performance impact of power savings. At the storage cluster level, we consider how to efficiently consolidate work and schedule asynchronous updates without violating user performance targets. More specifically, we develop a framework that estimates beforehand the benefits and overheads of each option in order to automate intelligent consolidation decisions while achieving faster eventual consistency. At the computing node level, we focus on improving workload interleaving on off-the-shelf servers, as they are the basic building blocks of large-scale data centers. We develop priority scheduling middleware that employs different policies to schedule background tasks based on the instantaneous resource requirements of the high-priority applications running on the server node. Finally, at the computing cluster level, we investigate popular computing frameworks for large-scale, data-intensive distributed processing, such as MapReduce and its Hadoop implementation. We develop a new Hadoop scheduler, called DyScale, that exploits the capabilities offered by heterogeneous cores to achieve a variety of performance objectives. |
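The storage-node methodology described above schedules background work according to the statistical characteristics of system busy and idle periods. As an illustration only, and not the dissertation's actual policy, the following minimal Python sketch captures the general idea: it learns an empirical distribution of idle-period lengths and dispatches a background task only once the current idle gap exceeds a chosen quantile of that distribution, so short gaps between user requests are left undisturbed. The class name `IdleWaitScheduler`, the quantile parameter, and the synthetic trace in the demo are assumptions made for this sketch.

```python
import random


class IdleWaitScheduler:
    """Toy idle-wait policy (illustrative assumption, not the dissertation's algorithm):
    dispatch background work only after the current idle gap exceeds a learned
    threshold, so short pauses between user requests are not disturbed."""

    def __init__(self, quantile=0.7, history_limit=1000):
        self.quantile = quantile            # fraction of (short) idle periods to skip
        self.history_limit = history_limit  # bound the memory of observed gaps
        self.idle_history = []              # completed idle-period lengths, in seconds

    def record_idle_period(self, length):
        """Record a completed idle period so the threshold adapts to the workload."""
        self.idle_history.append(length)
        if len(self.idle_history) > self.history_limit:
            self.idle_history.pop(0)

    def wait_threshold(self):
        """Return the chosen quantile of observed idle-period lengths (0 if no data)."""
        if not self.idle_history:
            return 0.0
        ordered = sorted(self.idle_history)
        idx = min(int(self.quantile * len(ordered)), len(ordered) - 1)
        return ordered[idx]

    def should_dispatch(self, idle_so_far):
        """Dispatch only when the system has already been idle longer than the
        threshold, i.e. the current gap is statistically likely to be a long one."""
        return bool(self.idle_history) and idle_so_far >= self.wait_threshold()


if __name__ == "__main__":
    random.seed(0)
    sched = IdleWaitScheduler(quantile=0.7)
    # Synthetic trace: mostly short idle gaps, occasionally a long one.
    for _ in range(500):
        gap = random.expovariate(1.0) if random.random() < 0.8 else random.uniform(10, 30)
        sched.record_idle_period(gap)
    print("wait threshold (s):", round(sched.wait_threshold(), 2))
    print("dispatch after 2 s idle? ", sched.should_dispatch(2.0))
    print("dispatch after 15 s idle?", sched.should_dispatch(15.0))
```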
author |
Yan, Feng |
title |
Workload Interleaving with Performance Guarantees in Data Centers |
publisher |
W&M ScholarWorks |
publishDate |
2016 |
url |
https://scholarworks.wm.edu/etd/1477068022 https://scholarworks.wm.edu/cgi/viewcontent.cgi?article=1040&context=etd |
_version_ |
1719481533983621120 |