Rubik: fast analytical power management for latency-critical systems

Latency-critical workloads (e.g., web search), common in datacenters, require stable tail (e.g., 95th percentile) latencies of a few milliseconds. Servers running these workloads are kept lightly loaded to meet these stringent latency targets. This low utilization wastes billions of dollars in energy and equipment annually. Applying dynamic power management to latency-critical workloads is challenging. The fundamental issue is coping with their inherent short-term variability: requests arrive at unpredictable times and have variable lengths. Without knowledge of the future, prior techniques either adapt slowly and conservatively or rely on application-specific heuristics to maintain tail latency. We propose Rubik, a fine-grain DVFS scheme for latency-critical workloads. Rubik copes with variability through a novel, general, and efficient statistical performance model. This model allows Rubik to adjust frequencies at sub-millisecond granularity to save power while meeting the target tail latency. Rubik saves up to 66% of core power, widely outperforms prior techniques, and requires no application-specific tuning. Beyond saving core power, Rubik robustly adapts to sudden changes in load and system performance. We use this capability to design RubikColoc, a colocation scheme that uses Rubik to allow batch and latency-critical work to share hardware resources more aggressively than prior techniques. RubikColoc reduces datacenter power by up to 31% while using 41% fewer servers than a datacenter that segregates latency-critical and batch work, and achieves 100% core utilization.
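The abstract does not specify Rubik's statistical model, but the core idea it describes (predict tail latency for the pending work at each frequency, then run at the lowest frequency that still meets the target) can be sketched as a toy controller. Everything below is a hypothetical illustration: the frequency list, latency target, service-time distribution, and function names are all assumptions, not the paper's actual design.

```python
import random

# Hypothetical DVFS states (GHz) and tail-latency target (ms); not from the paper.
FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]
TARGET_TAIL_MS = 5.0

def predict_tail_ms(queue_ms, freq, fmax=max(FREQS_GHZ), n=2000, q=0.95):
    """Monte-Carlo estimate of the 95th-percentile completion time for the
    next request, given queue_ms of queued work, at a given frequency.
    Assumes service demand scales inversely with frequency (a toy model)."""
    samples = []
    for _ in range(n):
        demand = random.expovariate(1.0)            # mean 1 ms at fmax (toy)
        samples.append((queue_ms + demand) * (fmax / freq))
    samples.sort()
    return samples[int(q * n) - 1]

def pick_frequency(queue_ms):
    """Return the lowest frequency predicted to meet the tail-latency target,
    re-evaluated per decision interval (Rubik does this at sub-ms granularity)."""
    for f in sorted(FREQS_GHZ):
        if predict_tail_ms(queue_ms, f) <= TARGET_TAIL_MS:
            return f
    return max(FREQS_GHZ)  # cannot meet target: fall back to full speed
```

The design point this illustrates is that the controller needs no application-specific tuning: only a latency target and a performance model relating frequency to completion time, evaluated fresh on every queue-state change.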


Bibliographic Details
Main Authors: Kasture, Harshad (Contributor), Bartolini, Davide Basilio (Contributor), Beckmann, Nathan Zachary (Contributor), Sanchez, Daniel (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2017-10-27T15:00:48Z.
Subjects:
Online Access: Get fulltext
LEADER 02758 am a22002653u 4500
001 111984
042 |a dc 
100 1 0 |a Kasture, Harshad  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Kasture, Harshad  |e contributor 
100 1 0 |a Bartolini, Davide Basilio  |e contributor 
100 1 0 |a Beckmann, Nathan Zachary  |e contributor 
100 1 0 |a Sanchez, Daniel  |e contributor 
700 1 0 |a Bartolini, Davide Basilio  |e author 
700 1 0 |a Beckmann, Nathan Zachary  |e author 
700 1 0 |a Sanchez, Daniel  |e author 
245 0 0 |a Rubik: fast analytical power management for latency-critical systems 
260 |b Association for Computing Machinery (ACM),   |c 2017-10-27T15:00:48Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/111984 
520 |a Latency-critical workloads (e.g., web search), common in datacenters, require stable tail (e.g., 95th percentile) latencies of a few milliseconds. Servers running these workloads are kept lightly loaded to meet these stringent latency targets. This low utilization wastes billions of dollars in energy and equipment annually. Applying dynamic power management to latency-critical workloads is challenging. The fundamental issue is coping with their inherent short-term variability: requests arrive at unpredictable times and have variable lengths. Without knowledge of the future, prior techniques either adapt slowly and conservatively or rely on application-specific heuristics to maintain tail latency. We propose Rubik, a fine-grain DVFS scheme for latency-critical workloads. Rubik copes with variability through a novel, general, and efficient statistical performance model. This model allows Rubik to adjust frequencies at sub-millisecond granularity to save power while meeting the target tail latency. Rubik saves up to 66% of core power, widely outperforms prior techniques, and requires no application-specific tuning. Beyond saving core power, Rubik robustly adapts to sudden changes in load and system performance. We use this capability to design RubikColoc, a colocation scheme that uses Rubik to allow batch and latency-critical work to share hardware resources more aggressively than prior techniques. RubikColoc reduces datacenter power by up to 31% while using 41% fewer servers than a datacenter that segregates latency-critical and batch work, and achieves 100% core utilization. 
520 |a National Science Foundation (U.S.) (Grant CCF-1318384) 
546 |a en_US 
655 7 |a Article 
773 |t Proceedings of the 48th International Symposium on Microarchitecture (MICRO-48)