TailBench: a benchmark suite and evaluation methodology for latency-critical applications

Latency-critical applications, common in datacenters, must achieve small and predictable tail (e.g., 95th or 99th percentile) latencies. Their strict performance requirements limit utilization and efficiency in current datacenters. These problems have sparked research in hardware and software techniques that target tail latency. However, research in this area is hampered by the lack of a comprehensive suite of latency-critical benchmarks. We present TailBench, a benchmark suite and evaluation methodology that makes latency-critical workloads as easy to run and characterize as conventional, throughput-oriented ones. TailBench includes eight applications that span a wide range of latency requirements and domains, and a harness that implements a robust and statistically sound load-testing methodology. The modular design of the TailBench harness facilitates multiple load-testing scenarios, ranging from multi-node configurations that capture network overheads, to simplified single-node configurations that allow measuring tail latency in simulation. Validation results show that the simplified configurations are accurate for most applications. This flexibility enables rapid prototyping of hardware and software techniques for latency-critical workloads.
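The tail latency the abstract refers to is a high percentile of the per-request latency distribution rather than the mean. A minimal illustrative sketch of that metric (not code from the TailBench harness; the sample data and nearest-rank method here are assumptions for illustration):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value that is
    greater than or equal to p percent of all samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-request latencies (milliseconds) from one load run.
latencies_ms = [1.2, 0.9, 1.1, 5.0, 1.3, 0.8, 1.0, 9.7, 1.1, 1.2]

p50 = percentile(latencies_ms, 50)  # median: 1.1 ms
p95 = percentile(latencies_ms, 95)  # 95th-percentile tail: 9.7 ms
```

The example shows why tail latency is the binding constraint: a few slow requests (here 5.0 ms and 9.7 ms) barely move the median but dominate the 95th percentile, which is the figure a service-level objective typically bounds.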


Bibliographic Details
Main Authors: Kasture, Harshad (Contributor), Sanchez, Daniel (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2017-12-19T18:02:01Z.
Online Access: Get fulltext
LEADER 02341 am a22002413u 4500
001 112803
042 |a dc 
100 1 0 |a Kasture, Harshad  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Kasture, Harshad  |e contributor 
100 1 0 |a Sanchez, Daniel  |e contributor 
700 1 0 |a Sanchez, Daniel  |e author 
245 0 0 |a Tailbench: a benchmark suite and evaluation methodology for latency-critical applications 
260 |b Institute of Electrical and Electronics Engineers (IEEE),   |c 2017-12-19T18:02:01Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/112803 
520 |a Latency-critical applications, common in datacenters, must achieve small and predictable tail (e.g., 95th or 99th percentile) latencies. Their strict performance requirements limit utilization and efficiency in current datacenters. These problems have sparked research in hardware and software techniques that target tail latency. However, research in this area is hampered by the lack of a comprehensive suite of latency-critical benchmarks. We present TailBench, a benchmark suite and evaluation methodology that makes latency-critical workloads as easy to run and characterize as conventional, throughput-oriented ones. TailBench includes eight applications that span a wide range of latency requirements and domains, and a harness that implements a robust and statistically sound load-testing methodology. The modular design of the TailBench harness facilitates multiple load-testing scenarios, ranging from multi-node configurations that capture network overheads, to simplified single-node configurations that allow measuring tail latency in simulation. Validation results show that the simplified configurations are accurate for most applications. This flexibility enables rapid prototyping of hardware and software techniques for latency-critical workloads. 
520 |a National Science Foundation (U.S.) (CCF-1318384) 
520 |a Qatar Computing Research Institute 
520 |a Google (Firm) (Google Research Award) 
546 |a en_US 
655 7 |a Article 
773 |t 2016 IEEE International Symposium on Workload Characterization (IISWC)