|
|
|
|
LEADER |
01836 am a22002173u 4500 |
001 |
131021 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Kaffes, K
|e author
|
710 |
2 |
 |
|a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
|e contributor
|
700 |
1 |
0 |
|a Chong, T
|e author
|
700 |
1 |
0 |
|a Humphries, JT
|e author
|
700 |
1 |
0 |
|a Belay, Adam M
|e author
|
700 |
1 |
0 |
|a Mazières, D
|e author
|
700 |
1 |
0 |
|a Kozyrakis, C
|e author
|
245 |
0 |
0 |
|a Shinjuku: Preemptive scheduling for µsecond-scale tail latency
|
260 |
|
|
|b Association for Computing Machinery (ACM) / USENIX Association,
|c 2021-06-17T19:20:12Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/131021
|
520 |
|
|
|a Recently proposed dataplanes for microsecond-scale applications, such as IX and ZygOS, use non-preemptive policies to schedule requests to cores. For the many real-world scenarios where request service times follow distributions with high dispersion or a heavy tail, these policies allow short requests to be blocked behind long requests, which leads to poor tail latency. Shinjuku is a single-address-space operating system that uses hardware support for virtualization to make preemption practical at the microsecond scale. This allows Shinjuku to implement centralized scheduling policies that preempt requests as often as every 5µsec and work well for both light- and heavy-tailed request service time distributions. We demonstrate that Shinjuku provides significant tail latency and throughput improvements over IX and ZygOS for a wide range of workload scenarios. For the case of a RocksDB server processing both point and range queries, Shinjuku achieves up to 6.6× higher throughput and 88% lower tail latency.
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t Proceedings of the 16th USENIX Symposium on Networked Systems Design and Implementation
|