LEADER |
03306 am a22002293u 4500 |
001 |
124289 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Mukkara, Anurag
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
|e contributor
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
|e contributor
|
700 |
1 |
0 |
|a Beckmann, Nathan
|e author
|
700 |
1 |
0 |
|a Abeydeera, Maleen Hasanka
|e author
|
700 |
1 |
0 |
|a Ma, Xiaosong
|e author
|
700 |
1 |
0 |
|a Sanchez Martin, Daniel
|e author
|
245 |
0 |
0 |
|a Exploiting Locality in Graph Analytics through Hardware-Accelerated Traversal Scheduling
|
260 |
|
|
|b Institute of Electrical and Electronics Engineers (IEEE),
|c 2020-03-24T18:50:57Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/124289
|
520 |
|
|
|a Graph processing is increasingly bottlenecked by main memory accesses. On-chip caches are of little help because the irregular structure of graphs causes seemingly random memory references. However, most real-world graphs offer significant potential locality; it is just hard to predict ahead of time. In practice, graphs have well-connected regions where relatively few vertices share edges with many common neighbors. If these vertices were processed together, graph processing would enjoy significant data reuse. Hence, a graph's traversal schedule largely determines its locality. This paper explores online traversal scheduling strategies that exploit the community structure of real-world graphs to improve locality. Software graph processing frameworks use simple, locality-oblivious scheduling because, on general-purpose cores, the benefits of locality-aware scheduling are outweighed by its overheads. Instead, software frameworks rely on offline preprocessing to improve locality. Unfortunately, preprocessing is so expensive that its costs often negate any benefits from improved locality. Recent graph processing accelerators have inherited this design. Our insight is that this misses an opportunity: hardware acceleration allows for more sophisticated, online locality-aware scheduling than can be realized in software, letting systems significantly improve locality without any preprocessing. To exploit this insight, we present bounded depth-first scheduling (BDFS), a simple online locality-aware scheduling strategy. BDFS restricts each core to explore one small, connected region of the graph at a time, improving locality on graphs with good community structure. We then present HATS, a hardware-accelerated traversal scheduler that adds just 0.4% area and 0.2% power over general-purpose cores. We evaluate BDFS and HATS on several algorithms using large real-world graphs. On a simulated 16-core system, BDFS reduces main memory accesses by up to 2.4x and by 30% on average. However, BDFS is too expensive in software and degrades performance by 21% on average. HATS eliminates these overheads, allowing BDFS to improve performance by 83% on average (up to 3.1x) over a locality-oblivious software implementation and by 31% on average (up to 2.1x) over specialized prefetchers.
|
520 |
|
|
|a National Science Foundation (Grant CAREER-1452994)
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
|