Link-time optimization speedup

This paper proposes two approaches to speeding up program builds: making link-time optimization (LTO) run in parallel, and a lightweight optimization mode. The former is achieved by scaling the LTO system horizontally; the latter makes linking faster by driving some interprocedural optimization passes from summaries rather than keeping the full IR code in memory. Horizontal scaling of an LTO system reduces to partitioning one large task into several subtasks that can be executed concurrently. The problem is complicated by the compiler pipeline model: interprocedural optimization passes run sequentially and each depends on the results of the passes before it, so only the data the passes operate on can be divided, not the passes themselves. The IR code therefore has to be split into largely independent parts on which LTO can run in parallel. We use call-graph analysis to divide the program into parts, so our goal is to partition the call graph, which is an NP-complete problem. Nevertheless, the choice of partitioning algorithm strongly depends on the properties of the graph being partitioned. The main goal of our investigation is a lightweight graph-partitioning algorithm that works efficiently on the call graphs of real programs and does not destroy the performance gains of LTO once the code pieces are optimized separately. This paper presents a new partitioning algorithm for program call graphs, a comparison with two other methods on the SPEC CPU2000 benchmark, and an implementation of the algorithm in a scalable LLVM-based LTO system. The implementation shows a 31% link speedup with 3% performance degradation for 4 threads. The lightweight optimization shows a 0.5% speedup for a single run in lazy code loading mode.
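The specific partitioning algorithm evaluated in the paper is not described in this record, but the core idea — splitting the program call graph into balanced pieces so that each piece can be optimized by a separate LTO thread — can be sketched as follows. This is a minimal, hypothetical C++ sketch: the CallGraph structure, the per-function weights, and the greedy breadth-first growth strategy are illustrative assumptions, not the authors' method.

```cpp
// Minimal greedy call-graph partitioner (illustrative only).
#include <cstddef>
#include <queue>
#include <vector>

// A call graph in adjacency-list form: function i calls the functions listed
// in callees[i]; weight[i] approximates its size (e.g. IR instruction count).
struct CallGraph {
    std::vector<std::vector<int>> callees;
    std::vector<int> weight;
};

// Split the graph into `parts` groups of roughly equal total weight by growing
// each group breadth-first from an unassigned seed function, so that callers
// and callees tend to stay in the same group and the number of cross-group
// edges (lost interprocedural optimization opportunities) stays low.
std::vector<int> partitionCallGraph(const CallGraph& g, int parts) {
    const std::size_t n = g.callees.size();
    long long total = 0;
    for (int w : g.weight) total += w;
    const long long target = (total + parts - 1) / parts;  // weight budget per group

    std::vector<int> group(n, -1);
    int current = 0;
    long long filled = 0;

    // Assign one function to the currently open group; open the next group
    // once the weight budget is reached.
    auto assign = [&](int v) {
        group[v] = current;
        filled += g.weight[v];
        if (filled >= target && current + 1 < parts) {
            ++current;
            filled = 0;
        }
    };

    for (std::size_t seed = 0; seed < n; ++seed) {
        if (group[seed] != -1) continue;  // already reached from an earlier seed
        std::queue<int> work;
        assign(static_cast<int>(seed));
        work.push(static_cast<int>(seed));
        while (!work.empty()) {
            int v = work.front();
            work.pop();
            for (int callee : g.callees[v]) {
                if (group[callee] == -1) {
                    assign(callee);
                    work.push(callee);
                }
            }
        }
    }
    return group;  // group[i] is the partition index of function i
}
```

Under these assumptions, each resulting group would be handed to its own LTO pipeline instance; the quality of the cut (how many call edges cross group boundaries) is what determines how much cross-module optimization, and therefore run-time performance, is traded for the link-time speedup.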

Bibliographic Details
Main Authors: K. Dolgorukova, S. Arishin
Format: Article
Language: English
Published: Ivannikov Institute for System Programming of the Russian Academy of Sciences, 2018-10-01
Series: Труды Института системного программирования РАН (Proceedings of ISP RAS)
ISSN: 2079-8156, 2220-6426
DOI: 10.15514/ISPRAS-2016-28(5)-11
Subjects: compilers, link-time optimization, scaling, graph partitioning
Online Access: https://ispranproceedings.elpub.ru/jour/article/view/177