Automatic Choice of Scheduling Heuristics for Parallel/Distributed Computing

Task mapping and scheduling are two very difficult problems that must be addressed when a sequential program is transformed into a parallel program. Since these problems are NP-hard, compiler writers have opted to concentrate their efforts on optimizations that produce immediate gains in performance. As a result, current parallelizing compilers either use very simple methods to deal with task scheduling or ignore it altogether. Unfortunately, the programmer does not have this luxury: the burden of repartitioning or rescheduling, should the compiler produce inefficient parallel code, lies entirely with the programmer. We created an algorithm (a metaheuristic) that automatically chooses a scheduling heuristic for each input program. The metaheuristic generally produces better schedules than the heuristics upon which it is based. The technique was tested on a suite of real scientific programs written in SISAL and simulated on four different network configurations. Averaged over all of the test cases, the metaheuristic outperformed all eight underlying scheduling algorithms, beating the best one by 2%, 12%, 13%, and 3% on the four network configurations. It achieves this not by always picking the best heuristic, but by avoiding heuristics when they would produce very poor schedules. For example, while the metaheuristic picked the best algorithm only about 50% of the time for the 100 Gbps Ethernet, its worst decision was only 49% away from optimal. In contrast, the best of the eight scheduling algorithms was optimal 30% of the time, but its worst decision was 844% away from optimal.
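The selection idea described in the abstract can be illustrated with a small sketch. The heuristics below (longest-processing-time-first and submission order on independent tasks) are toy stand-ins, not the paper's eight scheduling algorithms, and the metaheuristic here simply evaluates every candidate on the given input and keeps the one with the smallest predicted makespan — one plausible reading of "automatically chooses a scheduling heuristic for each input program":

```python
import heapq

def makespan(order, durations, n_procs):
    """Greedy list scheduling: assign each task, in the given priority
    order, to the currently least-loaded processor; return finish time."""
    loads = [0.0] * n_procs
    heapq.heapify(loads)
    for t in order:
        heapq.heappush(loads, heapq.heappop(loads) + durations[t])
    return max(loads)

def lpt_order(durations):
    # Longest Processing Time first: a classic priority rule.
    return sorted(durations, key=durations.get, reverse=True)

def fifo_order(durations):
    # Tasks in their original submission order.
    return list(durations)

def metaheuristic(durations, n_procs, heuristics):
    """Evaluate each candidate heuristic on this particular input and
    keep whichever yields the shortest schedule, so a heuristic that
    would produce a very poor schedule for this input is avoided."""
    best = min(heuristics,
               key=lambda h: makespan(h(durations), durations, n_procs))
    return best.__name__, makespan(best(durations), durations, n_procs)

# Example: five independent tasks on two processors.
tasks = {'d': 3, 'e': 3, 'c': 4, 'a': 5, 'b': 5}
print(metaheuristic(tasks, 2, [lpt_order, fifo_order]))
```

On this input, FIFO order yields a makespan of 12 while LPT yields 11, so the metaheuristic selects LPT; on other inputs the ranking can flip, which is exactly why per-input selection can beat any single fixed heuristic.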

Bibliographic Details
Main Authors: Clayton S. Ferner (Lucent Technologies Inc., Westminster, CO, USA), Robert G. Babb II (Department of Mathematics and Computer Science, University of Denver, Denver, CO, USA)
Format: Article
Language: English
Published: Hindawi Limited, 1999-01-01
Series: Scientific Programming, vol. 7, no. 1, pp. 47–65
ISSN: 1058-9244, 1875-919X
Online Access: http://dx.doi.org/10.1155/1999/898723