Direct sparse matrix methods for interior point algorithms.

Recent advances in linear programming solution methodology have focused on interior point algorithms. These are powerful new methods that achieve significant reductions in computer time for large LPs and solve problems significantly larger than was previously possible. This dissertation describes the implementation of interior point algorithms, focusing on applications of direct sparse matrix methods to sparse symmetric positive definite systems of linear equations on scalar computers and vector supercomputers. The most computationally intensive step in each iteration of any interior point algorithm is the numerical factorization of a sparse symmetric positive definite matrix; in large or relatively dense problems, 80-90% or more of the computation time is spent in this step. This study concentrates on solution methods for such linear systems, based on modifications and extensions of graph theory applied to sparse matrices.
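For context (an editorial sketch, not part of the original abstract): in a standard primal-dual interior point method for the LP $\min\{c^{\mathsf T}x : Ax = b,\ x \ge 0\}$, the symmetric positive definite system arising at each iteration is typically the normal-equations system

$$(A D^2 A^{\mathsf T})\,\Delta y = r, \qquad D^2 = \operatorname{diag}(x_1/s_1, \ldots, x_n/s_n),$$

where $x$ and $s$ are the current primal and dual slack iterates and $r$ is a residual vector (this notation is assumed here, not taken from the dissertation). Because the sparsity pattern of $AD^2A^{\mathsf T}$ is fixed across iterations, the symbolic analysis can be done once; only the numerical factorization named above must be repeated at every iteration.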
The row and column permutation of a sparse symmetric positive definite matrix dramatically affects the performance of solution algorithms. Various reordering methods are considered to find the best ordering for various numerical factorization methods and computer architectures. It is assumed that the reordering method follows the fill-preserving rule, i.e., that it allows no additional fill-in beyond that produced by the initial ordering. To follow this rule, a modular approach is used: the matrix is first permuted using any minimum degree heuristic, and the permuted matrix is then reordered again according to a specific reordering objective. Results of the different reordering methods are described; a sketch of the minimum degree heuristic follows below.
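To make the minimum degree heuristic concrete, here is a minimal pure-Python sketch (an editorial illustration; the function name and example graph are hypothetical, and production orderings add refinements such as mass elimination that this toy version omits). It repeatedly eliminates a vertex of minimum degree from the sparsity graph and joins its neighbors into a clique, which models the fill-in that the corresponding elimination step would create.

```python
def minimum_degree_order(adj):
    """Greedy minimum degree ordering of a symmetric sparsity graph.

    adj: dict mapping each vertex to the set of its neighbors
         (the off-diagonal nonzero pattern of the matrix).
    Returns an elimination order (list of vertices).
    """
    # Work on a copy so the caller's graph is untouched.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while adj:
        # Pick a vertex of minimum current degree.
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        # Eliminating v makes its neighbors pairwise adjacent:
        # this models the fill-in created by that elimination step.
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
        order.append(v)
    return order

# Example: a 5-vertex "arrow" pattern; eliminating the hub last avoids fill.
graph = {0: {4}, 1: {4}, 2: {4}, 3: {4}, 4: {0, 1, 2, 3}}
print(minimum_degree_order(graph))  # hub vertex 4 is eliminated last
```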
There are several ways to compute the Cholesky factor of a symmetric positive definite matrix. A column Cholesky algorithm is a popular method for dense and sparse matrix factorization on serial and parallel computers. Applying this algorithm to a sparse matrix requires the use of sparse vector operations, and graph theory is applied to reduce these sparse vector computations. A second, relatively new algorithm is the multifrontal algorithm, which uses dense operations for sparse matrix computation at the expense of some data manipulation. The performance of the column Cholesky and multifrontal algorithms in the numerical factorization of a sparse symmetric positive definite matrix on an IBM 3090 vector supercomputer is described. A dense sketch of the column algorithm's computation pattern follows below.
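As an illustration of the column-oriented pattern (a dense NumPy sketch, not the dissertation's sparse vectorized code): column $j$ of $L$ is formed by first subtracting the contributions of all previously computed columns (the update often called cmod in the sparse Cholesky literature) and then scaling by the square root of the pivot (cdiv). A sparse implementation restricts the update to only those prior columns $k$ with $L_{jk} \ne 0$, which is where the graph-theoretic machinery mentioned in the abstract comes in.

```python
import numpy as np

def column_cholesky(A):
    """Left-looking (column) Cholesky: returns L with A = L @ L.T for SPD A.

    Dense version shown for clarity; a sparse code would replace the
    inner update with sparse vector operations over only the columns
    k < j having L[j, k] != 0.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        # cmod: subtract contributions of prior columns from column j.
        col = A[j:, j] - L[j:, :j] @ L[j, :j]
        # cdiv: scale column j by the square root of the diagonal pivot.
        L[j, j] = np.sqrt(col[0])
        L[j+1:, j] = col[1:] / L[j, j]
    return L

# Quick check on a small SPD matrix.
M = np.array([[4.0, 2.0, 0.0], [2.0, 5.0, 3.0], [0.0, 3.0, 6.0]])
L = column_cholesky(M)
assert np.allclose(L @ L.T, M)
```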

Bibliographic Details
Main Author: Jung, Ho-Won.
Other Authors: Marsten, Roy E.; Saltzman, Matthew J.; Kannan, Pallassana; Sheng, Olivia R.; Goldberg, Jeffrey B.
Language: English
Published: The University of Arizona, 1990
Subjects: Sparse matrices; Trees (Graph theory); Linear programming
Online Access: http://hdl.handle.net/10150/185133
Format: Dissertation-Reproduction (electronic)
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.