Direct multiplicative methods for sparse matrices. Newton methods
We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparsity of matrices stored in packed form. The advantage of the algorithm is its ability to minimize fill-in of the main rows of the multipliers without losing accuracy of the results.
Main Author: | Anastasiya Borisovna Sviridenko |
---|---|
Format: | Article |
Language: | Russian |
Published: | Institute of Computer Science, 2017-10-01 |
Series: | Компьютерные исследования и моделирование |
Subjects: | $NP$-hard problem; sparse matrices; Newton methods; direct multiplicative algorithm; descent direction; necessary and sufficient optimality conditions; pseudo-Boolean programming; linear programming |
Online Access: | http://crm.ics.org.ru/uploads/crmissues/crm_2017_5/2017_05_09_01.pdf |
id |
doaj-bfffa7396dc7415ca3057ad2531cdcc3 |
---|---|
record_format |
Article |
Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables and a new form of defining necessary and sufficient conditions of optimality for the development of methods for solving them in polynomial time. As a result, the original problem is reduced to the problem of finding the minimum distance between the origin and the angular point of a convex polyhedron, which is a perturbation of the $n$-dimensional cube and is described by a system of double linear inequalities with an upper triangular matrix of coefficients with units on the main diagonal. Only two faces are subject to investigation, one of which or both contains the vertices closest to the origin. To calculate them, it is sufficient to solve $4n - 4$ linear equations systems and choose among them all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since an $NP$-hard problem about a vertex covering for an arbitrary graph comes down to it. It follows therefrom that $P = NP$, which is based on the development beyond the limits of integer optimization methods.http://crm.ics.org.ru/uploads/crmissues/crm_2017_5/2017_05_09_01.pdf$NP$-hard problemsparse matricesNewton methodsdirect multiplication algorithmthe direction of descenta new mathematical formulationnecessary and sufficient conditions of optimalityminimization pseudo Boolean functionspseudo Boolean programminglinear programming |
collection |
DOAJ |
language |
Russian |
format |
Article |
sources |
DOAJ |
author |
Anastasiya Borisovna Sviridenko |
title |
Direct multiplicative methods for sparse matrices. Newton methods |
publisher |
Institute of Computer Science |
series |
Компьютерные исследования и моделирование |
issn |
2076-7633, 2077-6853 |
publishDate |
2017-10-01 |
description |
We consider a numerically stable direct multiplicative algorithm for solving systems of linear equations that takes into account the sparsity of matrices stored in packed form. The advantage of the algorithm is its ability to minimize fill-in of the main rows of the multipliers without losing accuracy of the results. Moreover, the position of the next processed row of the matrix is never changed, which allows the use of static data storage formats. Solving a linear system by the direct multiplicative algorithm is, like solving via $LU$-decomposition, simply another implementation scheme of Gaussian elimination.
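As a rough illustration of this equivalence, Gaussian elimination can be written as a sequence of elementary multiplier matrices applied to the system. This is a generic dense sketch, not the paper's packed-form sparse algorithm; the function name and structure are illustrative assumptions:

```python
import numpy as np

def solve_by_multipliers(A, b):
    """Solve A x = b by Gaussian elimination expressed as a product of
    elementary multiplier matrices M_k (dense illustration only; the
    paper's algorithm works on sparse matrices in packed form)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    for k in range(n):
        # Elementary multiplier matrix M_k zeroes column k below the pivot.
        M = np.eye(n)
        M[k + 1:, k] = -A[k + 1:, k] / A[k, k]
        A = M @ A  # after the loop, A is upper triangular (the U factor)
        b = M @ b
    # Back substitution on the resulting triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

The product of the multipliers is exactly the inverse of the $L$ factor in $LU$-decomposition, which is why the two schemes are interchangeable implementations of the same elimination.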
In this paper, this algorithm is the basis for solving the following problems:
Problem 1. Determining the descent direction in Newton-type methods for unconstrained optimization by integrating one of the known techniques for constructing an essentially positive definite matrix. This approach makes it possible to weaken or remove the specific additional difficulties caused by the need to solve large systems of equations with sparse matrices stored in packed form.
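One well-known technique of this kind is to shift the Hessian by a multiple of the identity until a Cholesky factorization succeeds. The sketch below assumes a dense Hessian and is not the paper's multiplicative scheme; the function name and shift strategy are illustrative:

```python
import numpy as np

def descent_direction(H, g, beta=1e-3):
    """Newton descent direction with a simple positive definite
    modification: shift H by tau * I until Cholesky succeeds, then
    solve (H + tau I) d = -g (illustrative dense sketch only)."""
    n = H.shape[0]
    min_diag = np.min(np.diag(H))
    tau = 0.0 if min_diag > 0 else beta - min_diag
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2 * tau, beta)
    # Solve the two triangular systems L y = -g, L^T d = y.
    y = np.linalg.solve(L, -g)
    return np.linalg.solve(L.T, y)
```

Because the modified matrix is positive definite, the returned direction $d$ always satisfies $g^\top d < 0$, i.e. it is a genuine descent direction even when the true Hessian is indefinite.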
Problem 2. Construction of a new mathematical formulation of the quadratic programming problem and a new form of specifying the necessary and sufficient optimality conditions. These are quite simple and can be used to construct mathematical programming methods, for example, to find the minimum of a quadratic function on a polyhedral set of constraints by solving systems of linear equations whose dimension does not exceed the number of variables of the objective function.
Problem 3. Construction of a continuous analogue of the problem of minimizing a real quadratic polynomial in Boolean variables, and a new form of defining the necessary and sufficient optimality conditions, for the development of methods that solve these problems in polynomial time. As a result, the original problem is reduced to finding the minimum distance between the origin and a vertex of a convex polyhedron that is a perturbation of the $n$-dimensional cube and is described by a system of double-sided linear inequalities with an upper triangular coefficient matrix having units on the main diagonal. Only two faces need to be investigated, one or both of which contain the vertices closest to the origin. To compute them, it suffices to solve $4n - 4$ systems of linear equations and to choose among them all the nearest equidistant vertices in polynomial time. The problem of minimizing a quadratic polynomial is $NP$-hard, since the $NP$-hard vertex cover problem for an arbitrary graph reduces to it. From this it would follow that $P = NP$, a conclusion resting on a development that goes beyond the limits of integer optimization methods. |
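For contrast with the continuous reformulation, the original discrete problem can be stated directly as a brute-force search over the Boolean cube. This sketch only defines the problem being reduced; it is exponential in $n$ and does not implement the paper's polynomial-time method (the function name is illustrative):

```python
from itertools import product
import numpy as np

def min_quadratic_boolean(Q, c):
    """Brute-force minimum of x^T Q x + c^T x over x in {0, 1}^n.
    Exponential-time statement of the discrete problem that the paper
    reduces to a continuous nearest-vertex search."""
    n = len(c)
    best_x, best_v = None, np.inf
    for bits in product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        v = x @ Q @ x + c @ x
        if v < best_v:
            best_v, best_x = v, x
    return best_x, best_v
```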
topic |
$NP$-hard problem; sparse matrices; Newton methods; direct multiplicative algorithm; descent direction; new mathematical formulation; necessary and sufficient optimality conditions; minimization of pseudo-Boolean functions; pseudo-Boolean programming; linear programming |
url |
http://crm.ics.org.ru/uploads/crmissues/crm_2017_5/2017_05_09_01.pdf |