Efficient Numerical Methods for High-Dimensional Approximation Problems

In the field of uncertainty quantification, the effects of parameter uncertainties on scientific simulations may be studied by integrating or approximating a quantity of interest as a function over the parameter space. If this is done numerically, using regular grids with a fixed resolution, the required computational work increases exponentially with respect to the number of uncertain parameters – a phenomenon known as the curse of dimensionality. We study two methods that can help break this curse: discrete least squares polynomial approximation and kernel-based approximation. For the former, we adaptively determine sparse polynomial bases and use evaluations at random, quasi-optimally distributed nodes; for the latter, we use evaluations on sparse grids, as introduced by Smolyak. To mitigate the additional cost of solving differential equations at each evaluation node, we extend multilevel methods to the approximation of response surfaces. For this purpose, we provide a general analysis that exhibits multilevel algorithms as special cases of an abstract version of Smolyak’s algorithm.

In financial mathematics, high-dimensional approximation problems occur in the pricing of derivatives with multiple underlying assets. The value function of American options can theoretically be determined backwards in time using the dynamic programming principle. Numerical implementations, however, face the curse of dimensionality because each asset corresponds to a dimension in the domain of the value function. Lack of regularity of the value function at the optimal exercise boundary further increases the computational complexity. As an alternative, we propose a novel method that determines an optimal exercise strategy as the solution of a stochastic optimization problem and subsequently computes the option value by simple Monte Carlo simulation. For this purpose, we represent the American option price as the supremum of the expected payoff over a set of randomized exercise strategies. Unlike the corresponding classical representation over subsets of Euclidean space, this relaxation gives rise to a well-behaved objective function that can be globally optimized using standard optimization routines.
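To make the scaling behind the curse of dimensionality concrete (the following is standard background, not a formula quoted from the thesis): a regular grid with N points per coordinate direction in d dimensions requires N^d evaluations, which grows exponentially in d. Smolyak's construction instead combines tensor products of one-dimensional difference operators over a downward-closed index set,

\[
\mathcal{A}_{\mathcal{I}} \;=\; \sum_{\boldsymbol{\ell}\in\mathcal{I}} \Delta^{(\ell_1)}\otimes\cdots\otimes\Delta^{(\ell_d)},
\qquad \Delta^{(\ell)} := U^{(\ell)} - U^{(\ell-1)},\quad U^{(0)} := 0,
\]

where the $U^{(\ell)}$ are one-dimensional approximation operators (quadrature, interpolation, or regression) and $\mathcal{I}$ is, for example, $\{\boldsymbol{\ell} : \ell_1+\dots+\ell_d \le L\}$. One natural way to recover the multilevel algorithms mentioned in the abstract within this template is to let one coordinate of the multi-index select the discretization level of the differential equation solver while the remaining coordinates control the parametric resolution.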
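The growth rates can also be checked numerically. The snippet below is a minimal sketch (the resolution 5, the level 4, and the helper names full_grid_size and total_degree_size are my own illustrative choices, not code from the thesis); it compares the size of a full tensor grid with the size of a total-degree multi-index set of the kind that underlies sparse polynomial bases and sparse grids.

from math import comb

def full_grid_size(points_per_dim: int, dim: int) -> int:
    # A regular grid with fixed resolution: exponential growth in the dimension.
    return points_per_dim ** dim

def total_degree_size(level: int, dim: int) -> int:
    # Number of multi-indices (k_1, ..., k_d) of nonnegative integers with
    # k_1 + ... + k_d <= level, a typical downward-closed sparse index set.
    return comb(level + dim, dim)

for dim in (2, 5, 10, 20):
    print(f"d={dim:2d}  full grid: {full_grid_size(5, dim):,d}  "
          f"total degree: {total_degree_size(4, dim):,d}")

With 5 points per direction, a 20-dimensional full grid already has roughly 9.5 * 10^13 points, while the level-4 total-degree index set contains only 10,626 multi-indices.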
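For the second part of the abstract, the following minimal sketch (my own illustration under simplifying assumptions: a Bermudan put in a Black-Scholes model, a logistic exercise probability with two parameters, and a brute-force parameter search, none of which is taken from the thesis) shows the overall pattern of maximizing the expected discounted payoff over a parametric family of randomized exercise strategies and then re-estimating the value of the best strategy by plain Monte Carlo on fresh paths.

import numpy as np

rng = np.random.default_rng(0)

# Model and contract parameters (all illustrative).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 10, 20_000
dt = T / n_steps
times = dt * np.arange(1, n_steps + 1)

def simulate_paths(n_paths, rng):
    # Geometric Brownian motion under the risk-neutral measure.
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(log_increments, axis=1))

def strategy_value(theta, paths, rng):
    # Randomized strategy: at each exercise date, exercise with probability
    # sigmoid(a * (K - S)/K + b); the decision is forced at maturity.
    a, b = theta
    alive = np.ones(paths.shape[0], dtype=bool)   # paths not yet exercised
    value = np.zeros(paths.shape[0])
    for k in range(n_steps):
        S = paths[:, k]
        p_exercise = 1.0 / (1.0 + np.exp(-(a * (K - S) / K + b)))
        if k == n_steps - 1:
            exercise = alive
        else:
            exercise = alive & (rng.random(len(S)) < p_exercise)
        value[exercise] = np.exp(-r * times[k]) * np.maximum(K - S[exercise], 0.0)
        alive &= ~exercise
    return value.mean()

paths = simulate_paths(n_paths, rng)

# Crude search over the two strategy parameters.
candidates = [(a, b) for a in np.linspace(0.0, 20.0, 11)
                     for b in np.linspace(-6.0, 2.0, 9)]
best_theta = max(candidates, key=lambda th: strategy_value(th, paths, rng))
print("best parameters:", best_theta)
print("lower-bound estimate:",
      strategy_value(best_theta, simulate_paths(n_paths, rng), rng))

Every exercise strategy, randomized or not, yields a lower bound on the option value; the point of the randomization, as the abstract notes, is that the resulting objective is a well-behaved function of the strategy parameters and can therefore be handled by standard optimization routines rather than the crude grid search used here.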


Bibliographic Details
Main Author: Wolfers, Sören
Other Authors: Tempone, Raul; Keyes, David E.; Mai, Paul Martin; Gobet, Emmanuel
Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Type: Dissertation
Language: English
Published: 2019-02-06
Subjects: UQ; mathematical finance; numerical analysis; approximation theory
DOI: 10.25781/KAUST-KGFH7
Online Access: Wolfers, S. (2019). Efficient Numerical Methods for High-Dimensional Approximation Problems. KAUST Research Repository. https://doi.org/10.25781/KAUST-KGFH7
http://hdl.handle.net/10754/630974