GPU Computing in Financial Engineering
Author: Xu, Linlin
Institution: Florida State University, Department of Mathematics
Format: 1 online resource (83 pages), application/pdf
Language: English
Subjects: Mathematics
Online Access: http://purl.flvc.org/fsu/fd/FSU_migr_etd-9526
Thesis advisor: Giray Ökten (Professor Directing Dissertation)
Committee: Debajyoti Sinha (University Representative); Steven F. Bellenot, Kyle A. Gallivan, Alec N. Kercheval (Committee Members)
Degree: Doctor of Philosophy, Department of Mathematics, Summer Semester 2015 (defended July 8, 2015)
Keywords: Automatic Differentiation, GPU parallel computing, LIBOR market model, Monte Carlo, PDE, Random number generator

Abstract:

GPU computing has become popular in computational finance, and many financial institutions are moving their CPU-based applications to the GPU platform. We explore efficient GPU implementations of two central financial problems: pricing, and computing sensitivities (Greeks). Since most Monte Carlo algorithms are embarrassingly parallel, Monte Carlo has become a focal point in GPU computing: GPU speed-up examples reported in the literature often involve Monte Carlo algorithms, and commercial software tools exist that help migrate Monte Carlo financial pricing models to the GPU. We present a survey of Monte Carlo and randomized quasi-Monte Carlo methods, and discuss the (quasi) Monte Carlo sequences available in NVIDIA's CURAND library. We discuss the features of the GPU architecture relevant to developing efficient (quasi) Monte Carlo methods. We introduce a recent randomized quasi-Monte Carlo method and compare it with existing GPU implementations when pricing caplets in the LIBOR market model and mortgage-backed securities. We then develop a cache-aware implementation of a 3D parabolic PDE solver on the GPU: we apply the well-known Craig-Sneyd scheme, derive the corresponding discretization, and, after discussing the GPU memory hierarchy, suggest a data structure suited to the GPU's caching system. We compare the performance of the PDE solver on CPU and GPU. Finally, we consider sensitivity analysis for financial problems via Monte Carlo and PDE methods. We review three commonly used methods and point out their advantages and disadvantages. We present a survey of automatic differentiation (AD), show the memory-consumption challenges that arise when AD is applied to financial problems, and discuss two optimization techniques that reduce the memory footprint significantly. We conduct the sensitivity analysis for the LIBOR market model and suggest an optimization for its AD implementation on GPU. We also apply AD to a 3D parabolic PDE and use the GPU to reduce the execution time.

Rights: This Item is protected by copyright and/or related rights. The copyright in theses and dissertations completed at Florida State University is held by the students who author them. For uses not permitted by the applicable copyright legislation, obtain permission from the rights-holder(s).
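The abstract's first theme, embarrassingly parallel (randomized) quasi-Monte Carlo pricing, can be illustrated with a small CPU sketch. The example below is not from the dissertation: it prices a European call under Black-Scholes using a randomly shifted van der Corput sequence (a Cranley-Patterson rotation), standing in for the scrambled low-discrepancy generators CURAND provides on the GPU, and all parameter values are illustrative.

```python
import math
import random
from statistics import NormalDist

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence (a 1-D QMC sequence)."""
    pts = []
    for i in range(1, n + 1):
        q, denom, x = i, base, 0.0
        while q:
            q, r = divmod(q, base)
            x += r / denom
            denom *= base
        pts.append(x)
    return pts

def rqmc_call_price(S0, K, r, sigma, T, n=4096, shifts=16, seed=0):
    """Randomized QMC price of a European call: average over random shifts
    of one low-discrepancy point set (Cranley-Patterson rotation)."""
    rng = random.Random(seed)
    pts = van_der_corput(n)
    inv = NormalDist().inv_cdf
    drift = (r - 0.5 * sigma * sigma) * T
    vol = sigma * math.sqrt(T)
    disc = math.exp(-r * T)
    estimates = []
    for _ in range(shifts):
        u = rng.random()                      # one uniform shift per replication
        total = 0.0
        for p in pts:
            v = (p + u) % 1.0
            v = min(max(v, 1e-12), 1.0 - 1e-12)   # keep inv_cdf in its open domain
            z = inv(v)                        # shifted point -> standard normal
            total += max(S0 * math.exp(drift + vol * z) - K, 0.0)
        estimates.append(disc * total / n)
    return sum(estimates) / shifts
```

On a GPU, each shifted replication (or each point) would map naturally to a thread, which is the embarrassing parallelism the abstract refers to; the random shifts also give the estimator an empirical error bar that plain QMC lacks.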
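The sensitivity-analysis part of the abstract turns on reverse-mode automatic differentiation, whose tape of intermediate values is exactly where the memory pressure it mentions comes from. Below is a toy reverse-mode AD, not the dissertation's implementation, computing the pathwise delta of a discounted call payoff on a single in-the-money path; every `Var` node the computation creates stays alive until the backward sweep, which is why long simulation paths inflate memory and why checkpointing-style optimizations matter.

```python
import math

class Var:
    """One node of a reverse-mode AD tape: value, parent links, adjoint."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents          # tuples (parent_node, local_derivative)
        self.grad = 0.0

    def _lift(self, other):
        return other if isinstance(other, Var) else Var(other)

    def __add__(self, other):
        other = self._lift(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))
    __rmul__ = __mul__

    def backward(self):
        """Reverse sweep: push this node's adjoint back through the tape."""
        order, seen = [], set()
        def topo(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    topo(p)
                order.append(v)
        topo(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, local in v.parents:
                p.grad += local * v.grad

def pathwise_delta_one_path(S0_val, K, r, sigma, T, z):
    """Pathwise delta of e^{-rT} max(S_T - K, 0) for one simulated normal z."""
    S0 = Var(S0_val)
    growth = math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    ST = S0 * growth
    if ST.value <= K:                   # out of the money: derivative is zero
        return 0.0
    payoff = (ST + (-K)) * math.exp(-r * T)
    payoff.backward()
    return S0.grad
```

For S0 = K = 100, r = 0.05, sigma = 0.2, T = 1, z = 0.5 the path finishes in the money and the AD delta matches the hand derivative e^{-rT} * dS_T/dS_0 = e^{0.08}. Averaging this per-path delta over many paths gives the Monte Carlo pathwise estimate of the Greek; a production GPU implementation would keep the tape per-thread and compressed, along the lines the abstract describes.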