Stochastic Methods in Optimization and Machine Learning
Main Author: | Li, Fengpei |
Language: | English |
Published: | 2021 |
Subjects: | Operations research; Artificial intelligence; Machine learning; Stochastic processes--Computer programs |
Online Access: | https://doi.org/10.7916/d8-ngq8-9s10 |
id |
ndltd-columbia.edu-oai-academiccommons.columbia.edu-10.7916-d8-ngq8-9s10 |
record_format |
oai_dc |
collection |
NDLTD |
language |
English |
sources |
NDLTD |
topic |
Operations research; Artificial intelligence; Machine learning; Stochastic processes--Computer programs |
description |
Stochastic methods are indispensable to the modeling, analysis, and design of complex systems involving randomness. In this thesis, we show how simulation techniques and simulation-based computational methods can be applied across a wide spectrum of applied domains, including engineering, optimization, and machine learning. Moreover, we show how analytical tools from statistics and computer science, including empirical processes, probably approximately correct (PAC) learning, and hypothesis testing, can be used in these contexts to provide new theoretical results. In particular, we apply these techniques and show how our results create new methodologies or improve upon the existing state of the art in three areas: decision making under uncertainty (chance-constrained programming, stochastic programming), machine learning (covariate shift, reinforcement learning), and estimation problems arising from optimization (gradient estimation for composite functions) or stochastic systems (solutions of stochastic PDEs).
The work in these three areas is organized into six chapters, two per area. In Chapter 2, we study how to obtain feasible solutions for chance-constrained programs using a data-driven, sampling-based scenario optimization (SO) approach. When the data size is insufficient to statistically support a desired feasibility guarantee, we explore how to leverage parametric information, distributionally robust optimization, and Monte Carlo simulation to obtain feasible solutions in such small-sample situations.
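For background only (this is the generic scenario optimization idea, not the specific procedure of Chapter 2): a chance constraint such as P(aᵀx ≤ b) ≥ 1 − ε is replaced by its sampled counterpart, enforced on N drawn scenarios. A minimal sketch in which all problem data, the distribution of a, and the sample size are hypothetical:

```python
# Toy chance-constrained LP:  min c^T x  s.t.  P(a^T x <= b) >= 1 - eps, with a random.
# The scenario program enforces a_i^T x <= b for each of N sampled scenarios a_i.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([-1.0, -1.0])                 # objective: maximize x1 + x2
N, b = 200, 1.0                            # number of scenarios, right-hand side

# Draw N realizations of the random constraint vector a (distribution assumed here).
A_scenarios = rng.normal(loc=[0.5, 0.5], scale=0.1, size=(N, 2))

# Solve the scenario program: every sampled constraint must hold, with x >= 0.
res = linprog(c, A_ub=A_scenarios, b_ub=np.full(N, b), bounds=[(0, None), (0, None)])
print("scenario solution:", res.x)
```

Feasibility guarantees for the chance constraint improve as N grows; the small-sample regime, where N is too small to certify the desired risk level, is exactly the setting the chapter addresses.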
In Chapter 3, we investigate the feasibility of sample average approximation (SAA) for general stochastic optimization problems, including two-stage stochastic programs without relatively complete recourse. We utilize results on the Vapnik-Chervonenkis (VC) dimension and probably approximately correct learning to provide a general framework.
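As a generic illustration of SAA (independent of the chapter's feasibility analysis), the expectation E[F(x, ξ)] is replaced by an empirical average over i.i.d. samples of ξ and the resulting deterministic problem is solved. A sketch on a toy newsvendor instance with made-up parameters:

```python
# SAA sketch: minimize the empirical average of F(x, xi) over i.i.d. draws of xi.
# Toy newsvendor cost F(x, xi) = cost*x - price*min(x, xi); all parameters illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
demand = rng.exponential(scale=10.0, size=5000)    # i.i.d. samples of xi

def saa_objective(x, cost=1.0, price=3.0):
    # Empirical average replacing E[F(x, xi)].
    return np.mean(cost * x - price * np.minimum(x, demand))

res = minimize_scalar(saa_objective, bounds=(0.0, 100.0), method="bounded")
print("SAA solution:", res.x, "SAA optimal value:", res.fun)
```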
In Chapter 4, we design a robust importance re-weighting method for estimation and learning problems in the covariate shift setting that improves the best-known rate. In Chapter 5, we develop a model-free reinforcement learning approach to solving constrained Markov decision processes (MDPs). We propose a two-stage procedure that generates policies with simultaneous guarantees on near-optimality and feasibility.
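For context, the classical importance re-weighting estimator under covariate shift weights each training loss by the density ratio w(x) = p_test(x)/p_train(x); Chapter 4's method is a robust refinement of this idea. In the sketch below, both densities are assumed known Gaussians purely for illustration:

```python
# Covariate shift sketch: weighted least squares with weights w(x) = p_test(x)/p_train(x).
# Both densities are assumed known here, an illustration-only simplification.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x_train = rng.normal(0.0, 1.0, size=2000)            # covariates drawn from p_train
y_train = 2.0 * x_train + rng.normal(size=2000)      # labels from a fixed conditional

w = norm(1.0, 1.0).pdf(x_train) / norm(0.0, 1.0).pdf(x_train)   # density-ratio weights

# Closed-form weighted least-squares slope: argmin_theta sum_i w_i (y_i - theta*x_i)^2.
theta = np.sum(w * x_train * y_train) / np.sum(w * x_train ** 2)
print("re-weighted slope estimate:", theta)
```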
In Chapter 6, we use multilevel Monte Carlo to construct unbiased estimators for expectations of random parabolic PDEs. We obtain estimators with finite variance and finite expected computational cost that bypass the curse of dimensionality. In Chapter 7, we introduce unbiased gradient simulation algorithms for solving stochastic composition optimization (SCO) problems. We show that the unbiased gradients generated by our algorithms have finite variance and finite expected computational cost. |
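As background for Chapter 6, the standard multilevel Monte Carlo construction rests on the telescoping identity E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], estimated with level-wise coupled samples. The sketch below uses coupled Euler discretizations of a toy SDE in place of PDE solves, with illustrative parameters; the unbiased estimators in the thesis additionally randomize over levels, which is not shown here.

```python
# MLMC sketch: estimate E[X_T] for dX = mu*X dt + sigma*X dW via the telescoping sum
# E[P_L] = E[P_0] + sum_{l>=1} E[P_l - P_{l-1}], with fine/coarse paths coupled
# through shared Brownian increments. All model parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def coupled_euler(level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Return samples of (P_level, P_{level-1}) driven by the same Brownian increments."""
    n_fine = 2 ** level
    dt = T / n_fine
    dw = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_fine))
    x_fine = np.full(n_paths, x0)
    for k in range(n_fine):
        x_fine = x_fine + mu * x_fine * dt + sigma * x_fine * dw[:, k]
    if level == 0:
        return x_fine, np.zeros(n_paths)          # P_{-1} is taken to be 0
    x_coarse = np.full(n_paths, x0)
    for k in range(n_fine // 2):
        dw_c = dw[:, 2 * k] + dw[:, 2 * k + 1]    # coarse increment = sum of fine ones
        x_coarse = x_coarse + mu * x_coarse * (2 * dt) + sigma * x_coarse * dw_c
    return x_fine, x_coarse

L, estimate = 5, 0.0
for level in range(L + 1):
    fine, coarse = coupled_euler(level, n_paths=40000 // 2 ** level)
    estimate += np.mean(fine - coarse)
print("MLMC estimate of E[X_T]:", estimate)       # exact value x0*exp(mu*T) ~ 1.0513
```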
author |
Li, Fengpei |
title |
Stochastic Methods in Optimization and Machine Learning |
publishDate |
2021 |
url |
https://doi.org/10.7916/d8-ngq8-9s10 |