Summary: | In many sequential decision problems, uncertainty is revealed over time and decisions must be made before it is fully resolved. This fundamental problem arises in many applications, such as facility location, resource allocation, and capacity planning under demand uncertainty. Robust optimization is an approach to modeling uncertainty in which we optimize over the worst-case realization of parameters within an uncertainty set. While computing an optimal solution to a dynamic robust optimization problem is usually intractable, affine policies (or linear decision rules) are widely used as an approximate solution approach. However, there is a stark contrast between their good observed empirical performance and their poor worst-case theoretical bounds. In the first part of this thesis, we address this gap between theory and practice. In particular, in Chapter 2 we introduce a probabilistic approach to analyze the performance of affine policies on randomly generated instances and show that they are near-optimal with high probability under reasonable assumptions. In Chapter 3, we study these policies under important models of uncertainty, such as budget-of-uncertainty sets and intersections of budgeted sets, and show that affine policies give an optimal approximation matching the hardness of approximation. In the second part of the thesis, building on our analysis of affine policies, we design new tractable policies for dynamic robust optimization. In particular, in Chapter 4 we present a tractable framework for designing piecewise affine policies that can be computed efficiently and that improve over affine policies on many instances. In Chapter 5, we introduce extended affine policies and threshold policies and show that their performance guarantees are significantly better than those of previous policies. Finally, in Chapter 6, we study piecewise static policies and their limitations for solving certain classes of dynamic robust optimization problems.
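As a brief illustration of the setting described above (the notation here is generic and standard in this literature, not taken from the thesis itself), the two-stage adjustable robust problem and its affine restriction can be sketched as:

```latex
% Two-stage adjustable robust problem over an uncertainty set U (illustrative notation):
% the first-stage decision x is made before h is revealed, the recourse y(h) after.
\[
z_{\mathrm{AR}} \;=\; \min_{x \ge 0}\; c^{\top} x \;+\; \max_{h \in \mathcal{U}}\;
\min_{\substack{y(h) \ge 0 \\ A x + B y(h) \ge h}} d^{\top} y(h)
\]
% Affine policies (linear decision rules) restrict the recourse to depend
% affinely on the realized uncertainty:
\[
y(h) \;=\; P h + q, \qquad P \in \mathbb{R}^{n \times m},\; q \in \mathbb{R}^{n},
\]
% which turns the intractable search over arbitrary functions y(\cdot) into a
% tractable optimization over the finite-dimensional variables x, P, and q.
```

The piecewise affine, extended affine, and threshold policies studied in the later chapters can be viewed as richer parameterizations of the recourse function $y(\cdot)$ than the single affine map above.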