Convex Optimization for Assignment and Generalized Linear Regression Problems
Main Author:
Format: Doctoral Thesis
Language: English
Published: KTH, Signalbehandling, 2014
Online Access: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150338 and http://nbn-resolving.de/urn:isbn:978-91--7595-236-9
Summary: This thesis considers optimization techniques with applications in assignment and generalized linear regression problems.

The first part of the thesis investigates the worst-case robust counterparts of combinatorial optimization problems with least squares (LS) cost functions, where the uncertainty lies in the linear transformation of the design variables. We consider the case of ellipsoidal uncertainty and prove that the worst-case robust LS optimization problem, although NP-hard, is still amenable to convex relaxation based on semidefinite optimization. We motivate the proposed relaxation using Lagrangian duality, and illustrate that the tightness of the Lagrange bidual relaxation depends strongly on the description of the feasible region of the worst-case robust LS problem. The results arising from this analysis are applicable to a broad range of assignment problems.

The second part of the thesis considers combinatorial optimization problems arising specifically in the context of conference program formation. We start by arguing that both papers and reviewers can be represented as feature vectors in a suitable keyword space, which enables a rigorous mathematical formulation of the conference formation process. The first problem, paper-to-session assignment, is formulated as a capacitated k-means clustering problem. We formally prove that it is NP-hard and propose a variety of approximate solutions, ranging from alternating optimization to semidefinite relaxation. Suitable convex relaxation methods are proposed for the paper-to-reviewer assignment problem as well. Our methods are tested on real conference data for both problems and show very promising results.

In a related but distinct research direction, the third part of the thesis focuses on preference measurement applications. Reviewer profiling, i.e., determining a reviewer's expertise (and thus identifying the associated feature vector for the reviewer) on the basis of their past and present review preferences, or 'bids', is an excellent example of preference measurement. We argue that the need for robust preference measurement is apparent in modern applications. Using conjoint analysis (CA) as a basis, we propose a new statistical model for choice-based preference measurement, a part of preference analysis where data are expressed only in the form of binary choices. The model uses deterministic auxiliary variables to account for outliers and to detect the salient features that influence decisions. Contributions include conditions for statistical identifiability, derivation of the pertinent Cramér-Rao Lower Bound (CRLB), and maximum-likelihood (ML) consistency conditions for the proposed nonlinear model. The proposed ML approach lends itself naturally to ℓ1-type convex relaxations that are well suited for distributed implementation based on the alternating direction method of multipliers (ADMM). A particular decomposition is advocated which bypasses the apparent need for communicating the outlier variables, thus maintaining scalability.

In the last part of the thesis we argue that this modeling framework has merit beyond preference measurement, and explain how related ideas can be placed in the context of generalized linear regression models, drawing links between ℓ1-methods, stochastic convex optimization, and the field of robust statistics.
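For concreteness, a generic instance of the kind of worst-case robust LS problem described in the first part can be written as follows. The parameterization of the ellipsoidal uncertainty and the constraint set are illustrative assumptions, not necessarily the exact model used in the thesis:

```latex
% Illustrative worst-case robust LS problem: ellipsoidal uncertainty in the
% linear transformation, with x restricted to a combinatorial (assignment) set.
\begin{equation*}
  \min_{x \in \mathcal{X}} \;
  \max_{\|\delta\|_2 \le 1} \;
  \Bigl\| \Bigl( A_0 + \sum_{i=1}^{p} \delta_i A_i \Bigr) x - b \Bigr\|_2^2,
  \qquad \mathcal{X} \subseteq \{0,1\}^n .
\end{equation*}
```

Lifting the decision variable to $X = xx^{\mathsf T}$ and relaxing the rank-one constraint to $X \succeq xx^{\mathsf T}$ is the standard route to a semidefinite relaxation of such problems; the thesis motivates its particular relaxation through Lagrangian (bidual) duality, whose tightness depends on how the feasible region is described.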
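The paper-to-session assignment problem in the second part is cast as capacitated k-means, for which alternating optimization is one of the proposed approximate approaches. The sketch below is a minimal alternating-optimization heuristic under the assumption of a single uniform capacity per session; the function name, capacity model, and assignment subroutine are illustrative choices, not the thesis' implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def capacitated_kmeans(X, k, capacity, n_iters=50, seed=0):
    """Alternating-optimization heuristic for capacitated k-means.

    X        : (n, d) feature matrix (e.g., paper keyword vectors)
    k        : number of clusters (sessions)
    capacity : maximum papers per session; assumes k * capacity >= n
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    centroids = X[rng.choice(n, size=k, replace=False)].copy()
    labels = np.full(n, -1)
    for _ in range(n_iters):
        # Assignment step: replicate each centroid 'capacity' times and solve
        # a balanced assignment, so no session exceeds its capacity.
        cost = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, k)
        cost_rep = np.repeat(cost, capacity, axis=1)                   # (n, k*capacity)
        rows, cols = linear_sum_assignment(cost_rep)
        new_labels = cols[np.argsort(rows)] // capacity
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Update step: recompute each centroid as the mean of its members.
        for j in range(k):
            members = X[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return labels, centroids
```

Like plain k-means, this heuristic only guarantees a locally good solution; the semidefinite relaxation mentioned in the summary is the route to stronger (global) bounds.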
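The third part's choice-based model with deterministic outlier variables admits ℓ1-type convex relaxations. The following is a minimal centralized sketch of that idea using a logistic choice likelihood; the variable names, penalty weights, and the feature-sparsity term on w are assumptions made for illustration, and the thesis' actual model and its ADMM-based distributed decomposition differ in detail:

```python
import numpy as np
import cvxpy as cp

def robust_choice_fit(X, y, lam_w=0.1, lam_o=1.0):
    """Choice-based preference fit with sparse outlier variables.

    X : (T, d) matrix of choice-profile differences (one row per binary choice)
    y : (T,) observed choices in {-1, +1}
    """
    T, d = X.shape
    w = cp.Variable(d)   # partworth / feature-relevance vector
    o = cp.Variable(T)   # deterministic auxiliary outlier variables
    margins = cp.multiply(y, X @ w + o)
    loss = cp.sum(cp.logistic(-margins))            # negative log-likelihood
    # l1 penalties promote sparse w (salient features) and sparse o (outliers).
    objective = loss + lam_w * cp.norm1(w) + lam_o * cp.norm1(o)
    cp.Problem(cp.Minimize(objective)).solve()
    return w.value, o.value
```

In a distributed setting, each agent would hold a subset of the rows of X and y; the decomposition advocated in the thesis keeps the outlier variables local, so only the shared parameter vector needs to be exchanged across ADMM iterations.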