Optimizing Optimization: Scalable Convex Programming with Proximal Operators
Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work among multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains), large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids, and source separation for whole-home energy disaggregation.
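The abstract's central idea — expressing a problem as a sum of functions with cheap proximal operators and solving it by operator splitting — can be sketched generically. The following is a minimal NumPy illustration, not the thesis's solver: it assumes the lasso as the example problem and uses a plain ADMM split into two prox-friendly terms.

```python
# Minimal sketch (not the thesis's implementation) of the prox-sum idea:
# write the objective as f(x) + g(z) with x = z, where each term has a
# cheap proximal operator, then alternate prox evaluations (ADMM).
import numpy as np

def prox_l1(v, t):
    """Prox of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sq_loss(v, t, A, b):
    """Prox of t*(1/2)||A x - b||^2: solve (I + t A^T A) x = v + t A^T b."""
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + t * (A.T @ A), v + t * (A.T @ b))

# Example problem (assumed for illustration): the lasso,
#   minimize (1/2)||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
lam, rho = 0.1, 1.0

x, z, u = np.zeros(10), np.zeros(10), np.zeros(10)
for _ in range(200):
    x = prox_sq_loss(z - u, 1.0 / rho, A, b)  # prox of the smooth term
    z = prox_l1(x + u, lam / rho)             # prox of the l1 term
    u = u + x - z                             # scaled dual update
# At convergence x ≈ z, an approximate lasso solution.
```

Note that each prox evaluation minimizes the original term plus an added quadratic penalty — exactly the "specialized algorithm plus additional ℓ2-regularization" interface the abstract describes.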
Main Author: | Wytock, Matt |
---|---|
Format: | Others |
Published: | Research Showcase @ CMU, 2016 |
Subjects: | convex optimization; proximal operator; operator splitting; Newton method; sparsity; graphical model |
Online Access: | http://repository.cmu.edu/dissertations/785 http://repository.cmu.edu/cgi/viewcontent.cgi?article=1824&context=dissertations |
id: ndltd-cmu.edu-oai-repository.cmu.edu-dissertations-1824
record_format: oai_dc
collection: NDLTD
format: Others
sources: NDLTD
topic: convex optimization; proximal operator; operator splitting; Newton method; sparsity; graphical model
description: Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work among multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains), large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids, and source separation for whole-home energy disaggregation.
author: Wytock, Matt
title: Optimizing Optimization: Scalable Convex Programming with Proximal Operators
publisher: Research Showcase @ CMU
publishDate: 2016
url: http://repository.cmu.edu/dissertations/785
http://repository.cmu.edu/cgi/viewcontent.cgi?article=1824&context=dissertations
_version_: 1718416522442964992