Unifying Low-Rank Models for Visual Learning

Many problems in signal processing, machine learning and computer vision can be solved by learning low-rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as an optimization over subspaces of fixed rank. These hard-rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as the product of two matrices of fixed rank. While factorization approaches lead to efficient and kernelizable optimization algorithms, they have been shown to be NP-hard in the presence of missing data. Inspired by recent work in compressed sensing, hard-rank constraints have been replaced by soft-rank constraints such as the nuclear norm regularizer. Unlike hard-rank approaches, soft-rank models are convex even in the presence of missing data: but how can convex optimization solve an NP-hard problem? This thesis addresses this question by analyzing the relationship between hard and soft rank constraints in the problem of unsupervised factorization with missing data. Moreover, we extend soft-rank models to weakly supervised and fully supervised learning problems in computer vision.

Our work makes four main contributions: (1) An analysis of a new unified low-rank model for matrix factorization with missing data. Our model subsumes soft and hard-rank approaches and merges advantages of previous formulations, such as efficient algorithms and kernelization; it also justifies the choice of algorithms and identifies regions that guarantee convergence to global minima. (2) A deterministic "rank continuation" strategy for the NP-hard problem of unsupervised factorization with missing data that is highly competitive with the state of the art and often achieves globally optimal solutions. In preliminary work, we show that this optimization strategy is applicable to other NP-hard problems that are typically relaxed to convex semidefinite programs (e.g., MAX-CUT, the quadratic assignment problem). (3) A new soft-rank, fully supervised robust regression model. This convex model is able to deal with noise, outliers and missing data in the input variables. (4) A new soft-rank model for weakly supervised image classification and localization. Unlike existing multiple-instance approaches to this problem, our model is convex.
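To make the contrast between the two formulations concrete, they can be written as follows (the notation is illustrative, not necessarily the thesis's own). With Y the data matrix, W the binary mask of observed entries and \odot the elementwise product, the hard-rank problem is

    \min_{U \in R^{m \times r}, V \in R^{n \times r}} \| W \odot (Y - U V^T) \|_F^2        (fixed rank r; NP-hard with missing data)

while the soft-rank relaxation is

    \min_X \tfrac{1}{2} \| W \odot (Y - X) \|_F^2 + \lambda \| X \|_*        (convex nuclear norm regularization)

Below is a minimal Python/NumPy sketch that solves the soft-rank problem with a standard proximal-gradient method (singular value thresholding). It is included only as an illustration of the soft-rank formulation under the assumptions above; it is not the algorithm developed in the thesis, and the parameter choices are arbitrary.

    import numpy as np

    def svt(A, tau):
        """Singular value thresholding: proximal operator of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft_rank_complete(Y, W, lam=0.5, step=1.0, iters=500):
        """Proximal gradient for min_X 0.5*||W o (Y - X)||_F^2 + lam*||X||_*.
        Y: data matrix (unobserved entries are ignored); W: binary mask, 1 = observed."""
        X = np.zeros_like(Y)
        for _ in range(iters):
            grad = W * (X - Y)                 # gradient of the smooth data-fit term
            X = svt(X - step * grad, step * lam)
        return X

    # Toy example: recover a rank-2 matrix with roughly 40% of its entries missing.
    rng = np.random.default_rng(0)
    Y_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
    W = (rng.random(Y_true.shape) < 0.6).astype(float)
    X_hat = soft_rank_complete(W * Y_true, W)
    print("relative error:", np.linalg.norm(X_hat - Y_true) / np.linalg.norm(Y_true))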


Bibliographic Details
Main Author: Cabral, Ricardo da Silveira
Format: Others
Published: Research Showcase @ CMU 2015
Subjects: Computer vision; Machine learning; Low-rank matrices; Convex optimization; Bilinear factorization; Augmented Lagrange multiplier method
Online Access:http://repository.cmu.edu/dissertations/506
http://repository.cmu.edu/cgi/viewcontent.cgi?article=1506&context=dissertations