Non-linear Latent Factor Models for Revealing Structure in High-dimensional Data
Real-world data is not random: the variability in the data sets that arise in computer vision, signal processing and other areas is often highly constrained and governed by a number of degrees of freedom that is much smaller than the superficial dimensionality of the data. Unsupervised learning methods can be used to automatically discover the "true", underlying structure in such data sets and are therefore a central component in many systems that deal with high-dimensional data.

In this thesis we develop several new approaches to modelling the low-dimensional structure in data. We introduce a new non-parametric framework for latent variable modelling that, in contrast to previous methods, generalizes learned embeddings beyond the training data and its latent representatives. We show that the computational complexity for learning and applying the model is much smaller than that of existing methods, and we illustrate its applicability on several problems.

We also show how we can introduce supervision signals into latent variable models using conditioning. Supervision signals make it possible to attach "meaning" to the axes of a latent representation and to untangle the factors that contribute to the variability in the data. We develop a model that uses conditional latent variables to extract rich distributed representations of image transformations, and we describe a new model for learning transformation features in structured supervised learning problems.
Main Author: | Memisevic, Roland |
---|---|
Other Authors: | Hinton, Geoffrey |
Format: | Others |
Language: | en_ca |
Published: | 2008 |
Subjects: | Machine Learning |
Online Access: | http://hdl.handle.net/1807/11118 |
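
The abstract mentions a model that uses conditional latent variables to extract distributed representations of image transformations. As a rough, hedged illustration of that idea only (not the thesis's actual formulation: the class name, the unfactored three-way weights, and all shapes and hyper-parameters below are assumptions made for this sketch), the following Python snippet trains a small gated, conditional RBM with one step of contrastive divergence on synthetic image pairs related by a pixel shift. Conditioning the hidden units on the input image x is what lets the latent code describe the relation between x and y rather than the content of either image on its own.

```python
# Illustrative sketch only: a toy gated (conditional) RBM in which binary latent
# units h model the transformation mapping an input image x to an output image y
# through three-way weights W[i, j, k]. Names, shapes, and hyper-parameters are
# assumptions for this example, not the thesis's exact model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GatedRBM:
    def __init__(self, n_in, n_out, n_hid, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_in, n_out, n_hid))  # three-way weights
        self.b = np.zeros(n_out)   # biases on the output image
        self.c = np.zeros(n_hid)   # biases on the hidden units
        self.lr = lr

    def hidden_probs(self, x, y):
        # p(h_k = 1 | x, y): hiddens are conditioned on the whole image pair
        return sigmoid(np.einsum('bi,bj,ijk->bk', x, y, self.W) + self.c)

    def output_probs(self, x, h):
        # p(y_j = 1 | x, h): reconstruct the output image from x and the latent code
        return sigmoid(np.einsum('bi,bk,ijk->bj', x, h, self.W) + self.b)

    def cd1_update(self, x, y):
        # One contrastive-divergence step on a batch of (x, y) pairs.
        h_pos = self.hidden_probs(x, y)
        h_smp = (rng.random(h_pos.shape) < h_pos).astype(float)
        y_neg = self.output_probs(x, h_smp)
        h_neg = self.hidden_probs(x, y_neg)
        pos = np.einsum('bi,bj,bk->ijk', x, y, h_pos)
        neg = np.einsum('bi,bj,bk->ijk', x, y_neg, h_neg)
        n = x.shape[0]
        self.W += self.lr * (pos - neg) / n
        self.b += self.lr * (y - y_neg).mean(axis=0)
        self.c += self.lr * (h_pos - h_neg).mean(axis=0)
        return float(np.mean((y - y_neg) ** 2))  # rough reconstruction monitor

# Toy data: random binary "images" and the same images shifted by one pixel.
n_pix, n_hid, batch = 16, 32, 100
model = GatedRBM(n_pix, n_pix, n_hid)
for step in range(200):
    x = (rng.random((batch, n_pix)) < 0.3).astype(float)
    y = np.roll(x, 1, axis=1)  # the "transformation" the latent code should capture
    err = model.cd1_update(x, y)
print('final reconstruction error:', round(err, 4))
```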