Multi-Task Learning via Structured Regularization: Formulations, Algorithms, and Applications

abstract: Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits intrinsic task relatedness, through which informative domain knowledge from each task can be shared across tasks, thereby facilitating the learning of each individual task. Sharing domain knowledge among tasks is particularly desirable when there are many related tasks but only limited training data available for each. Modeling the relationship among multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches that assume the tasks are intrinsically related via a shared low-dimensional feature space. The proposed approaches address different scenarios and settings; each is formulated as a mathematical optimization problem that minimizes an empirical loss regularized by a particular structure. For every proposed formulation, I develop an associated optimization algorithm that finds the globally optimal solution efficiently. For certain approaches, I also conduct theoretical analysis, deriving conditions under which the globally optimal solution can be recovered as well as performance bounds. To demonstrate practical performance, I apply the proposed approaches to real-world applications: (1) automated annotation of Drosophila gene expression pattern images, and (2) categorization of Yahoo web pages. The experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
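To make the structured-regularization idea in the abstract concrete, the sketch below implements one common instance: multi-task least squares with trace-norm regularization, which encourages the stacked task-parameter matrix to be low rank, i.e. the tasks share a low-dimensional feature subspace. This is a generic illustration under assumed choices (squared loss, proximal gradient descent, the function names `svt` and `mtl_trace_norm`), not the dissertation's exact formulations or algorithms.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of the trace norm.
    Shrinking singular values toward zero pushes W toward low rank, i.e.
    a shared low-dimensional feature subspace across tasks."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def mtl_trace_norm(Xs, ys, lam=0.1, step=None, iters=200):
    """Proximal gradient descent for
        min_W  sum_t ||X_t w_t - y_t||^2  +  lam * ||W||_*
    where column t of W is the parameter vector of task t."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    if step is None:
        # Conservative step size: 1 / (largest per-task Lipschitz constant),
        # where grad of ||X w - y||^2 is 2 * sigma_max(X)^2 - Lipschitz.
        step = 1.0 / max(2.0 * np.linalg.norm(X, 2) ** 2 for X in Xs)
    for _ in range(iters):
        # Gradient of the smooth (loss) part, one column per task.
        G = np.column_stack(
            [2.0 * Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) for t in range(T)]
        )
        # Gradient step on the loss, proximal step on the trace norm.
        W = svt(W - step * G, step * lam)
    return W
```

With larger `lam`, the recovered `W` has lower rank and the tasks are coupled more strongly; with `lam = 0` the problem decouples into independent per-task least-squares fits.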


Bibliographic Details
Other Authors: Chen, Jianhui (Author)
Format: Doctoral Thesis
Language:English
Published: 2011
Subjects: Computer Science; Machine Learning; Multi-Task Learning; Structured Regularization
Online Access:http://hdl.handle.net/2286/R.I.9391
Dissertation/Thesis
Author: Chen, Jianhui
Advisor: Ye, Jieping
Committee members: Kumar, Sudhir; Liu, Huan; Xue, Guoliang
Publisher: Arizona State University
Subjects: Computer Science; Machine Learning; Multi-Task Learning; Structured Regularization
Language: English
Extent: 143 pages
Degree: Ph.D. Computer Science, 2011 (Doctoral Dissertation)
Online Access: http://hdl.handle.net/2286/R.I.9391
Rights: http://rightsstatements.org/vocab/InC/1.0/ (All Rights Reserved), 2011
collection NDLTD
sources NDLTD