Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation
Domain adaptation has received much attention as a major form of transfer learning. One issue that must be considered in domain adaptation is the gap between the source domain and the target domain. To improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation that combines source and target data with a new regularizer derived from generalization bounds...
Main Authors: | Juan Meng, Guyu Hu, Dong Li, Yanyan Zhang, Zhisong Pan |
---|---|
Format: | Article |
Language: | English |
Published: | Hindawi Limited, 2016-01-01 |
Series: | Computational Intelligence and Neuroscience |
Online Access: | http://dx.doi.org/10.1155/2016/7046563 |
id |
doaj-32cc065271114f038558830823b613ae |
---|---|
record_format |
Article |
spelling |
doaj-32cc065271114f038558830823b613ae (2020-11-24T21:33:07Z), eng. Hindawi Limited, Computational Intelligence and Neuroscience, ISSN 1687-5265 / 1687-5273, 2016-01-01, vol. 2016, article ID 7046563, doi:10.1155/2016/7046563. Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation. Juan Meng, Guyu Hu, Dong Li, Yanyan Zhang, Zhisong Pan (College of Command Information System, PLA University of Science and Technology, Nanjing 210007, China). http://dx.doi.org/10.1155/2016/7046563 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Juan Meng, Guyu Hu, Dong Li, Yanyan Zhang, Zhisong Pan |
author_sort |
Juan Meng |
title |
Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation |
publisher |
Hindawi Limited |
series |
Computational Intelligence and Neuroscience |
issn |
1687-5265, 1687-5273 |
publishDate |
2016-01-01 |
description |
Domain adaptation has received much attention as a major form of transfer learning. One issue that must be considered in domain adaptation is the gap between the source domain and the target domain. To improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation that combines source and target data with a new regularizer that takes generalization bounds into account. The regularization term uses the integral probability metric (IPM) as the distance between the source and target distributions, so the test error of a given predictor can be bounded through the generalization bound. Since computing the IPM involves only the two distributions, the regularization term is independent of the specific classifier. With popular learning models, the empirical risk minimization can be expressed as a general convex optimization problem and thus solved efficiently by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of the method. |
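The record itself reproduces no formulas; as a hedged sketch of the idea the abstract describes, the IPM distance between the source and target distributions, and the regularized empirical risk it enters, can be written as below. The witness class \(\mathcal{F}\), hypothesis class \(\mathcal{H}\), loss \(\ell\), and trade-off parameter \(\lambda\) are illustrative assumptions, not taken from the paper.

```latex
% Sketch only: the paper's exact regularizer and witness class are not
% reproduced in this record. The IPM between the source distribution P_S
% and the target distribution P_T over a function class F is
\[
\gamma_{\mathcal{F}}(P_S, P_T)
  = \sup_{f \in \mathcal{F}}
    \Bigl|\, \mathbb{E}_{x \sim P_S}[f(x)] - \mathbb{E}_{x \sim P_T}[f(x)] \,\Bigr| .
\]
% Regularized empirical risk minimization in the spirit of the abstract:
% empirical source risk plus an IPM term estimated from the two samples,
% weighted by a hypothetical trade-off parameter \lambda > 0.
\[
\min_{h \in \mathcal{H}} \;
  \frac{1}{n_S} \sum_{i=1}^{n_S} \ell\bigl(h(x_i^S), y_i^S\bigr)
  \;+\; \lambda\, \hat{\gamma}_{\mathcal{F}}\bigl(\hat{P}_S, \hat{P}_T\bigr).
\]
```

When \(\mathcal{F}\) is the unit ball of a reproducing kernel Hilbert space, this IPM reduces to the maximum mean discrepancy, which admits a closed-form empirical estimate from the two samples; whether the paper uses that particular instance is not stated in this record.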
url |
http://dx.doi.org/10.1155/2016/7046563 |
_version_ |
1725954766363688960 |