Source Model Selection for Deep Learning in the Time Series Domain

Transfer Learning aims to transfer knowledge from a source task to a target task. We focus on a situation in which there is a large number of available source models, and we are interested in choosing a single source model that maximizes predictive performance in the target domain. Existing methods compute some form of "similarity" between the source task data and the target task data. They then select the most similar source task and use the model trained on it for transfer learning. Previous methods do not account for the fact that it is the model parameters that are transferred rather than the data; therefore, the "similarity" of the source data does not directly influence transfer learning performance. In addition, we would like the possibility of confidently selecting a source model even when the data it was trained on is not available, for example, due to privacy or copyright constraints. We propose to use the truncated source models as encoders for the target data. We then select a source model based on how well it clusters the target data in the latent encoding space, which we measure using the Mean Silhouette Coefficient. We prove that if the encodings achieve a Mean Silhouette Coefficient of 1, optimal classification can be achieved using just the final layer of the target network. We evaluate our method using the University of California, Riverside (UCR) time series archive and show that the proposed method achieves comparable results to previous work, without using the source data.

Bibliographic Details

Main Authors: Amiel Meiseles (https://orcid.org/0000-0002-3134-3271), Lior Rokach (https://orcid.org/0000-0002-6956-3341)
Affiliation: Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Be'er Sheva, Israel
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Volume: 8, pp. 6190-6200
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2963742
Subjects: Deep learning; model selection; time series; transfer learning
Online Access: https://ieeexplore.ieee.org/document/8949507/
DOAJ record ID: doaj-2d3e31db22054f4b95162bb2f9849405
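The selection criterion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each candidate source model has already been truncated before its classification head and is represented here as a callable that maps target series to latent vectors (the function name `select_source_model` and the toy "encoders" are hypothetical). The candidate whose latent encodings of the target data score highest on the Mean Silhouette Coefficient is selected for transfer.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def select_source_model(encoders, X_target, y_target):
    """Return (index of best encoder, all scores).

    `encoders`: list of callables mapping raw target series to latent vectors
    (stand-ins for truncated source networks). The encoder whose latent space
    best clusters the target labels, as measured by the Mean Silhouette
    Coefficient, is the one selected for transfer learning.
    """
    scores = []
    for encode in encoders:
        Z = encode(X_target)                          # latent encodings of target data
        scores.append(silhouette_score(Z, y_target))  # Mean Silhouette Coefficient
    return int(np.argmax(scores)), scores

# Toy usage with two stand-in "encoders": one separates the two target
# classes in latent space, the other leaves them mixed.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
y = np.array([0] * 20 + [1] * 20)
good = lambda X_: X_ + 5.0 * y[:, None]   # shifts class 1 away from class 0
bad = lambda X_: X_                        # random, classes remain mixed
best, scores = select_source_model([bad, good], X, y)
```

Note that the criterion needs only the target data and the source models themselves, which is what allows selection without access to the source training data.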