A Survey on Contrastive Self-Supervised Learning
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make meaningful progress.
Main Authors: | Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon |
---|---|
Affiliation: | Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019, USA |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-12-01 |
Series: | Technologies |
ISSN: | 2227-7080 |
DOI: | 10.3390/technologies9010002 |
Subjects: | contrastive learning; self-supervised learning; discriminative learning; image/video classification; object detection; unsupervised learning |
Online Access: | https://www.mdpi.com/2227-7080/9/1/2 |
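
The abstract summarizes the core contrastive objective: embeddings of augmented views of the same sample are pulled together while embeddings of different samples are pushed apart. As a rough illustration only (not code from the surveyed paper), the NumPy sketch below implements an NT-Xent / InfoNCE-style loss of that form; the function name, temperature value, and toy data are assumptions for demonstration.

```python
# Minimal sketch of a contrastive (NT-Xent / InfoNCE-style) loss.
# Assumption: two augmented views per sample; the other view is the positive,
# all remaining embeddings in the batch serve as negatives.
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    sim = z @ z.T / temperature                   # pairwise similarity matrix
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    N = z1.shape[0]
    # For row i, the positive is the other augmented view of the same sample.
    positives = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    # Cross-entropy over similarities: -log softmax(sim)[i, positive(i)].
    log_prob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(2 * N), positives])

# Toy usage with random "embeddings" of 4 samples in 8 dimensions.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
print(nt_xent_loss(a + 0.01 * rng.normal(size=a.shape), a))  # similar views -> low loss
print(nt_xent_loss(rng.normal(size=(4, 8)), a))              # unrelated views -> higher loss
```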