Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims to embed augmented versions of the same sample close to each other while pushing away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods across multiple downstream tasks such as image classification, object detection, and action recognition.
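To make the objective concrete, one common instantiation of this "pull positives together, push negatives apart" idea is the NT-Xent (InfoNCE) loss used in SimCLR-style methods. Below is a minimal PyTorch sketch; the function name, batch shapes, and temperature value are illustrative assumptions, not details taken from the survey itself.

```python
# Minimal sketch of a contrastive (NT-Xent / InfoNCE) loss, assuming two
# augmented views per sample. Names and the temperature are illustrative.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: embeddings of two augmented views of the same batch, shape (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = torch.matmul(z, z.T) / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # a sample is never its own negative
    # For row i, the positive is the other augmented view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: views of the same sample are pulled together; all other samples
# in the batch act as negatives and are pushed away.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```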