A Meta Learning-Based Approach for Zero-Shot Co-Training

The lack of labeled data is one of the main obstacles to the application of machine learning algorithms in a variety of domains. Semi-supervised learning, where additional samples are automatically labeled, is a common and cost-effective approach to this challenge. A popular semi-supervised labeling approach is co-training, in which two views of the data, obtained by training two learning models on different feature subsets, iteratively provide each other with additional newly-labeled samples. Despite being effective in many cases, existing co-training algorithms often suffer from low labeling accuracy and heuristic sample-selection strategies that hurt their performance. We propose Co-training using Meta-learning (CoMet), a novel approach that addresses many of the shortcomings of existing co-training methods. Instead of greedily labeling individual samples, CoMet evaluates batches of samples and is thus able to select samples that complement each other. Additionally, CoMet employs meta-learning, which enables it to leverage insights from previously-evaluated datasets and apply them to new datasets. An extensive evaluation on 35 datasets shows that CoMet significantly outperforms other leading co-training approaches, particularly when the amount of available labeled data is very small. Moreover, our analysis shows that CoMet's labeling accuracy and consistency of performance are superior to those of existing approaches.
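
For readers unfamiliar with the co-training protocol summarized in the abstract, the following minimal sketch illustrates the generic loop in which two models, each trained on a different feature subset, repeatedly pseudo-label samples for one another. It is an illustration under assumed settings only (synthetic data, logistic-regression base learners, an arbitrary 10/10 feature split, five iterations, and greedy confidence-based batch selection); it is not the authors' CoMet algorithm, which instead evaluates batches of complementary samples and uses meta-learning rather than the greedy heuristic shown here.

# Minimal co-training sketch (illustrative only; NOT the CoMet algorithm).
# Assumptions: synthetic data, logistic-regression base learners, an arbitrary
# split of the 20 features into two "views", 5 iterations, and greedy top-10
# confidence-based selection, i.e. the heuristic that CoMet aims to replace.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=500, n_features=20, random_state=0)
y = y_true.copy()                      # working labels; only the first 30 are treated as known
labeled = np.zeros(len(y), dtype=bool)
labeled[:30] = True
view_a, view_b = np.arange(10), np.arange(10, 20)   # two disjoint feature subsets ("views")

for _ in range(5):
    # Train one model per view on the currently labeled pool.
    model_a = LogisticRegression(max_iter=1000).fit(X[labeled][:, view_a], y[labeled])
    model_b = LogisticRegression(max_iter=1000).fit(X[labeled][:, view_b], y[labeled])
    # Each model pseudo-labels a small batch of unlabeled samples for the other.
    for model, view in ((model_a, view_a), (model_b, view_b)):
        pool = np.where(~labeled)[0]
        if pool.size == 0:
            break
        proba = model.predict_proba(X[pool][:, view])
        batch = pool[np.argsort(-proba.max(axis=1))[:10]]    # most confident samples
        y[batch] = model.predict(X[batch][:, view])           # pseudo-labels (possibly noisy)
        labeled[batch] = True

print(f"{labeled.sum()} samples labeled after co-training")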

Bibliographic Details
Main Authors: Guy Zaks, Gilad Katz
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access, vol. 9 (2021), pp. 146653-146666
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3116972
Author Affiliations: Guy Zaks and Gilad Katz (ORCID: 0000-0001-9478-7550), Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Be'er Sheva, Israel
Subjects: Co-training; semi-supervised learning; meta-learning
Online Access: https://ieeexplore.ieee.org/document/9555627/
Collection: DOAJ (Directory of Open Access Journals)