Confident Learning: Estimating Uncertainty in Dataset Labels

Bibliographic Details
Main Authors: Northcutt, Curtis; Jiang, Lu; Chuang, Isaac
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: AI Access Foundation, 2022
Online Access: https://hdl.handle.net/1721.1/142946

Description: Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 missile images are mislabeled as their parent class projectile), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.

Citation: Northcutt, Curtis, Jiang, Lu and Chuang, Isaac. 2021. "Confident Learning: Estimating Uncertainty in Dataset Labels." Journal of Artificial Intelligence Research, 70.
DOI: 10.1613/JAIR.1.12125
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
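
As a supplement to the description above, the counting step that CL builds on can be sketched in a few lines: per-class probabilistic thresholds, a "confident joint" between the given (noisy) labels and the latent (true) labels, and off-diagonal counts flagged as likely label errors. The following is a minimal NumPy sketch under those assumptions, not the paper's full method or the cleanlab API; the inputs `labels` (given labels) and `pred_probs` (out-of-sample predicted probabilities) are illustrative placeholders.

```python
import numpy as np

def confident_joint_and_issues(labels, pred_probs):
    """Sketch of the CL counting step.

    labels: (n,) array of given (possibly noisy) class labels
    pred_probs: (n, K) out-of-sample predicted probabilities
    Returns the confident-joint count matrix C (given label x estimated
    true label) and a boolean mask of examples counted off the diagonal,
    i.e., likely label errors.
    """
    labels = np.asarray(labels)
    n, K = pred_probs.shape

    # Per-class threshold: mean self-confidence among examples given that label.
    thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(K)])

    C = np.zeros((K, K), dtype=int)       # confident joint counts
    issue_mask = np.zeros(n, dtype=bool)  # flagged likely label errors
    for idx in range(n):
        # Classes for which this example is confidently above threshold.
        above = np.where(pred_probs[idx] >= thresholds)[0]
        if len(above) == 0:
            continue  # not confidently counted toward any class
        # Break ties by the largest predicted probability among candidates.
        j = above[np.argmax(pred_probs[idx, above])]
        i = labels[idx]
        C[i, j] += 1
        if i != j:
            issue_mask[idx] = True
    return C, issue_mask
```

The open-source cleanlab release cited in the description implements the complete procedure, including calibration of the confident joint into an estimate of the joint distribution between noisy and true labels and ranking-based pruning of the flagged examples, which this sketch omits.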