Confident Learning: Estimating Uncertainty in Dataset Labels
Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach that focuses instead on label quality by characterizing and identifying label errors in datasets, based on the...
Main Authors: Northcutt, Curtis; Jiang, Lu; Chuang, Isaac
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: AI Access Foundation, 2022
Online Access: https://hdl.handle.net/1721.1/142946
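The abstract above describes CL as identifying label errors from model-predicted probabilities. A minimal pure-Python sketch of the core idea, the confident joint, is shown below: each example is counted into a matrix C[given_label][suggested_label] using per-class confidence thresholds (the mean self-confidence of each class), and off-diagonal mass flags likely label errors. This is an illustrative sketch, not the authors' reference implementation (their `cleanlab` library), and the toy data is invented for the example.

```python
def confident_joint(labels, pred_probs):
    """Count examples into C[given_label][suggested_label] using
    per-class thresholds t_j = mean predicted probability of class j
    over examples labeled j (a sketch of the CL construction)."""
    n_classes = len(pred_probs[0])
    thresholds = []
    for j in range(n_classes):
        probs_j = [p[j] for y, p in zip(labels, pred_probs) if y == j]
        thresholds.append(sum(probs_j) / len(probs_j))
    C = [[0] * n_classes for _ in range(n_classes)]
    for y, p in zip(labels, pred_probs):
        # classes whose predicted probability clears their threshold
        candidates = [j for j in range(n_classes) if p[j] >= thresholds[j]]
        if candidates:
            j_star = max(candidates, key=lambda j: p[j])
            C[y][j_star] += 1
    return C

# Toy data: two of six examples look mislabeled given the predictions.
labels = [0, 0, 0, 1, 1, 1]
pred_probs = [
    [0.9, 0.1], [0.8, 0.2], [0.2, 0.8],   # third example: labeled 0, looks like 1
    [0.1, 0.9], [0.3, 0.7], [0.85, 0.15], # last example: labeled 1, looks like 0
]
C = confident_joint(labels, pred_probs)
# Off-diagonal entries of C count likely label errors: here C = [[2, 1], [1, 2]].
```

In the paper this joint is then normalized to estimate the joint distribution between noisy and latent true labels, and the highest-ranked off-diagonal examples are pruned or re-weighted.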
Similar Items
- Learning with confident examples: Rank pruning for robust classification with noisy labels
  by: Chuang, Isaac L., et al.
  Published: (2021)
- Confident Learning for Machines and Humans
  by: Northcutt, Curtis George
  Published: (2022)
- Calculating Confidence Interval Estimation Using Maximum Likelihood Estimator For Dataset
  by: Mokhtar, Siti Fairus, et al.
  Published: (2021)
- Classification with noisy labels: "Multiple Account" cheating detection in Open Online Courses
  by: Northcutt, Curtis George
  Published: (2017)
- Active learning with confidence-based answers for crowdsourcing labeling tasks
  by: Song, Jinhua, et al.
  Published: (2020)