Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence
We present a principled framework for inferring pixel labels in weakly-annotated image datasets. Most previous example-based approaches to computer vision rely on a large corpus of densely labeled images; however, for large, modern image datasets, such labels are expensive to obtain and are often unavailable. We establish a large-scale graphical model spanning all labeled and unlabeled images, then solve it to infer pixel labels jointly for all images in the dataset while enforcing consistent annotations over similar visual patterns. This model requires significantly less labeled data and helps resolve ambiguities by propagating inferred annotations from images with stronger local visual evidence to images with weaker local evidence. We apply the proposed framework to two computer vision problems: image annotation with semantic segmentation, and object discovery and co-segmentation (segmenting multiple images containing a common object). Extensive numerical evaluations and comparisons show that our method consistently outperforms the state of the art in automatic annotation and semantic labeling while requiring significantly less labeled data. In contrast to previous co-segmentation techniques, our method discovers and segments objects well even in the presence of substantial amounts of noise images (images not containing the common object), as is typical for datasets collected via Internet search.
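The abstract describes joint inference over a graphical model that links pixels across images via dense correspondence, so labels propagate from images with strong local evidence to images with weak evidence. A minimal sketch of that idea, heavily simplified: treat each "pixel" as a node with per-label unary scores, connect corresponding pixels across images with weighted edges, and minimize a Potts-style energy with iterated conditional modes. The function name, the ICM solver, and the toy edge list are illustrative assumptions, not the paper's actual model or optimizer.

```python
import numpy as np

def joint_label_inference(unary, links, n_iters=10, lam=0.5):
    """Toy joint inference over correspondence links (illustrative only).

    unary : (n_nodes, n_labels) array of local label evidence (higher = better).
    links : list of (i, j, w) edges from dense correspondence; each edge
            encourages nodes i and j to take the same label with weight w.
    Minimizes a Potts-style energy with iterated conditional modes (ICM),
    a simplification of the joint graphical model described in the abstract.
    """
    n, k = unary.shape
    labels = unary.argmax(axis=1)          # initialize from local evidence only
    nbrs = [[] for _ in range(n)]          # build symmetric adjacency lists
    for i, j, w in links:
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    for _ in range(n_iters):
        for i in range(n):
            cost = -unary[i].astype(float) # data term: prefer strong evidence
            for j, w in nbrs[i]:
                # smoothness term: penalize disagreeing with a corresponding pixel
                cost += lam * w * (np.arange(k) != labels[j])
            labels[i] = cost.argmin()      # greedy per-node update (ICM)
    return labels

# Node 0 has strong evidence for label 1; node 1 is locally ambiguous but
# corresponds to node 0, so the link resolves it; node 2 is unlinked.
unary = np.array([[0.1, 0.9], [0.5, 0.49], [0.9, 0.1]])
print(joint_label_inference(unary, [(0, 1, 1.0)]))  # → [1 1 0]
```

The ambiguous node inherits its neighbor's label through the correspondence edge, which is the propagation behavior the abstract attributes to the full model.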
Main Authors: | Rubinstein, Michael; Liu, Ce; Freeman, William T. |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
Format: | Article |
Language: | English |
Published: | Springer US, 2017 |
Online Access: | http://hdl.handle.net/1721.1/106941 https://orcid.org/0000-0002-2231-7995 |
---|---|
author | Rubinstein, Michael Liu, Ce Freeman, William T. |
author2 | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |
collection | MIT |
description | We present a principled framework for inferring pixel labels in weakly-annotated image datasets. Most previous example-based approaches to computer vision rely on a large corpus of densely labeled images; however, for large, modern image datasets, such labels are expensive to obtain and are often unavailable. We establish a large-scale graphical model spanning all labeled and unlabeled images, then solve it to infer pixel labels jointly for all images in the dataset while enforcing consistent annotations over similar visual patterns. This model requires significantly less labeled data and helps resolve ambiguities by propagating inferred annotations from images with stronger local visual evidence to images with weaker local evidence. We apply the proposed framework to two computer vision problems: image annotation with semantic segmentation, and object discovery and co-segmentation (segmenting multiple images containing a common object). Extensive numerical evaluations and comparisons show that our method consistently outperforms the state of the art in automatic annotation and semantic labeling while requiring significantly less labeled data. In contrast to previous co-segmentation techniques, our method discovers and segments objects well even in the presence of substantial amounts of noise images (images not containing the common object), as is typical for datasets collected via Internet search. |
format | Article |
id | mit-1721.1/106941 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2017 |
publisher | Springer US |
record_format | dspace |
citation | Rubinstein, Michael, Ce Liu, and William T. Freeman. “Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence.” International Journal of Computer Vision 119.1 (2016): 23–45. |
issn | 0920-5691; 1573-1405 |
doi | http://dx.doi.org/10.1007/s11263-016-0894-5 |
journal | International Journal of Computer Vision |
rights | Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/), The Author(s) |
file format | application/pdf |
title | Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence |
url | http://hdl.handle.net/1721.1/106941 https://orcid.org/0000-0002-2231-7995 |