Learning New Dimensions of Human Visual Similarity using Synthetic Data
Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object poses, and semantic content. In this thesis, we develop a perceptual metric that assesses images holistically.
Main Author: | Fu, Stephanie |
---|---|
Other Authors: | Isola, Phillip |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2023 |
Online Access: | https://hdl.handle.net/1721.1/151511 |
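Usage note: the metric described in this record, DreamSim, was released publicly alongside the thesis. The snippet below is a minimal sketch of scoring one image pair with it, assuming the `dreamsim` Python package (`pip install dreamsim`) and its `dreamsim(pretrained=True, device=...)` entry point from the project's public release; the image file paths are placeholders, not part of this record.

```python
# Hedged sketch: comparing two images with the released DreamSim metric.
# Assumes the public `dreamsim` package; the file paths are placeholders.
import torch
from PIL import Image
from dreamsim import dreamsim  # assumed entry point from the public release

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pretrained model together with its matching image preprocessor.
model, preprocess = dreamsim(pretrained=True, device=device)

img_a = preprocess(Image.open("image_a.png")).to(device)
img_b = preprocess(Image.open("image_b.png")).to(device)

# A single scalar perceptual distance: lower means the pair looks more alike.
distance = model(img_a, img_b)
print(f"DreamSim distance: {distance.item():.4f}")
```

Consistent with the abstract, this distance is intended to reflect holistic, mid-level similarity (layout, pose, semantic content) rather than only pixel-level color and texture.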
_version_ | 1826190364907667456 |
---|---|
author | Fu, Stephanie |
author2 | Isola, Phillip |
author_facet | Isola, Phillip Fu, Stephanie |
author_sort | Fu, Stephanie |
collection | MIT |
description | Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object poses, and semantic content. In this thesis, we develop a perceptual metric that assesses images holistically. Our first step is to collect a new dataset of human similarity judgments over image pairs that are alike in diverse ways. Critical to this dataset is that judgments are nearly automatic and shared by all observers. To achieve this we use recent text-to-image models to create synthetic pairs that are perturbed along various dimensions. We observe that popular perceptual metrics fall short of explaining our new data and introduce a new metric, DreamSim, tuned to better align with human perception. We analyze how our metric is affected by different visual attributes, and find that it focuses heavily on foreground objects and semantic content while also being sensitive to color and layout. Notably, despite being trained on synthetic data, our metric generalizes to real images, giving strong results on retrieval and reconstruction tasks. Furthermore, our metric outperforms both prior learned metrics and recent large vision models on these tasks. |
first_indexed | 2024-09-23T08:39:09Z |
format | Thesis |
id | mit-1721.1/151511 |
institution | Massachusetts Institute of Technology |
last_indexed | 2024-09-23T08:39:09Z |
publishDate | 2023 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/151511 2023-08-01T04:18:42Z Learning New Dimensions of Human Visual Similarity using Synthetic Data Fu, Stephanie Isola, Phillip Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object poses, and semantic content. In this thesis, we develop a perceptual metric that assesses images holistically. Our first step is to collect a new dataset of human similarity judgments over image pairs that are alike in diverse ways. Critical to this dataset is that judgments are nearly automatic and shared by all observers. To achieve this we use recent text-to-image models to create synthetic pairs that are perturbed along various dimensions. We observe that popular perceptual metrics fall short of explaining our new data and introduce a new metric, DreamSim, tuned to better align with human perception. We analyze how our metric is affected by different visual attributes, and find that it focuses heavily on foreground objects and semantic content while also being sensitive to color and layout. Notably, despite being trained on synthetic data, our metric generalizes to real images, giving strong results on retrieval and reconstruction tasks. Furthermore, our metric outperforms both prior learned metrics and recent large vision models on these tasks. M.Eng. 2023-07-31T19:45:18Z 2023-07-31T19:45:18Z 2023-06 2023-06-06T16:34:32.802Z Thesis https://hdl.handle.net/1721.1/151511 In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology |
spellingShingle | Fu, Stephanie Learning New Dimensions of Human Visual Similarity using Synthetic Data |
title | Learning New Dimensions of Human Visual Similarity using Synthetic Data |
title_full | Learning New Dimensions of Human Visual Similarity using Synthetic Data |
title_fullStr | Learning New Dimensions of Human Visual Similarity using Synthetic Data |
title_full_unstemmed | Learning New Dimensions of Human Visual Similarity using Synthetic Data |
title_short | Learning New Dimensions of Human Visual Similarity using Synthetic Data |
title_sort | learning new dimensions of human visual similarity using synthetic data |
url | https://hdl.handle.net/1721.1/151511 |
work_keys_str_mv | AT fustephanie learningnewdimensionsofhumanvisualsimilarityusingsyntheticdata |