Learning New Dimensions of Human Visual Similarity using Synthetic Data
Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object poses, and semantic content. In this thesis, we develop a...
| Main Author: | Fu, Stephanie |
|---|---|
| Other Authors: | Isola, Phillip |
| Format: | Thesis |
| Published: | Massachusetts Institute of Technology, 2023 |
| Online Access: | https://hdl.handle.net/1721.1/151511 |
Similar Items
- Visual similarity using deep learning
  by: Nguyen, Hoang Son
  Published: (2017)
- Visual Representation Learning from Synthetic Data
  by: Fan, Lijie
  Published: (2024)
- Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning
  by: Hamilton, Mark
  Published: (2022)
- Similarity is closeness: Using distributional semantic spaces to model similarity in visual and linguistic metaphors
  by: Bolognesi, M, et al.
  Published: (2017)
- DataSHIELD – new directions and dimensions
  by: Wilson, R, et al.
  Published: (2017)