Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation

Abstract: This study carries out a systematic intrinsic evaluation of the semantic representations learned by state-of-the-art pre-trained multimodal Transformers. These representations are claimed to be task-agnostic and shown to help on many downstream language-and-vision tasks. Howe...

Bibliographic Details
Main Authors: Sandro Pezzelle, Ece Takmaz, Raquel Fernández
Format: Article
Language: English
Published: The MIT Press 2021-01-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00443/108935/Word-Representation-Learning-in-Multimodal-Pre