Learning Numerosity Representations with Transformers: Number Generation Tasks and Out-of-Distribution Generalization

Bibliographic Details
Main Authors: Tommaso Boccato, Alberto Testolin, Marco Zorzi
Format: Article
Language: English
Published: MDPI AG 2021-07-01
Series: Entropy
Online Access: https://www.mdpi.com/1099-4300/23/7/857
Description
Summary: One of the most rapidly advancing areas of deep learning research aims at creating models that learn to disentangle the latent factors of variation from a data distribution. However, modeling joint probability mass functions is usually prohibitive, which motivates the use of conditional models assuming that some information is given as input. In the domain of numerical cognition, deep learning architectures have successfully demonstrated that approximate numerosity representations can emerge in multi-layer networks that build latent representations of a set of images with a varying number of items. However, existing models have focused on tasks that require conditionally estimating numerosity information from a given image. Here, we focus on a set of much more challenging tasks, which require the model to conditionally generate synthetic images containing a given number of items. We show that attention-based architectures operating at the pixel level can learn to produce well-formed images approximately containing a specific number of items, even when the target numerosity was not present in the training distribution.
ISSN:1099-4300
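
The record above does not include code, but a minimal sketch can illustrate the generative setup the summary describes: an attention-based model that produces an image pixel by pixel, conditioned on a target numerosity. Everything below (the class name, layer sizes, binary pixel vocabulary, and the count-embedding conditioning scheme) is an illustrative assumption, not the authors' published architecture.

```python
import torch
import torch.nn as nn


class NumerosityPixelTransformer(nn.Module):
    """Autoregressive pixel-level model conditioned on a target item count (hypothetical sketch)."""

    def __init__(self, image_size=16, num_pixel_values=2, max_numerosity=32, d_model=64):
        super().__init__()
        self.seq_len = image_size * image_size
        self.bos_id = num_pixel_values                        # extra token that starts generation
        self.pixel_embed = nn.Embedding(num_pixel_values + 1, d_model)
        self.count_embed = nn.Embedding(max_numerosity + 1, d_model)
        self.pos_embed = nn.Embedding(self.seq_len + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_pixel_values)      # next-pixel logits

    def forward(self, pixels, counts):
        # pixels: (B, T) integer pixel values generated so far; counts: (B,) target numerosities
        bos = torch.full((pixels.size(0), 1), self.bos_id,
                         dtype=torch.long, device=pixels.device)
        tokens = torch.cat([bos, pixels], dim=1)
        pos = torch.arange(tokens.size(1), device=pixels.device)
        # Condition every position on the target numerosity via a learned count embedding.
        h = self.pixel_embed(tokens) + self.pos_embed(pos) + self.count_embed(counts)[:, None, :]
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(pixels.device)
        h = self.encoder(h, mask=mask)                        # causal self-attention
        return self.head(h)                                   # (B, T+1, num_pixel_values)


# Toy usage: next-pixel logits for two partial images, both conditioned on a count of 5.
model = NumerosityPixelTransformer()
partial = torch.zeros(2, 10, dtype=torch.long)
logits = model(partial, torch.tensor([5, 5]))                 # shape (2, 11, 2)
```

Adding a learned count embedding to every position is only one simple conditioning choice; the published model may condition differently, for example via a prefix token or a separate conditioning pathway.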