Task-driven convolutional recurrent models of the visual system
Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain's visual system. However, biological visual system...
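Below is a minimal sketch of the general idea the abstract describes: a convolutional layer augmented with a recurrent (lateral) convolution over its own previous state, unrolled over time on a static image. This is an illustrative assumption-laden example, not the architecture from the article; the class name `ConvRNNCell`, layer sizes, and nonlinearities are all hypothetical choices for demonstration.

```python
import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    """Illustrative convolutional recurrent cell (not the authors' model)."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Bottom-up (feed-forward) drive from the layer below.
        self.input_conv = nn.Conv2d(in_channels, hidden_channels,
                                    kernel_size, padding=padding)
        # Recurrent (lateral) drive from this layer's previous hidden state.
        self.recurrent_conv = nn.Conv2d(hidden_channels, hidden_channels,
                                        kernel_size, padding=padding, bias=False)

    def forward(self, x, h):
        # Combine feed-forward input with the previous hidden state h.
        return torch.relu(self.input_conv(x) + self.recurrent_conv(h))

# Unroll the cell for a few time steps on a single static image, as
# recurrent models of the ventral stream are typically run.
cell = ConvRNNCell(in_channels=3, hidden_channels=16)
image = torch.randn(1, 3, 64, 64)    # one RGB image
state = torch.zeros(1, 16, 64, 64)   # initial hidden state
for t in range(5):
    state = cell(image, state)
print(state.shape)  # torch.Size([1, 16, 64, 64])
```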
| Main Authors: | Kubilius, Jonas; Kar, Kohitij; DiCarlo, James |
|---|---|
| Other Authors: | McGovern Institute for Brain Research at MIT |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020 |
| Online Access: | https://hdl.handle.net/1721.1/126698 |
Similar Items
- Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior
  by: Kar, Kohitij, et al. Published: (2020)
- Fast Recurrent Processing via Ventrolateral Prefrontal Cortex Is Needed by the Primate Ventral Stream for Robust Core Visual Object Recognition
  by: Kar, Kohitij, et al. Published: (2021)
- Recurrent Connections in the Primate Ventral Visual Stream Mediate a Trade-Off Between Task Performance and Network Size During Core Object Recognition
  by: Nayebi, Aran, et al. Published: (2023)
- Neural population control via deep image synthesis
  by: Bashivan, Pouya, et al. Published: (2020)
- Are Topographic Deep Convolutional Neural Networks Better Models of the Ventral Visual Stream?
  by: Jozwik, Kamila Maria, et al. Published: (2021)