What have we learned from deep representations for action recognition?

As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing what two-stream models have learned in order to recognize actions in video. We show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
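To make the first key observation concrete, the sketch below shows, under assumptions, what cross-stream fusion in a two-stream network can look like: an appearance stream over an RGB frame and a motion stream over stacked optical flow are joined by channel concatenation and a 1x1 convolution, so later filters can respond jointly to appearance and motion. The layer sizes, fusion point, and class count are illustrative choices, not the authors' architecture.

```python
# Minimal two-stream cross-stream fusion sketch (illustrative assumptions only).
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, num_classes=101, flow_frames=10):
        super().__init__()
        # Appearance stream: a single RGB frame (3 channels).
        self.appearance = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        # Motion stream: stacked optical flow (2 channels per frame).
        self.motion = nn.Sequential(
            nn.Conv2d(2 * flow_frames, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        # Cross-stream fusion: concatenate feature maps along channels and mix
        # them with a 1x1 convolution, yielding jointly spatiotemporal features.
        self.fuse = nn.Conv2d(64 + 64, 128, kernel_size=1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, rgb, flow):
        a = self.appearance(rgb)   # (N, 64, H', W')
        m = self.motion(flow)      # (N, 64, H', W')
        fused = self.fuse(torch.cat([a, m], dim=1))
        return self.head(fused)

# Example: one RGB frame plus a stack of 10 flow fields at 224x224.
net = TwoStreamFusion()
logits = net(torch.randn(1, 3, 224, 224), torch.randn(1, 20, 224, 224))
```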


Bibliographic Details
Main Authors: Feichtenhofer, C; Pinz, A; Wildes, R; Zisserman, A
Format: Conference item
Published: Institute of Electrical and Electronics Engineers, 2018
Institution: University of Oxford
Record ID: oxford-uuid:4e03a2a0-0124-4cde-bb99-a77ee88664b2