Learning discriminative space–time action parts from weakly labelled videos

Current state-of-the-art action classification methods aggregate space–time features globally, over the entire video clip under consideration. However, the extracted features may partly arise from irrelevant scene context or from movements shared among multiple action classes. This motivates learning...

Bibliographic Details
Main Authors: Sapienza, M, Cuzzolin, F, Torr, PHS
Format: Journal article
Language: English
Published: Springer 2013