Learning discriminative space–time action parts from weakly labelled videos

Current state-of-the-art action classification methods aggregate space–time features globally, from the entire video clip under consideration. However, the extracted features may in part arise from irrelevant scene context, or from movements shared among multiple action classes. This motivates learning...


Bibliographic Details

Main Authors: Sapienza, M, Cuzzolin, F, Torr, PHS
Format: Journal article
Language: English
Published: Springer 2013