Learning discriminative space–time action parts from weakly labelled videos

Current state-of-the-art action classification methods aggregate space–time features globally, from the entire video clip under consideration. However, the features extracted may in part be due to irrelevant scene context, or movements shared amongst multiple action classes. This motivates learning...


Bibliographic details
Main authors: Sapienza, M, Cuzzolin, F, Torr, PHS
Format: Journal article
Language: English
Published: Springer 2013