Learning discriminative space–time action parts from weakly labelled videos
Current state-of-the-art action classification methods aggregate space–time features globally, from the entire video clip under consideration. However, the features extracted may in part be due to irrelevant scene context, or movements shared amongst multiple action classes. This motivates learning...
| Main authors: | , , |
|---|---|
| Format: | Journal article |
| Language: | English |
| Published: | Springer, 2013 |