DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors
In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted 2D reflector locations are spatially mapped into 3D space, resulting in robust 3D optical data extraction. The subject's motion is efficiently captured by applying a template-based fitting technique on the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial and ground truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy.
Main Authors: | Anargyros Chatzitofis, Dimitrios Zarpalas, Stefanos Kollias, Petros Daras |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2019-01-01 |
Series: | Sensors |
Subjects: | motion capture; deep learning; retro-reflectors; retro-reflective markers; multiple depth sensors; low-cost; deep mocap; depth data; 3D data; 3D vision; optical mocap; marker-based mocap |
Online Access: | http://www.mdpi.com/1424-8220/19/2/282 |
_version_ | 1798039309653114880 |
author | Anargyros Chatzitofis, Dimitrios Zarpalas, Stefanos Kollias, Petros Daras |
author_sort | Anargyros Chatzitofis |
collection | DOAJ |
description | In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted 2D reflector locations are spatially mapped into 3D space, resulting in robust 3D optical data extraction. The subject's motion is efficiently captured by applying a template-based fitting technique on the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial and ground truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy. |
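Two steps mentioned in the abstract are easy to illustrate generically: mapping detected 2D reflector locations into 3D space using the sensor depth, and scoring keypoint predictions with the Percentage of Correct Keypoints (PCK) metric. The sketch below is a minimal illustration under standard assumptions (a pinhole camera model and a simple distance-threshold PCK), not the authors' implementation; the intrinsics values, pixel coordinates and threshold in the usage example are placeholders.

```python
import numpy as np

def backproject_pixel(u, v, z, fx, fy, cx, cy):
    """Back-project a 2D pixel (u, v) with depth z (metres) into a 3D
    camera-space point using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: a prediction counts as correct
    when its Euclidean distance to the ground-truth keypoint is below
    `threshold`. `pred` and `gt` are (N, D) arrays, D = 2 or 3."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists < threshold))

# Toy usage: one detected reflector at pixel (320, 240) with 1.5 m depth,
# back-projected with placeholder intrinsics, then scored against a
# slightly perturbed ground-truth 3D position with a 5 cm threshold.
point = backproject_pixel(320, 240, 1.5, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
print(point)
print(pck(point[None, :], point[None, :] + 0.01, threshold=0.05))  # -> 1.0
```

In a multi-sensor setup such as the one described, each camera-space point would additionally be transformed into a common world frame using the extrinsic calibration obtained from the spatio-temporal alignment; that step is omitted here for brevity.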
first_indexed | 2024-04-11T21:52:07Z |
format | Article |
id | doaj.art-6735dd0a675949ae87f043b5bae5692a |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-04-11T21:52:07Z |
publishDate | 2019-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | Sensors, vol. 19, no. 2, article 282, 2019-01-01, MDPI AG, ISSN 1424-8220, doi:10.3390/s19020282. Affiliations: Anargyros Chatzitofis, Dimitrios Zarpalas and Petros Daras: Centre for Research and Technology Hellas, Information Technologies Institute, 6th km Charilaou-Thermi, 57001 Thermi, Thessaloniki, Greece; Stefanos Kollias: National Technical University of Athens, School of Electrical and Computer Engineering, Zografou Campus, Iroon Polytechniou 9, 15780 Zografou, Athens, Greece. |
title | DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors |
topic | motion capture deep learning retro-reflectors retro-reflective markers multiple depth sensors low-cost deep mocap depth data 3D data 3D vision optical mocap marker-based mocap |
url | http://www.mdpi.com/1424-8220/19/2/282 |