Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments
Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams for effective workplace cognitive and psychomotor skills in a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis.
Main Authors: | Caleb Vatral, Gautam Biswas, Clayton Cohn, Eduardo Davalos, Naveeduddin Mohammed |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2022-07-01 |
Series: | Frontiers in Artificial Intelligence |
Subjects: | distributed cognition; learning analytics (LA); multimodal data; simulation based training (SBT); mixed reality (MR); DiCoT |
Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2022.941825/full |
---|---|
author | Caleb Vatral; Gautam Biswas; Clayton Cohn; Eduardo Davalos; Naveeduddin Mohammed |
collection | DOAJ |
description | Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams for effective workplace cognitive and psychomotor skills in a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analysis and evaluations generated by such distributed cognition frameworks require extensive domain-knowledge and manual coding and interpretation, and the analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analysis techniques to SBT scenarios. Using these analysis methods, we can use the rich multimodal data collected in SBT environments to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate the use of these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) training environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these analysis methods could be used to provide targeted feedback to learners, a detailed review of training performance to the instructors, and data-driven evidence for improving the environment to simulation designers. |
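The description above outlines combining video, speech, and eye-tracking data to produce quantitative interpretations that supplement qualitative DiCoT analysis. A minimal, purely illustrative sketch of one common first step in such multimodal pipelines is time-aligning the modality streams into shared bins; the function name, bin size, and all event data below are invented assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the paper's method): align three modality
# streams -- gaze fixations, speech segments, and video activity labels --
# into common one-second time bins. All event data are invented.
from collections import defaultdict

# Each event: (start_sec, end_sec, label)
gaze = [(0.0, 1.5, "manikin"), (1.5, 2.2, "monitor"), (2.2, 4.0, "manikin")]
speech = [(0.5, 2.0, "checking pulse"), (3.0, 3.8, "BP is low")]
video = [(0.0, 4.0, "at_bedside")]

def bin_events(events, bin_size=1.0):
    """Map each bin index to the set of labels active during that bin."""
    bins = defaultdict(set)
    for start, end, label in events:
        b = int(start // bin_size)
        while b * bin_size < end:
            bins[b].add(label)
            b += 1
    return bins

aligned = {name: bin_events(ev)
           for name, ev in [("gaze", gaze), ("speech", speech), ("video", video)]}

# Bins where the trainee looks at the manikin while also speaking -- a
# simple joint indicator that could be compared against DiCoT observations.
joint = [b for b in aligned["gaze"]
         if "manikin" in aligned["gaze"][b] and aligned["speech"].get(b)]
print(joint)  # → [0, 1, 3]
```

Once streams share a common time base like this, per-bin features from each modality can be concatenated and fed to downstream models or descriptive statistics.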
format | Article |
id | doaj.art-dbea5b257ea64336a3fe27d6af235b78 |
institution | Directory Open Access Journal |
issn | 2624-8212 |
language | English |
publishDate | 2022-07-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Artificial Intelligence |
title | Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments |
topic | distributed cognition; learning analytics (LA); multimodal data; simulation based training (SBT); mixed reality (MR); DiCoT |
url | https://www.frontiersin.org/articles/10.3389/frai.2022.941825/full |