Multimodal region-based behavioral modeling for suicide risk screening


Full description

Bibliographic Details
Main Authors: Alghowinem, Sharifa, Zhang, Xiajie, Breazeal, Cynthia, Park, Hae Won
Format: Article
Language: en_US
Published: Frontiers Media SA 2023
Subjects: Computer Science Applications; Computer Vision and Pattern Recognition; Human-Computer Interaction; Computer Science (miscellaneous)
Online Access:https://hdl.handle.net/1721.1/153254
_version_ 1826203200233930752
author Alghowinem, Sharifa
Zhang, Xiajie
Breazeal, Cynthia
Park, Hae Won
author_facet Alghowinem, Sharifa
Zhang, Xiajie
Breazeal, Cynthia
Park, Hae Won
author_sort Alghowinem, Sharifa
collection MIT
description Introduction: Suicide is a leading cause of death around the world, inflicting great suffering on the families and communities of the individuals affected. Such pain and suffering are preventable with early screening and monitoring. However, current suicide risk identification relies on self-disclosure and/or the clinician's judgment. Research question/statement: Therefore, we investigate acoustic and nonverbal behavioral markers associated with different levels of suicide risk through a multimodal approach to suicide risk detection. Given the differences in behavioral dynamics between subregions of facial expressions and body gestures in terms of timespans, we propose a novel region-based multimodal fusion. Methods: We used a newly collected video interview dataset of young Japanese individuals at risk of suicide to extract engineered features and deep representations from the speech, regions of the face (i.e., eyes, nose, mouth), regions of the body (i.e., shoulders, arms, legs), as well as the overall combined regions of face and body. Results: The results confirmed that behavioral dynamics differ between regions, where some regions benefit from shorter timespans while others benefit from longer ones. Therefore, a region-based multimodal approach is more informative in terms of behavioral markers and accounts for both subtle and strong behaviors. Our region-based multimodal results outperformed the single modalities, reaching a sample-level accuracy of 96% compared with the best single modality, which reached a sample-level accuracy of 80%. Interpretation of the behavioral markers showed that the higher the suicide risk level, the lower the expressivity, movement, and energy observed from the subject. Moreover, the high-risk group expressed more disgust and contact avoidance, while the low-risk group expressed self-soothing and anxiety behaviors.
Discussion: Even though multimodal analysis is a powerful tool for enhancing model performance and reliability, it is important to ensure, through careful selection, that a strong behavioral modality (e.g., body movement) does not dominate a more subtle one (e.g., eye blinks). Despite the small sample size, our unique dataset and the current results add a new cultural dimension to the research on nonverbal markers of suicide risk. Given a larger dataset, future work on this method could help psychiatrists assess suicide risk and could have several applications for identifying those at risk.
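The abstract's core idea — summarizing each face or body region over its own timespan before fusing, so that strong modalities do not drown out subtle ones — can be sketched as follows. This is a minimal illustration only: the region names, window lengths, and pooling statistics below are hypothetical placeholders, not the paper's actual feature set or fusion architecture.

```python
import numpy as np

# Hypothetical per-region window lengths in seconds; the paper's actual
# timespans are not reproduced here, so these values are illustrative only.
REGION_WINDOWS = {"eyes": 1.0, "mouth": 2.0, "shoulders": 4.0, "arms": 4.0}


def windowed_stats(signal, fps, window_s):
    """Summarize a 1-D behavioral signal with mean/std over fixed-length windows."""
    w = max(1, int(round(fps * window_s)))
    n = len(signal) // w                     # number of complete windows
    chunks = signal[: n * w].reshape(n, w)   # drop the trailing partial window
    return np.stack([chunks.mean(axis=1), chunks.std(axis=1)], axis=1)


def region_based_fusion(region_signals, fps=30):
    """Pool each region at its own timespan, then concatenate into one
    sample-level feature vector (feature-level fusion across regions)."""
    feats = []
    for region, sig in region_signals.items():
        stats = windowed_stats(np.asarray(sig, dtype=float), fps,
                               REGION_WINDOWS[region])
        feats.append(stats.mean(axis=0))     # pool windows to one vector per region
    return np.concatenate(feats)
```

Because each region is windowed at its own scale before fusion, a fast, subtle signal (eye behavior) and a slow, strong signal (arm movement) each contribute on equal footing to the fused vector, which a downstream classifier would then consume.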
first_indexed 2024-09-23T12:32:54Z
format Article
id mit-1721.1/153254
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T12:32:54Z
publishDate 2023
publisher Frontiers Media SA
record_format dspace
spelling mit-1721.1/1532542023-12-23T03:33:33Z Multimodal region-based behavioral modeling for suicide risk screening Alghowinem, Sharifa Zhang, Xiajie Breazeal, Cynthia Park, Hae Won Computer Science Applications Computer Vision and Pattern Recognition Human-Computer Interaction Computer Science (miscellaneous) 2023-12-22T21:06:04Z 2023-12-22T21:06:04Z 2023-04-20 Article http://purl.org/eprint/type/JournalArticle 2624-9898 https://hdl.handle.net/1721.1/153254 Alghowinem S, Zhang X, Breazeal C and Park HW (2023) Multimodal region-based behavioral modeling for suicide risk screening. Front. Comput. Sci. 5:990426. en_US 10.3389/fcomp.2023.990426 Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/ application/pdf Frontiers Media SA Frontiers Media SA
spellingShingle Computer Science Applications
Computer Vision and Pattern Recognition
Human-Computer Interaction
Computer Science (miscellaneous)
Alghowinem, Sharifa
Zhang, Xiajie
Breazeal, Cynthia
Park, Hae Won
Multimodal region-based behavioral modeling for suicide risk screening
title Multimodal region-based behavioral modeling for suicide risk screening
title_full Multimodal region-based behavioral modeling for suicide risk screening
title_fullStr Multimodal region-based behavioral modeling for suicide risk screening
title_full_unstemmed Multimodal region-based behavioral modeling for suicide risk screening
title_short Multimodal region-based behavioral modeling for suicide risk screening
title_sort multimodal region based behavioral modeling for suicide risk screening
topic Computer Science Applications
Computer Vision and Pattern Recognition
Human-Computer Interaction
Computer Science (miscellaneous)
url https://hdl.handle.net/1721.1/153254
work_keys_str_mv AT alghowinemsharifa multimodalregionbasedbehavioralmodelingforsuicideriskscreening
AT zhangxiajie multimodalregionbasedbehavioralmodelingforsuicideriskscreening
AT breazealcynthia multimodalregionbasedbehavioralmodelingforsuicideriskscreening
AT parkhaewon multimodalregionbasedbehavioralmodelingforsuicideriskscreening