Multimodal region-based behavioral modeling for suicide risk screening
Introduction: Suicide is a leading cause of death around the world, inflicting great suffering on the families and communities of those affected. Such pain and suffering are preventable with early screening and monitoring. However, current suicide risk identification relies on self-disclosure and...
Main Authors: | Sharifa Alghowinem, Xiajie Zhang, Cynthia Breazeal, Hae Won Park |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2023-04-01 |
Series: | Frontiers in Computer Science |
Subjects: | suicide risk screening; nonverbal behavior; speech prosody; region-based behavior analysis; multimodal fusion; deep learning automatic suicide risk screening |
Online Access: | https://www.frontiersin.org/articles/10.3389/fcomp.2023.990426/full |
author | Sharifa Alghowinem, Xiajie Zhang, Cynthia Breazeal, Hae Won Park
author_sort | Sharifa Alghowinem |
collection | DOAJ |
description | Introduction: Suicide is a leading cause of death around the world, inflicting great suffering on the families and communities of those affected. Such pain and suffering are preventable with early screening and monitoring. However, current suicide risk identification relies on self-disclosure and/or the clinician's judgment. Research question/statement: We investigate acoustic and nonverbal behavioral markers associated with different levels of suicide risk through a multimodal approach to suicide risk detection. Because the behavioral dynamics of facial-expression and body-gesture subregions differ in their timespans, we propose a novel region-based multimodal fusion. Methods: We used a newly collected video-interview dataset of young Japanese individuals at risk of suicide to extract engineered features and deep representations from speech, regions of the face (i.e., eyes, nose, mouth), regions of the body (i.e., shoulders, arms, legs), and the combined face and body regions. Results: The results confirmed that behavioral dynamics differ between regions: some regions benefit from shorter timespans, while others benefit from longer ones. A region-based multimodal approach is therefore more informative in terms of behavioral markers and accounts for both subtle and strong behaviors. Our region-based multimodal results outperformed the single modalities, reaching a sample-level accuracy of 96% compared with 80% for the best single modality. Interpretation of the behavioral markers showed that the higher the suicide risk level, the lower the expressivity, movement, and energy observed from the subject. Moreover, the high-risk group expressed more disgust and contact avoidance, while the low-risk group expressed self-soothing and anxiety behaviors. Discussion: Although multimodal analysis is a powerful tool for enhancing model performance and reliability, careful modality selection is needed to ensure that a strong behavioral modality (e.g., body movement) does not dominate a subtler one (e.g., eye blinks). Despite the small sample size, our unique dataset and current results add a new cultural dimension to research on nonverbal markers of suicide risk. Given a larger dataset, future work on this method could help psychiatrists assess suicide risk and could have several applications for identifying those at risk. |
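The abstract describes the approach only at a high level: each behavioral region (eyes, mouth, body parts, speech) is modeled over its own timespan, and the per-region decisions are then fused. The sketch below is a minimal, hypothetical illustration of such a region-based late-fusion scheme, not the authors' implementation; the region names, window lengths, mean/std functionals, `LogisticRegression` classifier, and the helper functions `region_functionals`, `fit_region_models`, and `predict_risk` are all assumptions introduced here for illustration.

```python
"""Illustrative sketch only: a minimal region-based multimodal late-fusion
pipeline under the assumptions stated above. It is not the authors' code."""
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed per-region window lengths (in frames); values are illustrative only.
REGION_WINDOWS = {"eyes": 30, "mouth": 60, "shoulders": 150, "speech": 100}


def region_functionals(track: np.ndarray, window: int) -> np.ndarray:
    """Summarize a (time, dim) feature track with a region-specific window:
    per-window mean and standard deviation, averaged over windows."""
    usable = (len(track) // window) * window
    chunks = track[None] if usable == 0 else track[:usable].reshape(-1, window, track.shape[1])
    return np.concatenate([chunks.mean(axis=1), chunks.std(axis=1)], axis=1).mean(axis=0)


def fit_region_models(train_tracks: list, labels: np.ndarray) -> dict:
    """Train one classifier per region (one sample = one interview's feature tracks)."""
    models = {}
    for region, window in REGION_WINDOWS.items():
        X = np.stack([region_functionals(t[region], window) for t in train_tracks])
        models[region] = LogisticRegression(max_iter=1000).fit(X, labels)
    return models


def predict_risk(models: dict, tracks: dict) -> float:
    """Equal-weight late fusion: average the per-region risk probabilities,
    so a strong modality cannot silently dominate a subtle one."""
    probs = [
        models[region].predict_proba(
            region_functionals(tracks[region], window)[None, :])[:, 1][0]
        for region, window in REGION_WINDOWS.items()
    ]
    return float(np.mean(probs))
```

The equal-weight averaging in `predict_risk` is one simple way to reflect the paper's discussion point that a strong modality such as body movement should not drown out a subtle one such as eye blinks; the paper itself does not specify this fusion rule, so it should be read as an assumption.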
first_indexed | 2024-04-09T17:10:09Z |
format | Article |
id | doaj.art-47c939ba08fb42d4bcd73c12ebda1f74 |
institution | Directory Open Access Journal |
issn | 2624-9898 |
language | English |
last_indexed | 2024-04-09T17:10:09Z |
publishDate | 2023-04-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Computer Science |
spelling | Frontiers Media S.A., Frontiers in Computer Science, ISSN 2624-9898, vol. 5, 2023-04-01, article 990426, doi: 10.3389/fcomp.2023.990426. Multimodal region-based behavioral modeling for suicide risk screening. Sharifa Alghowinem (Personal Robotics Group, Media Lab, Massachusetts Institute of Technology, Cambridge, MA, United States; Computer and Information Sciences College, Prince Sultan University, Riyadh, Saudi Arabia), Xiajie Zhang, Cynthia Breazeal, and Hae Won Park (Personal Robotics Group, Media Lab, Massachusetts Institute of Technology, Cambridge, MA, United States). https://www.frontiersin.org/articles/10.3389/fcomp.2023.990426/full |
title | Multimodal region-based behavioral modeling for suicide risk screening |
topic | suicide risk screening; nonverbal behavior; speech prosody; region-based behavior analysis; multimodal fusion; deep learning automatic suicide risk screening |
url | https://www.frontiersin.org/articles/10.3389/fcomp.2023.990426/full |