Establishing a machine learning model for predicting nutritional risk through facial feature recognition
Background: Malnutrition affects many people worldwide, necessitating accurate and timely nutritional risk assessment. This study aims to develop and validate a machine learning model using facial feature recognition for predicting nutritional risk. This innovative approach seeks to offer a non-invasive, efficient method for early identification and intervention, ultimately improving health outcomes.
Main Authors: | Jingmin Wang, Chengyuan He, Zhiwen Long |
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2023-09-01 |
Series: | Frontiers in Nutrition |
Subjects: | support vector machine; U-net; histogram of oriented gradient; nutrition; NRS-2002 |
Online Access: | https://www.frontiersin.org/articles/10.3389/fnut.2023.1219193/full |
_version_ | 1797682390154346496 |
author | Jingmin Wang; Chengyuan He; Zhiwen Long |
author_facet | Jingmin Wang; Chengyuan He; Zhiwen Long |
author_sort | Jingmin Wang |
collection | DOAJ |
description | Background: Malnutrition affects many people worldwide, necessitating accurate and timely nutritional risk assessment. This study aims to develop and validate a machine learning model that uses facial feature recognition to predict nutritional risk, offering a non-invasive, efficient method for early identification and intervention and ultimately improving health outcomes. Methods: We gathered medical examination data and facial images from 949 patients across multiple hospitals. In this multicenter investigation, facial images were preprocessed via face alignment and cropping. Orbital fat pads were segmented with a U-net model, and the histogram of oriented gradients (HOG) method was used for feature extraction. Standardized HOG features were reduced in dimensionality with principal component analysis (PCA), and a support vector machine (SVM) classifier was used to predict NRS-2002 nutritional risk scores. The approach establishes a non-linear mapping between facial features and NRS-2002 scores, providing an innovative way to evaluate patient nutritional status. Results: For orbital fat pad segmentation, the U-net model achieved an average Dice coefficient of 88.3%. The proposed method predicted NRS-2002 scores with an accuracy of 73.1%. We also grouped the samples by gender, age, and hospital location to evaluate classification accuracy in different subsets: accuracy was 85% for the elderly group versus 71.1% for the non-elderly group, and 69.2% for males versus 78.6% for females. Hospitals in remote areas, such as Tibet and Yunnan, yielded an accuracy of 76.5% on collected patient samples, whereas hospitals in non-remote areas achieved 71.1%. Conclusion: The attained accuracy of 73.1% supports the feasibility of the method. While not perfect, this level of accuracy highlights the potential for further improvement. This algorithm could transform nutritional risk assessment by providing healthcare professionals and individuals with a non-invasive, cost-effective, and easily accessible tool. |
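The Methods pipeline in the abstract (HOG descriptors on the segmented orbital fat pad, standardization, PCA, then an SVM) maps directly onto standard scikit-image and scikit-learn components. Below is a minimal, hypothetical sketch of that stage plus the Dice metric the Results section uses to score the U-net masks. The patch size, HOG parameters, retained PCA variance, RBF kernel, hold-out split, and the binary NRS-2002 label encoding (score ≥ 3 vs. below) are all illustrative assumptions, not the authors' published configuration, and the U-net segmentation itself is assumed to happen upstream.

```python
# Hypothetical sketch of the HOG -> standardize -> PCA -> SVM stage described
# in the abstract. Images, labels, and the U-net segmentation are assumed to
# come from upstream code; all hyperparameters here are illustrative guesses.
import numpy as np
from skimage.feature import hog
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|), the segmentation metric the paper
    reports (average 88.3%) for U-net orbital fat pad masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0


def hog_features(gray_patch: np.ndarray) -> np.ndarray:
    """HOG descriptor for one grayscale, segmented orbital fat pad patch.
    All patches must share one fixed size so the vectors stack."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")


def train_nrs2002_classifier(patches, labels):
    """patches: list of equally sized grayscale arrays; labels: assumed
    binary NRS-2002 encoding (1 = at risk, score >= 3; 0 = otherwise)."""
    X = np.stack([hog_features(p) for p in patches])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, np.asarray(labels), test_size=0.2, stratify=labels, random_state=0)
    # Standardize, keep 95% of variance via PCA, then fit an RBF-kernel SVM.
    model = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                          SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    print(f"hold-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
    return model
```

Reducing the high-dimensional HOG vectors with PCA before the SVM keeps training tractable and helps avoid overfitting on a cohort of only 949 patients, which is a plausible motivation for the dimensionality reduction step the abstract describes.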
first_indexed | 2024-03-11T23:59:03Z |
format | Article |
id | doaj.art-490d5fcfac69407abe4dca77f4f96715 |
institution | Directory Open Access Journal |
issn | 2296-861X |
language | English |
last_indexed | 2024-03-11T23:59:03Z |
publishDate | 2023-09-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Nutrition |
spelling | Jingmin Wang (College of International Engineering, Xi’an University of Technology, Xi’an, China); Chengyuan He (Recovery Plus Clinic, Chengdu, China); Zhiwen Long (Recovery Plus Clinic, Chengdu, China). Frontiers in Nutrition, vol. 10, article 1219193, 2023-09-01. doi:10.3389/fnut.2023.1219193 |
title | Establishing a machine learning model for predicting nutritional risk through facial feature recognition |
title_full | Establishing a machine learning model for predicting nutritional risk through facial feature recognition |
title_fullStr | Establishing a machine learning model for predicting nutritional risk through facial feature recognition |
title_full_unstemmed | Establishing a machine learning model for predicting nutritional risk through facial feature recognition |
title_short | Establishing a machine learning model for predicting nutritional risk through facial feature recognition |
title_sort | establishing a machine learning model for predicting nutritional risk through facial feature recognition |
topic | support vector machine; U-net; histogram of oriented gradient; nutrition; NRS-2002 |
url | https://www.frontiersin.org/articles/10.3389/fnut.2023.1219193/full |