Deep Neural Network-Based Visual Feedback System for Nasopharyngeal Swab Sampling
During the 2019 coronavirus disease pandemic, robot-based systems for swab sampling were developed to reduce the burden on healthcare workers and their risk of infection. Teleoperated sampling systems are especially valuable as they fundamentally prevent contact with suspected COVID-19 patients. However, the limited field of view of the installed cameras prevents the operator from recognizing the position and deformation of the swab inserted into the nasal cavity…
Main Authors: | Suhun Jung, Yonghwan Moon, Jeongryul Kim, Keri Kim |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-10-01 |
Series: | Sensors |
Subjects: | nasopharyngeal swab testing; load cell; fiducial marker; augmented reality; 1-dimensional convolution neural network |
Online Access: | https://www.mdpi.com/1424-8220/23/20/8443 |
author | Suhun Jung, Yonghwan Moon, Jeongryul Kim, Keri Kim |
collection | DOAJ |
description | During the 2019 coronavirus disease pandemic, robot-based systems for swab sampling were developed to reduce the burden on healthcare workers and their risk of infection. Teleoperated sampling systems are especially valuable as they fundamentally prevent contact with suspected COVID-19 patients. However, the limited field of view of the installed cameras prevents the operator from recognizing the position and deformation of the swab inserted into the nasal cavity, which significantly degrades operating performance. To overcome this limitation, this study proposes a visual feedback system that monitors and reconstructs the shape of a nasopharyngeal (NP) swab using augmented reality (AR). The sampling device contains three load cells that measure the interaction force applied to the swab, while the shape information is captured using a motion-tracking program. These datasets were used to train a one-dimensional convolutional neural network (1DCNN) model, which estimates the coordinates of three feature points of the swab in the 2D X–Y plane. Based on these points, a virtual swab shape reflecting the curvature of the actual one is reconstructed and overlaid on the visual display. The accuracy of the 1DCNN model was evaluated on a 2D plane under ten different bending conditions. The results demonstrate that the x-values of the predicted points show errors under 0.590 mm for $P_0$, while those of $P_1$ and $P_2$ show a biased error of about −1.5 mm with constant standard deviations. For the y-values, the errors of all feature points under positive bending are uniformly within 1 mm, while the error under negative bending grows with the amount of deformation. Finally, experiments using a collaborative robot validate the system's ability to visualize the actual swab's position and deformation on camera images of 2D and 3D phantoms. |
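To make the pipeline described above concrete, the sketch below shows how a 1D CNN could regress the 2D coordinates of the three feature points ($P_0$, $P_1$, $P_2$) from a window of three load-cell force signals, and how a smooth swab centerline could then be reconstructed from those points for the AR overlay. This is a minimal illustration under stated assumptions, not the authors' published implementation: the window length, layer sizes, and the quadratic Bézier reconstruction are assumptions, and the names `Swab1DCNN`, `WINDOW`, and `bezier_curve` are hypothetical.

```python
# Hypothetical sketch (not the authors' code): a 1D CNN that regresses the
# 2D coordinates of three swab feature points from three load-cell signals,
# followed by a quadratic Bezier reconstruction of the swab centerline.
import torch
import torch.nn as nn

WINDOW = 64             # assumed number of force samples per inference window
N_CELLS = 3             # three load cells, per the abstract
N_POINTS = 3            # feature points P0, P1, P2
OUT_DIM = N_POINTS * 2  # (x, y) for each feature point


class Swab1DCNN(nn.Module):
    """1D CNN regressor: (batch, 3 force channels, WINDOW) -> 6 coordinates."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CELLS, 16, kernel_size=5, padding=2),  # 64 -> 64
            nn.ReLU(),
            nn.MaxPool1d(2),                                   # 64 -> 32
            nn.Conv1d(16, 32, kernel_size=5, padding=2),       # 32 -> 32
            nn.ReLU(),
            nn.MaxPool1d(2),                                   # 32 -> 16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (WINDOW // 4), 64),
            nn.ReLU(),
            nn.Linear(64, OUT_DIM),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def bezier_curve(p0: torch.Tensor, p1: torch.Tensor, p2: torch.Tensor,
                 n: int = 50) -> torch.Tensor:
    """Sample a quadratic Bezier curve using P0, P1, P2 as control points
    (one simple reconstruction choice; the paper's may differ)."""
    t = torch.linspace(0.0, 1.0, n).unsqueeze(1)  # (n, 1), broadcasts to (n, 2)
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2


if __name__ == "__main__":
    model = Swab1DCNN().eval()
    forces = torch.randn(1, N_CELLS, WINDOW)      # stand-in for a real force window
    with torch.no_grad():
        pts = model(forces).view(N_POINTS, 2)     # rows: P0, P1, P2 in the X-Y plane
    curve = bezier_curve(pts[0], pts[1], pts[2])  # (50, 2) polyline for the overlay
    print(curve.shape)
```

In use, the reconstructed polyline would be projected into the camera's image coordinates (e.g., via a fiducial-marker pose, as the record's keywords suggest) before being drawn over the live video.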
first_indexed | 2024-03-10T20:55:09Z |
format | Article |
id | doaj.art-e3b53c198b214d4b86c7b8fce58b89ff |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T20:55:09Z |
publishDate | 2023-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-e3b53c198b214d4b86c7b8fce58b89ff. Sensors, vol. 23, no. 20, art. 8443 (2023-10-01), MDPI AG, ISSN 1424-8220, doi:10.3390/s23208443. Author affiliations: Suhun Jung and Jeongryul Kim, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea; Yonghwan Moon, School of Mechanical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea; Keri Kim, Augmented Safety System with Intelligence Sensing and Tracking, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea. |
title | Deep Neural Network-Based Visual Feedback System for Nasopharyngeal Swab Sampling |
topic | nasopharyngeal swab testing; load cell; fiducial marker; augmented reality; 1-dimensional convolution neural network |
url | https://www.mdpi.com/1424-8220/23/20/8443 |