Ethical issues of implementing artificial intelligence in medicine

Bibliographic Details
Main Author: Maxim I. Konkov
Format: Article
Language: English
Published: Eco-Vector 2023-06-01
Series: Digital Diagnostics
Subjects: artificial intelligence, doctors, patients, “black box”
Online Access: https://jdigitaldiagnostics.com/DD/article/viewFile/430348/125000
_version_ 1797755252301103104
author Maxim I. Konkov
author_facet Maxim I. Konkov
author_sort Maxim I. Konkov
collection DOAJ
description Artificial intelligence (AI) systems are highly efficient; however, their implementation in medical practice raises a range of ethical issues. The black box problem is fundamental to the philosophy of AI, although it has its own specificity in medicine. To study the problems of implementing AI in medicine, relevant papers from the last three years were selected by citation count and analyzed using the PubMed and Google Scholar search engines. One of the central problems is that the algorithms' justifications for their decisions remain unclear to doctors and patients. This lack of clear and comprehensible principles of AI operation is called the black box problem. How can doctors rely on AI findings without enough data to explain a particular decision? Who will be responsible for the final decision in case of an adverse outcome (death or serious injury)? In routine practice, medical decisions are based on an integrative approach (an understanding of pathophysiology and biochemistry and the interpretation of past findings), clinical trials, and cohort studies. AI may be used to build a plan for diagnosing and treating a disease without providing a convincing justification for specific decisions. This creates a black box: it is not always clear what information the AI considers important for reaching a conclusion, nor how or why it reaches that conclusion. As Juan M. Durán writes, "Even if we claim to understand the principles underlying AI annotation and training, it is still difficult and often even impossible to understand the inner workings of such systems." The doctor can interpret or verify the results of these algorithms but cannot explain how an algorithm arrived at its recommendation or diagnosis. Currently, AI models are trained to recognize microscopic adenomas and polyps in the colon.
However, despite the high accuracy, doctors still have an insufficient understanding of how AI differentiates between types of polyps, and the signs that are key to an AI diagnosis remain unclear even to experienced endoscopists. Another example is the biomarkers of colorectal cancer recognized by AI. The doctor does not know how the algorithms determine the quantitative and qualitative criteria of the detected biomarkers to formulate a final diagnosis in each individual case; that is, a black box emerges in the pathology process. To earn the trust of doctors and patients, the processes underlying the work of AI must be deciphered and explained, describing step by step how a specific result is produced. Although black box algorithms cannot be called transparent, the possibility of applying these technologies in practical medicine is worth considering. Despite the problems above, the accuracy and efficiency of AI solutions do not allow us to neglect their use; on the contrary, such use is necessary. Emerging problems should serve as a basis for training and educating doctors to work with AI, expanding its scope of application, and developing new diagnostic techniques.
first_indexed 2024-03-12T17:44:10Z
format Article
id doaj.art-f1f6ecbf9f00432d803c366642c2309f
institution Directory Open Access Journal
issn 2712-8490
2712-8962
language English
last_indexed 2024-03-12T17:44:10Z
publishDate 2023-06-01
publisher Eco-Vector
record_format Article
series Digital Diagnostics
spelling doaj.art-f1f6ecbf9f00432d803c366642c2309f 2023-08-03T20:08:47Z eng Eco-Vector Digital Diagnostics 2712-8490 2712-8962 2023-06-01 4 1S 70-72 10.17816/DD430348 76505 Ethical issues of implementing artificial intelligence in medicine Maxim I. Konkov 0 https://orcid.org/0009-0002-2803-1020 N.I. Pirogov Russian National Research Medical University https://jdigitaldiagnostics.com/DD/article/viewFile/430348/125000 artificial intelligence doctors patients “black box”
spellingShingle Maxim I. Konkov
Ethical issues of implementing artificial intelligence in medicine
Digital Diagnostics
artificial intelligence
doctors
patients
“black box”
title Ethical issues of implementing artificial intelligence in medicine
title_full Ethical issues of implementing artificial intelligence in medicine
title_fullStr Ethical issues of implementing artificial intelligence in medicine
title_full_unstemmed Ethical issues of implementing artificial intelligence in medicine
title_short Ethical issues of implementing artificial intelligence in medicine
title_sort ethical issues of implementing artificial intelligence in medicine
topic artificial intelligence
doctors
patients
“black box”
url https://jdigitaldiagnostics.com/DD/article/viewFile/430348/125000
work_keys_str_mv AT maximikonkov ethicalissuesofimplementingartificialintelligenceinmedicine