Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI
Recent advancements in artificial intelligence (AI) technology have raised concerns about the ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an “entropy lens” to root the study in information theory and enhance transparency and trust in “black box” AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human–machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the AI framework’s ability to measure trust in the design and management of AI systems.
| Main Authors: | Michael Mylrea, Nikki Robinson |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2023-10-01 |
| Series: | Entropy |
| Subjects: | trustworthy AI; explainable AI (XAI); artificial general intelligence (AGI); entropy; information theory; autonomous human–machine teams and systems (A-HMT-S) |
| Online Access: | https://www.mdpi.com/1099-4300/25/10/1429 |
_version_ | 1797573896106409984 |
author | Michael Mylrea; Nikki Robinson |
author_facet | Michael Mylrea; Nikki Robinson |
author_sort | Michael Mylrea |
collection | DOAJ |
description | Recent advancements in artificial intelligence (AI) technology have raised concerns about the ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an “entropy lens” to root the study in information theory and enhance transparency and trust in “black box” AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human–machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the AI framework’s ability to measure trust in the design and management of AI systems. |
first_indexed | 2024-03-10T21:16:30Z |
format | Article |
id | doaj.art-3f32a0aa2b55403b94281eccb6c75d9d |
institution | Directory Open Access Journal |
issn | 1099-4300 |
language | English |
last_indexed | 2024-03-10T21:16:30Z |
publishDate | 2023-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Entropy |
spelling | doaj.art-3f32a0aa2b55403b94281eccb6c75d9d | 2023-11-19T16:24:46Z | eng | MDPI AG | Entropy | 1099-4300 | 2023-10-01 | vol. 25, no. 10, art. 1429 | 10.3390/e25101429 | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI | Michael Mylrea (Department of Computer Science & Engineering, Institute of Data Science and Computing, University of Miami, Coral Gables, FL 33146, USA); Nikki Robinson (Department of Computer and Data Science, Capitol Technology University, Laurel, MD 20708, USA) | https://www.mdpi.com/1099-4300/25/10/1429 | trustworthy AI; explainable AI (XAI); artificial general intelligence (AGI); entropy; information theory; autonomous human–machine teams and systems (A-HMT-S) |
spellingShingle | Michael Mylrea; Nikki Robinson | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI | Entropy | trustworthy AI; explainable AI (XAI); artificial general intelligence (AGI); entropy; information theory; autonomous human–machine teams and systems (A-HMT-S) |
title | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI |
title_full | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI |
title_fullStr | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI |
title_full_unstemmed | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI |
title_short | Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI |
title_sort | artificial intelligence ai trust framework and maturity model applying an entropy lens to improve security privacy and ethical ai |
topic | trustworthy AI; explainable AI (XAI); artificial general intelligence (AGI); entropy; information theory; autonomous human–machine teams and systems (A-HMT-S) |
url | https://www.mdpi.com/1099-4300/25/10/1429 |
work_keys_str_mv | AT michaelmylrea artificialintelligenceaitrustframeworkandmaturitymodelapplyinganentropylenstoimprovesecurityprivacyandethicalai AT nikkirobinson artificialintelligenceaitrustframeworkandmaturitymodelapplyinganentropylenstoimprovesecurityprivacyandethicalai |