Understanding and Avoiding AI Failures: A Practical Guide
As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current generation AI systems.
| Main Authors: | Robert Williams, Roman Yampolskiy |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2021-06-01 |
| Series: | Philosophies |
| Subjects: | AI safety; normal accident theory; risk analysis |
| Online Access: | https://www.mdpi.com/2409-9287/6/3/53 |
| _version_ | 1797226598975406080 |
|---|---|
author | Robert Williams; Roman Yampolskiy |
author_facet | Robert Williams; Roman Yampolskiy |
author_sort | Robert Williams |
collection | DOAJ |
description | As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current generation AI systems. |
first_indexed | 2024-03-09T04:48:25Z |
format | Article |
id | doaj.art-fa594111b1774afeb5d00d1bd38bddb2 |
institution | Directory Open Access Journal |
issn | 2409-9287 |
language | English |
last_indexed | 2024-04-24T14:27:28Z |
publishDate | 2021-06-01 |
publisher | MDPI AG |
record_format | Article |
series | Philosophies |
spelling | doaj.art-fa594111b1774afeb5d00d1bd38bddb2 | 2024-04-03T04:30:15Z | eng | MDPI AG | Philosophies | 2409-9287 | 2021-06-01 | vol. 6, iss. 3, art. 53 | 10.3390/philosophies6030053 | Understanding and Avoiding AI Failures: A Practical Guide | Robert Williams (Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA) | Roman Yampolskiy (Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA) | As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current generation AI systems. | https://www.mdpi.com/2409-9287/6/3/53 | AI safety; normal accident theory; risk analysis |
spellingShingle | Robert Williams; Roman Yampolskiy; Understanding and Avoiding AI Failures: A Practical Guide; Philosophies; AI safety; normal accident theory; risk analysis |
title | Understanding and Avoiding AI Failures: A Practical Guide |
title_full | Understanding and Avoiding AI Failures: A Practical Guide |
title_fullStr | Understanding and Avoiding AI Failures: A Practical Guide |
title_full_unstemmed | Understanding and Avoiding AI Failures: A Practical Guide |
title_short | Understanding and Avoiding AI Failures: A Practical Guide |
title_sort | understanding and avoiding ai failures a practical guide |
topic | AI safety; normal accident theory; risk analysis |
url | https://www.mdpi.com/2409-9287/6/3/53 |
work_keys_str_mv | AT robertwilliams understandingandavoidingaifailuresapracticalguide AT romanyampolskiy understandingandavoidingaifailuresapracticalguide |