Addressing uncertainty in the safety assurance of machine-learning

Bibliographic Details
Main Authors: Simon Burton, Benjamin Herd
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-04-01
Series: Frontiers in Computer Science
Subjects: machine learning, safety, assurance arguments, cyber-physical systems, uncertainty, complexity
Online Access: https://www.frontiersin.org/articles/10.3389/fcomp.2023.1132580/full
author Simon Burton
Benjamin Herd
collection DOAJ
description There is increasing interest in the application of machine learning (ML) technologies to safety-critical cyber-physical systems, with the promise of increased levels of autonomy due to their potential for solving complex perception and planning tasks. However, demonstrating the safety of ML is seen as one of the most challenging hurdles to its widespread deployment in such applications. In this paper, we explore the factors that make the safety assurance of ML such a challenging task. In particular, we address the impact of uncertainty on the confidence in ML safety assurance arguments. We show how this uncertainty is related to the complexity of the ML models as well as the inherent complexity of the tasks they are designed to implement. Based on definitions of uncertainty and an exemplary assurance argument structure, we examine typical weaknesses in the argument and how these can be addressed. The analysis combines an understanding of the causes of insufficiencies in ML models with a systematic analysis of the types of asserted context, asserted evidence, and asserted inference within the assurance argument. This leads to a systematic identification of requirements on both the assurance argument structure and the supporting evidence. We conclude that a combination of qualitative arguments and quantitative evidence is required to build a robust argument for the safety-related properties of ML functions, one that is continuously refined to reduce residual and emerging uncertainties after the function has been deployed into its target environment.
first_indexed 2024-04-09T19:15:17Z
format Article
id doaj.art-011d5899e7cd401781b079da8758c8e4
institution Directory Open Access Journal
issn 2624-9898
language English
last_indexed 2024-04-09T19:15:17Z
publishDate 2023-04-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Computer Science
title Addressing uncertainty in the safety assurance of machine-learning
topic machine learning
safety
assurance arguments
cyber-physical systems
uncertainty
complexity
url https://www.frontiersin.org/articles/10.3389/fcomp.2023.1132580/full