Disclosure control of machine learning models from trusted research environments (TRE): New challenges and opportunities

Introduction: Artificial intelligence (AI) applications in healthcare and medicine have increased in recent years. To enable access to personal data, Trusted Research Environments (TREs) (otherwise known as Safe Havens) provide safe and secure environments in which researchers can access sensitive personal data and develop AI (in particular machine learning (ML)) models. However, currently few TREs support the training of ML models, in part due to a gap in the practical decision-making guidance for TREs in handling model disclosure. Specifically, the training of ML models creates a need to disclose new types of outputs from TREs. Although TREs have clear policies for the disclosure of statistical outputs, the extent to which trained models can leak personal training data once released is not well understood.

Background: We review, for a general audience, different types of ML models and their applicability within healthcare. We explain the outputs from training an ML model and how trained ML models can be vulnerable to external attacks to discover personal data encoded within the model.

Risks: We present the challenges for disclosure control of trained ML models in the context of training and exporting models from TREs. We provide insights and analyse methods that could be introduced within TREs to mitigate the risk of privacy breaches when disclosing trained models.

Discussion: Although specific guidelines and policies exist for statistical disclosure controls in TREs, they do not satisfactorily address these new types of output requests, i.e., trained ML models. There is significant potential for new interdisciplinary research opportunities in developing and adapting policies and tools for safely disclosing ML outputs from TREs.
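To make the leakage risk described above concrete, here is a minimal, self-contained sketch (not drawn from the article) of a confidence-threshold membership inference attack: an overfitted model released from a TRE answers more confidently on records it was trained on, letting an attacker infer whether a specific record was in the training set. All data is synthetic and every name below is illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for sensitive records: 2000 rows, 40 features, binary label.
X = rng.normal(size=(2000, 40))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_train, y_train = X[:1000], y[:1000]   # "members": used for training inside the TRE
X_out, y_out = X[1000:], y[1000:]       # "non-members": never seen by the model

# A deliberately overfitted model, standing in for one exported without checks.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def true_label_confidence(clf, X, y):
    # Predicted probability the model assigns to each record's true label.
    return clf.predict_proba(X)[np.arange(len(y)), y]

conf_in = true_label_confidence(model, X_train, y_train)
conf_out = true_label_confidence(model, X_out, y_out)

# Attack rule: flag a record as a training member if confidence > threshold.
threshold = 0.8
print(f"mean confidence on members:     {conf_in.mean():.2f}")
print(f"mean confidence on non-members: {conf_out.mean():.2f}")
print(f"attack true-positive rate:  {(conf_in > threshold).mean():.2f}")
print(f"attack false-positive rate: {(conf_out > threshold).mean():.2f}")

On this synthetic data the gap between the two groups is large because the forest memorises its training set; that gap is exactly the kind of signal disclosure checks on exported models would need to detect.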

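On the mitigation side, the sketch below illustrates output perturbation: adding Laplace noise to a fitted model's coefficients before it leaves the TRE. The noise scale is an assumed, illustrative value, not a calibrated differential-privacy guarantee, and this is one example of the general class of methods the article surveys, not the authors' own procedure.

import copy
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for sensitive training data.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Perturb the learned weights before release. A real privacy guarantee would
# calibrate this scale to the sensitivity of the training procedure; 0.5 is
# an illustrative assumption showing the privacy/accuracy trade-off.
noise_scale = 0.5
released = copy.deepcopy(model)
released.coef_ = model.coef_ + rng.laplace(scale=noise_scale, size=model.coef_.shape)
released.intercept_ = model.intercept_ + rng.laplace(scale=noise_scale, size=model.intercept_.shape)

print("training accuracy, original model:", round(model.score(X, y), 3))
print("training accuracy, released model:", round(released.score(X, y), 3))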

Bibliographic Details
Main Authors: Esma Mansouri-Benssassi (University of Dundee, United Kingdom), Simon Rogers (NHS National Services Scotland, United Kingdom), Smarti Reel (University of Dundee, United Kingdom), Maeve Malone (University of Dundee, United Kingdom), Jim Smith (University of the West of England, United Kingdom), Felix Ritchie (University of the West of England, United Kingdom), Emily Jefferson (University of Dundee, United Kingdom; Health Data Research (HDR), United Kingdom; corresponding author)
Format: Article
Language: English
Published: Elsevier, 2023-04-01
Series: Heliyon, Vol. 9, Iss. 4, Article e15143
ISSN: 2405-8440
Subjects: Trusted research environment; Safe haven; AI; Machine learning; Data privacy; Disclosure control
Online Access: http://www.sciencedirect.com/science/article/pii/S2405844023023502