GRAIMatter: Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMatter).

Full description

Objectives: To assess a range of tools and methods to support Trusted Research Environments (TREs) in assessing output from AI methods for potentially identifiable information, to investigate the legal and ethical implications and controls, and to produce a set of guidelines and recommendations to support all TREs with export controls of AI algorithms.

Approach: TREs provide secure facilities for analysing confidential personal data, with staff checking outputs for disclosure risk before publication. Artificial intelligence (AI) has high potential to improve the linking and analysis of population data, and TREs are well suited to supporting AI modelling. However, TRE governance focuses on classical statistical data analysis. The size and complexity of AI models present significant challenges for the disclosure-checking process. Models may be susceptible to external attacks: complicated methods that reverse engineer the learning process to find out about the data used for training, with more potential to lead to re-identification than conventional statistical methods.

Results: GRAIMatter is:
• Quantitatively assessing the risk of disclosure from different AI models, exploring different models, hyper-parameter settings and training algorithms over common data types
• Evaluating a range of tools to determine their effectiveness for disclosure control
• Assessing the legal and ethical implications of TREs supporting AI development, and identifying aspects of existing legal and regulatory frameworks requiring reform
• Running 4 PPIE workshops to understand participants' priorities and beliefs around safeguarding and securing data
• Developing a set of recommendations, including:
  • suggested open-source toolsets for TREs to use to measure and reduce disclosure risk
  • descriptions of the technical and legal controls and policies TREs should implement across the 5 Safes to support AI algorithm disclosure control
  • training implications both for TRE staff and for how they validate researchers

Conclusion: GRAIMatter is developing a set of usable recommendations for TREs to guard against the additional risks when disclosing trained AI models from TREs.
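A minimal sketch of the kind of attack the abstract alludes to (not from the article itself): a loss-threshold membership inference test. An overfitted model assigns much lower loss to records it was trained on, so an attacker who can query the model and compare losses can guess which records were in the training set. The toy "model" below deliberately memorises its training data to make the effect obvious; all names and thresholds here are illustrative.

```python
# Toy loss-threshold membership inference attack: an overfitted model
# gives near-certain answers on memorised training records and vague
# answers elsewhere, so low loss signals "this record was in training".
import math
import random

random.seed(0)

def make_record():
    """A synthetic (features, label) pair."""
    return tuple(random.random() for _ in range(5)), random.choice([0, 1])

train = [make_record() for _ in range(50)]
holdout = [make_record() for _ in range(50)]

# Deliberately overfitted "model": memorises training points exactly,
# otherwise falls back to the training-set base rate.
memory = {x: y for x, y in train}
base_rate = sum(y for _, y in train) / len(train)

def predict_prob(x):
    if x in memory:                      # memorised -> near-certain answer
        return 0.99 if memory[x] == 1 else 0.01
    return base_rate                     # unseen -> vague answer

def log_loss(x, y):
    p = predict_prob(x)
    return -math.log(p if y == 1 else 1 - p)

# Attack: flag a record as a training-set member when its loss is low.
THRESHOLD = 0.1

def guess_member(x, y):
    return log_loss(x, y) < THRESHOLD

tp = sum(guess_member(x, y) for x, y in train)    # members correctly flagged
fp = sum(guess_member(x, y) for x, y in holdout)  # non-members wrongly flagged
print(f"flagged {tp}/{len(train)} members, {fp}/{len(holdout)} non-members")
```

Real attacks against real models are far noisier than this caricature, but the gap between member and non-member loss is the signal they exploit, which is why releasing a trained model from a TRE is itself a disclosure event.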

Bibliographic Details
Main Authors: Emily Jefferson (University of Dundee), Christian Cole (University of Dundee), Alba Crespi Boixader (University of Dundee), Simon Rogers (NHS Scotland), Maeve Malone (University of Dundee), Felix Ritchie (University of West of England), Jim Smith (University of West of England), Francesco Tava (University of West of England), Angela Daly (University of Dundee), Jillian Beggs (PPIE Co-I), Antony Chuter (PPIE Co-I)
Format: Article
Language: English
Published: Swansea University 2022-08-01
Series: International Journal of Population Data Science
ISSN: 2399-4908
DOI: 10.23889/ijpds.v7i3.2005
Subjects: AI; Artificial Intelligence; Machine Learning; Trusted Research Environments; Safe Haven Environments; Disclosure Control
Online Access: https://ijpds.org/article/view/2005