Trust, but Verify: Informed Consent, AI Technologies, and Public Health Emergencies
To use technology or engage with research or medical treatment typically requires user consent: agreeing to terms of use with technology or services, or providing informed consent for research participation, for clinical trials and medical intervention, or as one legal basis for processing personal data.
Main Author: | Brian Pickering |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-05-01 |
Series: | Future Internet |
Subjects: | informed consent; terms of use; AI-technologies; technology acceptance; trust; public health emergency |
Online Access: | https://www.mdpi.com/1999-5903/13/5/132 |
author | Brian Pickering |
collection | DOAJ |
description | To use technology or engage with research or medical treatment typically requires user consent: agreeing to terms of use with technology or services, or providing informed consent for research participation, for clinical trials and medical intervention, or as one legal basis for processing personal data. Introducing AI technologies, where explainability and trustworthiness are focus items for both government guidelines and responsible technologists, imposes additional challenges. Understanding enough of the technology to be able to make an informed decision, or consent, is essential but involves an acceptance of uncertain outcomes. Further, the contribution of AI-enabled technologies not least during the COVID-19 pandemic raises ethical concerns about the governance associated with their development and deployment. Using three typical scenarios—contact tracing, big data analytics and research during public emergencies—this paper explores a trust-based alternative to consent. Unlike existing consent-based mechanisms, this approach sees consent as a typical behavioural response to perceived contextual characteristics. Decisions to engage derive from the assumption that all relevant stakeholders including research participants will negotiate on an ongoing basis. Accepting dynamic negotiation between the main stakeholders as proposed here introduces a specifically socio–psychological perspective into the debate about human responses to artificial intelligence. This trust-based consent process leads to a set of recommendations for the ethical use of advanced technologies as well as for the ethical review of applied research projects. |
id | doaj.art-e10d43a6dbc140acbfe48fd14923332e |
institution | Directory Open Access Journal |
issn | 1999-5903 |
doi | 10.3390/fi13050132
affiliation | IT Innovation, Electronics and Computing, University of Southampton, University Road, Southampton SO17 1BJ, UK
title | Trust, but Verify: Informed Consent, AI Technologies, and Public Health Emergencies |
topic | informed consent; terms of use; AI-technologies; technology acceptance; trust; public health emergency