Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier

This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of tr...

Bibliographic Details
Main Authors: Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund, Renee Wurth
Format: Article
Language: English
Published: Frontiers Media S.A. 2021-07-01
Series: Frontiers in Human Dynamics
Subjects: artificial intelligence, healthcare, trustworthy AI, ethics, malignant melanoma, Z-Inspection®
Online Access: https://www.frontiersin.org/articles/10.3389/fhumd.2021.688152/full

Similar Items

  • On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls
    by: Roberto V. Zicari, et al.
    Published: (2021-07-01)
  • Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI
    by: Adriano Lucieri, et al.
    Published: (2023-07-01)
  • From Bit to Bedside: A Practical Framework for Artificial Intelligence Product Development in Healthcare
    by: David Higgins, et al.
    Published: (2020-10-01)
  • The privacy-explainability trade-off: unraveling the impacts of differential privacy and federated learning on attribution methods
    by: Saifullah Saifullah, et al.
    Published: (2024-07-01)
  • To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems
    by: Julia Amann, et al.
    Published: (2022-02-01)
