Recommendations for Discipline-Specific FAIRness Evaluation Derived from Applying an Ensemble of Evaluation Tools

From a research data repository's perspective, offering research data management services in line with the FAIR principles is becoming increasingly important. However, no globally established and trusted approach to evaluating FAIRness exists to date. Here, we apply five available FAIRness evaluation approaches to selected data archived in the World Data Center for Climate (WDCC). Two approaches are purely automatic, two are purely manual, and one applies a hybrid method combining manual and automatic assessment. Our evaluation yields an overall mean FAIR score for WDCC-archived (meta)data of 0.67 out of 1, with a range of 0.5 to 0.88. Manual approaches produce higher scores than automated ones, and the hybrid approach produces the highest score. Computed statistics indicate that the tested approaches agree well overall at the data-collection level. We find that while none of the five evaluation approaches is fully fit for purpose to evaluate (discipline-specific) FAIRness, each has individual strengths: manual approaches capture contextual aspects of FAIRness relevant for reuse, whereas automated approaches focus on the strictly standardised aspects of machine actionability. Correspondingly, the hybrid method combines the advantages and eliminates the deficiencies of the manual and automatic evaluation approaches. Based on our results, we recommend that future FAIRness evaluation tools be built on a mature hybrid approach. The design and adoption of discipline-specific aspects of FAIRness, in particular, will have to be undertaken in concerted community efforts.

Bibliographic Details
Main Authors: Karsten Peters-von Gehlen, Heinke Höck, Andrej Fast, Daniel Heydebreck, Andrea Lammert, Hannes Thiemann (all: Deutsches Klimarechenzentrum GmbH, Bundesstr. 45a, D-20146 Hamburg)
Format: Article
Language: English
Published: Ubiquity Press 2022-03-01
Series: Data Science Journal
Subjects: FAIR; FAIRness evaluation; climate science; long-term archive; data curation; reusability; WDCC
Online Access: https://datascience.codata.org/articles/1390
DOI: 10.5334/dsj-2022-007
ISSN: 1683-1470
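The abstract's ensemble-style scoring (five evaluation tools, an overall mean score, and a score range) can be sketched as a small aggregation routine. This is a hypothetical illustration only: the tool names and score values below are placeholders, not the study's actual per-tool results, and the standard deviation is just one crude stand-in for the agreement statistics the paper computes.

```python
from statistics import mean, pstdev

# Hypothetical per-tool FAIRness scores on a 0-1 scale.
# Names and values are illustrative placeholders, not the study's data.
scores = {
    "manual_tool_a": 0.80,
    "manual_tool_b": 0.75,
    "automatic_tool_a": 0.55,
    "automatic_tool_b": 0.60,
    "hybrid_tool": 0.85,
}

values = list(scores.values())
overall_mean = mean(values)               # ensemble mean FAIR score
score_range = (min(values), max(values))  # spread across approaches
spread = pstdev(values)                   # crude indicator of tool agreement

print(f"mean={overall_mean:.2f}, range={score_range}, stdev={spread:.2f}")
```

With real evaluations, `scores` would be populated from each tool's report for the same data collection, so that manual, automatic, and hybrid approaches can be compared on a common scale.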