Summary: | For a number of years, facial feature removal techniques such as ‘defacing’, ‘skull stripping’ and ‘face masking/blurring’ were considered adequate privacy-preserving tools for openly sharing brain images. Even then, these measures represented a compromise between data protection requirements and the research impact of such data. Recent advances in machine learning and deep learning, which indicate an increased possibility of re-identification from defaced neuroimages, have now heightened the tension between open science and data protection requirements. Researchers are left pondering how best to comply with the differing jurisdictional requirements of anonymization, pseudonymisation or de-identification without further compromising the scientific utility of neuroimages. In this paper, we present perspectives intended to clarify the meaning and scope of these concepts and highlight the privacy limitations of available pseudonymisation and de-identification techniques. We also discuss possible technical and organizational measures and safeguards that can facilitate the sharing of pseudonymised neuroimages without further reducing the utility of the data.