Explainable AI for earth observation: A review including societal and regulatory perspectives
Artificial intelligence and machine learning are ubiquitous in the domain of Earth Observation (EO) and Remote Sensing. Consistent with their success in computer vision, they have been shown to achieve high accuracy in EO applications. Yet EO experts should also consider the weaknesses o...
Main Author: | Caroline M. Gevaert |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2022-08-01 |
Series: | International Journal of Applied Earth Observations and Geoinformation |
Subjects: | Earth observation; Remote sensing; Machine learning; Explainable artificial intelligence; Ethics; Regulations |
Online Access: | http://www.sciencedirect.com/science/article/pii/S1569843222000711 |
author | Caroline M. Gevaert |
collection | DOAJ |
description | Artificial intelligence and machine learning are ubiquitous in the domain of Earth Observation (EO) and Remote Sensing. Consistent with their success in computer vision, they have been shown to achieve high accuracy in EO applications. Yet EO experts should also consider the weaknesses of complex machine-learning models before adopting them for specific applications. One such weakness is the lack of explainability of complex deep learning models. This paper reviews published examples of explainable ML or explainable AI in the field of Earth Observation. Explainability methods are classified as intrinsic versus post-hoc, model-specific versus model-agnostic, and global versus local, and examples of each type are provided. The paper also identifies key explainability requirements identified in the social sciences, upcoming regulatory guidance from the UNESCO Recommendation on the Ethics of Artificial Intelligence, and requirements from the EU draft Artificial Intelligence Act, and analyzes whether these are sufficiently addressed in the field of EO. The findings indicate that there is a lack of clarity regarding which models can be considered interpretable. EO applications often use Random Forests as an "interpretable" benchmark algorithm against which to compare complex deep-learning models, even though the social-science literature clearly argues that large Random Forests cannot be considered interpretable. Secondly, most explanations target domain experts rather than prospective users of the algorithm, regulatory bodies, or those who might be affected by an algorithm's decisions. Finally, publications tend to simply provide explanations without testing the usefulness of the explanation with the intended audience.
In light of these societal and regulatory considerations, a framework is provided to guide the selection of an appropriate machine learning algorithm based on the availability of simpler algorithms with high predictive accuracy, as well as the purpose and intended audience of the explanation. |
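To make the taxonomy in the abstract concrete, below is a minimal pure-Python sketch of permutation feature importance, a post-hoc, model-agnostic, global explanation method. The toy spectral-index classifier, band names, and threshold are illustrative stand-ins, not taken from the paper; in a real EO workflow the black box would be a trained Random Forest or CNN.

```python
import random

# Toy "black-box" model: predicts vegetation from two spectral bands via an
# NDVI-style index. Hand-written here so the sketch stays self-contained.
def model(nir, red):
    ndvi = (nir - red) / (nir + red + 1e-9)
    return 1 if ndvi > 0.3 else 0

random.seed(42)
# Synthetic samples: (nir, red) reflectances with labels produced by the model.
X = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(500)]
y = [model(nir, red) for nir, red in X]

def accuracy(X, y):
    return sum(model(nir, red) == yi for (nir, red), yi in zip(X, y)) / len(y)

def permutation_importance(X, y, feature_idx):
    """Drop in accuracy when one feature column is randomly shuffled.
    Post-hoc (computed after training), model-agnostic (only needs
    predictions), and global (one score for the whole dataset)."""
    baseline = accuracy(X, y)
    cols = [list(c) for c in zip(*X)]
    random.shuffle(cols[feature_idx])
    X_perm = list(zip(*cols))
    return baseline - accuracy(X_perm, y)

for i, name in enumerate(["NIR", "red"]):
    print(f"{name}: importance = {permutation_importance(X, y, i):.3f}")
```

Because the explanation only queries the model's predictions, the same code would work unchanged for any classifier, which is exactly what "model-agnostic" means in the classification above.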
first_indexed | 2024-04-14T02:56:42Z |
format | Article |
id | doaj.art-35a6d823095644088ba8ac702b360498 |
institution | Directory Open Access Journal |
issn | 1569-8432 |
language | English |
last_indexed | 2024-04-14T02:56:42Z |
publishDate | 2022-08-01 |
publisher | Elsevier |
record_format | Article |
series | International Journal of Applied Earth Observations and Geoinformation |
spelling | Caroline M. Gevaert (Dept. of Earth Observation Science, ITC, University of Twente, Enschede, the Netherlands). Elsevier, International Journal of Applied Earth Observations and Geoinformation, ISSN 1569-8432, 2022-08-01 |
title | Explainable AI for earth observation: A review including societal and regulatory perspectives |
topic | Earth observation; Remote sensing; Machine learning; Explainable artificial intelligence; Ethics; Regulations |
url | http://www.sciencedirect.com/science/article/pii/S1569843222000711 |