Explanation from neural networks

Detailed description

Neural networks have frequently been found to give accurate solutions to hard classification problems. However, neural networks do not give explained classifications, because the class boundaries are implicitly defined by the network weights, and these weights do not lend themselves to simple analysis. Explanation is desirable because it gives insight into the problem both to the designer and to the user of the classifier.

Many methods have been suggested for explaining the classifications given by a neural network, but they all suffer from one or more of the following disadvantages:

- a lack of equivalence between the network and the explanation;
- the absence of a probability framework required to express the uncertainty present in the data;
- a restriction to problems with binary or coarsely discretised features;
- reliance on axis-aligned rules, which are intrinsically poor at describing the boundaries generated by neural networks.

The structure of the solution presented in this thesis rests on the following steps:

1. Train a standard neural network to estimate the class conditional probabilities. Bayes' rule then defines the optimal class boundaries.
2. Obtain an explicit representation of these class boundaries using a piece-wise linearisation technique. Note that the class boundaries are otherwise only implicitly defined by the network weights.
3. Obtain a safe but possibly partial description of this explicit representation using rules based upon the city-block distance to a prototype pattern.

The methods required to achieve the last two steps represent novel work which seeks to explain the answers given by a proven neural network solution to the classification problem.
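
As an illustration of step 1 above, the sketch below trains a standard feed-forward network whose outputs, under a cross-entropy loss, approximate the class membership probabilities; choosing the class with the largest estimated probability is then the Bayes-optimal decision. The library, dataset, and network size are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy stand-in for a hard classification problem (not the thesis data).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# A standard feed-forward network trained with a cross-entropy loss; its
# outputs approximate the posterior class probabilities P(C_k | x).
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

posteriors = net.predict_proba(X[:3])      # estimates of P(C_k | x)
decisions = np.argmax(posteriors, axis=1)  # Bayes' rule: largest posterior wins
print(posteriors)
print(decisions)                           # same as net.predict(X[:3])
```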

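Step 2's piece-wise linearisation is illustrated here with a deliberate simplification: a ReLU network, for which the linear pieces are exact rather than approximate. Within one pattern of active hidden units the output is an affine function w·x + c, so each piece of the class boundary f(x) = 0 is a hyperplane. The weights are made-up numbers; the thesis's own linearisation technique is not reproduced here.

```python
import numpy as np

# A tiny fixed ReLU network: f(x) = w2 . relu(W1 x + b1) + b2.
W1 = np.array([[1.0, -0.5], [0.3, 0.8], [-0.7, 0.2]])
b1 = np.array([0.1, -0.2, 0.05])
w2 = np.array([0.6, -1.1, 0.9])
b2 = 0.02

def f(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_piece(x0):
    """Affine piece (w, c) with f(x) = w.x + c throughout x0's ReLU region."""
    active = (W1 @ x0 + b1 > 0).astype(float)  # which hidden units fire at x0
    w = (w2 * active) @ W1                     # gradient of f in this region
    c = f(x0) - w @ x0
    return w, c

x0 = np.array([0.4, -0.1])
w, c = local_piece(x0)
x1 = x0 + np.array([0.01, 0.02])               # nearby point, same ReLU region
assert np.isclose(f(x1), w @ x1 + c)           # the affine formula is exact here
print("local boundary piece: w =", w, ", c =", c)
```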
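
Step 3's rules describe regions of the form ||x - p||_1 <= r around a prototype pattern p, and a rule is safe when the whole region lies on one side of the boundary. For a single linear piece w·x + c = 0 from step 2, the worst-case decrease of w·x over such a region is r·||w||_∞ (all of the perturbation lands on the largest weight), so the largest safe radius is (w·p + c)/||w||_∞. A minimal sketch, with illustrative numbers and a hypothetical helper name:

```python
import numpy as np

def max_safe_radius(w, c, p):
    """Largest city-block radius r such that every x with ||x - p||_1 <= r
    satisfies w.x + c >= 0, i.e. the rule never crosses this boundary piece."""
    margin = float(w @ p + c)
    if margin <= 0.0:
        return 0.0                     # the prototype itself is on the wrong side
    return margin / np.max(np.abs(w))  # worst case loses r * ||w||_inf of margin

# One linear boundary piece and a prototype pattern (illustrative values only).
w = np.array([2.0, -1.0, 0.5])
c = -0.3
p = np.array([1.0, 0.2, 0.1])
print(f"safe rule: assign the class whenever ||x - p||_1 <= {max_safe_radius(w, c, p):.3f}")
```

Because such rules can only under-cover the true region, the resulting description is safe but possibly partial, as the abstract states.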

Bibliographic details

Main author: Corbett-Clark, T. (Timothy Corbett-Clark)
Other authors: Tarassenko, L.
Format: Thesis
Language: English
Published: 1998
Institution: University of Oxford
Subjects: Pattern recognition (statistics)