Learning logic programs by explaining their failures
<p>Scientists form hypotheses and experimentally test them. If a hypothesis fails (is refuted), scientists try to <em>explain</em> the failure to eliminate other hypotheses. The more precise the failure analysis the more hypotheses can be eliminated. Thus inspired, we introduce failure explanation techniques for inductive logic programming. Given a hypothesis represented as a logic program, we test it on examples. If a hypothesis fails, we explain the failure in terms of failing sub-programs. In case a positive example fails, we identify failing sub-programs at the granularity of literals. We introduce a failure explanation algorithm based on analysing branches of SLD-trees. We integrate a meta-interpreter based implementation of this algorithm with the test-stage of the POPPER ILP system. We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space. Our experimental results show that explaining failures can drastically reduce hypothesis space exploration and learning times.</p>

Main Authors: Morel, R; Cropper, A
Format: Journal article
Language: English
Published: Springer, 2023
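The abstract describes explaining a failed positive example at the granularity of literals by analysing failing branches of the SLD-tree with a meta-interpreter. The following is a minimal illustrative sketch, not the authors' POPPER implementation: a toy proof search over propositional definite clauses (acyclic, ground, no variables) that records, for each failing SLD branch, the first literal that could not be proved. The `solve` function and the program encoding are assumptions made for illustration only.

```python
# Sketch of literal-granularity failure explanation (hypothetical encoding):
# a program maps each atom to a list of clause bodies (lists of atoms).
# An empty body means the atom is a fact.

def solve(program, atom, failing=None):
    """SLD-style proof search for `atom`. Collects in `failing` the first
    unprovable literal on each failing branch (the failure explanation)."""
    if failing is None:
        failing = []
    bodies = program.get(atom, [])
    if not bodies:                       # no clause defines the atom: it fails
        failing.append(atom)
        return False, failing
    for body in bodies:                  # each body is one SLD branch
        for lit in body:
            ok, _ = solve(program, lit, failing)
            if not ok:                   # branch fails at this literal
                break
        else:
            return True, failing         # every body literal was proved
    return False, failing

# Hypothetical hypothesis: f :- p, q.  p is a fact; q has no clause.
program = {"f": [["p", "q"]], "p": [[]]}
proved, failing = solve(program, "f")
# proved is False, and `failing` pinpoints "q" as the failing literal
```

A real ILP system would generalise this to first-order clauses with unification, and use the identified failing sub-program to prune all hypotheses sharing it, which is the source of the constraint-learning speedup the abstract reports.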
author | Morel, R; Cropper, A |
collection | OXFORD |
description | <p>Scientists form hypotheses and experimentally test them. If a hypothesis fails (is refuted), scientists try to <em>explain</em> the failure to eliminate other hypotheses. The more precise the failure analysis the more hypotheses can be eliminated. Thus inspired, we introduce failure explanation techniques for inductive logic programming. Given a hypothesis represented as a logic program, we test it on examples. If a hypothesis fails, we explain the failure in terms of failing sub-programs. In case a positive example fails, we identify failing sub-programs at the granularity of literals. We introduce a failure explanation algorithm based on analysing branches of SLD-trees. We integrate a meta-interpreter based implementation of this algorithm with the test-stage of the POPPER ILP system. We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space. Our experimental results show that explaining failures can drastically reduce hypothesis space exploration and learning times.</p> |
format | Journal article |
id | oxford-uuid:ef44dde5-3671-4945-963f-7b97fbad8a7f |
institution | University of Oxford |
language | English |
publishDate | 2023 |
publisher | Springer |
record_format | dspace |
title | Learning logic programs by explaining their failures |