Learning logic programs by explaining their failures

Bibliographic Details
Main Authors: Morel, R.; Cropper, A.
Format: Journal article
Language: English
Published: Springer, 2023
Description
Summary: Scientists form hypotheses and experimentally test them. If a hypothesis fails (is refuted), scientists try to explain the failure to eliminate other hypotheses. The more precise the failure analysis, the more hypotheses can be eliminated. Thus inspired, we introduce failure explanation techniques for inductive logic programming. Given a hypothesis represented as a logic program, we test it on examples. If a hypothesis fails, we explain the failure in terms of failing sub-programs. In case a positive example fails, we identify failing sub-programs at the granularity of literals. We introduce a failure explanation algorithm based on analysing branches of SLD-trees. We integrate a meta-interpreter-based implementation of this algorithm with the test stage of the POPPER ILP system. We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space. Our experimental results show that explaining failures can drastically reduce hypothesis space exploration and learning times.
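
As a concrete illustration of literal-level failure explanation, here is a minimal Prolog sketch, not the authors' implementation: a toy meta-interpreter prove/2 follows a single SLD branch of a hypothesis and, when a goal is unprovable, returns the literal at which that branch failed. The encoding of hypotheses as rule/2 facts, the faulty last/2 definition (its base case is deliberately missing), and the background predicate tail/2 are all illustrative assumptions, not POPPER's representation.

```prolog
% Minimal sketch of failure explanation at the granularity of literals.
% Assumptions (not from the paper): hypotheses are stored as rule/2
% facts mapping a head to a list of body literals; background
% predicates are ordinary Prolog clauses.

% Hypothesis under test: last/2 with its base case missing.
rule(last(Xs, X), [tail(Xs, T), last(T, X)]).

% Background knowledge.
tail([_|T], T).

% prove(Goal, Explanation): Explanation is `none` if Goal succeeds,
% otherwise the literal at which the SLD branch failed. For brevity
% the sketch commits to the first matching clause, i.e. it explains
% one SLD branch rather than the whole tree.
prove(Goal, Explanation) :-
    rule(Goal, Body), !,
    prove_body(Body, Explanation).
prove(Goal, none) :-
    call(Goal), !.          % background literal succeeds
prove(Goal, Goal).          % Goal itself is the failing literal

prove_body([], none).
prove_body([Lit|Lits], Explanation) :-
    prove(Lit, E),
    (   E == none
    ->  prove_body(Lits, Explanation)
    ;   Explanation = E
    ).
```

On the failing positive example last([a], a), the query ?- prove(last([a], a), E). yields E = tail([], _): recursion bottoms out on the empty list, pointing at the missing base case. Per the abstract, such a failing sub-program can then be turned into a constraint that prunes every hypothesis containing it, which is how fine-grained explanations reduce hypothesis space exploration.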