Abstract: Rule learning involves developing machine learning models
that can be applied to a set of logical facts to predict additional
facts, as well as providing methods for extracting from the
learned model a set of logical rules that symbolically explain
the model’s predictions. Such existing approaches, however,
do not formally describe the relationship between the model’s
predictions and the derivations of the extracted rules; rather,
it is often claimed without justification that the extracted rules
‘approximate’ or ‘explain’ the model, and rule quality is evaluated by manual inspection. In this paper, we study the formal properties of Neural-LP—a prominent rule learning approach. We show that the rules extracted from Neural-LP
models can be both unsound and incomplete: on the same input dataset, the extracted rules can derive facts not predicted
by the model, and the model can make predictions not derived
by the extracted rules. We also propose a modification to the
Neural-LP model that ensures that the extracted rules are always sound and complete. Finally, we show that, on several
prominent benchmarks, the classification performance of our
modified model is comparable to that of the standard Neural-LP model. Thus, faithful learning of rules is feasible from
both a theoretical and practical point of view.