An experimental study measuring human annotator categorization agreement on commonsense sentences

Bibliographic Details
Main Authors: Henrique Santos, Mayank Kejriwal, Alice M. Mulvehill, Gretchen Forbush, Deborah L. McGuinness, Adín Ramírez Rivera
Format: Article
Language: English
Published: Cambridge University Press 2021-01-01
Series: Experimental Results
Online Access: https://www.cambridge.org/core/product/identifier/S2516712X21000095/type/journal_article
Description
Summary: Developing agents capable of commonsense reasoning is an important goal in Artificial Intelligence (AI) research. Because commonsense is broadly defined, a computational theory that can formally categorize the various kinds of commonsense knowledge is critical for enabling fundamental research in this area. In a recent book, Gordon and Hobbs described such a categorization, argued to be reasonably complete. However, the theory’s reliability has not been independently evaluated through human annotator judgments. This paper describes such an experimental study, whereby annotations were elicited across a subset of eight foundational categories proposed in the original Gordon-Hobbs theory. We avoid bias by eliciting annotations on 200 sentences from a commonsense benchmark dataset independently developed by an external organization. The results show that, while humans agree on relatively concrete categories like time and space, they disagree on more abstract concepts. The implications of these findings are briefly discussed.
ISSN:2516-712X