Using Neural Networks to Generate Inferential Roles for Natural Language

Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.
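
For readers unfamiliar with the approach named in the abstract, the sketch below illustrates the general idea of a tree-structured (recursive) encoder in the SNLI setting. It is a minimal, assumed example only, not the authors' model: the article's network generates entailed sentences, whereas this toy forward pass merely composes word vectors over hand-written binary parses (with random, untrained weights and a hypothetical vocabulary) and scores a premise/hypothesis pair as entailment, neutral, or contradiction.

    # Illustrative sketch only (not the authors' implementation): a tiny
    # tree-structured encoder that composes word vectors bottom-up over a
    # binary parse, then scores the relation between a premise and a
    # hypothesis, SNLI-style. Dimensions, vocabulary, parses, and weights
    # are all hypothetical; weights are random and untrained.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 16  # assumed embedding / hidden size

    # Hypothetical vocabulary with random embeddings standing in for trained ones.
    vocab = {w: rng.normal(scale=0.1, size=DIM)
             for w in ["a", "dog", "runs", "outside", "an", "animal", "moves"]}

    # Composition weights: child vectors are concatenated and projected back to DIM.
    W_compose = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
    b_compose = np.zeros(DIM)

    # Classifier over [premise; hypothesis] -> {entailment, neutral, contradiction}.
    W_cls = rng.normal(scale=0.1, size=(3, 2 * DIM))
    b_cls = np.zeros(3)

    def encode(tree):
        """Recursively encode a binary parse: a leaf is a word string,
        an internal node is a (left, right) pair."""
        if isinstance(tree, str):
            return vocab[tree]
        left, right = tree
        children = np.concatenate([encode(left), encode(right)])
        return np.tanh(W_compose @ children + b_compose)

    def relation_probs(premise_tree, hypothesis_tree):
        """Softmax over the three SNLI relation labels for a sentence pair."""
        pair = np.concatenate([encode(premise_tree), encode(hypothesis_tree)])
        logits = W_cls @ pair + b_cls
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    # Hypothetical binary parses for a premise/hypothesis pair.
    premise = (("a", "dog"), ("runs", "outside"))
    hypothesis = (("an", "animal"), "moves")
    print(dict(zip(["entailment", "neutral", "contradiction"],
                   relation_probs(premise, hypothesis).round(3))))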

Bibliographic Details
Main Authors: Peter Blouw, Chris Eliasmith
Format: Article
Language: English
Published: Frontiers Media S.A., 2018-01-01
Series: Frontiers in Psychology
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2017.02335
Subjects: natural language inference; recursive neural networks; language comprehension; semantics
Online Access: http://journal.frontiersin.org/article/10.3389/fpsyg.2017.02335/full