What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
Abstract: There is a growing interest in investigating what neural NLP models learn about language. A prominent open question is whether it is necessary to model hierarchical structure. We present a linguistic investigation of a neural parser adding insights to t...
Main Authors: Miryam de Lhoneux, Sara Stymne, Joakim Nivre
Format: Article
Language: English
Published: The MIT Press, 2021-12-01
Series: Computational Linguistics
Online Access: https://direct.mit.edu/coli/article/46/4/763/97325/What-Should-Do-Can-LSTMs-Learn-When-Parsing
Similar Items
- Greedy Transition-Based Dependency Parsing with Stack LSTMs
  by: Miguel Ballesteros, et al.
  Published: (2017-03-01)
- Nucleus Composition in Transition-based Dependency Parsing
  by: Joakim Nivre, et al.
  Published: (2022-07-01)
- On coordination and clitic climbing in Spanish auxiliary verb constructions
  by: Krivochen, DG, et al.
  Published: (2022)
- Intransitive verbs and Italian auxiliaries
  by: Burzio, Luigi
  Published: (2009)
- Modal auxiliary verb constructions in East African Bantu languages
  by: Rasmus Bernander, et al.
  Published: (2022-05-01)