Improving statistical parsing by linguistic regularization
Main Authors:
Other Authors:
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE), 2012
Online Access: http://hdl.handle.net/1721.1/71163 https://orcid.org/0000-0002-1061-1871 https://orcid.org/0000-0002-9207-4888
Summary: Statistically-based parsers for large corpora, in particular the Penn Tree Bank (PTB), typically have not used all the linguistic information encoded in the annotated trees on which they are trained. In particular, they have not in general used information that records the effects of derivations, such as empty categories and the representation of displaced phrases, as is the case with passive, topicalization, and wh-constructions. Here we explore ways to use this information to "unwind" derivations, yielding a regularized underlying syntactic structure that can be used as an additional source of information for more accurate parsing. In effect, we make use of two joint sets of tree structures for parsing: the surface structure and its corresponding underlying structure, where arguments have been restored to their canonical positions. We present a pilot experiment on passives in the PTB indicating that through the use of these two syntactic representations we can improve overall parsing performance by exploiting transformational regularities, in this way paring down the search space of possible syntactic analyses.
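The "unwinding" the summary describes — mapping a passive surface tree onto an underlying tree with arguments restored to canonical positions — can be sketched in a few lines. The tree encoding (nested tuples), the `*trace*` marker standing in for the PTB's empty-category coindexing, and the `unwind_passive` function are all illustrative assumptions, not the paper's actual representation or algorithm.

```python
# Illustrative sketch of regularizing a passive clause into its
# canonical (active-order) underlying structure. Trees are nested
# tuples: (label, child, child, ...); leaves are plain strings.
# The "*trace*" leaf is a simplified stand-in for the PTB's empty
# categories (NP-1 ... *-1 coindexing), an assumption of this sketch.

def unwind_passive(tree):
    """If `tree` matches the simplified passive shape
       (S, patient-NP, (VP, aux, (VP, verb, (NP, '*trace*'),
                                  (PP, 'by', agent-NP))))
    return the regularized (S, agent-NP, (VP, verb, patient-NP)).
    Otherwise return the tree unchanged."""
    if tree[0] != "S" or len(tree) != 3:
        return tree
    subj, vp = tree[1], tree[2]
    if vp[0] != "VP" or len(vp) != 3:
        return tree
    aux, inner = vp[1], vp[2]
    # Only a toy list of passive auxiliaries is checked here.
    if aux not in ("was", "were", "is", "are", "be", "been"):
        return tree
    if inner[0] != "VP" or len(inner) != 4:
        return tree
    verb, trace, pp = inner[1], inner[2], inner[3]
    if trace != ("NP", "*trace*") or pp[0] != "PP" or pp[1] != "by":
        return tree
    agent = pp[2]
    # Restore the agent to subject position and the patient to
    # object position, dropping the auxiliary and the by-phrase.
    return ("S", agent, ("VP", verb, subj))

surface = ("S",
           ("NP", "the", "report"),
           ("VP", "was",
            ("VP", "written",
             ("NP", "*trace*"),
             ("PP", "by", ("NP", "the", "committee")))))

underlying = unwind_passive(surface)
# underlying is:
# ("S", ("NP", "the", "committee"),
#       ("VP", "written", ("NP", "the", "report")))
```

In the paper's setting both representations are kept jointly, so a parser can score an analysis against the surface tree and its regularized counterpart at once; this sketch shows only the surface-to-underlying direction for a single construction.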