Do neural nets learn statistical laws behind natural language?
The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language e...
| Main Authors: | Shuntaro Takahashi, Kumiko Tanaka-Ishii |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2017-01-01 |
| Series: | PLoS ONE |
| Online Access: | http://europepmc.org/articles/PMC5747447?pdf=render |
Similar Items
- Evaluating Computational Language Models with Scaling Properties of Natural Language
  by: Takahashi, Shuntaro, et al.
  Published: (2019-09-01)
- Cross Entropy of Neural Language Models at Infinity—A New Bound of the Entropy Rate
  by: Shuntaro Takahashi, et al.
  Published: (2018-11-01)
- Menzerath’s Law in the Syntax of Languages Compared with Random Sentences
  by: Kumiko Tanaka-Ishii
  Published: (2021-05-01)
- Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora
  by: Ryosuke Takahira, et al.
  Published: (2016-10-01)
- Behind the Veil of Language
  by: Bokhtar Bakozoda
  Published: (2018-10-01)