Scalable syntactic inductive biases for neural language models
Natural language has a sequential surface form, although its underlying structure has been argued to be hierarchical and tree-structured in nature, whereby smaller linguistic units like words are recursively composed to form larger ones, such as phrases and sentences. This thesis aims to an...
Author: Kuncoro, AS
Other authors: Blunsom, P
Format: Thesis
Language: English
Published: 2022
Subjects:
Similar resources
- Incremental generative models for syntactic and semantic natural language processing, by Buys, J. Published: (2017)
- Large Language Models are Not Models of Natural Language: They are Corpus Models, by Csaba Veres. Published: (2022-01-01)
- Understanding video through the lens of language, by Bain, M. Published: (2023)
- Simplicity and learning to distinguish arguments from modifiers, by Leon Bergen, et al. Published: (2022-12-01)
- Non-parametric deep learning with applications in active learning, by Band, N. Published: (2022)