Neural attentions for natural language understanding and modeling
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Main author: | Luo, Hongyin |
---|---
Other authors: | James Glass |
Format: | Thesis |
Language: | English |
Published: | Massachusetts Institute of Technology, 2019 |
Online access: | https://hdl.handle.net/1721.1/122760 |
Similar records
- Neural architectures for natural language understanding
  by: Tay, Yi
  Published: (2019)
- Interpretable neural models for natural language processing
  by: Lei, Tao, Ph. D. Massachusetts Institute of Technology
  Published: (2017)
- Self-Training for Natural Language Processing
  by: Luo, Hongyin
  Published: (2022)
- Attention mechanism optimization for sub-symbolic-based and neural-symbolic-based natural language processing
  by: Ni, Jinjie
  Published: (2023)
- Towards human-like natural language understanding with language models
  by: Yordanov, Y
  Published: (2024)