Unsupervised Lexicon Discovery from Acoustic Input
We present a model of unsupervised phonological lexicon discovery -- the problem of simultaneously learning phoneme-like and word-like units from acoustic input. Our model builds on earlier models of unsupervised phone-like unit discovery from acoustic data (Lee and Glass, 2012), and unsupervised sy...
Main Authors: Lee, Chia-ying; O'Donnell, Timothy John; Glass, James R.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Association for Computational Linguistics, 2015
Online Access: http://hdl.handle.net/1721.1/98523 https://orcid.org/0000-0002-3097-360X https://orcid.org/0000-0002-5711-977X
Similar Items
- The Unsupervised Acquisition of a Lexicon from Continuous Speech
  by: Marcken, Carl de
  Published: (2004)
- Modelling the Lexicon in Unsupervised Part of Speech Induction
  by: Dubbin, G., et al.
  Published: (2015)
- Towards multilingual lexicon discovery from visually grounded speech
  by: Azuh, Emmanuel Mensah
  Published: (2020)
- Unsupervised part discovery from contrastive reconstruction
  by: Choudhury, S., et al.
  Published: (2021)
- Unsupervised discovery of parts, structure, and dynamics
  by: Xu, Zhenjia, et al.
  Published: (2020)