Global–Local Self-Attention Based Transformer for Speaker Verification

Transformer models are now widely used for speech processing tasks because of their powerful sequence modeling capabilities. Previous work found an efficient way to model speaker embeddings with the Transformer by combining it with convolutional networks. However, traditional global...
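As a rough illustration of the idea named in the title, the sketch below combines a global self-attention branch (attending over all frames of an utterance) with a local branch (attending within fixed-size windows). This is not the authors' implementation; the window size, head count, and the simple additive fusion are assumptions made only for the example.

```python
# Minimal sketch of global-local self-attention over frame-level speaker
# features. Hyperparameters and the fusion scheme are illustrative assumptions,
# not the method described in the paper.
import torch
import torch.nn as nn


class GlobalLocalSelfAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, window: int = 16):
        super().__init__()
        self.window = window
        # Global branch: standard multi-head self-attention over all frames.
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Local branch: the same attention applied within fixed-size windows.
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) frame-level acoustic features.
        b, t, d = x.shape
        g, _ = self.global_attn(x, x, x)             # global context

        # Pad so the frame axis divides evenly into windows, then attend
        # within each window independently.
        pad = (-t) % self.window
        xp = nn.functional.pad(x, (0, 0, 0, pad))
        n = xp.shape[1] // self.window
        w = xp.reshape(b * n, self.window, d)
        l, _ = self.local_attn(w, w, w)
        l = l.reshape(b, n * self.window, d)[:, :t]  # drop the padding

        # Simple residual fusion of the two branches (an assumption here).
        return self.norm(x + g + l)


if __name__ == "__main__":
    feats = torch.randn(2, 200, 256)                 # 2 utterances, 200 frames
    out = GlobalLocalSelfAttention()(feats)
    print(out.shape)                                 # torch.Size([2, 200, 256])
```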


Bibliographic Details
Main Authors: Fei Xie, Dalong Zhang, Chengming Liu
Format: Article
Language: English
Published: MDPI AG 2022-10-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/12/19/10154