Fast back-propagation learning methods for large phonemic neural networks

Several improvements to the back-propagation procedure are proposed to increase training speed, and their limitations with respect to generalization performance are discussed. The error surface is modeled to avoid local minima and flat areas. The synaptic weights are updated as often as possible. Both the step size and the momentum are dynamically scaled to the largest possible values that do not result in overshooting. Training for speaker-dependent recognition of the phonemes /b/, /d/ and /g/ has been reduced from 2 days to 1 minute on an Alliant parallel computer, delivering the same 98.6% recognition performance. With a 55,000-connection TDNN, the same algorithm needs 1 hour and 5,000 training tokens to recognize the 18 Japanese consonants with 96.7% accuracy.

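The record does not spell out how the step size and momentum are "dynamically scaled to the largest possible values that do not result in overshooting." The sketch below is a minimal Python illustration of one plausible accept/reject scheme; the function name train_adaptive_bp, the grow/shrink factors, and the reset-on-overshoot rule are assumptions for illustration, not the paper's exact method.

    import numpy as np

    def train_adaptive_bp(weights, grad_fn, loss_fn, epochs=100,
                          lr=0.01, momentum=0.5, grow=1.1, shrink=0.5):
        """Gradient descent with dynamically scaled step size and momentum.

        grad_fn(weights) -> gradient array; loss_fn(weights) -> scalar loss.
        Heuristic (an assumption, not the paper's rule): enlarge lr and
        momentum while the loss keeps falling, back off on overshoot.
        """
        velocity = np.zeros_like(weights)
        prev_loss = loss_fn(weights)
        for _ in range(epochs):
            velocity = momentum * velocity - lr * grad_fn(weights)
            candidate = weights + velocity
            loss = loss_fn(candidate)
            if loss < prev_loss:
                # Accept the step and push lr/momentum toward larger values.
                weights, prev_loss = candidate, loss
                lr *= grow
                momentum = min(momentum * grow, 0.99)
            else:
                # Overshoot: reject the step, damp the dynamics, retry smaller.
                velocity = np.zeros_like(weights)
                lr *= shrink
                momentum *= shrink
        return weights

    if __name__ == "__main__":
        # Toy quadratic: loss = ||w||^2, gradient = 2w; iterates shrink toward zero.
        w0 = np.array([3.0, -2.0])
        w = train_adaptive_bp(w0, lambda w: 2 * w, lambda w: float(w @ w))
        print(w)

Rejecting a step that raises the loss, rather than merely shrinking the rate after the fact, is what lets the step size stay near its largest usable value without overshooting.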

Bibliographic Details
Main Author: Haffner, P.
Subjects: Neural circuitry; Phonemics; Delay lines
Institution: Universiti Teknologi Malaysia