•
NETtalk. In 1987, Sejnowski and Rosenberg developed NETtalk, a network connected to a speech
synthesizer that could utter English words: it was trained to produce phonemes from English text.
The input layer was a window of seven characters drawn from English text that was scrolled past
the network, and the network was trained to pronounce the letter at the center of the window. The
hidden layer had 80 neurons, while the output layer consisted of 26 neurons. With 1024 training
patterns, after 10 training cycles the network began producing intelligible speech, much like a
child learning to talk. After 50 cycles, the network was about 95% accurate. Deliberately damaging
the network by removing neurons did not cause performance to collapse; instead, performance
degraded gracefully, and retraining with the remaining neurons restored it quickly. This
illustrates the fault tolerance of neural networks.
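The architecture described above can be sketched as a small feed-forward network. This is a minimal illustration, not the original NETtalk implementation: the text states only the 7-character window, 80 hidden neurons, and 26 output neurons, so the 29-symbol alphabet (26 letters plus a few punctuation marks), the one-hot encoding, the sigmoid activation, and the random weights are all assumptions made here for the sketch.

```python
import numpy as np

# Sketch of a NETtalk-style network: a 7-character sliding window is
# one-hot encoded and fed through 80 hidden units to 26 output units.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols (an assumption)
WINDOW = 7
N_IN = WINDOW * len(ALPHABET)               # 7 x 29 = 203 input units
N_HID = 80                                  # hidden units, as in the text
N_OUT = 26                                  # output units, as in the text

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))
b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))
b2 = np.zeros(N_OUT)

def encode(window: str) -> np.ndarray:
    """One-hot encode a 7-character window into a 203-vector."""
    x = np.zeros(N_IN)
    for i, ch in enumerate(window):
        x[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return x

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def forward(window: str) -> np.ndarray:
    """Output activations coding the phoneme of the centre character."""
    h = sigmoid(W1 @ encode(window) + b1)
    return sigmoid(W2 @ h + b2)

out = forward("the cat")  # centre character is 'c'
print(out.shape)          # (26,)
```

The graceful degradation mentioned above could be demonstrated on such a sketch by zeroing out rows of `W1` (removing hidden neurons) and observing that the outputs shift gradually rather than failing outright.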