C++ Neural Networks and Fuzzy Logic: Chapter 6 Learning and Training



•  Pattern classification
•  Pattern completion
•  Optimization
•  Data clustering
•  Approximation
•  Function evaluation

A neural network, in any of the previous tasks, maps a set of inputs to a set of outputs. This nonlinear mapping can be thought of as a multidimensional mapping surface. The objective of learning is to mold the mapping surface according to a desired response, either with or without an explicit training process.

Learning and Training

A network can learn when training is used, or it can learn in the absence of training. The difference between supervised and unsupervised training is that, in the former case, external prototypes are used as target outputs for specific inputs, and the network is given a learning algorithm to follow and calculate new connection weights that bring the output closer to the target output. Unsupervised learning is the sort of learning that takes place without a teacher. For example, when you are finding your way out of a labyrinth, no teacher is present. You learn from the responses or events that develop as you try to feel your way through the maze. For neural networks, in the unsupervised case, a learning algorithm may be given but target outputs are not given. In such a case, data input to the network gets clustered together; similar input stimuli cause similar responses.




When a neural network model is developed and an appropriate learning algorithm is proposed, the algorithm is based on the theory supporting the model. Since the dynamics of the operation of the neural network are under study, the learning equations are initially formulated in terms of differential equations. After solving the differential equations, and using any initial conditions that are available, the algorithm can be simplified to an algebraic equation for the changes in the weights. These simple forms of learning equations are then available for your neural networks.
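As a hedged illustration of this reduction (the specific decay and source terms below are our assumptions, not equations from the book), the dynamics of a single weight w_ij might be posed as

    dw_ij/dt = -w_ij + [mu]a_i a_j

where a_i and a_j are the activations of the two connected neurons. Discretizing with a unit time step and dropping the decay term leaves an algebraic update, [Delta]w_ij = [mu]a_i a_j, which is the form the learning rules below take.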

At this point of our discussion you need to know what learning algorithms are available, and what they look like. We will now discuss two main rules for learning: Hebbian learning, used with unsupervised learning, and the delta rule, used with supervised learning. Adaptations of these by simple modifications to suit a particular context generate many other learning rules in use today. Following the discussion of these two rules, we present variations for each of the two classes of learning: supervised learning and unsupervised learning.
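As a preview of the delta rule, here is a minimal C++ sketch of a supervised update for a single layer of weights; the function name, the data layout, and the use of std::vector are our own illustrative assumptions, not code from this book:

    #include <cstddef>
    #include <vector>

    // A minimal sketch of the delta rule: nudge each weight so that the
    // network's output moves closer to the target output.
    // weights[i][j] connects input neuron i to output neuron j.
    void deltaRuleUpdate(std::vector<std::vector<double>> &weights,
                         const std::vector<double> &input,
                         const std::vector<double> &output,
                         const std::vector<double> &target,
                         double mu) {
        for (std::size_t i = 0; i < weights.size(); ++i)
            for (std::size_t j = 0; j < weights[i].size(); ++j)
                // Scale the output error by the input and learning rate.
                weights[i][j] += mu * (target[j] - output[j]) * input[i];
    }

Repeated application with a small [mu] moves the output toward the target, which is the supervised behavior described above.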



Hebb’s Rule

Learning algorithms are usually referred to as learning rules. The foremost such rule is due to Donald Hebb. Hebb’s rule is a statement about how the firing of one neuron, which has a role in the determination of the activation of another neuron, affects the first neuron’s influence on the activation of the second neuron, especially if it is done in a repetitive manner. As a learning rule, Hebb’s observation translates into a formula for the difference in a connection weight between two neurons from one iteration to the next, as a constant [mu] times the product of the activations of the two neurons. How a connection weight is to be modified is what the learning rule suggests. In the case of Hebb’s rule, it is adding the quantity [mu]a_i a_j, where a_i is the activation of the ith neuron and a_j is the activation of the jth neuron, to the connection weight between the ith and jth neurons. The constant [mu] itself is referred to as the learning rate. The following equation, using the notation just described, states it succinctly:

[Delta]w_ij = [mu]a_i a_j
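In C++, this update can be written in a few lines. The following sketch assumes the activations are stored in a vector and the weights in a matrix; the names are ours, not the book’s:

    #include <cstddef>
    #include <vector>

    // A minimal sketch of Hebb's rule: each weight w_ij grows by mu times
    // the product of the activations of the two neurons it connects.
    void hebbianUpdate(std::vector<std::vector<double>> &weights,
                       const std::vector<double> &activation,
                       double mu) {
        for (std::size_t i = 0; i < weights.size(); ++i)
            for (std::size_t j = 0; j < weights[i].size(); ++j)
                weights[i][j] += mu * activation[i] * activation[j];
    }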



As you can see, the learning rule derived from Hebb’s rule is quite simple and is used in both simple and more involved networks. Some modify this rule by replacing the quantity a_i with its deviation from the average of all the a_i values and, similarly, replacing a_j by a corresponding quantity. Such rule variations can yield rules better suited to different situations.

For example, since the output of a neural network is given by the activations of its output-layer neurons, the Hebbian learning rule in the case of a perceptron takes the form of adjusting the weights by adding [mu] times the difference between the output and the target. Sometimes a situation arises where some unlearning is required for some neurons. In this case a reverse Hebbian rule is used, in which the quantity [mu]a_i a_j is subtracted from the connection weight in question, in effect employing a negative learning rate.

In the Hopfield network of Chapter 1, there is a single layer with all neurons fully interconnected. Suppose each neuron’s output is either +1 or –1. If we take [mu] = 1 in the Hebbian rule, the resulting modification of the connection weights can be described as follows: add 1 to the weight if both neuron outputs match, that is, both are +1 or both are –1; if they do not match (meaning one of them has output +1 and the other has –1), subtract 1 from the weight.
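Because the product of two matching bipolar outputs is +1 and of two differing outputs is –1, the add-or-subtract rule above is just the Hebbian product with [mu] = 1. A minimal C++ sketch follows, with the names and the no-self-connection choice as our own assumptions:

    #include <cstddef>
    #include <vector>

    // Hebbian update for a fully interconnected single layer with bipolar
    // (+1/-1) outputs and mu = 1: output[i] * output[j] is +1 when the two
    // outputs match and -1 when they differ, so each weight gains or loses 1.
    void hopfieldHebbianUpdate(std::vector<std::vector<int>> &weights,
                               const std::vector<int> &output) {
        for (std::size_t i = 0; i < output.size(); ++i)
            for (std::size_t j = 0; j < output.size(); ++j)
                if (i != j)  // no neuron connects to itself
                    weights[i][j] += output[i] * output[j];
    }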




