One of the many interesting paradigms you encounter in neural network models and theory is the strategy of competition. Since in such a competition every neuron is for itself, lateral connections are needed to express this circumstance: the lateral connections from any neuron to the others should carry negative weights. Alternatively, the neuron with the highest activation is considered the winner, and only its weights are modified during training, leaving the weights of the others unchanged. Winner takes all means that only one neuron in that layer fires and the others do not. This can happen in a hidden layer or in the output layer.
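To make the winner-takes-all idea concrete, here is a minimal C++ sketch. It assumes a single competitive layer whose activations are already computed, and an illustrative learning rate alpha; the function names and the update rule (moving only the winner's weight vector toward the input) are our own choices, not taken from a specific network in this book.

#include <cstddef>
#include <vector>

// Return the index of the neuron with the highest activation (the winner).
std::size_t winner_take_all(const std::vector<double>& activations) {
    std::size_t winner = 0;
    for (std::size_t i = 1; i < activations.size(); ++i)
        if (activations[i] > activations[winner])
            winner = i;
    return winner;
}

// Update only the winner's weights, leaving the other neurons' weights alone.
void update_winner(std::vector<std::vector<double> >& weights,
                   const std::vector<double>& input,
                   std::size_t winner,
                   double alpha)            // illustrative learning rate
{
    for (std::size_t j = 0; j < input.size(); ++j)
        weights[winner][j] += alpha * (input[j] - weights[winner][j]);
}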
In another situation, when a particular category of input is to be identified from among several groups of inputs, a subset of the neurons has to be dedicated to detecting it. In this case, inhibition increases for distant neurons, whereas excitation increases for the neighboring ones, as far as such a subset of neurons is concerned. The phrase on center, off surround describes this phenomenon of distant inhibition and near excitation.
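A minimal sketch of how such on-center, off-surround lateral weights might be assigned is given below; the neighborhood radius and the excitation and inhibition magnitudes are illustrative values, not parameters from any particular model discussed here.

#include <cmath>
#include <cstddef>

// Lateral connection weight from neuron i to neuron j in a single layer:
// positive (excitatory) for close neighbors, negative (inhibitory) for
// distant neurons, and zero for the self-connection.
double lateral_weight(std::size_t i, std::size_t j,
                      double radius,     // size of the excitatory neighborhood
                      double excite,     // e.g.  0.5
                      double inhibit)    // e.g. -0.2
{
    if (i == j) return 0.0;
    double distance = std::fabs(static_cast<double>(i) - static_cast<double>(j));
    return (distance <= radius) ? excite : inhibit;
}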
Weights are also prime components of a neural network: on the one hand they embody the memory stored by the network, and on the other hand they form the basis for learning and training.
Inputs
You have seen that mutually orthogonal or almost orthogonal patterns are required as stable stored patterns for
the Hopfield network, which we discussed before for pattern matching. Similar restrictions are found also
with other neural networks. Sometimes it is not a restriction; rather, the purpose of the model makes a certain type of input natural. Certainly, in the context of pattern classification, binary input patterns make the problem
setup simpler. Binary, bipolar, and analog signals are the varieties of inputs. Networks that accept analog
signals as inputs are for continuous models, and those that require binary or bipolar inputs are for discrete
models. Binary inputs can be fed to networks for continuous models, but analog signals cannot be input to
networks for discrete models (unless they are fuzzified). With the input possibilities being discrete or analog, and the model possibilities being discrete or continuous, there are potentially four combinations; the one in which analog inputs are presented to a discrete model, however, is untenable.
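As a small illustration of the difference between binary and bipolar inputs, the following sketch converts a binary pattern (0s and 1s) into its bipolar form (-1s and +1s); the function name is ours, and the conversion is the usual mapping of 0 to -1 and 1 to +1.

#include <cstddef>
#include <vector>

// Convert a binary input pattern to a bipolar one: 0 becomes -1, 1 stays +1.
std::vector<int> binary_to_bipolar(const std::vector<int>& binary) {
    std::vector<int> bipolar(binary.size());
    for (std::size_t i = 0; i < binary.size(); ++i)
        bipolar[i] = (binary[i] == 1) ? 1 : -1;
    return bipolar;
}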
An example of a continuous model is one in which a network adjusts the angle by which the steering wheel of a truck is turned in order to back the truck into a parking space. If a network is supposed to recognize characters of the alphabet, discretizing each character allows the use of a discrete model.
What are the types of inputs for problems like image processing or handwriting analysis? Remembering that
artificial neurons, as processing elements, do aggregation of their inputs by using connection weights, and that
the output neuron uses a threshold function, you know that the inputs have to be numerical. A handwritten
character can be superimposed on a grid, and the input can consist of the cells in each row of the grid where a
part of the character is present. In other words, the input corresponding to one character will be a set of binary
or gray-scale sequences containing one sequence for each row of the grid. A 1 in a particular position in the sequence for a row shows that the corresponding pixel is present (black) in that part of the grid, while 0 shows
it is not. The size of the grid has to be big enough to accommodate the largest character under study, as well as
the most complex features.
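The following is a minimal sketch of how a character superimposed on such a grid could be turned into a binary input vector; the '#' and '.' characters marking black and white cells, and the 5 x 5 grid shown in the comment, are purely illustrative.

#include <cstddef>
#include <string>
#include <vector>

// Flatten a grid, row by row, into the binary input vector: a '#' cell
// (pixel present, black) becomes 1 and any other cell becomes 0.
std::vector<int> grid_to_input(const std::vector<std::string>& grid) {
    std::vector<int> input;
    for (std::size_t r = 0; r < grid.size(); ++r)
        for (std::size_t c = 0; c < grid[r].size(); ++c)
            input.push_back(grid[r][c] == '#' ? 1 : 0);
    return input;
}

// Example: the letter T on a 5 x 5 grid gives 25 binary inputs.
//   #####
//   ..#..
//   ..#..
//   ..#..
//   ..#..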