W, then W^T is the matrix of weights for the output layer to input layer connections. As you recall, the
transpose of a matrix is obtained simply by interchanging the rows and the columns of the matrix.
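For instance, a hypothetical 3x2 weight matrix W (three input neurons, two output neurons) and its 2x3
transpose would look as follows; the particular values here are illustrative only:

        [  1  -1 ]
    W = [ -1   1 ]        W^T = [  1  -1   1 ]
        [  1   1 ]              [ -1   1   1 ]

The first column of W becomes the first row of W^T, and so on.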
There are two layers of neurons, an input layer and an output layer. There are no lateral connections, that is,
no two neurons within the same layer are connected. Recurrent connections, which are feedback connections
to a neuron from itself, may or may not be present. The architecture is quite simple. Figure 8.1 shows the
layout for this neural network model, using only three input neurons and two output neurons. There are
feedback connections from Field A to Field B and vice versa. This figure also indicates the presence of inputs
and outputs at each of the two fields of the bidirectional associative memory network. To avoid clutter,
connection weights are shown as labels on only a few of the connections; the general case is analogous.
Figure 8.1  Layout of a BAM network
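To make the bidirectional use of W and W^T concrete, here is a minimal C++ sketch of one recall cycle for a
network of this size, with three Field A neurons and two Field B neurons. It assumes bipolar (+1/-1)
activations with a simple thresholding rule; the weight values and the names forwardPass and backwardPass are
illustrative assumptions, not taken from the text.

// Minimal BAM recall sketch: Field A -> Field B uses W,
// Field B -> Field A uses W^T. Bipolar (+1/-1) activations assumed.
#include <array>
#include <iostream>

constexpr int NA = 3;  // neurons in Field A (input layer)
constexpr int NB = 2;  // neurons in Field B (output layer)

using VecA = std::array<int, NA>;
using VecB = std::array<int, NB>;
using Mat  = std::array<std::array<int, NB>, NA>;  // W is NA x NB

// Field A -> Field B: each Field B neuron thresholds its net input.
VecB forwardPass(const Mat& W, const VecA& a) {
    VecB b{};
    for (int j = 0; j < NB; ++j) {
        int net = 0;
        for (int i = 0; i < NA; ++i) net += a[i] * W[i][j];
        b[j] = (net >= 0) ? 1 : -1;
    }
    return b;
}

// Field B -> Field A: applies W^T without forming it explicitly,
// since W^T[j][i] == W[i][j] -- the subscripts are interchanged.
VecA backwardPass(const Mat& W, const VecB& b) {
    VecA a{};
    for (int i = 0; i < NA; ++i) {
        int net = 0;
        for (int j = 0; j < NB; ++j) net += b[j] * W[i][j];
        a[i] = (net >= 0) ? 1 : -1;
    }
    return a;
}

int main() {
    Mat W = {{{1, -1}, {-1, 1}, {1, 1}}};  // same example weights as above
    VecA a = {1, -1, 1};
    VecB b = forwardPass(W, a);     // Field A drives Field B through W
    VecA a2 = backwardPass(W, b);   // Field B drives Field A through W^T
    std::cout << "b  = (" << b[0] << ", " << b[1] << ")\n";
    std::cout << "a' = (" << a2[0] << ", " << a2[1] << ", " << a2[2] << ")\n";
    return 0;
}

In a full BAM, this pair of passes would be repeated until the pattern pair (a, b) stops changing, at which
point the network has settled on a stored association.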