b (called bias) is added to the dot product result. From there, the sum passes through an activation function that decides whether the node sends data onward. In this case, the activation function outputs either 0 or 1, depending on whether the dot product plus the bias reaches a certain value (the threshold). Other activation functions are possible, such as the sigmoid function, which outputs a value between 0 and 1. Calling the output y and the input x, the basic function for each node can be represented by the equation in Figure 3-5.
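To make the node's behavior concrete, here is a minimal sketch in Python (the function names, example weights, and threshold value are illustrative assumptions, not taken from the book): it computes the dot product of the inputs and weights, adds the bias b, and applies either a step (threshold) or sigmoid activation to produce the output y.

```python
import math

def step(z, threshold=0.0):
    # Outputs 1 if the weighted sum plus bias reaches the threshold, else 0
    return 1 if z >= threshold else 0

def sigmoid(z):
    # Squashes the weighted sum plus bias into a value between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

def node_output(x, w, b, activation=step):
    # y = activation(w . x + b)
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return activation(z)

x = [0.5, -1.0, 2.0]   # example inputs (assumed values)
w = [0.4, 0.3, -0.2]   # example weights (assumed values)
b = 0.1                # bias

print(node_output(x, w, b, step))     # prints 0 or 1
print(node_output(x, w, b, sigmoid))  # prints a value between 0 and 1
```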
An artificial neural network is composed of interlinked layers of these nodes and can look like Figure 3-6.
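As a rough illustration of how such layers connect (the layer sizes, weights, and function names below are assumptions chosen for the example, not the book's), the outputs of one layer of nodes become the inputs of the next, with each node applying a sigmoid to its own weighted sum plus bias.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    # Every node in the layer receives the same inputs and computes sigmoid(w . x + b)
    return [sigmoid(sum(xi * wi for xi, wi in zip(inputs, w)) + b)
            for w, b in zip(weights, biases)]

# A tiny network: 2 inputs -> hidden layer of 3 nodes -> 1 output node
x = [1.0, 0.5]
hidden_w = [[0.2, -0.1], [0.4, 0.3], [-0.5, 0.8]]
hidden_b = [0.0, 0.1, -0.2]
output_w = [[0.3, -0.6, 0.9]]
output_b = [0.05]

hidden = layer_forward(x, hidden_w, hidden_b)
output = layer_forward(hidden, output_w, output_b)
print(output)  # a single value between 0 and 1
```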