Categorical Cross Entropy
keras.losses.categorical_crossentropy(y_true, y_pred)
See the equation in Figure A-16.

Figure A-16. The equation for categorical cross entropy

$$-\frac{1}{n} \sum_{i=1}^{n} y_i \log\bigl(h_\theta(x_i)\bigr)$$
In this case, n is the number of samples in the whole data set. The function h_θ represents the model with the weight parameters θ passed in, so h_θ(x_i) gives the predicted value for x_i under the model's weights θ. Finally, y_i represents the true label for the data point at index i. The predictions need to be normalized to lie between 0 and 1, so for categorical cross entropy the model output must be piped through a softmax activation layer; for this reason, the categorical cross entropy loss is also called the softmax loss.
Equivalently, you can write the previous equation as shown in Figure A-17.

Figure A-17. Another way to write the equation for categorical cross entropy

$$-\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} y_{i,j} \log\bigl(h_\theta(x_i)_j\bigr)$$

In this case, m is the number of classes: y_{i,j} is 1 when sample i belongs to class j and 0 otherwise, and h_θ(x_i)_j is the predicted probability of class j for sample i.
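To make the double sum concrete, here is a minimal sketch (not from the book; the sample values are made-up assumptions) that computes the Figure A-17 expression by hand in NumPy and checks it against the Keras helper:

import numpy as np
from tensorflow import keras

# Three samples (n = 3), four classes (m = 4); y_true is one-hot encoded,
# so y_true[i, j] plays the role of y_{i,j} in Figure A-17.
y_true = np.array([[0, 1, 0, 0],
                   [0, 0, 1, 0],
                   [1, 0, 0, 0]], dtype="float32")

# y_pred plays the role of h_θ(x_i): softmax output, so each row sums to 1.
y_pred = np.array([[0.05, 0.90, 0.03, 0.02],
                   [0.10, 0.10, 0.70, 0.10],
                   [0.60, 0.20, 0.10, 0.10]], dtype="float32")

# The double sum from Figure A-17, averaged over the n samples.
n = y_true.shape[0]
manual = -np.sum(y_true * np.log(y_pred)) / n

# Keras returns one loss value per sample; take the mean to compare.
per_sample = keras.losses.categorical_crossentropy(y_true, y_pred)
print(manual, float(np.mean(per_sample)))  # the two values agree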
The categorical cross entropy loss is commonly used in classification tasks, especially in computer vision with convolutional neural networks.
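In practice you rarely call the loss function directly; you usually select it by name when compiling a model whose last layer is a softmax. A minimal sketch, assuming a made-up four-class classifier with 16 input features:

from tensorflow import keras

# Hypothetical 4-class classifier; the softmax output layer produces the
# normalized probabilities that categorical cross entropy expects.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),
])

# "categorical_crossentropy" selects keras.losses.categorical_crossentropy and
# expects one-hot labels (use "sparse_categorical_crossentropy" for integer labels).
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])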