Hands-On Deep Learning for Images with TensorFlow



Hyperparameters
In this section, we'll explore hyperparameters: parameters that can't quite be machine
learned. We'll also cover trainable parameters (the parameters that are learned by the
solver), nontrainable parameters (additional parameters in the model that don't require
training), and then finally, hyperparameters (parameters that aren't learned by a traditional
solver).
In our Model summary output screenshot, pay attention to the number of trainable
parameters in the highlighted section at the bottom of the screenshot. That is the
number of individual floating-point numbers contained inside of our model that
our adam optimizer, in conjunction with our categorical cross-entropy loss function, will
be exploring in order to find the best parameter values possible. So, this trainable
parameter count is the only set of numbers that is learned by our optimizer function.
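The trainable-parameter count in the summary is just arithmetic over the layer shapes: a dense layer with n inputs and m units learns an n-by-m weight matrix plus m biases. As a sketch (assuming a fully connected model on flattened 28x28 images with two 32-unit hidden layers and a 10-unit output, which may differ from the exact model in the chapter):

```python
def dense_param_count(input_dim, units):
    # Each Dense layer learns a weight matrix of shape
    # (input_dim, units) plus one bias per unit.
    return (input_dim + 1) * units

# Hypothetical layer sizes: flattened 28x28 input -> 32 -> 32 -> 10.
layer_dims = [28 * 28, 32, 32, 10]

# Total floats the adam optimizer will be adjusting.
trainable = sum(
    dense_param_count(d_in, d_out)
    for d_in, d_out in zip(layer_dims, layer_dims[1:])
)
print(trainable)  # 26506 for these assumed sizes
```

This is the same number Keras would report as "Trainable params" in model.summary() for that architecture.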
There are, however, many other numbers in this code and in the preceding screenshot.
What about these nontrainable parameters? In our current model, there are zero
nontrainable parameters. However, different kinds of layers in Keras may have constant
values, and so they'll show up as nontrainable. Again, this simply means that there's no
need for them to be trained, and that our optimizer function will not try to vary their
values.
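A common source of nontrainable parameters (not in this chapter's model, just an illustration) is BatchNormalization: per feature it learns a scale (gamma) and a shift (beta), but it also tracks a moving mean and a moving variance that are updated during the forward pass rather than by gradient descent, so Keras reports those as nontrainable. A rough count, as a sketch:

```python
def batchnorm_param_counts(num_features):
    # gamma (scale) and beta (shift) are learned by the optimizer.
    trainable = 2 * num_features
    # The moving mean and moving variance are bookkeeping updated
    # during the forward pass, not by gradient descent, so Keras
    # lists them under "Non-trainable params".
    non_trainable = 2 * num_features
    return trainable, non_trainable

print(batchnorm_param_counts(32))  # (64, 64)
```

So a BatchNormalization layer over 32 features would add 64 to the trainable count and 64 to the nontrainable count in the model summary.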


Classical Neural Network, Chapter 3
So, what is a hyperparameter? Well, very simply, a hyperparameter is a value (a
parameter) that is outside of the model itself. So the simplest thing to think of as a
hyperparameter is the actual model structure. In this case, the number of times we've
created layers is a hyperparameter, the size of the layers is a hyperparameter, the 32 units
we select here in our dense layers is a hyperparameter, the 0.1 dropout setting is a
hyperparameter, and even the activation function itself (the choice of relu, say, rather
than sigmoid) is a hyperparameter. Now you may be thinking: wait a minute, I'm having to
pick an awful lot of parameters here; I thought the machine was supposed to be learning. It is! The
trick, though, is that the optimizer is not able to learn everything we need to know to put
together an optimal model.
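Because the optimizer cannot search over these choices, you have to vary them yourself, typically by building and training one model per combination and comparing validation scores. A minimal sketch of enumerating such a hyperparameter grid (the specific values are illustrative, not the book's tuning procedure):

```python
from itertools import product

# Hyperparameters live outside the model: layer width, dropout
# rate, and activation are chosen by us, not by the optimizer.
grid = {
    "units": [32, 64],
    "dropout": [0.1, 0.2],
    "activation": ["relu", "sigmoid"],
}

# One config dict per combination of hyperparameter values.
configs = [
    dict(zip(grid, values))
    for values in product(*grid.values())
]

# Each config would be passed to a model-building function,
# trained, and scored on held-out validation data.
print(len(configs))  # 8 combinations to try
```

Each config would feed a hypothetical build_model(units, dropout, activation) helper; the winning combination is chosen by validation performance, not by gradient descent.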
