Hands-On Machine Learning with Scikit-Learn and TensorFlow


Chapter 9: Unsupervised Learning Techniques






If you integrate a probability density function over all possible values of x, you always get 1; but if you integrate the likelihood function over all possible values of θ, the result can be any positive value.
Figure 9-20. A model's parametric function (top left), and some derived functions: a PDF (lower left), a likelihood function (top right), and a log likelihood function (lower right)
Given a dataset X, a common task is to try to estimate the most likely values for the model parameters. To do this, you must find the values that maximize the likelihood function, given X. In this example, if you have observed a single instance x = 2.5, the maximum likelihood estimate (MLE) of θ is θ̂ = 1.5. If a prior probability distribution g over θ exists, it is possible to take it into account by maximizing ℒ(θ|x)g(θ) rather than just maximizing ℒ(θ|x). This is called maximum a-posteriori (MAP) estimation. Since MAP constrains the parameter values, you can think of it as a regularized version of MLE.
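To make this regularization effect concrete, here is a minimal numerical sketch (not from the book, and using an assumed toy setup): estimating the mean μ of a Gaussian with known variance. The MLE of μ is the sample mean, while a Gaussian prior N(0, τ²) over μ gives a MAP estimate with a closed form that shrinks the estimate toward the prior mean 0, exactly like a regularizer would.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=1.0, size=50)  # observed data, true sigma = 1

n = len(x)
sigma2 = 1.0   # known data variance
tau2 = 0.5     # variance of the assumed Gaussian prior N(0, tau2) over mu

# MLE: maximizes the likelihood alone -> sample mean
mu_mle = x.mean()

# MAP: maximizes likelihood * prior; for this conjugate Gaussian pair the
# maximizer has a closed form that shrinks the sample mean toward 0
mu_map = (n / sigma2 * x.mean()) / (n / sigma2 + 1 / tau2)

print(mu_mle, mu_map)  # mu_map is pulled toward the prior mean 0
```

The stronger the prior (smaller τ²), the more the MAP estimate is pulled away from the data and toward 0, which is the sense in which MAP acts as a regularized MLE.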
Notice that it is equivalent to maximize the likelihood function or to maximize its logarithm (represented in the lower-right plot of Figure 9-20): indeed, the logarithm is a strictly increasing function, so if θ maximizes the log likelihood, it also maximizes the likelihood. It turns out that it is generally easier to maximize the log likelihood. For example, if you observed several independent instances x^(1) to x^(m), you would need to find the value of θ that maximizes the product of the individual likelihood functions. But it is equivalent, and much simpler, to maximize the sum (not the product) of the log likelihood functions, thanks to the magic of the logarithm, which converts products into sums: log(ab) = log(a) + log(b).
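The product-to-sum identity is easy to verify numerically. The following sketch (an illustrative example, assuming Gaussian likelihoods computed by hand) checks that the log of the product of the individual likelihoods equals the sum of their logs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=1.0, size=10)  # independent instances x(1)..x(m)

def gaussian_pdf(x, mu, sigma=1.0):
    """Likelihood of each instance under a Gaussian with mean mu."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

theta = 1.5
likelihoods = gaussian_pdf(x, theta)             # one likelihood per instance
log_of_product = np.log(np.prod(likelihoods))    # log of the product
sum_of_logs = np.sum(np.log(likelihoods))        # sum of the logs

print(log_of_product, sum_of_logs)  # the two values match
```

Summing logs is also numerically safer: the product of many small likelihoods can underflow to zero, while the sum of their logs stays well within floating-point range.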
Once you have estimated θ̂, the value of θ that maximizes the likelihood function, you are ready to compute L̂ = ℒ(θ̂, X). This is the value used to compute the AIC and BIC; you can think of it as a measure of how well the model fits the data.
To compute the BIC and AIC, just call the bic() or aic() methods:
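The book's code snippet is not reproduced on this page. As a minimal sketch, assuming a toy two-blob dataset X, a Scikit-Learn GaussianMixture model exposes both criteria directly:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# toy dataset: two well-separated Gaussian blobs in 2D
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(5, 1, size=(100, 2))])

gm = GaussianMixture(n_components=2, n_init=10, random_state=42).fit(X)
print(gm.bic(X))  # Bayesian information criterion: lower is better
print(gm.aic(X))  # Akaike information criterion: lower is better
```

Both criteria penalize the number of parameters on top of the (negative) maximized log likelihood; BIC's penalty grows with the dataset size, so it tends to select simpler models than AIC.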
