Machine Learning Applications on Agricultural Datasets for Smart Farm Enhancement


Machines 2018, 6, 38 (9 of 22)




Figure 2. Two consecutive steps of the K-nearest neighbors (KNN) algorithm (K = 3) in a bi-dimensional feature space: (a) a blue item has ambiguous clustering; (b) the green cluster is assigned to it according to its number and proximity.
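The majority vote illustrated in Figure 2 can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function name `knn_predict` and the toy coordinates are assumptions made for the example.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors.
    `train` is a list of (point, label) pairs; points are tuples of floats."""
    # Sort training points by Euclidean distance to the query point.
    neighbors = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    # Majority vote over the labels of the k nearest neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Bi-dimensional example mirroring Figure 2: an item near one of two clusters.
train = [((1.0, 1.0), "green"), ((1.5, 1.2), "green"), ((1.2, 0.8), "green"),
         ((4.0, 4.0), "red"), ((4.5, 4.2), "red")]
print(knn_predict(train, (1.6, 1.4), k=3))  # the 3 nearest items are all green
```

With K = 3 the ambiguous item is assigned to the green cluster, exactly as in panel (b) of the figure.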
A decision tree (or DT, or D-tree) is a machine learning classifier built on the tree data structure that can be used for supervised learning with a predictive modeling approach. Each internal node (split) is labeled with an input feature, while the arcs that link a node to its children are labeled with a condition on that feature; this condition determines the descending path that leads from the root node to the leaves (nodes without children).
Considering the simplest case, a binary tree (Figure 3), a node can have at most two children; each leaf is labeled either with a class name from a discrete set of values or with a probability distribution over the classes, which predicts the value of the target variable.
In this way, the decision tree classifier is characterized by the following:
- nodes (root/parent/child/leaf/split) and arcs (descending, directed);
- no cycles between nodes;
- a feature vector v ∈ R^n;
- split functions f_n(v): R^n → R;
- thresholds T_n ∈ R^n;
- a set of classes (labels) C;
- classifications P_n(c), where c is a class label.
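The structure listed above, internal nodes holding a feature index and a threshold, leaves holding a class label, can be sketched as follows. This is an illustrative data structure under our own naming (`Node`, `classify`), not code from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """Internal node: tests feature[index] against a threshold; leaf: holds a label."""
    feature: Optional[int] = None      # index into the feature vector v in R^n
    threshold: Optional[float] = None  # threshold for the split function f_n(v)
    left: Optional["Node"] = None      # child followed when the feature is below the threshold
    right: Optional["Node"] = None     # child followed otherwise
    label: Optional[str] = None        # class c in C, set only on leaves

def classify(node, v):
    """Descend from the root to a leaf, applying each node's split condition."""
    while node.label is None:
        node = node.left if v[node.feature] < node.threshold else node.right
    return node.label

# Tiny binary tree: split on feature 0 at 2.5, then on feature 1 at 1.0.
tree = Node(feature=0, threshold=2.5,
            left=Node(feature=1, threshold=1.0,
                      left=Node(label="A"), right=Node(label="B")),
            right=Node(label="C"))
print(classify(tree, (1.0, 2.0)))  # feature 0 < 2.5, then feature 1 >= 1.0 -> "B"
```

Each call to `classify` follows exactly one root-to-leaf path, matching the descending, directed, cycle-free arcs described above.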
Figure 3.
A (binary) decision tree used to classify and predict values with numerical features. 
For this classifier too, the split functions are very important: classification (or clustering) tree analysis consists of predicting the class P_n(c) to which the data belong, through a process called recursive partitioning that is repeated on each split subset. The algorithms that navigate and build decision trees usually work top-down, choosing at each step the value of the variable that best splits the set of items. To decide which feature to split on at each node during the navigation of the tree, the information gain value is used.
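Information gain is the reduction in entropy obtained by a candidate split, and a small worked example makes the criterion concrete. The helper names here (`entropy`, `information_gain`) are our own, chosen for illustration.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy H (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, left, right):
    """Entropy of the parent set minus the size-weighted entropy of the two subsets."""
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

# A perfectly pure split of a balanced two-class set recovers the full 1 bit.
parent = ["A", "A", "B", "B"]
print(information_gain(parent, ["A", "A"], ["B", "B"]))  # 1.0
```

A top-down tree builder evaluates this quantity for every candidate feature and threshold at a node and keeps the split with the highest gain.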
Experimental design: It is the same as that of Task 3, but employing the decision tree, KNN 
(K-nearest neighbors), and polynomial regression models. 