Modelling, prediction and classification of student academic performance using artificial neural networks



where Y and X are the output and input vectors. W is a vector of weight parameters representing the connections within the ANN.
The input layer gathers the data with feature sets, and the input values are fed forward to the hidden layer. The output value of the jth neuron is the weighted sum of its inputs:

$Y_j = \sum_{i=1}^{N_i} w_{ij} x_i$,    (3)

where β is the activation function (transfer function), $N_i$ is the total number of connection lines to the jth neuron, and $x_i$ is the output value of the ith neuron in the previous layer. The hyperbolic tangent is used as the activation function (β) to transfer the weighted sum of inputs to the output layer. The resulting activated node value for the next layer is therefore:

$X_j = \beta(Y_j)$.    (4)
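To make Eqs. (3) and (4) concrete, the following is a minimal NumPy sketch of the forward computation for one hidden layer. It is not the authors' code; the layer sizes and random values are illustrative assumptions, while the symbols W, x, Y and the tanh transfer function mirror the notation above.

```python
import numpy as np

# Minimal sketch of Eqs. (3) and (4); layer sizes are illustrative assumptions.
rng = np.random.default_rng(0)
x = rng.normal(size=5)        # x_i: output values from the previous (input) layer
W = rng.normal(size=(5, 3))   # w_ij: connection weights, input i -> hidden neuron j

Y = x @ W                     # Eq. (3): Y_j = sum_i w_ij * x_i
X_next = np.tanh(Y)           # Eq. (4): hyperbolic tangent transfer function beta
print(X_next)
```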
In order to reduce the dimensionality of the predictor space and to prevent the possibility of over-fitting, principal component analysis (PCA) is employed [14]. PCA is a data reduction technique that transforms the predictors linearly, removes any redundant dimensions, and generates new sets of variables called principal components [14].
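As an illustration of this preprocessing step, the sketch below applies scikit-learn's PCA to a hypothetical predictor matrix and retains the components that explain most of the variance. The data shape, the 95% variance threshold, and the variable names are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical predictor matrix: 100 students x 8 input features (assumed sizes).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))

# Keep the principal components that explain 95% of the variance,
# reducing the dimensionality of the predictor space.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print("retained components:", pca.n_components_)
print("explained variance ratio:", pca.explained_variance_ratio_)
```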
The BP-based supervised learning approach is applied, where both input and output parameters are supplied to the ANN model. BP is used as the learning rule for the ANN model; it adjusts the neuron weights $w_{ij}$ through the computed errors so that the network produces the desired outputs. The error function (E) of the BP-based ANN is calculated as the sum of squared differences between the computed outputs and the target values:

$E = \mathrm{MSE} = \frac{1}{N_j} \sum_{i=1}^{N_j} (y_i - t_i)^2$,    (5)

where $y_i$ is the computed output and $t_i$ is the target value for neuron i in the output layer, and $N_j$ is the total number of output neurons.
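For reference, a minimal sketch of Eq. (5) with hypothetical output and target vectors (the values are made up for illustration):

```python
import numpy as np

# Hypothetical computed outputs y_i and target values t_i for N_j = 4 output neurons.
y = np.array([0.9, 0.1, 0.4, 0.7])   # computed network outputs
t = np.array([1.0, 0.0, 0.0, 1.0])   # desired (target) outputs

# Eq. (5): mean squared error over the output neurons
E = np.mean((y - t) ** 2)
print(E)   # 0.0675
```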
In the ANN, weights are updated recursively. The BP-based Levenberg-Marquardt optimisation algorithm is applied in the ANN training. Levenberg-Marquardt is a hybrid training method that combines the steepest descent (gradient descent) and Gauss-Newton methods. It speeds up the convergence to an optimal solution and is therefore effective in solving non-linear problems compared with other training algorithms [15, 16]. The algorithm introduces another approach of approximation to the Hessian matrix, which is similar to the Gauss-Newton method [15, 17]:

$w_{j+1} = w_j - [J^{T} J + \mu I]^{-1} J^{T} e_k$,    (6)

where J denotes the Jacobian matrix, $e_k$ is the error in the network [17], $w_j$ is the current weight, $w_{j+1}$ is the updated weight, and $\mu$ is the damping factor. When $\mu$ is small, the Levenberg-Marquardt training algorithm in Eq. (6) behaves like the Gauss-Newton method; when $\mu$ is large, it approaches the steepest descent (gradient descent) method.
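The sketch below illustrates a single Levenberg-Marquardt weight update following Eq. (6). The Jacobian, error vector, weight vector, and damping value are placeholder assumptions for illustration, not outputs of the actual ANN training described in the paper.

```python
import numpy as np

def lm_update(w, J, e, mu):
    """One Levenberg-Marquardt step: w_{j+1} = w_j - (J^T J + mu*I)^{-1} J^T e."""
    H_approx = J.T @ J + mu * np.eye(w.size)   # damped approximation to the Hessian
    return w - np.linalg.solve(H_approx, J.T @ e)

# Placeholder example: 3 weights, 6 error terms (assumed sizes).
rng = np.random.default_rng(2)
w = rng.normal(size=3)        # current weights w_j
J = rng.normal(size=(6, 3))   # Jacobian of the errors with respect to the weights
e = rng.normal(size=6)        # network error vector e_k

w_next = lm_update(w, J, e, mu=1e-2)   # small mu -> step close to Gauss-Newton
print(w_next)
```

With a large damping factor the bracketed matrix is dominated by the $\mu I$ term, so the step reduces to a scaled gradient-descent step, matching the hybrid behaviour described above.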
