C++ Neural Networks and Fuzzy Logic: Preface



•  QuoteCom, http://www.quote.com

•  Philadelphia Fed, http://compstat.wharton.upenn.edu:8001/~siler/fedpage.html

•  Ohio State Financial Data Finder, http://cob.ohio-state.edu/dept/fin/osudata.html


Copyright © IDG Books Worldwide, Inc.


C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95


Choosing a Network Architecture

The input and output layers are fixed by the number of inputs and outputs we are using. In our case, the output is a single number: the expected change in the S&P 500 index 10 weeks from now. The input layer size will be dictated by the number of inputs we have after preprocessing; you will see more on this soon. There can be either one or two hidden layers. It is best to choose the smallest number of neurons possible for a given problem to allow for generalization; if there are too many neurons, the network will tend to memorize patterns rather than generalize. We will use one hidden layer. The recommended size of the first hidden layer is generally between one-half and three times the size of the input layer. If a second hidden layer is present, it may have between three and ten times the number of output neurons. The best way to determine the optimum size is by trial and error.

NOTE:  You should try to make sure that there are enough training examples for your trainable weights. In other words, your architecture may be dictated by the number of input training examples, or facts, you have. In an ideal world, you would want about 10 or more facts for each weight. For a 10-10-1 architecture, there are 10×10 + 10×1 = 110 weights, so you should aim for about 1100 facts. The smaller the ratio of facts to weights, the more likely you are to undertrain your network, which will lead to very poor generalization capability.

