Machine Learning: 2 Books in 1: Machine Learning for Beginners, Machine Learning Mathematics. An Introduction Guide to Understand Data Science Through the Business Application




Forecasting: In day-to-day enterprise decisions (for example, sales forecasting, capital allocation between commodities, and capacity utilization), as well as in economic and monetary policy, finance, and inventory markets, forecasting is the tool of choice across the industrial spectrum. For instance, predicting inventory prices is a complicated problem with a host of underlying variables, which may be concealed in the depths of big data or readily available. Traditional forecasting models have various restrictions that limit their ability to take these complicated, non-linear associations into account. Due to its capacity to model and extract hidden characteristics and interactions, an ANN implemented in the correct manner can provide a reliable solution to the problem at hand.
ANNs are also free of any restrictions on input and residual distributions, unlike traditional forecasting models. Ongoing progress in this area has resulted in recent advances in the predictive use of "LSTM" and "Recurrent Neural Network" models to generate forecasts, for example in weather forecasting, and the foreign-exchange systems used by "Citibank London" are driven by neural networks.
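To give a concrete feel for the LSTM cells mentioned above, here is a minimal sketch of a single LSTM step in NumPy. It is illustrative only, not the book's implementation: the weights are random, the hidden size, input window, and the toy sine series are all assumptions made for the example.

```python
import numpy as np

def lstm_cell_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: the four gates are computed from [h_prev, x]."""
    z = W @ np.concatenate([h_prev, x]) + b          # pre-activations for all gates
    H = h_prev.size
    f = 1 / (1 + np.exp(-z[0:H]))                    # forget gate
    i = 1 / (1 + np.exp(-z[H:2*H]))                  # input gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))                # output gate
    g = np.tanh(z[3*H:4*H])                          # candidate cell state
    c = f * c_prev + i * g                           # new cell state
    h = o * np.tanh(c)                               # new hidden state
    return h, c

rng = np.random.default_rng(0)
H, D = 8, 3                                          # hidden size, input window (assumed)
W = rng.normal(scale=0.1, size=(4 * H, H + D))       # stacked gate weights, untrained
b = np.zeros(4 * H)

# Run a toy 1-D series through the cell; in a real forecaster, h would
# feed a trained output layer that produces the next-step prediction.
series = np.sin(np.linspace(0, 6, 30))
h, c = np.zeros(H), np.zeros(H)
for t in range(len(series) - D):
    x = series[t:t + D]                              # sliding window of recent inputs
    h, c = lstm_cell_step(x, h, c, W, b)
print(h.shape)
```

In practice one would use a trained framework implementation rather than hand-rolled weights; the point here is only the gated recurrence that lets the cell carry hidden state across time steps.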


Chapter 4: Learning Through
Uniform Convergence
The most fundamental issue of statistical learning theory is characterizing the learnability of a model. For models driven by "supervised classification and regression" techniques, learnability can be shown to be equivalent to "uniform convergence" of the empirical risk to the population risk; that is, if a problem is learnable at all, it can be learned through minimization of the empirical risk of the data. "Uniform convergence in probability", in the context of statistical asymptotic theory and probability theory, is a type of convergence in probability. It implies that within a particular family of events, the empirical frequencies of all occurrences converge to their theoretical probabilities under certain circumstances. As part of statistical learning theory, "uniform convergence in probability" is widely applicable to machine learning. In the mathematical field of analysis, uniform convergence is defined as "a mode of convergence of functions stronger than pointwise convergence". In 1995, Vapnik published the "General Setting of Learning" as it pertains to the subject of statistical learnability. The General Learning Setting addresses issues of learning. Conventionally, a learning problem is defined using a "hypothesis class 'H', an instance set 'Z' (with a sigma-algebra), and an objective function (e.g., loss or cost)", i.e. "f: H × Z → R".
This theory can be used "to minimize a population risk functional over some hypothesis class H, where the distribution D is unknown, based on a sample z_1, ..., z_m drawn from D":

"F(h) = E_Z[f(h; Z)]"
This General Setting encompasses "supervised classification and regression" techniques, some "unsupervised learning algorithms", "density estimation", among others. In supervised learning, "z = (x, y)" is an instance-label pair, "h" is a predictor, and "f(h; (x, y)) = loss(h(x), y)" is the loss function. In terms of statistical learnability, the goal is to minimize "F(h) = E_Z[f(h; Z)]" to within experimental accuracy based only on a finite sample (z_1, ..., z_m). The concern, in this case, does not pertain to the problem's computational aspects, meaning whether this approximate minimization can be performed quickly and effectively, but whether it can be achieved statistically based on the sample (z_1, ..., z_m) alone.
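The minimization just described can be made concrete with a small sketch. The hypothesis class of threshold classifiers, the uniform data distribution, and the grid of candidate thresholds below are all hypothetical choices for illustration, not taken from the text: the ERM simply picks the hypothesis in H with the lowest average loss on the sample (z_1, ..., z_m).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy supervised problem: x ~ Uniform(0, 1), true label y = 1 iff x > 0.6.
m = 200
x = rng.uniform(0, 1, size=m)
y = (x > 0.6).astype(int)

# Hypothesis class H: threshold classifiers h_t(x) = 1[x > t].
thresholds = np.linspace(0, 1, 101)

def empirical_risk(t):
    """Average 0-1 loss of h_t on the finite sample (z_1, ..., z_m)."""
    return np.mean((x > t).astype(int) != y)

# ERM: pick the hypothesis minimizing empirical risk over H.
risks = np.array([empirical_risk(t) for t in thresholds])
t_erm = thresholds[np.argmin(risks)]
print(round(t_erm, 2), risks.min())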
It is common knowledge that a "supervised classification and regression" problem is learnable only when the empirical risks over the whole hypothesis class H converge uniformly to the population risk. According to the research done by Blumer (1989) and Alon (1997), "if uniform convergence holds, then the empirical risk minimizer (ERM) is consistent, that is, the population risk of the ERM converges to the optimal population risk, and the problem is learnable using the ERM". This suggests that "uniform convergence" of the empirical risks is a necessary and sufficient condition for learnability, which can be shown equivalent to a "combinatorial condition": a finite "VC-dimension" for classification algorithms, and a finite "fat-shattering dimension" for regression algorithms. The picture below shows a scenario for "supervised classification and regression":
Besides "uniform convergence", specific notions of "stability" have been proposed as prerequisites for learnability. Inherently, the concepts of "stability" rely on specific "learning rules" or algorithms, and evaluate how sensitive they are to fluctuations in the training data set. In particular, it is recognized that stability of the ERM is sufficient for learnability. It is asserted in "Mukherjee et al. (2006)" that stability is also essential to learning. On the basis of the assumption that "uniform convergence is equal to learnability, stability has been shown to characterize learning skill only where uniform convergence characterizes learning skill".
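One simple way to picture stability, as a hedged sketch rather than the formal definition used by Mukherjee et al.: remove a single training point and measure how far the learned hypothesis moves. The threshold-classifier class, the data distribution, and the leave-one-out perturbation below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
thresholds = np.linspace(0, 1, 101)   # finite hypothesis class H (assumed)

def erm_threshold(x, y):
    """Empirical risk minimizer over threshold classifiers h_t(x) = 1[x > t]."""
    risks = [np.mean((x > t).astype(int) != y) for t in thresholds]
    return thresholds[int(np.argmin(risks))]

m = 300
x = rng.uniform(0, 1, size=m)
y = (x > 0.6).astype(int)
t_full = erm_threshold(x, y)

# Leave-one-out perturbation: how far does the ERM hypothesis move
# when each single training point is deleted in turn?
shifts = [abs(erm_threshold(np.delete(x, i), np.delete(y, i)) - t_full)
          for i in range(m)]
print(max(shifts))
```

A small worst-case shift indicates that this learning rule is insensitive to fluctuations in the training set, which is the intuition the stability results above formalize.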
Only in a "supervised classification and regression" setting was the equivalence of uniform convergence and learnability officially established. More broadly, the implications on the "right" in the picture above remain valid: "finite fat-shattering dimensions", "uniform convergence", and even "ERM stability" can each be deemed sufficient for learnability using the "ERM". Concerning the opposite implications, Vapnik has shown that a concept of "non-trivial" or "strict" learnability associated with the ERM amounts to "uniform convergence of the empirical risks". The concept was intended to exclude some "trivial" learning problems that can be learned without uniform convergence. Even for these problems, empirical risk minimization can make learning feasible. Thus, in the "general learning setting" as in "supervised classification and regression", a problem would appear to be learnable only when it can be learned by empirical risk minimization.
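To make uniform convergence itself tangible, a hypothetical numerical sketch (not from the text): for a finite class of threshold classifiers on a known toy distribution, the worst-case gap sup over H of |empirical risk - population risk| shrinks as the sample size m grows. The class, distribution, and grid are assumptions chosen so the population risk has a closed form.

```python
import numpy as np

rng = np.random.default_rng(7)
thresholds = np.linspace(0, 1, 51)   # finite hypothesis class H (assumed)

def population_risk(t):
    """Exact 0-1 risk of h_t(x) = 1[x > t] when x ~ U(0,1) and y = 1[x > 0.6]."""
    return abs(t - 0.6)              # measure of the interval where h_t and y disagree

def sup_deviation(m):
    """sup over H of |empirical risk - population risk| on one sample of size m."""
    x = rng.uniform(0, 1, size=m)
    y = (x > 0.6).astype(int)
    emp = np.array([np.mean((x > t).astype(int) != y) for t in thresholds])
    pop = np.array([population_risk(t) for t in thresholds])
    return np.max(np.abs(emp - pop))

for m in (20, 200, 20000):
    print(m, round(sup_deviation(m), 3))
```

Because the gap is controlled uniformly over all of H at once, minimizing the empirical risk also approximately minimizes the population risk, which is exactly why uniform convergence makes the ERM consistent.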
This framework is not very specific and is broad enough to cover a significant range of learning problems.