Local limit theorem




Lindeberg–Lévy CLT. Suppose $\{X_1,\ldots,X_n\}$ is a sequence of i.i.d. random variables with $\mathbb{E}[X_i]=\mu$ and $\operatorname{Var}[X_i]=\sigma^2<\infty$. Then, as $n$ approaches infinity, the random variables $\sqrt{n}(\bar{X}_n-\mu)$ converge in distribution to a normal $\mathcal{N}(0,\sigma^2)$:[4]

$$\sqrt{n}\left(\bar{X}_n-\mu\right)\ \xrightarrow{d}\ \mathcal{N}\left(0,\sigma^2\right).$$

In the case $\sigma>0$, convergence in distribution means that the cumulative distribution functions of $\sqrt{n}(\bar{X}_n-\mu)$ converge pointwise to the cdf of the $\mathcal{N}(0,\sigma^2)$ distribution: for every real number $z$,

$$\lim_{n\to\infty}\mathbb{P}\left[\sqrt{n}(\bar{X}_n-\mu)\leq z\right]=\lim_{n\to\infty}\mathbb{P}\left[\frac{\sqrt{n}(\bar{X}_n-\mu)}{\sigma}\leq\frac{z}{\sigma}\right]=\Phi\left(\frac{z}{\sigma}\right),$$

where $\Phi(z)$ is the standard normal cdf evaluated at $z$. The convergence is uniform in $z$ in the sense that

$$\lim_{n\to\infty}\;\sup_{z\in\mathbb{R}}\;\left|\mathbb{P}\left[\sqrt{n}(\bar{X}_n-\mu)\leq z\right]-\Phi\left(\frac{z}{\sigma}\right)\right|=0,$$

where $\sup$ denotes the least upper bound (or supremum) of the set.
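
As a quick illustration of this statement, the following Python sketch compares the empirical distribution of $\sqrt{n}(\bar{X}_n-\mu)$ with $\Phi(z/\sigma)$ at a few points $z$. The exponential summands, the sample size and the number of replications are arbitrary choices made here, not part of the theorem.

# A minimal simulation sketch: empirical CDF of sqrt(n)*(Xbar_n - mu)
# for Exp(1) summands, compared with Phi(z/sigma).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0            # mean and standard deviation of Exp(1)
n, reps = 50, 100_000           # sample size and number of replications

x = rng.exponential(scale=1.0, size=(reps, n))
t = np.sqrt(n) * (x.mean(axis=1) - mu)   # sqrt(n)*(Xbar_n - mu), one value per replication

z = np.linspace(-3, 3, 7)
empirical = np.array([(t <= zi).mean() for zi in z])
limit = norm.cdf(z / sigma)
for zi, e, l in zip(z, empirical, limit):
    print(f"z={zi:+.1f}  P[sqrt(n)(Xbar_n - mu) <= z] ~ {e:.4f}   Phi(z/sigma) = {l:.4f}")

Increasing n or the number of replications should tighten the agreement between the two printed columns.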

Local limit theorems include limit theorems for densities, that is, theorems that establish the convergence of the densities of a sequence of distributions to the density of the limit distribution (if the given densities exist), as well as the classical version of local limit theorems, namely local theorems for lattice distributions, the simplest of which is the local Laplace theorem.

Let $X_1, X_2, \ldots$ be a sequence of independent random variables that have a common distribution function $F(x)$ with mean $a$ and finite positive variance $\sigma^2$. Let $F_n(x)$ be the distribution function of the normalized sum

$$Z_n=\frac{1}{\sigma\sqrt{n}}\sum_{j=1}^{n}(X_j-a)$$

and let $\Phi(x)$ be the normal $(0,1)$ distribution function. The assumptions ensure that $F_n(x)\to\Phi(x)$ as $n\to\infty$ for any $x$. It can be shown that this relation does not imply the convergence of the density $p_n(x)$ of the distribution of the random variable $Z_n$ to the normal density

$$\frac{1}{\sqrt{2\pi}}e^{-x^2/2},$$

even if the distribution $F$ has a density. If $Z_n$, for some $n=n_0$, has a bounded density $p_{n_0}(x)$, then

$$p_n(x)\to\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\qquad(*)$$

uniformly with respect to $x$. The condition that $p_{n_0}(x)$ is bounded for some $n_0$ is necessary for $(*)$ to hold uniformly with respect to $x$.
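
A rough numerical companion to the density statement above, assuming uniformly distributed summands (which have a bounded density, so the theorem applies); the sample size, bin count and replication count are arbitrary choices made for this sketch.

# Estimate the density of Z_n by a histogram and compare it with the
# standard normal density on a grid.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a, sigma = 0.5, np.sqrt(1 / 12)        # mean and standard deviation of Uniform(0, 1)
n, reps = 30, 200_000

x = rng.uniform(size=(reps, n))
z_n = (x.sum(axis=1) - n * a) / (sigma * np.sqrt(n))   # the normalized sum Z_n

hist, edges = np.histogram(z_n, bins=60, range=(-3, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |p_n(x) - phi(x)| over the grid:", np.max(np.abs(hist - norm.pdf(centers))))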

Let $X_1, X_2, \ldots$ be a sequence of independent random variables that have the same non-degenerate distribution, and suppose that $X_1$ takes values of the form $b+Nh$, $N=0,\pm1,\pm2,\ldots$, with probability 1, where $h>0$ and $b$ are constants (that is, $X_1$ has a lattice distribution with step $h$).

Suppose that $X_1$ has finite variance $\sigma^2$, let $a=\mathbb{E}X_1$, and let

$$P_n(N)=\mathbb{P}\left\{\sum_{j=1}^{n}X_j=nb+Nh\right\}.$$

In order that

$$\sup_N\left|\frac{\sigma\sqrt{n}}{h}\,P_n(N)-\frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{nb+Nh-na}{\sigma\sqrt{n}}\right)^2\right\}\right|\to0$$

as $n\to\infty$, it is necessary and sufficient that the step $h$ should be maximal. This theorem of B.V. Gnedenko is a generalization of the local Laplace theorem.
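
A small numerical check of Gnedenko's theorem in the simplest lattice case, assuming Bernoulli summands (so $b=0$, the maximal step is $h=1$, and $P_n(N)$ is just the binomial probability of $N$ successes); the value of $p$ and the range of $n$ are arbitrary choices for this sketch.

# Compare (sigma*sqrt(n)/h) * P_n(N) with the normal density term and
# print the supremum of the absolute difference over all lattice points N.
import numpy as np
from scipy.stats import binom

p, h, b = 0.3, 1.0, 0.0                # Bernoulli(p): lattice with b = 0, maximal step h = 1
a, sigma = p, np.sqrt(p * (1 - p))     # mean and standard deviation of one summand

for n in [10, 100, 1000]:
    N = np.arange(0, n + 1)
    P_n = binom.pmf(N, n, p)                       # P{ X_1 + ... + X_n = n*b + N*h }
    u = (n * b + N * h - n * a) / (sigma * np.sqrt(n))
    approx = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    sup_diff = np.max(np.abs(sigma * np.sqrt(n) / h * P_n - approx))
    print(f"n = {n:5d}   sup_N |(sigma*sqrt(n)/h) P_n(N) - phi(...)| = {sup_diff:.5f}")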

Local limit theorems for sums of independent non-identically distributed random variables serve as a basic mathematical tool in classical statistical mechanics and quantum statistics (see [7], [8]).

Local limit theorems have been intensively studied for sums of independent random variables and vectors, together with estimates of the rate of convergence in these theorems. The case of a limiting normal distribution has been most fully investigated (see [3], Chapt. 7); a number of papers have been devoted to local limit theorems for the case of an arbitrary stable distribution (see [2]). Similar investigations have been carried out for sums of dependent random variables, in particular for sums of random variables that form a Markov chain (see [5]).

The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it is reasonable only near the peak of the normal distribution; a very large number of observations is required for the approximation to extend into the tails.

The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment $\mathbb{E}\left[(X_1-\mu)^3\right]$ exists and is finite, then the speed of convergence is at least on the order of $1/\sqrt{n}$ (see Berry–Esseen theorem). Stein's method[18] can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics.[19]
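
To make the $1/\sqrt{n}$ rate concrete, here is a hedged sketch that computes the exact Kolmogorov distance for Bernoulli summands (where the sum is binomial, so the distance can be evaluated exactly at the jump points) and compares it with a Berry–Esseen-type bound. The choice of $p$ is arbitrary, and the constant 0.4748 used below is one published bound on the Berry–Esseen constant, not the only possible value.

# Exact sup_z |F_n(z) - Phi(z)| for the standardized binomial sum,
# compared with C * rho / (sigma^3 * sqrt(n)).
import numpy as np
from scipy.stats import binom, norm

p = 0.3
mu, sigma = p, np.sqrt(p * (1 - p))
rho = p * (1 - p) * ((1 - p) ** 2 + p ** 2)        # E|X_1 - mu|^3 for Bernoulli(p)

for n in [10, 100, 1000, 10000]:
    k = np.arange(0, n + 1)                        # jump points of the CDF of the sum
    z = (k - n * mu) / (sigma * np.sqrt(n))        # standardized jump locations
    F = binom.cdf(k, n, p)                         # exact CDF of the sum at the jumps
    # The supremum is attained at a jump: compare Phi with the CDF value
    # just after and just before each jump of the step function F_n.
    left = np.concatenate(([0.0], F[:-1]))
    dist = max(np.max(np.abs(F - norm.cdf(z))), np.max(np.abs(left - norm.cdf(z))))
    bound = 0.4748 * rho / (sigma ** 3 * np.sqrt(n))
    print(f"n = {n:6d}   sup_z |F_n(z) - Phi(z)| = {dist:.5f}   bound = {bound:.5f}")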

The convergence to the normal distribution is monotonic, in the sense that the entropy of $Z_n$ increases monotonically to that of the normal distribution.[20]



The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so we are confronted with a sequence of discrete random variables whose cumulative distribution function converges towards a cumulative distribution function corresponding to a continuous variable (namely that of the normal distribution). This means that if we build a histogram of the realizations of the sum of n independent identical discrete variables, the curve joining the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches infinity; this relation is known as the de Moivre–Laplace theorem. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
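
The histogram picture described above can be sketched numerically; the following example assumes sums of fair dice (an arbitrary discrete choice, with lattice step 1) and compares the empirical mass at each lattice point with the matching Gaussian curve.

# Sums of n fair dice: histogram heights at the lattice points vs. the
# normal density with the same mean and variance.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 40, 200_000
mu, var = 3.5, 35 / 12                 # mean and variance of a single fair die

s = rng.integers(1, 7, size=(reps, n)).sum(axis=1)      # sums of n dice
values, counts = np.unique(s, return_counts=True)
empirical = counts / reps              # histogram heights at the lattice points (step 1)

gauss = np.exp(-0.5 * (values - n * mu) ** 2 / (n * var)) / np.sqrt(2 * np.pi * n * var)
print("max |empirical mass - Gaussian curve|:", np.max(np.abs(empirical - gauss)))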

The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov[24] for a particular local limit theorem for sums of independent and identically distributed random variables.

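The convolution viewpoint can be illustrated directly; the sketch below repeatedly convolves the Uniform(0, 1) density on a grid (an arbitrary starting density and grid resolution chosen here) and compares the standardized result with the normal density.

# Numerical n-fold self-convolution of the Uniform(0, 1) density, then a
# comparison of the standardized density with the standard normal density.
import numpy as np
from scipy.stats import norm

dx = 0.001
grid = np.arange(0.0, 1.0, dx)
f = np.ones_like(grid)                 # density of Uniform(0, 1) sampled on the grid

n = 12
density = f.copy()
for _ in range(n - 1):                 # n - 1 convolutions give the density of the n-fold sum
    density = np.convolve(density, f) * dx

x = np.arange(density.size) * dx       # support of the n-fold sum, approximately [0, n]
mu, sigma = n * 0.5, np.sqrt(n / 12)
z = (x - mu) / sigma
print("max |sigma * f_n(x) - phi(z)|:", np.max(np.abs(sigma * density - norm.pdf(z))))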

Published literature contains a number of useful and interesting examples and applications relating to the central limit theorem.[39] One source[40] states the following examples:



  • The probability distribution for the total distance covered in a random walk (biased or unbiased) will tend toward a normal distribution (a simulation sketch follows this list).

  • Flipping many coins will result in a normal distribution for the total number of heads (or equivalently total number of tails).
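
For the random-walk example above, the following is a minimal simulation sketch; the step distribution, the bias and the walk length are assumptions chosen here for illustration.

# Total displacement of a biased +/-1 random walk, with empirical quantiles
# of the standardized displacement compared against N(0, 1) quantiles.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
p, n, reps = 0.55, 500, 20_000         # P(step = +1), walk length, replications

steps = np.where(rng.random((reps, n)) < p, 1, -1)
displacement = steps.sum(axis=1)       # total displacement of each walk

mu, sigma = n * (2 * p - 1), np.sqrt(4 * n * p * (1 - p))
z = (displacement - mu) / sigma
for q in [0.05, 0.25, 0.5, 0.75, 0.95]:
    print(f"q = {q:.2f}   empirical = {np.quantile(z, q):+.3f}   normal = {norm.ppf(q):+.3f}")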

From another viewpoint, the central limit theorem explains the common appearance of the "bell curve" in density estimates applied to real world data. In cases like electronic noise, examination grades, and so on, we can often regard a single measured value as the weighted average of many small effects. Using generalisations of the central limit theorem, we can then see that this would often (though not always) produce a final distribution that is approximately normal.

In general, the more a measurement is like the sum of independent variables with equal influence on the result, the more normality it exhibits. This justifies the common use of this distribution to stand in for the effects of unobserved variables in models like the linear model.


