Suppose that a random variable has a lower and an upper bound, $[0,1]$. How can the variance of such a variable be calculated?
Answer:
You can prove Popoviciu's inequality as follows. Use the notation $m=\inf X$ and $M=\sup X$. Define a function $g$ by
$$g(t)=E\left[(X-t)^2\right].$$
Since $g'(t)=2t-2E[X]$, the function $g$ attains its minimum at $t=E[X]$, where $g(E[X])=\operatorname{Var}(X)$. Now consider the value of the function at the special point $t=\frac{M+m}{2}$. It must be the case that
$$\operatorname{Var}(X)=g(E[X])\leq g\left(\frac{M+m}{2}\right)=E\left[\left(X-\frac{M+m}{2}\right)^2\right]\leq\frac{(M-m)^2}{4},$$
where the last step holds because $m\leq X\leq M$ implies $\left|X-\frac{M+m}{2}\right|\leq\frac{M-m}{2}$.
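As a numerical sanity check of this bound (purely illustrative, not part of the proof; the distributions and sample sizes below are my own choices), here is a Python sketch comparing sample variances of a few distributions on $[m,M]=[0,1]$ against $(M-m)^2/4$:

```python
import random

def sample_variance(xs):
    # Plain (biased) two-pass sample variance: mean of squared deviations.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

random.seed(0)
n = 100_000
m, M = 0.0, 1.0
bound = (M - m) ** 2 / 4  # Popoviciu's bound: 1/4 on [0, 1]

samples = {
    "uniform": [random.uniform(m, M) for _ in range(n)],
    "beta(2,2)": [random.betavariate(2, 2) for _ in range(n)],
    "two-point": [random.choice((m, M)) for _ in range(n)],
}
for name, xs in samples.items():
    v = sample_variance(xs)
    print(f"{name:10s} variance ~ {v:.4f} <= {bound}")
```

The two-point (fair coin) sample comes closest to the bound, matching the fact that a fair Bernoulli variable attains it.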
Let $F$ be a distribution on $[0,1]$. We will show that if the variance of $F$ is maximal, then $F$ can have no support in the interior of $[0,1]$, from which it follows that $F$ is Bernoulli and the rest is trivial.
As a matter of notation, let $\mu_k=\int_0^1 x^k\,dF(x)$ be the $k$th raw moment of $F$ (and, as usual, we write $\mu=\mu_1$ for the mean and $\sigma^2=\mu_2-\mu^2$ for the variance).
We know $F$ does not have all its support at one point (the variance is minimal in that case). Among other things, this implies $\mu$ lies strictly between $0$ and $1$. In order to argue by contradiction, suppose there is some measurable subset $I$ in the interior $(0,1)$ for which $F(I)>0$. Without any loss of generality we may assume (by changing $X$ to $1-X$ if need be) that $F(J)>0$ for $J=I\cap(0,\mu]$: in other words, $J$ is obtained by cutting off any part of $I$ above the mean and $J$ has positive probability.
Let us alter $F$ to $F'$ by taking all the probability out of $J$ and placing it at $0$. In so doing, $\mu_k$ changes to
$$\mu_k'=\mu_k-\int_J x^k\,dF(x).$$
As a matter of notation, let us write $[g(x)]=\int_J g(x)\,dF(x)$ for such integrals, whence
$$\mu_2'=\mu_2-[x^2],\quad \mu'=\mu-[x].$$
Calculate
$$\sigma'^2=\mu_2'-\mu'^2=\mu_2-[x^2]-\left(\mu-[x]\right)^2=\sigma^2+\left(\left(\mu[x]-[x^2]\right)+\left(\mu[x]-[x]^2\right)\right).$$
The first term on the right, $\mu[x]-[x^2]=[x(\mu-x)]$, is non-negative because $0\leq x\leq\mu$ everywhere on $J$, so the integrand is nonnegative. The second term on the right can be rewritten
$$\mu[x]-[x]^2=[x]\left(\mu-[x]\right).$$
It is strictly positive because (a) $[x]>0$, since $J$ lies in the interior $(0,1)$ and has positive probability, and (b) $\mu-[x]>0$, because $[x]\leq\mu[1]\leq\mu$ with equality throughout only when $F$ is concentrated at a point, which we assumed is not the case. It follows that $\sigma'^2>\sigma^2$.
We have just shown that under our assumptions, changing $F$ to $F'$ strictly increases its variance. The only way this cannot happen, then, is when all the probability of $F$ is concentrated at the endpoints $0$ and $1$, with (say) values $1-p$ and $p$, respectively. Its variance is easily calculated to equal $p(1-p)$, which is maximal when $p=1/2$ and equals $1/4$ there.
Now when $F$ is a distribution on $[a,b]$, we recenter and rescale it to a distribution on $[0,1]$. The recentering does not change the variance, whereas the rescaling divides it by $(b-a)^2$. Thus an $F$ with maximal variance on $[a,b]$ corresponds to the distribution with maximal variance on $[0,1]$: it therefore is a Bernoulli$(1/2)$ distribution rescaled and translated to $[a,b]$, having variance $(b-a)^2/4$, QED.
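To illustrate the last step numerically, here is a Python sketch (the endpoints $a=2$, $b=5$ are arbitrary choices for illustration) checking that among two-point distributions on $\{a,b\}$, the fair coin $p=1/2$ attains the maximal variance $(b-a)^2/4$:

```python
def two_point_variance(a, b, p):
    # Variance of the distribution putting mass 1 - p at a and p at b.
    mean = (1 - p) * a + p * b
    return (1 - p) * (a - mean) ** 2 + p * (b - mean) ** 2

a, b = 2.0, 5.0  # illustrative interval [a, b]
# Scan p over a grid: p = 1/2 should maximize the variance p(1-p)(b-a)^2.
best = max(two_point_variance(a, b, k / 100) for k in range(101))
print(two_point_variance(a, b, 0.5), (b - a) ** 2 / 4, best)
```

Here $(b-a)^2/4 = 9/4 = 2.25$, which the grid scan recovers at $p=1/2$.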
If the random variable is restricted to $[a,b]$ and we know the mean $\mu=E[X]$, the variance is bounded by $(b-\mu)(\mu-a)$.
Let us first consider the case $a=0$, $b=1$. Note that for all $x\in[0,1]$, $x^2\leq x$, wherefore also $E[X^2]\leq E[X]$. Using this result,
$$\sigma^2=E[X^2]-\mu^2\leq E[X]-\mu^2=\mu-\mu^2=\mu(1-\mu).$$
To generalize to intervals $[a,b]$ with $b>a$, consider $Y$ restricted to $[a,b]$. Define $X=\frac{Y-a}{b-a}$, which is restricted to $[0,1]$. Equivalently, $Y=(b-a)X+a$, and thus
$$\operatorname{Var}[Y]=(b-a)^2\operatorname{Var}[X]\leq(b-a)^2\,\mu_X(1-\mu_X)=(b-\mu_Y)(\mu_Y-a).$$
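As an illustrative check of this mean-dependent bound (a sketch; the interval $[a,b]=[-1,3]$ and the rescaled Beta$(2,5)$ test distribution are my own choices), note that $(b-\mu)(\mu-a)$ is always at least as tight as the mean-free $(b-a)^2/4$:

```python
import random

random.seed(1)
a, b = -1.0, 3.0  # illustrative interval
n = 200_000
# Beta(2, 5) rescaled from [0, 1] to [a, b] (an arbitrary test distribution)
ys = [a + (b - a) * random.betavariate(2, 5) for _ in range(n)]

mu = sum(ys) / n
var = sum((y - mu) ** 2 for y in ys) / n
mean_bound = (b - mu) * (mu - a)   # mean-dependent bound
popoviciu = (b - a) ** 2 / 4       # mean-free bound

print(var, mean_bound, popoviciu)
```

The chain `var <= mean_bound <= popoviciu` reflects the AM-GM fact that $(b-\mu)(\mu-a)\leq\left(\frac{b-a}{2}\right)^2$.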
At @user603's request....
A useful upper bound on the variance of a random variable that takes on values in $[a,b]$ with probability $1$ is $\frac{(b-a)^2}{4}$. A proof for the special case $a=0$, $b=1$ (which is what the OP asked about) can be found here on math.SE, and it is easily adapted to the more general case. As noted in my comment above and also in the answer referenced herein, a discrete random variable that takes on values $a$ and $b$ with equal probability $\frac12$ has variance $\frac{(b-a)^2}{4}$, and thus no tighter general bound can be found.
Another point to keep in mind is that a bounded random variable has finite variance, whereas for an unbounded random variable, the variance might not be finite, and in some cases might not even be definable. For example, the mean cannot be defined for Cauchy random variables, and so one cannot define the variance (as the expectation of the squared deviation from the mean).
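To illustrate the contrast (an illustrative experiment, not a proof; I simulate a Cauchy variable as a ratio of two independent standard normals), here is a Python sketch showing that the sample variance of a bounded variable settles near a finite value, while that of Cauchy draws does not stabilize as the sample grows:

```python
import random

random.seed(2)

def sample_variance(xs):
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# Bounded variable: the variance is guaranteed finite (here at most 1/4).
bounded = [random.random() for _ in range(50_000)]
print("uniform on [0,1]:", sample_variance(bounded))

# Cauchy (ratio of independent standard normals): mean and variance are
# undefined, so the sample variance does not stabilize as n grows.
for n in (100, 10_000, 300_000):
    cauchy = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]
    print(f"cauchy, n={n}:", sample_variance(cauchy))
```

The Cauchy figures typically jump around wildly between runs and sample sizes; no fixed "true variance" exists for them to converge to.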
Are you sure that this is true in general, for continuous as well as discrete distributions? Can you provide a link to the other pages? For a general distribution on $[a,b]$ it is trivial to show that $\sigma^2\leq(b-a)^2$.
On the other hand, one can find it with the factor $\frac14$ under the name Popoviciu's_inequality on wikipedia.
This article looks better than the wikipedia article ...
For a uniform distribution on $[a,b]$ it holds that
$$\operatorname{Var}(X)=\frac{(b-a)^2}{12}.$$
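A quick numerical check of this (a sketch; the interval $[a,b]=[0,1]$ is an illustrative choice), comparing the uniform variance $(b-a)^2/12$ with Popoviciu's bound $(b-a)^2/4$:

```python
import random

random.seed(3)
a, b = 0.0, 1.0  # illustrative choice of interval
n = 200_000
xs = [random.uniform(a, b) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

exact = (b - a) ** 2 / 12  # variance of Uniform(a, b)
bound = (b - a) ** 2 / 4   # Popoviciu's bound
print(var, exact, bound)   # the sample variance should sit near 1/12
```

So the uniform distribution realizes only a third of the maximal variance permitted on the interval.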