If X and Y have zero covariance, does that imply they are independent? The hint said that X and Y take 0 and 1 as their only possible states, but I can only guess at how to do this math.
Answer:
For binary variables, their expected value equals the probability that they are equal to one. Therefore,
E(XY) = P(XY = 1) = P(X = 1 ∩ Y = 1)
E(X) = P(X = 1)
E(Y) = P(Y = 1)
If the two have zero covariance, this means E(XY) = E(X)E(Y), which means
P(X=1∩Y=1)=P(X=1)⋅P(Y=1)
It is trivial to see that all other joint probabilities multiply as well, using the basic rules about independent events (i.e. if A and B are independent events, then so are their complements). Therefore the joint distribution factorizes in every cell, which is exactly the definition of X and Y being independent.
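To make this concrete, here is a small numerical sketch (the marginals p = 0.3 and q = 0.6 are arbitrary illustrative values, not part of the question): for 0/1 variables with zero covariance, P(X=1, Y=1) equals P(X=1)P(Y=1), and every other joint cell factorizes as well.

```python
# For 0/1 variables, E(X) = P(X=1) and E(XY) = P(X=1, Y=1), so zero covariance
# forces P(X=1, Y=1) = P(X=1)P(Y=1), and every other joint cell factorizes too.
p, q = 0.3, 0.6                     # illustrative P(X=1), P(Y=1)
r = p * q                           # the zero-covariance case for 0/1 variables
joint = {(1, 1): r, (1, 0): p - r, (0, 1): q - r, (0, 0): 1 - p - q + r}

e_x  = sum(x * pr for (x, y), pr in joint.items())      # equals P(X=1)
e_y  = sum(y * pr for (x, y), pr in joint.items())      # equals P(Y=1)
e_xy = sum(x * y * pr for (x, y), pr in joint.items())  # equals P(X=1, Y=1)
assert abs(e_x - p) < 1e-12 and abs(e_y - q) < 1e-12 and abs(e_xy - r) < 1e-12
assert abs(e_xy - e_x * e_y) < 1e-12                    # Cov(X, Y) = 0

# "All other joint probabilities multiply as well":
marg = {(x, y): (p if x else 1 - p) * (q if y else 1 - q)
        for x in (0, 1) for y in (0, 1)}
assert all(abs(joint[cell] - marg[cell]) < 1e-12 for cell in joint)
print("zero covariance + 0/1 support  =>  joint pmf factorizes everywhere")
```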
Both correlation and covariance measure linear association between two given variables, and they are under no obligation to detect any other form of association.
So the two variables might be associated in several other non-linear ways, and covariance (and therefore correlation) could not tell those cases apart from the independent case.
As a very didactic, artificial and unrealistic example, one can consider X such that P(X = x) = 1/3 for x = −1, 0, 1, and take Y = X². They are not merely associated: one is a function of the other. Yet their covariance is zero, because their association is of a kind that covariance cannot detect.
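A quick sketch of that toy example (the three-point distribution is the one just described; the code is only an illustration): the covariance of X and Y = X² is exactly zero even though Y is a deterministic function of X.

```python
# X takes -1, 0, 1 with probability 1/3 each; Y = X^2 is a function of X.
support = [-1, 0, 1]
px = {x: 1 / 3 for x in support}

e_x  = sum(x * px[x] for x in support)         # E[X]  = 0
e_y  = sum(x**2 * px[x] for x in support)      # E[Y]  = E[X^2] = 2/3
e_xy = sum(x * x**2 * px[x] for x in support)  # E[XY] = E[X^3] = 0

print("Cov(X, X^2) =", e_xy - e_x * e_y)       # 0.0: uncorrelated...
# ...yet fully dependent: Y is pinned down by X, e.g. P(Y=1 | X=1) = 1
# while unconditionally P(Y=1) = 2/3.
```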
EDIT
Indeed, as indicated by @whuber, the original answer above was actually a comment on how the assertion is not universally true when the variables are not necessarily dichotomous. My bad!
So let's math up. (The local equivalent of Barney Stinson's "Suit up!")
If both X and Y are dichotomous, we may assume, without loss of generality, that both take only the values 0 and 1, with arbitrary probabilities p, q and r given by P(X = 1) = p, P(Y = 1) = q and P(X = 1, Y = 1) = r, which characterize completely the joint distribution of X and Y.
Notice that r = P(X = 1, Y = 1) might be equal to the product p⋅q = P(X = 1)⋅P(Y = 1), but it might also differ from it.
Yes, r might be equal to p⋅q, but it does not have to be. So how does that show up in the covariance?
Well, from the above joint distribution, we would have
E(X) = 0⋅P(X = 0) + 1⋅P(X = 1) = P(X = 1) = p
E(Y) = 0⋅P(Y = 0) + 1⋅P(Y = 1) = P(Y = 1) = q
E(XY) = 0⋅P(XY = 0) + 1⋅P(XY = 1) = P(XY = 1) = P(X = 1, Y = 1) = r
Cov(X, Y) = E(XY) − E(X)E(Y) = r − p⋅q
Now, notice then that X and Y are independent if and only if r = p⋅q: once r = p⋅q, the remaining joint probabilities factorize as well, since P(X = 1, Y = 0) = p − r = p(1 − q), P(X = 0, Y = 1) = q − r = (1 − p)q and P(X = 0, Y = 0) = 1 − p − q + r = (1 − p)(1 − q). But the covariance above is zero exactly when r = p⋅q, so for dichotomous 0/1 variables zero covariance and independence are equivalent.
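As a sanity check on the algebra, here is a small numeric sketch (the particular values p = 0.4, q = 0.5, r = 0.35 are arbitrary and chosen only so that r ≠ p⋅q): the covariance computed by brute force from the 2×2 joint distribution matches r − p⋅q.

```python
# Brute-force check of Cov(X, Y) = r - p*q over the 2x2 joint pmf.
# p, q, r are illustrative values chosen so that r != p*q (dependence).
p, q, r = 0.4, 0.5, 0.35
joint = {(1, 1): r, (1, 0): p - r, (0, 1): q - r, (0, 0): 1 - p - q + r}
assert all(v >= 0 for v in joint.values())
assert abs(sum(joint.values()) - 1) < 1e-12

e_x  = sum(x * pr for (x, y), pr in joint.items())
e_y  = sum(y * pr for (x, y), pr in joint.items())
e_xy = sum(x * y * pr for (x, y), pr in joint.items())
cov = e_xy - e_x * e_y

assert abs(cov - (r - p * q)) < 1e-12
print("Cov(X, Y) =", cov, " r - p*q =", r - p * q)  # both 0.15: nonzero, so dependent
```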
About the without loss of generality clause above: if X and Y were dichotomous but took values other than 0 and 1, say a < b for X and c < d for Y, we could rescale them to X′ = (X − a)/(b − a) and Y′ = (Y − c)/(d − c), which take only the values 0 and 1 and generate the same events as X and Y, so X and Y are independent exactly when X′ and Y′ are.
Also, we would have
E(X′) = E((X − a)/(b − a)) = (E(X) − a)/(b − a)
E(Y′) = E((Y − c)/(d − c)) = (E(Y) − c)/(d − c)
E(X′Y′) = E( (X − a)/(b − a) ⋅ (Y − c)/(d − c) )
        = E[(X − a)(Y − c)] / [(b − a)(d − c)]
        = E(XY − cX − aY + ac) / [(b − a)(d − c)]
        = [E(XY) − cE(X) − aE(Y) + ac] / [(b − a)(d − c)]
Cov(X′, Y′) = E(X′Y′) − E(X′)E(Y′)
            = [E(XY) − cE(X) − aE(Y) + ac]/[(b − a)(d − c)] − [(E(X) − a)/(b − a)]⋅[(E(Y) − c)/(d − c)]
            = {[E(XY) − cE(X) − aE(Y) + ac] − [E(X) − a][E(Y) − c]} / [(b − a)(d − c)]
            = {[E(XY) − cE(X) − aE(Y) + ac] − [E(X)E(Y) − cE(X) − aE(Y) + ac]} / [(b − a)(d − c)]
            = [E(XY) − E(X)E(Y)] / [(b − a)(d − c)]
            = Cov(X, Y) / [(b − a)(d − c)].
=D
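If you want to double-check the rescaling step numerically, here is a small sketch (the support points a, b, c, d and the joint cell probabilities are arbitrary illustrative choices, not from the answer): the covariance of the rescaled 0/1 variables equals the covariance of the originals divided by (b − a)(d − c).

```python
# X takes values a < b and Y takes values c < d; rescale to 0/1 variables
# X' = (X - a)/(b - a), Y' = (Y - c)/(d - c) and check
# Cov(X', Y') = Cov(X, Y) / ((b - a)(d - c)) by direct enumeration.
a, b, c, d = 2.0, 5.0, -1.0, 3.0                 # illustrative supports
joint = {(a, c): 0.1, (a, d): 0.3,               # illustrative joint pmf (sums to 1)
         (b, c): 0.4, (b, d): 0.2}

def cov(pmf):
    ex  = sum(x * p for (x, y), p in pmf.items())
    ey  = sum(y * p for (x, y), p in pmf.items())
    exy = sum(x * y * p for (x, y), p in pmf.items())
    return exy - ex * ey

rescaled = {((x - a) / (b - a), (y - c) / (d - c)): p for (x, y), p in joint.items()}

lhs, rhs = cov(rescaled), cov(joint) / ((b - a) * (d - c))
assert abs(lhs - rhs) < 1e-12
print("Cov(X', Y') =", lhs, "= Cov(X, Y)/((b-a)(d-c)) =", rhs)
```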
IN GENERAL:
The criterion for independence is F(x,y) = FX(x)FY(y), or equivalently fX,Y(x,y) = fX(x)fY(y).    (1)
This is nicely explained by Macro here, and in the Wikipedia entry for independence.
independence⇒zero cov, yet
zero cov⇏independence.
Great example: X ∼ N(0,1), and Y = X^2. Covariance is zero (and E(XY) = 0, which is the criterion for orthogonality), yet they are dependent. Credit goes to this post.
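A quick simulation sketch of that example (sample size and seed are arbitrary): the empirical covariance of X and X^2 is essentially zero, while conditioning on X changes the distribution of Y drastically.

```python
import numpy as np

# X ~ N(0,1), Y = X^2: covariance is (theoretically) zero, dependence is total.
rng = np.random.default_rng(0)           # arbitrary seed, 10^6 draws
x = rng.standard_normal(1_000_000)
y = x ** 2

print("sample Cov(X, Y):", np.cov(x, y)[0, 1])       # near 0
print("E[Y]            :", y.mean())                 # near 1
print("E[Y | |X| > 2]  :", y[np.abs(x) > 2].mean())  # well above 1 -> dependent
```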
IN PARTICULAR (OP problem):
These are Bernoulli rv's X and Y, with probabilities of success Pr(X=1) and Pr(Y=1).
Cov(X,Y) = E[XY] − E[X]E[Y] = Pr(X=1 ∩ Y=1) − Pr(X=1)Pr(Y=1)   (by ∗)
so Cov(X,Y) = 0 ⟹ Pr(X=1, Y=1) = Pr(X=1)Pr(Y=1).
This is equivalent to the condition for independence in Eq. (1).
(∗):
E[XY] = Σ over the domain of (X, Y) of x⋅y⋅Pr(X=x ∩ Y=y) = Pr(X=1 ∩ Y=1)   (by ∗∗),
since the only term with x⋅y ≠ 0 is the one with x = y = 1.
(∗∗): by LOTUS.
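If it helps, here is a tiny sketch of that step (the joint pmf below is an arbitrary illustration, not from the OP): summing x⋅y⋅Pr(X=x, Y=y) over the four support points leaves only the (1,1) term.

```python
# LOTUS over the four support points of (X, Y): only the (1, 1) term survives,
# so E[XY] = Pr(X=1, Y=1). The joint pmf is an arbitrary illustration.
joint = {(0, 0): 0.35, (0, 1): 0.25, (1, 0): 0.15, (1, 1): 0.25}

e_xy = sum(x * y * pr for (x, y), pr in joint.items())
assert e_xy == joint[(1, 1)]
print("E[XY] =", e_xy, "= Pr(X=1, Y=1)")
```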
As pointed out below, the argument is incomplete without the step that Dilip Sarwate noted in his comments shortly after the OP appeared. After searching around, I found this proof of the missing part here:
If events A and B are independent, then events A^c and B are independent, and events A^c and B^c are also independent.
Proof. By definition,
A and B are independent ⟺P(A∩B)=P(A)P(B).
But B = (A ∩ B) ∪ (A^c ∩ B), a union of two disjoint events, so P(B) = P(A ∩ B) + P(A^c ∩ B), which yields:
P(A^c ∩ B) = P(B) − P(A ∩ B) = P(B) − P(A)P(B) = P(B)[1 − P(A)] = P(B)P(A^c).
Repeat the argument for the events A^c and B^c, this time starting from the statement that A^c and B are independent and taking the complement of B.
Similarly, A and B^c are independent events.
So, we have shown already that Pr(X=1, Y=1) = Pr(X=1)Pr(Y=1), i.e. that the events {X=1} and {Y=1} are independent. By the result above, their complements are independent as well, so Pr(X=0, Y=1) = Pr(X=0)Pr(Y=1), Pr(X=1, Y=0) = Pr(X=1)Pr(Y=0) and Pr(X=0, Y=0) = Pr(X=0)Pr(Y=0). Every cell of the joint distribution factorizes, which is exactly the independence of X and Y.
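To close the loop, a final sketch (the marginals are arbitrary illustrative values): once Pr(X=1, Y=1) factorizes, the complement lemma above forces the other three cells to factorize too, which is full independence.

```python
# Complement lemma in action with A = {X=1}, B = {Y=1}: once
# P(A n B) = P(A)P(B), the remaining three cells factorize as well.
pA, pB = 0.25, 0.7                 # illustrative marginals Pr(X=1), Pr(Y=1)
pAB = pA * pB                      # zero covariance gives Pr(X=1, Y=1) = pA*pB

pAcB  = pB - pAB                   # Pr(X=0, Y=1)
pABc  = pA - pAB                   # Pr(X=1, Y=0)
pAcBc = 1 - pA - pB + pAB          # Pr(X=0, Y=0)

assert abs(pAcB  - (1 - pA) * pB)       < 1e-12
assert abs(pABc  - pA * (1 - pB))       < 1e-12
assert abs(pAcBc - (1 - pA) * (1 - pB)) < 1e-12
print("every joint cell equals the product of its marginals: X and Y are independent")
```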