Does zero covariance imply independence for binary random variables?



If $X$ and $Y$ are two random variables that can take only two possible states, how can I show that $\mathrm{Cov}(X,Y)=0$ implies independence? This goes against what I learned back in the day, that $\mathrm{Cov}(X,Y)=0$ does not imply independence...

The hint says to start with $1$ and $0$ as the possible states and generalize from there. I can do that and show $E(XY)=E(X)E(Y)$, but that by itself doesn't imply independence, does it?

A bit confused about how to do this mathematically, I guess.


It is not true in general, as the heading of your question suggests...
Michael R. Chernick

The claim you are trying to prove is indeed true. If $X$ and $Y$ are Bernoulli random variables with parameters $p_1$ and $p_2$ respectively, then $E[X]=p_1$ and $E[Y]=p_2$. Now, $\mathrm{Cov}(X,Y)=E[XY]-E[X]E[Y]$ equals $0$ only if $E[XY]=P\{X=1,Y=1\}$ equals $p_1p_2=P\{X=1\}P\{Y=1\}$, showing that $\{X=1\}$ and $\{Y=1\}$ are independent events. It is a standard result that if $A$ and $B$ are a pair of independent events, then so are $A, B^c$, and $A^c, B$, and $A^c, B^c$ independent events, i.e., $X$ and $Y$ are independent random variables. Now generalize.
Dilip Sarwate

Answers:



For binary variables, the expected value equals the probability of being equal to one. Therefore,

$$E(XY)=P(XY=1)=P(X=1\cap Y=1),$$
$$E(X)=P(X=1), \qquad E(Y)=P(Y=1).$$

If the two have zero covariance, this means $E(XY)=E(X)E(Y)$, which means

$$P(X=1\cap Y=1)=P(X=1)\,P(Y=1).$$

It is straightforward to see that all the other joint probabilities multiply as well, using the basic rules about independent events (i.e., if $A$ and $B$ are independent, then their complements are independent, etc.), which means that the joint mass function factorizes, which is the definition of two random variables being independent.
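For completeness, here is that "straightforward" step spelled out, a minimal derivation using only the two marginal identities and the zero-covariance result above:

$$\begin{aligned}
P(X=1,Y=0) &= P(X=1)-P(X=1,Y=1) \\
           &= P(X=1)-P(X=1)P(Y=1) = P(X=1)\,P(Y=0),\\
P(X=0,Y=1) &= P(Y=1)-P(X=1,Y=1) = P(X=0)\,P(Y=1),\\
P(X=0,Y=0) &= 1-P(X=1,Y=0)-P(X=0,Y=1)-P(X=1,Y=1) \\
           &= \bigl(1-P(X=1)\bigr)\bigl(1-P(Y=1)\bigr) = P(X=0)\,P(Y=0).
\end{aligned}$$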


Concise and elegant. Classy! +1 =D
Marcelo Ventura


Both correlation and covariance measure linear association between two given variables and are under no obligation to detect any other form of association.

Hence those two variables could be associated in several other nonlinear ways, and covariance (and therefore correlation) could not tell those cases apart from the independent case.

As a very didactic, artificial, and unrealistic example, one can consider $X$ such that $P(X=x)=1/3$ for $x=-1,0,1$, and also consider $Y=X^2$. Note that they are not merely associated: one is a function of the other. Nonetheless, their covariance is $0$, since their association is orthogonal to the association that covariance can detect.
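If it helps, here is a quick Python sketch (my own illustration, not part of the original answer; exact arithmetic via `fractions`) verifying this counterexample:

```python
from fractions import Fraction

# X is uniform on {-1, 0, 1}; Y = X^2 is a deterministic function of X.
xs = [-1, 0, 1]
p = Fraction(1, 3)

E_X  = sum(p * x for x in xs)            # 0
E_Y  = sum(p * x**2 for x in xs)         # 2/3
E_XY = sum(p * x * x**2 for x in xs)     # E[X^3] = 0

print(E_XY - E_X * E_Y)                  # 0 -> zero covariance

# Yet X and Y are clearly dependent:
# P(Y=1 | X=1) = 1, while P(Y=1) = 2/3.
```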

EDIT

Indeed, as @whuber points out, the original answer above was really a comment on how the claim is not universally true if both variables are not necessarily dichotomous. My bad!

So let's get mathematical. (The local equivalent of Barney Stinson's "Suit up!")

Particular case

If $X$ and $Y$ were dichotomous, then you can assume, without loss of generality, that both take only the values $0$ and $1$, with arbitrary probabilities $p$, $q$, and $r$ given by
$$P(X=1)=p\in[0,1], \qquad P(Y=1)=q\in[0,1], \qquad P(X=1,Y=1)=r\in[0,1],$$
which completely characterize the joint distribution of $X$ and $Y$. Taking @DilipSarwate's hint, notice that those three values are enough to determine the joint distribution of $(X,Y)$, since
$$\begin{aligned}
P(X=0,Y=1) &= P(Y=1)-P(X=1,Y=1)=q-r,\\
P(X=1,Y=0) &= P(X=1)-P(X=1,Y=1)=p-r,\\
P(X=0,Y=0) &= 1-P(X=0,Y=1)-P(X=1,Y=0)-P(X=1,Y=1)\\
           &= 1-(q-r)-(p-r)-r = 1-p-q+r.
\end{aligned}$$
(On a side note, of course $r$ is bound to respect $p-r\in[0,1]$, $q-r\in[0,1]$, and $1-p-q+r\in[0,1]$ beyond $r\in[0,1]$, which is to say $r\in[\max(0,\,p+q-1),\,\min(p,q)]$.)

Notice that $r=P(X=1,Y=1)$ might be equal to the product $pq=P(X=1)P(Y=1)$, which would render $X$ and $Y$ independent, since
$$\begin{aligned}
P(X=0,Y=0) &= 1-p-q+pq=(1-p)(1-q)=P(X=0)P(Y=0),\\
P(X=1,Y=0) &= p-pq=p(1-q)=P(X=1)P(Y=0),\\
P(X=0,Y=1) &= q-pq=(1-p)q=P(X=0)P(Y=1).
\end{aligned}$$

Yes, $r$ might be equal to $pq$, BUT it can be different, as long as it respects the bounds above.

Well, from the above joint distribution, we would have
$$\begin{aligned}
E(X) &= 0\cdot P(X=0)+1\cdot P(X=1)=P(X=1)=p,\\
E(Y) &= 0\cdot P(Y=0)+1\cdot P(Y=1)=P(Y=1)=q,\\
E(XY) &= 0\cdot P(XY=0)+1\cdot P(XY=1)=P(XY=1)=P(X=1,Y=1)=r,\\
\mathrm{Cov}(X,Y) &= E(XY)-E(X)E(Y)=r-pq.
\end{aligned}$$

Now, notice that $X$ and $Y$ are independent if and only if $\mathrm{Cov}(X,Y)=0$. Indeed, if $X$ and $Y$ are independent, then $P(X=1,Y=1)=P(X=1)P(Y=1)$, which is to say $r=pq$; therefore $\mathrm{Cov}(X,Y)=r-pq=0$. And, on the other hand, if $\mathrm{Cov}(X,Y)=0$, then $r-pq=0$, which is to say $r=pq$; therefore $X$ and $Y$ are independent.
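As a numeric sanity check, here is a minimal Python sketch of the four-cell construction above; the helper names and the particular values of $p$, $q$, $r$ are my own, chosen for illustration:

```python
def joint_pmf(p, q, r):
    """Four-cell joint pmf of (X, Y), determined by p, q, r as above."""
    return {(1, 1): r, (1, 0): p - r, (0, 1): q - r, (0, 0): 1 - p - q + r}

def is_independent(p, q, r, tol=1e-12):
    """Check that every cell factors into the product of the marginals."""
    pmf = joint_pmf(p, q, r)
    px = {1: p, 0: 1 - p}
    py = {1: q, 0: 1 - q}
    return all(abs(pmf[x, y] - px[x] * py[y]) < tol for x, y in pmf)

p, q = 0.3, 0.6
print(is_independent(p, q, r=p * q))   # True:  Cov = r - pq = 0
print(is_independent(p, q, r=0.25))    # False: Cov = 0.25 - 0.18 != 0
```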

General case

About the without-loss-of-generality clause above: if $X'$ and $Y'$ were distributed otherwise, say, for $a<b$ and $c<d$,
$$P(X'=b)=p, \qquad P(Y'=d)=q, \qquad P(X'=b,Y'=d)=r,$$
then $X$ and $Y$ given by
$$X=\frac{X'-a}{b-a} \qquad\text{and}\qquad Y=\frac{Y'-c}{d-c}$$
would be distributed just as characterized above, since
$$X'=a \Leftrightarrow X=0,\qquad X'=b \Leftrightarrow X=1,\qquad Y'=c \Leftrightarrow Y=0,\qquad Y'=d \Leftrightarrow Y=1.$$
So $X$ and $Y$ are independent if and only if $X'$ and $Y'$ are independent.

Also, we would have
$$\begin{aligned}
E(X) &= E\left(\frac{X'-a}{b-a}\right)=\frac{E(X')-a}{b-a},\\
E(Y) &= E\left(\frac{Y'-c}{d-c}\right)=\frac{E(Y')-c}{d-c},\\
E(XY) &= E\left(\frac{X'-a}{b-a}\cdot\frac{Y'-c}{d-c}\right)
       = \frac{E[(X'-a)(Y'-c)]}{(b-a)(d-c)}\\
      &= \frac{E(X'Y'-cX'-aY'+ac)}{(b-a)(d-c)}
       = \frac{E(X'Y')-c\,E(X')-a\,E(Y')+ac}{(b-a)(d-c)},\\
\mathrm{Cov}(X,Y) &= E(XY)-E(X)E(Y)\\
 &= \frac{E(X'Y')-c\,E(X')-a\,E(Y')+ac}{(b-a)(d-c)}
    -\frac{E(X')-a}{b-a}\cdot\frac{E(Y')-c}{d-c}\\
 &= \frac{[E(X'Y')-c\,E(X')-a\,E(Y')+ac]-[E(X')-a][E(Y')-c]}{(b-a)(d-c)}\\
 &= \frac{[E(X'Y')-c\,E(X')-a\,E(Y')+ac]-[E(X')E(Y')-c\,E(X')-a\,E(Y')+ac]}{(b-a)(d-c)}\\
 &= \frac{E(X'Y')-E(X')E(Y')}{(b-a)(d-c)}
  = \frac{1}{(b-a)(d-c)}\,\mathrm{Cov}(X',Y').
\end{aligned}$$

So $\mathrm{Cov}(X,Y)=0$ if and only if $\mathrm{Cov}(X',Y')=0$.
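And a brief Python check of this rescaling identity, by exact enumeration; the values chosen for $a<b$, $c<d$ and the probabilities are arbitrary illustrations of mine:

```python
# Check Cov(X', Y') = (b - a) * (d - c) * Cov(X, Y) by enumeration.
a, b, c, d = 2.0, 5.0, -1.0, 3.0   # X' takes values in {a, b}, Y' in {c, d}
p, q, r = 0.4, 0.7, 0.2            # P(X'=b), P(Y'=d), P(X'=b, Y'=d)

pmf = {(b, d): r, (b, c): p - r, (a, d): q - r, (a, c): 1 - p - q + r}

def cov(pmf):
    ex  = sum(prob * x for (x, _), prob in pmf.items())
    ey  = sum(prob * y for (_, y), prob in pmf.items())
    exy = sum(prob * x * y for (x, y), prob in pmf.items())
    return exy - ex * ey

# Rescale to the 0/1 case: X = (X' - a)/(b - a), Y = (Y' - c)/(d - c).
pmf01 = {((x - a) / (b - a), (y - c) / (d - c)): prob
         for (x, y), prob in pmf.items()}

print(cov(pmf))                          # Cov(X', Y')
print(cov(pmf01) * (b - a) * (d - c))    # should print the same value
```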

=D


I recycled that answer from this post.
Marcelo Ventura

Verbatim cut and paste from your other post. Love it. +1
gammer

The problem with copy-and-paste is that your answer no longer seems to address the question: it is merely a comment on the question. It would be better, then, to post a comment with a link to your other answer.
whuber

How is this an answer to the question asked?
Dilip Sarwate

Your edits still don't answer the question, at least not at the level the question is asked. You write "Notice that $r$ is not necessarily equal to the product $pq$. That exceptional situation corresponds to the case of independence between $X$ and $Y$," which is a perfectly true statement, but only for the cognoscenti, because for the hoi polloi, independence requires not just that
$$P(X=1,Y=1)=P(X=1)P(Y=1)\tag{1}$$
but also
$$P(X=u,Y=v)=P(X=u)P(Y=v),\ \forall\, u,v\in\{0,1\}.\tag{2}$$
Yes, $(1)\Rightarrow(2)$ as the cognoscenti know; for lesser mortals, a proof that $(1)\Rightarrow(2)$ is helpful.
Dilip Sarwate


IN GENERAL:

The criterion for independence is $F(x,y)=F_X(x)\,F_Y(y)$. Or, equivalently in terms of pmfs/densities,

$$f_{X,Y}(x,y)=f_X(x)\,f_Y(y).\tag{1}$$

"If two variables are independent, their covariance is 0.0. But, having a covariance of 00 does not imply the variables are independent."

This is nicely explained by Macro here, and in the Wikipedia entry for independence.

$$\text{independence}\Rightarrow\text{zero covariance},$$

yet

$$\text{zero covariance}\nRightarrow\text{independence}.$$

Great example: $X\sim N(0,1)$ and $Y=X^2$. Covariance is zero (and $E(XY)=0$, which is the criterion for orthogonality), yet they are dependent. Credit goes to this post.
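A short simulation sketch of this example (seed and sample size are arbitrary choices of mine; the sample covariance only approximates the theoretical zero):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2

# Near 0: Cov(X, X^2) = E[X^3] - E[X] E[X^2] = 0 for a standard normal.
print(np.cov(x, y)[0, 1])

# Yet the dependence is total: knowing x pins down y exactly.
```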


IN PARTICULAR (OP problem):

These are Bernoulli rv's, $X$ and $Y$, with probabilities of success $\Pr(X=1)$ and $\Pr(Y=1)$, respectively.

$$\mathrm{cov}(X,Y)=E[XY]-E[X]E[Y]\overset{(*)}{=}\Pr(X=1\cap Y=1)-\Pr(X=1)\Pr(Y=1),$$
so zero covariance gives
$$\Pr(X=1,Y=1)=\Pr(X=1)\,\Pr(Y=1).$$

This is equivalent to the condition for independence in Eq. (1).


$(*)$:

$$E[XY]\overset{(**)}{=}\sum_{\text{domain } X,\,Y}\Pr(X=x\cap Y=y)\,x\,y=\Pr(X=1\cap Y=1),$$

since the only term with $x\times y\neq 0$ is the one with $x=y=1$.

$(**)$: by LOTUS.


As pointed out below, the argument is incomplete without the result that Dilip Sarwate pointed out in his comments shortly after the OP appeared. After searching around, I found this proof of the missing part here:

If events $A$ and $B$ are independent, then events $A^c$ and $B$ are independent, and events $A^c$ and $B^c$ are also independent.

Proof. By definition,

$$A \text{ and } B \text{ are independent} \iff P(A\cap B)=P(A)\,P(B).$$

But $B=(A\cap B)\cup(A^c\cap B)$, a union of disjoint events, so $P(B)=P(A\cap B)+P(A^c\cap B)$, which yields:

$$P(A^c\cap B)=P(B)-P(A\cap B)=P(B)-P(A)P(B)=P(B)\,[1-P(A)]=P(B)\,P(A^c).$$

Repeat the argument for the events $A^c$ and $B^c$, this time starting from the statement that $A^c$ and $B$ are independent and taking the complement of $B$.

Similarly, $A$ and $B^c$ are independent events.
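For completeness, that "similar" step written out, following the same pattern as above:

$$P(A\cap B^c)=P(A)-P(A\cap B)=P(A)-P(A)P(B)=P(A)\,[1-P(B)]=P(A)\,P(B^c).$$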

So, we have already shown that
$$\Pr(X=1,Y=1)=\Pr(X=1)\,\Pr(Y=1),$$
and the above shows that this implies that
$$\Pr(X=i,Y=j)=\Pr(X=i)\,\Pr(Y=j),\quad \forall\, i,j\in\{0,1\},$$
that is, the joint pmf factors into the product of the marginal pmfs everywhere, not just at $(1,1)$. Hence, uncorrelated Bernoulli random variables $X$ and $Y$ are also independent random variables.

Actually, that's not an equivalent condition to Eq. (1). All you showed was that $f_{X,Y}(1,1)=f_X(1)\,f_Y(1)$.
gammer

Please consider replacing that image with your own equations, preferably ones that don't use overbars to denote complements. The overbars in the image are very hard to see.
Dilip Sarwate

@DilipSarwate No problem. Is it better now?
Antoni Parellada

Thanks. Also, note that strictly speaking, you also need to show that $A$ and $B^c$ are independent events, since the factorization of the joint pmf into the product of the marginal pmfs must hold at all four points. Perhaps adding the sentence "Similarly, $A$ and $B^c$ are independent events" right after the proof that $A^c$ and $B$ are independent events will work.
Dilip Sarwate

@DilipSarwate Thank you very much for your help getting it right. The proof as it was before all the editing seemed self-explanatory, because of all the inherent symmetry, but it clearly couldn't be taken for granted. I am very appreciative of your assistance.
Antoni Parellada
Licensed under cc by-sa 3.0 with attribution required.