Expectation of the reciprocal of a variable


Answers:


27

Can it be 1/E(X)?

No, in general it cannot. Jensen's inequality tells us that if $X$ is a random variable and $\varphi$ is a convex function, then $\varphi(E[X]) \leq E[\varphi(X)]$. If $X$ is strictly positive, then $1/X$ is convex, so $E[1/X] \geq 1/E[X]$; and for a strictly convex function, equality holds only if $X$ has zero variance... so in the cases we tend to be interested in, the two are generally unequal.

Assuming we are dealing with a positive variable, if it is clear to you that $X$ and $1/X$ will be inversely related ($\operatorname{Cov}(X, 1/X) \leq 0$), then this would imply $E(X \cdot 1/X) - E(X)\,E(1/X) \leq 0$, which implies $E(X)\,E(1/X) \geq 1$, so $E(1/X) \geq 1/E(X)$.
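A quick simulation can illustrate both facts at once. The sketch below (NumPy, with an arbitrarily chosen lognormal $X$, which is strictly positive) checks that $\operatorname{Cov}(X, 1/X) \leq 0$ and that $E(1/X) \geq 1/E(X)$ on a large sample:

```python
import numpy as np

rng = np.random.default_rng(0)
# A strictly positive random variable (parameters chosen arbitrarily)
x = rng.lognormal(mean=0.0, sigma=0.5, size=1_000_000)

cov = np.cov(x, 1.0 / x)[0, 1]   # sample Cov(X, 1/X)
e_inv = (1.0 / x).mean()         # sample E(1/X)
inv_e = 1.0 / x.mean()           # 1 / E(X)

print(cov < 0)        # X and 1/X are inversely related
print(e_inv >= inv_e) # Jensen: E(1/X) >= 1/E(X)
```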

I am confused about applying the expectation in the denominator.

Use the law of the unconscious statistician:

$$E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$$

(in the continuous case)

so when $g(X) = 1/X$,

$$E\left[\frac{1}{X}\right] = \int_{-\infty}^{\infty} \frac{f(x)}{x}\,dx$$

In some cases the expectation can be evaluated by inspection (for example, with gamma random variables), by deriving the distribution of the inverse, or by other means.
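As a hypothetical illustration of the gamma case: for $X \sim \operatorname{Gamma}(\alpha, \text{rate}=1)$ with $\alpha > 1$, inspection gives $E(1/X) = 1/(\alpha-1)$, which a Monte Carlo check (parameters chosen arbitrarily) confirms, and which differs from $1/E(X) = 1/\alpha$:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 3.0
x = rng.gamma(shape=alpha, scale=1.0, size=2_000_000)  # Gamma(alpha, rate=1)

exact = 1.0 / (alpha - 1.0)   # E(1/X) by inspection: 1/(alpha-1) = 0.5
mc = (1.0 / x).mean()         # Monte Carlo estimate of E(1/X)
naive = 1.0 / x.mean()        # 1/E(X), close to 1/alpha = 1/3

print(mc, exact, naive)
```

The Monte Carlo estimate lands near the exact 0.5, well away from the naive $1/E(X) \approx 0.33$.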


14

As Glen_b says, it is probably wrong, because the reciprocal is a nonlinear function. If you want an approximation to $E(1/X)$, you can perhaps use a Taylor expansion around $E(X)$:

$$E\left(\frac{1}{X}\right) \approx E\left(\frac{1}{E(X)} - \frac{1}{E(X)^2}\left(X-E(X)\right) + \frac{1}{E(X)^3}\left(X-E(X)\right)^2\right) = \frac{1}{E(X)} + \frac{1}{E(X)^3}\operatorname{Var}(X),$$
so you only need the mean and variance of $X$; and if the distribution of $X$ is symmetric, this approximation can be very accurate.

EDIT: Perhaps the above is quite critical; see BloXX's comment below.
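As an illustrative check of the approximation (not part of the original answer), take $X \sim \operatorname{Gamma}(\alpha, \text{rate}=1)$, for which $E(X) = \operatorname{Var}(X) = \alpha$ and the exact value $E(1/X) = 1/(\alpha-1)$ is known in closed form:

```python
alpha = 10.0              # shape of Gamma(alpha, rate=1), chosen for illustration
mean, var = alpha, alpha  # E(X) and Var(X) for this gamma

approx = 1.0 / mean + var / mean**3  # 1/E(X) + Var(X)/E(X)^3
exact = 1.0 / (alpha - 1.0)          # E(1/X) = 1/(alpha - 1)

print(approx, exact)  # 0.11 vs 0.111...
```

For this (right-skewed) distribution the second-order approximation is already within about 1% of the exact value; the naive $1/E(X) = 0.1$ is noticeably further off.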


Oh yes, yes... I am very sorry that I could not apprehend that fact... I have one more question... Is this applicable to any kind of function? Actually I am stuck with $|X|$... How can the expectation of $|X|$ be deduced in terms of $E(X)$ and $V(X)$?
Sandipan Karmakar

2
I don't think you can use it for $|X|$, since that function is not differentiable. I would rather split the problem into cases and say $E(|X|) = E(X \mid X>0)\,p(X>0) + E(-X \mid X<0)\,p(X<0)$, I guess.
Matteo Fasiolo

1
@MatteoFasiolo Can you please explain why the symmetry of the distribution of $X$ (or lack thereof) has an effect on the accuracy of the Taylor approximation? Do you have a source that you could point me to that explains why this is?
Aaron Hendrickson

1
@AaronHendrickson my reasoning is simply that the next term in the expansion is proportional to $E\{(X-E(X))^3\}$, which is related to the skewness of the distribution of $X$. Skewness is an asymmetry measure. However, zero skewness does not guarantee symmetry, and I am not sure whether symmetry guarantees zero skewness. Hence, this is all heuristic and there might be plenty of counterexamples.
Matteo Fasiolo

4
I don't understand how this solution gets so many upvotes. For a single random variable $X$ there is no justification of the quality of this approximation. The third derivative of $f(x) = 1/x$ is not bounded. Moreover, the remainder of the approximation is $\frac{1}{6} f'''(\xi)(X-\mu)^3$, where $\xi$ is itself a random variable between $X$ and $\mu$. The remainder won't vanish in general and may be very large. The Taylor approximation may only be useful if one has a sequence of random variables with $X_n - \mu = O_p(a_n)$, where $a_n \to 0$. Even then, uniform integrability is additionally needed if one is interested in the expectation.
BloXX

8

Others have already explained that the answer to the question is NO, except in trivial cases. Below we give an approach to finding $E\frac{1}{X}$ when $X > 0$ with probability one and the moment generating function $M_X(t) = E e^{tX}$ exists. An application of this method (and a generalization) is given in Expected value of $1/x$ when $x$ follows a Beta distribution; here we will also give a simpler example.

First, note that $\int_0^\infty e^{-tx}\,dt = \frac{1}{x}$ (a simple calculus exercise). Then write

$$E\left(\frac{1}{X}\right) = \int_0^\infty x^{-1} f(x)\,dx = \int_0^\infty \left(\int_0^\infty e^{-tx}\,dt\right) f(x)\,dx = \int_0^\infty \left(\int_0^\infty e^{-tx} f(x)\,dx\right) dt = \int_0^\infty M_X(-t)\,dt$$
A simple application: let $X$ have the exponential distribution with rate 1, that is, with density $e^{-x}$, $x > 0$, and moment generating function $M_X(t) = \frac{1}{1-t}$, $t < 1$. Then $\int_0^\infty M_X(-t)\,dt = \int_0^\infty \frac{1}{1+t}\,dt = \ln(1+t)\big|_0^\infty = \infty$, so it definitely does not converge, and is very different from $\frac{1}{EX} = \frac{1}{1} = 1$.
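For a case where the integral does converge, the identity can be checked numerically. The sketch below (using SciPy, with $X \sim \operatorname{Gamma}(3, \text{rate}=1)$ chosen for illustration) compares $\int_0^\infty M_X(-t)\,dt$ with the closed-form $E(1/X) = 1/(3-1) = 1/2$:

```python
import numpy as np
from scipy.integrate import quad

# X ~ Gamma(shape=3, rate=1): MGF is M_X(t) = (1 - t)^(-3) for t < 1,
# and E(1/X) = 1/(shape - 1) = 1/2 in closed form.
def mgf_at_minus_t(t):
    return (1.0 + t) ** -3  # M_X(-t)

integral, _ = quad(mgf_at_minus_t, 0.0, np.inf)
exact = 0.5

print(integral)  # ≈ 0.5
```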

7

An alternative approach to calculating $E(1/X)$, knowing that $X$ is a positive random variable, is through its moment generating function $E[e^{-\lambda X}]$. Since by elementary calculus

$$\int_0^\infty e^{-\lambda x}\,d\lambda = \frac{1}{x},$$

we have, by Fubini's theorem,

$$\int_0^\infty E[e^{-\lambda X}]\,d\lambda = E\left[\frac{1}{X}\right].$$

2
The idea here is right, but the details are wrong. Please check.
kjetil b halvorsen

1
@Kjetil I don't see what the problem is: apart from the inconsequential differences of using $-\lambda X$ instead of $tX$ in the definition of the MGF and naming the variable $\lambda$ instead of $t$, the answer you just posted is identical to this one.
whuber

1
You are right, the problem was smaller than I thought. Still, this answer would be better with some more details. I will upvote it tomorrow (when I have new votes).
kjetil b halvorsen

1

To first give an intuition, what about using the discrete case with a finite sample to illustrate that $E(1/X) \neq 1/E(X)$ (putting aside cases such as $E(X) = 0$)?

In a finite sample, using the term average for the expectation is not that abusive; thus if one has on the one hand

$$E(X) = \frac{1}{N}\sum_{i=1}^N X_i$$

and one has on the other hand

$$E(1/X) = \frac{1}{N}\sum_{i=1}^N \frac{1}{X_i}$$

it becomes obvious that, with $N > 1$,

$$E(1/X) = \frac{1}{N}\sum_{i=1}^N \frac{1}{X_i} \neq \frac{N}{\sum_{i=1}^N X_i} = 1/E(X)$$

which leads to saying that, basically, $E(1/X) \neq 1/E(X)$, since the inverse of the (discrete) sum is not the (discrete) sum of the inverses.

Analogously in the asymptotic 0-centered continuous case, one has

$$E(1/X) = \int \frac{f(x)}{x}\,dx \neq \frac{1}{\int x f(x)\,dx} = 1/E(X).$$
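The finite-sample point above can be seen in a few lines; this tiny sketch (plain Python, with an arbitrary positive sample) compares the average of the reciprocals with the reciprocal of the average:

```python
x = [1.0, 2.0, 4.0]  # a small positive sample, chosen arbitrarily

mean_of_inv = sum(1.0 / v for v in x) / len(x)  # (1 + 1/2 + 1/4)/3 = 7/12
inv_of_mean = len(x) / sum(x)                   # 3/7

print(mean_of_inv, inv_of_mean)  # 0.5833... vs 0.4285...
```

As expected from Jensen's inequality, the average of the reciprocals comes out strictly larger than the reciprocal of the average.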

Licensed under cc by-sa 3.0 with attribution required.