The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we still add their variances) is correct but fatally incomplete: if the random variables involved are not independent, then covariances are also involved, and covariances may be negative. There exists an expression that is almost what the OP (and I) thought the expression in the question "should" be, and it is the variance of the prediction error, denote it $e_0 = y_0 - \hat y_0$, where $y_0 = \beta_0 + \beta_1 x_0 + u_0$:
$$\text{Var}(e_0) = \sigma^2\cdot \left(1 + \frac 1n + \frac{(x_0-\bar x)^2}{S_{xx}}\right)$$
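As a quick sanity check (my own illustration, not part of the derivation): a small Monte Carlo with arbitrary values for $\beta_0$, $\beta_1$, $\sigma$, the in-sample $x$ and the out-of-sample $x_0$ should reproduce this variance.

```python
# Minimal Monte Carlo sketch of the prediction-error variance formula.
# All parameter values below are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = np.linspace(0.0, 10.0, 20)        # fixed in-sample regressor values
x0 = 12.0                             # out-of-sample point
n = len(x)
Sxx = np.sum((x - x.mean()) ** 2)

errors = []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx   # OLS slope
    b0 = y.mean() - b1 * x.mean()                        # OLS intercept
    y0 = beta0 + beta1 * x0 + rng.normal(0.0, sigma)     # new, independent draw
    errors.append(y0 - (b0 + b1 * x0))                   # prediction error e0

print("simulated Var(e0):", np.var(errors))
print("formula          :", sigma**2 * (1 + 1/n + (x0 - x.mean())**2 / Sxx))
```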
The critical difference between the variance of the prediction error and the variance of the estimation error (i.e. of the residual) is that the error term of the predicted observation is not correlated with the estimator, since the value $y_0$ was not used in constructing the estimator and in computing the estimates, being an out-of-sample value.
The algebra for both proceeds in exactly the same way up to a point (using the subscript $0$ instead of $i$), but then it diverges. Specifically:
In the simple linear regression $y_i = \beta_0 + \beta_1 x_i + u_i$, with $\text{Var}(u_i) = \sigma^2$, the variance of the estimator $\hat\beta = (\hat\beta_0, \hat\beta_1)'$ is still

$$\text{Var}(\hat\beta) = \sigma^2 (X'X)^{-1}$$
We have

$$X'X = \begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}$$
and so

$$(X'X)^{-1} = \begin{bmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{bmatrix} \cdot \left[n\sum x_i^2 - \left(\sum x_i\right)^2\right]^{-1}$$
We have

$$\left[n\sum x_i^2 - \left(\sum x_i\right)^2\right] = \left[n\sum x_i^2 - n^2\bar x^2\right] = n\left[\sum x_i^2 - n\bar x^2\right] = n\sum\left(x_i - \bar x\right)^2 \equiv nS_{xx}$$
So

$$(X'X)^{-1} = \begin{bmatrix} (1/n)\sum x_i^2 & -\bar x \\ -\bar x & 1 \end{bmatrix} \cdot (1/S_{xx})$$
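A quick numerical sketch of my own (with an arbitrary regressor vector) confirming this closed form against a direct matrix inversion:

```python
# Check the closed form of (X'X)^{-1} for simple regression with intercept.
# The regressor values are arbitrary illustrative numbers.
import numpy as np

x = np.array([1.0, 2.5, 4.0, 4.5, 7.0])
n = len(x)
Sxx = np.sum((x - x.mean()) ** 2)

X = np.column_stack([np.ones(n), x])     # design matrix [1, x]
direct = np.linalg.inv(X.T @ X)

closed_form = np.array([[x @ x / n, -x.mean()],
                        [-x.mean(),  1.0     ]]) / Sxx

print(np.allclose(direct, closed_form))  # True
```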
Which means that

$$\text{Var}(\hat\beta_0) = \sigma^2\left(\frac 1n\sum x_i^2\right)\cdot (1/S_{xx}) = \frac{\sigma^2}{n}\,\frac{S_{xx} + n\bar x^2}{S_{xx}} = \sigma^2\left(\frac 1n + \frac{\bar x^2}{S_{xx}}\right)$$

$$\text{Var}(\hat\beta_1) = \sigma^2(1/S_{xx})$$

$$\text{Cov}(\hat\beta_0, \hat\beta_1) = -\sigma^2(\bar x/S_{xx})$$
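These three expressions can also be checked by simulation; the sketch below (again with illustrative parameter values of my own choosing) estimates the sampling variances and covariance of the OLS coefficients under repeated draws of the error term.

```python
# Monte Carlo check of Var(b0_hat), Var(b1_hat) and Cov(b0_hat, b1_hat).
# Parameter values are arbitrary; regressors are held fixed across replications.
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = np.linspace(0.0, 10.0, 15)
n = len(x)
Sxx = np.sum((x - x.mean()) ** 2)

b0s, b1s = [], []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x.mean()
    b0s.append(b0)
    b1s.append(b1)

print("Var(b0):", np.var(b0s), "vs", sigma**2 * (1/n + x.mean()**2 / Sxx))
print("Var(b1):", np.var(b1s), "vs", sigma**2 / Sxx)
print("Cov    :", np.cov(b0s, b1s)[0, 1], "vs", -sigma**2 * x.mean() / Sxx)
```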
The $i$-th residual is defined as

$$\hat u_i = y_i - \hat y_i = (\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i + u_i$$

The actual coefficients are treated as constants, the regressor is fixed (or conditioned upon) and has zero covariance with the error term, but the estimators are correlated with the error term, because the estimators contain the dependent variable, and the dependent variable contains the error term. So we have

$$\text{Var}(\hat u_i) = \left[\text{Var}(u_i) + \text{Var}(\hat\beta_0) + x_i^2\text{Var}(\hat\beta_1) + 2x_i\text{Cov}(\hat\beta_0,\hat\beta_1)\right] + 2\text{Cov}\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right)$$

$$= \left[\sigma^2 + \sigma^2\left(\frac 1n + \frac{\bar x^2}{S_{xx}}\right) + x_i^2\sigma^2(1/S_{xx}) + 2x_i\left(-\sigma^2(\bar x/S_{xx})\right)\right] + 2\text{Cov}\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right)$$
Pack it up a bit to obtain
$$\text{Var}(\hat u_i) = \left[\sigma^2\cdot\left(1 + \frac 1n + \frac{(x_i - \bar x)^2}{S_{xx}}\right)\right] + 2\text{Cov}\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right)$$
The term in the big parenthesis has exactly the same structure as the variance of the prediction error, the only change being that instead of $x_i$ we will have $x_0$ (and the variance will be that of $e_0$ and not of $\hat u_i$). The last covariance term is zero for the prediction error because $y_0$, and hence $u_0$, is not included in the estimators, but it is not zero for the estimation error because $y_i$, and hence $u_i$, is part of the sample and so it is included in the estimator. We have
$$2\text{Cov}\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right) = 2E\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right]u_i\right)$$

$$= -2E\left(\hat\beta_0 u_i\right) - 2x_iE\left(\hat\beta_1 u_i\right) = -2E\left(\left[\bar y - \hat\beta_1\bar x\right]u_i\right) - 2x_iE\left(\hat\beta_1 u_i\right)$$
the last substitution following from how $\hat\beta_0$ is calculated. Continuing,
$$\ldots = -2E(\bar y u_i) - 2(x_i - \bar x)E\left(\hat\beta_1 u_i\right) = -2\frac{\sigma^2}{n} - 2(x_i - \bar x)E\left[\frac{\sum_j (x_j - \bar x)(y_j - \bar y)}{S_{xx}}u_i\right]$$

$$= -2\frac{\sigma^2}{n} - \frac{2(x_i - \bar x)}{S_{xx}}\left[\sum_j (x_j - \bar x)E\left(y_j u_i - \bar y u_i\right)\right]$$

$$= -2\frac{\sigma^2}{n} - \frac{2(x_i - \bar x)}{S_{xx}}\left[-\frac{\sigma^2}{n}\sum_{j\neq i}(x_j - \bar x) + (x_i - \bar x)\sigma^2\left(1 - \frac 1n\right)\right]$$

$$= -2\frac{\sigma^2}{n} - \frac{2(x_i - \bar x)}{S_{xx}}\left[-\frac{\sigma^2}{n}\sum_j (x_j - \bar x) + (x_i - \bar x)\sigma^2\right]$$

$$= -2\frac{\sigma^2}{n} - \frac{2(x_i - \bar x)}{S_{xx}}\left[0 + (x_i - \bar x)\sigma^2\right] = -2\frac{\sigma^2}{n} - 2\frac{\sigma^2(x_i - \bar x)^2}{S_{xx}}$$
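This covariance term can also be checked numerically; the sketch below (my own illustration, with arbitrary parameter values and a chosen in-sample index $i$) estimates $\text{Cov}\big((\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i,\ u_i\big)$ and compares it with $-\sigma^2/n - \sigma^2(x_i - \bar x)^2/S_{xx}$.

```python
# Monte Carlo check of the covariance between the estimation-error part of
# the residual and the error term u_i. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = np.linspace(0.0, 10.0, 12)
n, i = len(x), 0                          # i = 0 is the most "deviant" x here
Sxx = np.sum((x - x.mean()) ** 2)

a_vals, u_vals = [], []
for _ in range(100_000):
    u = rng.normal(0.0, sigma, n)
    y = beta0 + beta1 * x + u
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x.mean()
    a_vals.append((beta0 - b0) + (beta1 - b1) * x[i])
    u_vals.append(u[i])

print("simulated Cov:", np.cov(a_vals, u_vals)[0, 1])
print("formula      :", -sigma**2 / n - sigma**2 * (x[i] - x.mean())**2 / Sxx)
```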
Inserting this into the expression for the variance of the residual, we obtain
$$\text{Var}(\hat u_i) = \sigma^2\cdot\left(1 - \frac 1n - \frac{(x_i - \bar x)^2}{S_{xx}}\right)$$
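Putting everything together, a last simulation sketch (again with arbitrary illustrative values) contrasts the simulated residual variance of every observation with this formula:

```python
# Monte Carlo check of Var(u_hat_i) = sigma^2 * (1 - 1/n - (x_i - x_bar)^2/Sxx)
# for all in-sample observations at once. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = np.linspace(0.0, 10.0, 12)
n = len(x)
Sxx = np.sum((x - x.mean()) ** 2)

reps = 100_000
residuals = np.empty((reps, n))
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x.mean()
    residuals[r] = y - (b0 + b1 * x)

print("simulated:", residuals.var(axis=0).round(3))
print("formula  :", (sigma**2 * (1 - 1/n - (x - x.mean())**2 / Sxx)).round(3))
```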
So hats off to the text the OP is using.
(I have skipped some algebraic manipulations, no wonder OLS algebra is taught less and less these days...)
SOME INTUITION
So it appears that what works "against" us (larger variance) when predicting, works "for us" (lower variance) when estimating. This is a good starting point for one to ponder why an excellent fit may be a bad sign for the prediction abilities of the model (however counter-intuitive this may sound...).
The fact that we are estimating the expected value of the regressor decreases the variance by $1/n$. Why? Because by estimating, we "close our eyes" to some error-variability existing in the sample, since we are essentially estimating an expected value. Moreover, the larger the deviation of an observation of a regressor from the regressor's sample mean, the smaller the variance of the residual associated with this observation will be... the more deviant the observation, the less deviant its residual... It is the variability of the regressors that works for us, by "taking the place" of the unknown error-variability.
But that's good for estimation. For prediction, the same things turn against us: now, by not taking into account, however imperfectly, the variability in $y_0$ (since we want to predict it), our imperfect estimators obtained from the sample show their weaknesses: we estimated the sample mean, we don't know the true expected value, so the variance increases. We have an $x_0$ that is far away from the sample mean as calculated from the other observations? Too bad, our prediction error variance gets another boost, because the predicted $\hat y_0$ will tend to go astray... In more scientific language, "optimal predictors in the sense of reduced prediction error variance represent a shrinkage towards the mean of the variable under prediction". We do not try to replicate the dependent variable's variability; we just try to stay "close to the average".
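To illustrate the shrinkage remark (my own toy example, not from the derivation above): the OLS fitted values are always less variable than the dependent variable itself, since the fit only reproduces the part of the variability explained by the regressor.

```python
# Tiny illustration of "shrinkage towards the mean": Var(y_hat) <= Var(y),
# because Var(y) = Var(y_hat) + Var(residuals) for OLS with an intercept.
# Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = np.linspace(0.0, 10.0, 200)
y = beta0 + beta1 * x + rng.normal(0.0, sigma, len(x))

Sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

print("Var(y)    :", y.var())       # total variability of the dependent variable
print("Var(y_hat):", y_hat.var())   # smaller: predictions stay closer to the mean
```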