I will present the conditions under which an unbiased estimator remains unbiased even after it is constrained, though I am not sure they are of any interest or practical use.

Let $\hat\theta$ be an estimator of the unknown parameter $\theta$ of a continuous distribution, with $E(\hat\theta) = \theta$.

Suppose that, for whatever reason, under repeated sampling we want the estimator to produce estimates that range in $[\delta_l, \delta_u]$. We assume that $\theta \in [\delta_l, \delta_u]$, so we can write the interval, when convenient, as $[\theta - a, \theta + b]$ with $\{a, b\}$ positive but of course unknown numbers.
Then the constrained estimator is

$$\hat\theta_c = \begin{cases} \delta_l & \hat\theta < \delta_l \\ \hat\theta & \delta_l \le \hat\theta \le \delta_u \\ \delta_u & \delta_u < \hat\theta \end{cases}$$
and its expected value is

$$E(\hat\theta_c) = \delta_l \cdot P\big[\hat\theta \le \delta_l\big] + E\big(\hat\theta \mid \delta_l \le \hat\theta \le \delta_u\big)\cdot P\big[\delta_l \le \hat\theta \le \delta_u\big] + \delta_u \cdot P\big[\hat\theta > \delta_u\big]$$
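As a concrete illustration (not part of the derivation), here is a minimal Monte Carlo sketch in Python/NumPy; it assumes, purely for the example, that $\hat\theta \sim N(\theta, 1)$ and a bounding interval $[0,1]$, and checks that the direct mean of the clipped draws matches the three-term decomposition above:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.3                    # true parameter (known only inside the simulation)
d_l, d_u = 0.0, 1.0            # bounding interval [delta_l, delta_u]
n = 1_000_000                  # Monte Carlo replications

theta_hat = theta + rng.standard_normal(n)   # unbiased estimator, N(theta, 1)
theta_c = np.clip(theta_hat, d_l, d_u)       # constrained estimator

# Direct Monte Carlo estimate of E(theta_c)
print(theta_c.mean())

# Three-term decomposition:
# d_l * P[theta_hat <= d_l] + E(theta_hat | middle) * P[middle] + d_u * P[theta_hat > d_u]
lo = theta_hat <= d_l
hi = theta_hat > d_u
mid = ~lo & ~hi
decomp = d_l * lo.mean() + theta_hat[mid].mean() * mid.mean() + d_u * hi.mean()
print(decomp)   # agrees with theta_c.mean() up to simulation noise
```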
Now define the indicator functions

$$I_l = I\big(\hat\theta \le \delta_l\big), \qquad I_m = I\big(\delta_l \le \hat\theta \le \delta_u\big), \qquad I_u = I\big(\hat\theta > \delta_u\big)$$
and note that

$$I_l + I_u = 1 - I_m \tag{1}$$
Using these indicator functions and integrals, we can write the expected value of the constrained estimator as ($f(\hat\theta)$ is the density function of $\hat\theta$)

$$E(\hat\theta_c) = \int_{-\infty}^{\infty} \delta_l f(\hat\theta)\, I_l\, d\hat\theta + \int_{-\infty}^{\infty} \hat\theta f(\hat\theta)\, I_m\, d\hat\theta + \int_{-\infty}^{\infty} \delta_u f(\hat\theta)\, I_u\, d\hat\theta$$

$$= \int_{-\infty}^{\infty} f(\hat\theta)\,\big[\delta_l I_l + \hat\theta I_m + \delta_u I_u\big]\, d\hat\theta$$

$$= E\big[\delta_l I_l + \hat\theta I_m + \delta_u I_u\big] \tag{2}$$
Decomposing the lower and upper bounds, we have

$$E(\hat\theta_c) = E\big[(\theta - a) I_l + \hat\theta I_m + (\theta + b) I_u\big]$$

$$= E\big[\theta\,(I_l + I_u) + \hat\theta I_m\big] - a E(I_l) + b E(I_u)$$
and using (1),

$$= E\big[\theta\,(1 - I_m) + \hat\theta I_m\big] - a E(I_l) + b E(I_u)$$

$$\Rightarrow E(\hat\theta_c) = \theta + E\big[(\hat\theta - \theta) I_m\big] - a E(I_l) + b E(I_u) \tag{3}$$
Now, since $E(\hat\theta) = \theta$, we have

$$E\big[(\hat\theta - \theta) I_m\big] = E(\hat\theta I_m) - E(\hat\theta) E(I_m)$$
But

$$E(\hat\theta I_m) = E\big(\hat\theta I_m \mid I_m = 1\big) E(I_m) = E(\hat\theta) E(I_m)$$
Therefore $E\big[(\hat\theta - \theta) I_m\big] = 0$, and so

$$E(\hat\theta_c) = \theta - a E(I_l) + b E(I_u) = \theta - a P\big(\hat\theta \le \delta_l\big) + b P\big(\hat\theta > \delta_u\big) \tag{4}$$
or alternatively,

$$E(\hat\theta_c) = \theta - (\theta - \delta_l) P\big(\hat\theta \le \delta_l\big) + (\delta_u - \theta) P\big(\hat\theta > \delta_u\big) \tag{4a}$$
Therefore, from (4), we see that for the constrained estimator to also be unbiased, we must have

$$a P\big(\hat\theta \le \delta_l\big) = b P\big(\hat\theta > \delta_u\big) \tag{5}$$
What is the problem with condition (5)? It involves the unknown numbers $\{a, b\}$, so in practice we will not be able to actually determine an interval that bounds the estimator while keeping it unbiased.
But let's say this is some controlled simulation experiment where we want to investigate other properties of estimators, given unbiasedness. Then we can "neutralize" $a$ and $b$ by setting $a = b$, which essentially creates an interval symmetric around the value of $\theta$... In this case, to achieve unbiasedness we must moreover have $P(\hat\theta \le \delta_l) = P(\hat\theta > \delta_u)$, i.e. the probability mass of the unconstrained estimator must be equal to the left and to the right of the (symmetric around $\theta$) interval...
...and so we learn that, as a set of sufficient conditions, if the distribution of the unconstrained estimator is symmetric around the true value, then the estimator constrained to an interval symmetric around the true value will also be unbiased... but this is almost trivially evident or intuitive, isn't it?
It becomes a little more interesting if we realize that the necessary and sufficient condition (given a symmetric interval) a) does not require a symmetric distribution, only equal probability mass "in the tails" (which in turn does not imply that the distribution of the mass in each tail has to be identical), and b) permits the estimator's density inside the interval to have any non-symmetric shape consistent with maintaining unbiasedness: it will still make the constrained estimator unbiased.
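For the symmetric sufficient condition above, here is a minimal check in Python/NumPy (a sketch under the assumption of a normal, hence symmetric, unconstrained estimator; the specific numbers are arbitrary): with $a = b$, the two tail probabilities in (5) come out equal and the constrained estimator is unbiased up to simulation noise.

```python
import numpy as np

rng = np.random.default_rng(1)

theta, a = 0.5, 0.4             # interval symmetric around theta: [theta - a, theta + a]
d_l, d_u = theta - a, theta + a
n = 1_000_000

theta_hat = theta + rng.standard_normal(n)   # distribution symmetric around theta

# Condition (5) with a = b reduces to equal tail masses
p_left = (theta_hat <= d_l).mean()
p_right = (theta_hat > d_u).mean()
print(p_left, p_right)                       # approximately equal

theta_c = np.clip(theta_hat, d_l, d_u)
print(theta_c.mean() - theta)                # approximately zero: unbiased
```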
APPLICATION: The OP's case
Our estimator is $\hat\theta = \theta + w$, $w \sim N(0,1)$, and so $\hat\theta \sim N(\theta, 1)$.

Then, using (4) while writing $a, b$ in terms of $\theta, \delta$, we have, for the bounding interval $[0, 1]$,

$$E[\hat\theta_c] = \theta - \theta P\big(\hat\theta \le 0\big) + (1 - \theta) P\big(\hat\theta > 1\big)$$
The distribution is symmetric around $\theta$. Transforming ($\Phi(\cdot)$ is the standard normal CDF),

$$E[\hat\theta_c] = \theta - \theta P\big(\hat\theta - \theta \le -\theta\big) + (1 - \theta) P\big(\hat\theta - \theta > 1 - \theta\big)$$

$$= \theta - \theta\,\Phi(-\theta) + (1 - \theta)\big[1 - \Phi(1 - \theta)\big]$$
One can verify that the additional terms cancel out only if $\theta = 1/2$, namely, only if the bounding interval is also symmetric around $\theta$.
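To see this numerically, here is a short sketch (same assumptions as above) that evaluates $E[\hat\theta_c]$ by direct simulation rather than through the closed form, across several values of $\theta$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
w = rng.standard_normal(n)                   # common noise draws reused across theta values

for theta in [0.1, 0.3, 0.5, 0.7, 0.9]:
    theta_c = np.clip(theta + w, 0.0, 1.0)   # estimator bounded to [0, 1]
    print(theta, theta_c.mean() - theta)     # bias is (approximately) zero only at theta = 0.5
```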