How to calculate relative error when the true value is zero?


32

How do I calculate relative error when the true value is zero?

Say I have $x_{\text{true}} = 0$ and some $x_{\text{test}}$. If I define relative error as:

$$\text{relative error} = \frac{x_{\text{true}} - x_{\text{test}}}{x_{\text{true}}}$$

then the relative error is always undefined. If I instead use the definition:

$$\text{relative error} = \frac{x_{\text{true}} - x_{\text{test}}}{x_{\text{test}}}$$

then the relative error is always 100%. Both methods seem useless. Is there another alternative?
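To make the problem concrete, here is a minimal sketch (mine, not part of the original post) showing how both definitions break down when the true value is zero:

```python
# Minimal sketch: both definitions of relative error fail when x_true = 0.
x_true, x_test = 0.0, 0.1

def rel_err_by_true(x_true, x_test):
    return (x_true - x_test) / x_true    # division by zero when x_true == 0

def rel_err_by_test(x_true, x_test):
    return (x_true - x_test) / x_test    # always 100% in magnitude when x_true == 0

try:
    rel_err_by_true(x_true, x_test)
except ZeroDivisionError:
    print("undefined: division by zero")  # first definition is undefined

print(rel_err_by_test(x_true, x_test))    # -1.0, i.e. 100% in magnitude
```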


I had exactly the same question regarding parameter bias in Monte Carlo simulations, using your first definition. One of my parameter values was 0, so I did not compute the parameter bias for this particular parameter ...
Patrick Coulombe

2
The solution is not to use relative error in this case.
Marc Claesen

2
One option, which answers the intent if not the letter of your question, is to use a slightly different measure that agrees closely with the relative error when the relative error is small, such as $2(x_{\text{true}} - x_{\text{test}})/(|x_{\text{true}}| + |x_{\text{test}}|)$. (Use $0$ when $x_{\text{true}} = x_{\text{test}} = 0$.) This particular solution is universal in the sense that it is invariant under a change in the unit of measurement (because it involves no arbitrary constants).
whuber

@whuber I think you should consider posting that comment as an answer, since it seems superior to the existing ones.
Silverfish

@Silver You're right: I apologize for posting an answer as a comment. I have therefore expanded that comment slightly into an answer.
whuber

Answers:


39

There are many alternatives, depending on the purpose.


A common one is the "relative percent difference," or RPD, used in laboratory quality control procedures. Although you can find many apparently different formulas, they all reduce to comparing the difference of two values to their average magnitude:

$$d_1(x,y) = \frac{x - y}{(|x| + |y|)/2} = 2\,\frac{x - y}{|x| + |y|}.$$

This is a signed expression, positive when $x$ exceeds $y$ and negative when $y$ exceeds $x$. Its value always lies between $-2$ and $2$. By using absolute values in the denominator, it handles negative numbers in a reasonable way. Most references I can find, such as the New Jersey DEP Site Remediation Program's Data Quality Assessment and Data Usability Evaluation Technical Guidance, use the absolute value of $d_1$ because they are interested only in the magnitude of the relative error.
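As a quick illustration (my own sketch, not taken from the cited guidance), the RPD can be computed directly from this formula, returning 0 when both values are zero as suggested in the comments above:

```python
def relative_percent_difference(x, y):
    """Signed relative percent difference d1(x, y) = 2(x - y) / (|x| + |y|).

    Lies between -2 and 2; by convention returns 0 when x == y == 0.
    """
    if x == 0 and y == 0:
        return 0.0
    return 2.0 * (x - y) / (abs(x) + abs(y))

print(relative_percent_difference(0.0, 0.1))   # -2.0, the largest possible magnitude
print(relative_percent_difference(1.0, 1.01))  # ~ -0.00995, close to the ordinary relative error
```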


The Wikipedia article on relative change and difference observes that

$$d_\infty(x,y) = \frac{|x - y|}{\max(|x|, |y|)}$$

is frequently used as a relative tolerance test in floating-point numerical algorithms. The same article also points out that formulas like $d_1$ and $d_\infty$ may be generalized to

$$d_f(x,y) = \frac{x - y}{f(x,y)}$$

where the function $f$ depends directly on the magnitudes of $x$ and $y$ (usually assuming $x$ and $y$ are positive). As examples it offers their max, min, and arithmetic mean (with and without taking the absolute values of $x$ and $y$ themselves), but one could contemplate other sorts of averages, such as the geometric mean $\sqrt{|xy|}$, the harmonic mean $2/(1/|x| + 1/|y|)$, and $L^p$ means $\left((|x|^p + |y|^p)/2\right)^{1/p}$. ($d_1$ corresponds to $p = 1$ and $d_\infty$ corresponds to the limit as $p \to \infty$.) One might choose an $f$ based on the expected statistical behavior of $x$ and $y$. For instance, with approximately lognormal distributions the geometric mean would be an attractive choice for $f$, because it is a meaningful average in that circumstance.


Most of these formulas run into difficulties when the denominator equals zero. In many applications that either is not possible or it is harmless to set the difference to zero when $x = y = 0$.
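A sketch (mine; the particular averages are just the examples mentioned above) of the generalized form $d_f$, setting the difference to zero when $x = y = 0$:

```python
import math

def d_f(x, y, f):
    """Generalized relative difference (x - y) / f(x, y); returns 0 when x == y == 0."""
    if x == 0 and y == 0:
        return 0.0
    return (x - y) / f(x, y)

# A few choices of f discussed above (each depends only on the magnitudes of x and y).
arithmetic = lambda x, y: (abs(x) + abs(y)) / 2          # gives d_1
maximum    = lambda x, y: max(abs(x), abs(y))            # gives d_inf
geometric  = lambda x, y: math.sqrt(abs(x * y))          # natural for lognormal data
harmonic   = lambda x, y: 2 / (1 / abs(x) + 1 / abs(y))  # undefined if either value is 0

for name, f in [("d_1", arithmetic), ("d_inf", maximum), ("geometric", geometric)]:
    print(name, d_f(100.0, 105.0, f))
```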

Note that all these definitions share a fundamental invariance property: whatever the relative difference function $d$ may be, it does not change when the arguments are uniformly rescaled by $\lambda > 0$:

$$d(x,y) = d(\lambda x, \lambda y).$$

It is this property that allows us to consider $d$ to be a relative difference. Thus, in particular, a non-invariant function like

$$d(x,y) \overset{?}{=} \frac{|x - y|}{1 + |y|}$$

simply does not qualify. Whatever virtues it might have, it does not express a relative difference.


The story does not end here. We might even find it fruitful to push the implications of invariance a little further.

The set of all ordered pairs of real numbers $(x,y) \ne (0,0)$, where $(x,y)$ is considered to be the same as $(\lambda x, \lambda y)$, is the Real Projective Line $\mathbb{RP}^1$. In both a topological sense and an algebraic sense, $\mathbb{RP}^1$ is a circle. Any $(x,y) \ne (0,0)$ determines a unique line through the origin $(0,0)$. When $x \ne 0$ its slope is $y/x$; otherwise we may consider its slope to be "infinite" (and either negative or positive). A neighborhood of this vertical line consists of lines with extremely large positive or extremely large negative slopes. We may parameterize all such lines in terms of their angle $\theta = \arctan(y/x)$, with $-\pi/2 < \theta \le \pi/2$. Associated with every such $\theta$ is a point on the circle,

$$(\xi, \eta) = (\cos(2\theta), \sin(2\theta)) = \left(\frac{x^2 - y^2}{x^2 + y^2},\ \frac{2xy}{x^2 + y^2}\right).$$

Any distance defined on the circle can therefore be used to define a relative difference.

As an example of where this can lead, consider the usual (Euclidean) distance on the circle, whereby the distance between two points is the size of the angle between them. The relative difference is least when $x = y$, corresponding to $2\theta = \pi/2$ (or $2\theta = 3\pi/2$ when $x$ and $y$ have opposite signs). From this point of view a natural relative difference for positive numbers $x$ and $y$ would be the distance to this angle:

$$d_S(x,y) = \left|2\arctan\left(\frac{y}{x}\right) - \frac{\pi}{2}\right|.$$

To first order, this is the relative distance $|x - y|/|y|$, but it works even when $y = 0$. Moreover, it doesn't blow up, but instead (as a signed distance) is limited between $-\pi/2$ and $\pi/2$, as this graph indicates:

[Figure]
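A small sketch (my own, using the parameterization above) showing that $d_S$ behaves like $|x - y|/|y|$ for small errors yet stays finite when the reference value is zero:

```python
import math

def d_S(x, y):
    """Angular relative difference |2*arctan(y/x) - pi/2|, bounded by pi/2 for x, y >= 0.

    atan2 is used so that x = 0 is handled without a division-by-zero error.
    """
    return abs(2 * math.atan2(y, x) - math.pi / 2)

# For close values, d_S is approximately the ordinary relative error |x - y| / |y| ...
print(d_S(1.00, 1.01), abs(1.00 - 1.01) / 1.01)  # ~0.00995 vs ~0.00990
# ... but it remains finite when the reference value is zero.
print(d_S(0.1, 0.0))                             # pi/2, the maximum, instead of blowing up
```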

This hints at how flexible the choices are when selecting a way to measure relative differences.


Thanks for the comprehensive answer. What do you think is the best reference for this line: "is frequently used as a relative tolerance test in floating-point numerical algorithms. The same article also points out that formulas like $d_1$ and $d_\infty$ may be generalized to"?
Hammad Haleem

1
btw, nevermind I found an academic reference for this :) tandfonline.com/doi/abs/10.1080/00031305.1985.10479385
Hammad Haleem

4
Why has this not been selected as the answer? (sorry if this is not an appropriate comment, but this is the better answer by far)
Brash Equilibrium

2
@Brash I appreciate the sentiment. Acceptance is uniquely the province of the original proposer: nobody can override that (except by deleting the accepted post). On some occasions when I feel as you do, I post comments that point out explicitly how and why I think some answers are better or more noteworthy than others. Even if that fails to change anything, such comments may make the material a little more useful or understandable to future readers: and that, ultimately, is the point of our work on this site.
whuber

1
@KutalmisB Thank you for noticing that: the "min" doesn't belong there at all. It looks like it may have been a vestige of a more complex formula that handled all possible signs of x and y that I later simplified. I have removed it.
whuber

11

First, note that you typically take the absolute value in computing the relative error.

A common solution to the problem is to compute

$$\text{relative error} = \frac{|x_{\text{true}} - x_{\text{test}}|}{1 + |x_{\text{true}}|}.$$
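A one-line sketch (mine) of this formula; as the comments below point out, it is only sensible when $x$ is well scaled:

```python
def relative_error_guarded(x_true, x_test):
    """|x_true - x_test| / (1 + |x_true|): never divides by zero, but the result
    depends on the units of measurement, so it assumes x is scaled to be near 1."""
    return abs(x_true - x_test) / (1 + abs(x_true))

print(relative_error_guarded(0.0, 0.1))  # 0.1 instead of an undefined value
print(relative_error_guarded(2.0, 2.1))  # ~0.033
```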

3
This is problematic in that it varies depending on the units of measure chosen for the values.
whuber

1
That's absolutely true. This isn't a perfect solution to the problem, but it is a common approach that works reasonably well when x is well scaled.
Brian Borchers

Could you elaborate in your answer on what you mean by "well scaled"? For instance, suppose the data arise from calibration of an aqueous chemical measurement system designed for concentrations between 0 and 0.000001 moles/liter which can achieve a precision of, say, three significant digits. Your "relative error" would therefore be constantly zero except for obviously erroneous measurements. In light of this, how exactly would you rescale such data?
whuber

1
Your example is one where the variable isn't well scaled. By "well scaled", I mean that the variable is scaled so that it takes on values in a small range (e.g. a couple of orders of magnitude) near 1. If your variable takes on values over many orders of magnitude, then you've got more serious scaling issues and this simple approach isn't going to be adequate.
Brian Borchers

2
Any reference for this approach? The name of this method? Thank you.
CroCo

0

I was a bit confused about this for a while. In the end, it's because if you are trying to measure relative error with respect to zero, you are trying to force something that simply does not exist.

If you think about it, you're comparing apples to oranges when you compare relative error to the error measured from zero, because the error measured from zero is equivalent to the measured value (that's why you get 100% error when you divide by the test number).

For example, consider measuring the error of gauge pressure (the pressure relative to atmospheric) versus absolute pressure. Say that you use an instrument to measure the gauge pressure at perfect atmospheric conditions, and your device measured atmospheric pressure spot on, so it should record 0% error. Using the equation you provided, and first assuming we use the measured gauge pressure, we calculate the relative error:

$$\text{relative error} = \frac{P_{\text{gauge,true}} - P_{\text{gauge,test}}}{P_{\text{gauge,true}}}$$

Then $P_{\text{gauge,true}} = 0$ and $P_{\text{gauge,test}} = 0$, and you do not get 0% error; instead it is undefined. That is because the actual percent error should use the absolute pressure values, like this:

$$\text{relative error} = \frac{P_{\text{absolute,true}} - P_{\text{absolute,test}}}{P_{\text{absolute,true}}}$$

Now $P_{\text{absolute,true}} = 1\,\text{atm}$ and $P_{\text{absolute,test}} = 1\,\text{atm}$, and you get 0% error. This is the proper application of relative error. The original application that used gauge pressure was more like "relative error of the relative value," which is a different thing from "relative error." You need to convert the gauge pressure to absolute before measuring the relative error.
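A small numerical illustration (my own, using 1 atm as the gauge-to-absolute offset) of the point about converting before computing relative error:

```python
ATMOSPHERIC = 1.0  # atm, the offset between gauge and absolute pressure

p_gauge_true, p_gauge_test = 0.0, 0.0          # instrument reads atmospheric exactly

# Relative error computed on gauge pressure would be 0/0: undefined.
# Converting to absolute pressure first gives the expected 0% error.
p_abs_true = p_gauge_true + ATMOSPHERIC
p_abs_test = p_gauge_test + ATMOSPHERIC
print((p_abs_true - p_abs_test) / p_abs_true)  # 0.0, i.e. 0% error
```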

The solution to your question is to make sure you are dealing with absolute values when measuring relative error, so that zero is not a possibility. Then you are actually getting relative error, and can use that as an uncertainty or a metric of your real percent error. If you must stick with relative values, then you should be using absolute error, because the relative (percent) error will change depending on your reference point.

It's hard to put a concrete definition on 0... "Zero is the integer denoted 0 that, when used as a counting number, means that no objects are present." - Wolfram MathWorld http://mathworld.wolfram.com/Zero.html

Feel free to nitpick, but zero essentially means nothing; it is not there. This is why it does not make sense to use gauge pressure when calculating relative error. Gauge pressure, though useful, assumes there is nothing at atmospheric pressure. We know this is not the case, though, because it has an absolute pressure of 1 atm. Thus, the relative error with respect to nothing simply does not exist; it's undefined.

Feel free to argue against this; simply put, any quick fixes, such as adding one to the bottom value, are faulty and not accurate. They can still be useful if you are simply trying to minimize error. If you are trying to make accurate measurements of uncertainty, though, not so much...


0

[MAPE formula]

Finding MAPE:

This is a much-debated topic, and many open-source contributors have discussed it. The most efficient approach so far is the one followed by the developers. Please refer to this PR to know more.
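Since the linked PR is not reproduced here, the sketch below (mine) only illustrates one common convention for MAPE with zeros in the true values: clamp the denominator away from zero with a small epsilon. The exact approach in the PR may differ.

```python
import numpy as np

def mape(y_true, y_pred, eps=1e-10):
    """Mean absolute percentage error with the denominator clamped away from zero.

    The epsilon guard is one common convention; the exact value is implementation-specific.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), eps))

print(mape([0.0, 2.0, 4.0], [0.1, 2.1, 3.9]))  # finite (though dominated by the zero term) rather than undefined
```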

Licensed under cc by-sa 3.0 with attribution required.