The approximation error in a data value is the discrepancy between an exact value and some approximation to it. This error can be expressed as an absolute error (the numerical amount of the discrepancy) or as a relative error (the absolute error divided by the data value).

[Figure: graph of a function (blue) with its linear approximation at a = 0 (red). The approximation error is the gap between the curves, and it increases for x values further from 0.]

An approximation error can occur for a variety of reasons, among them limited machine precision in computing or measurement error (e.g. the length of a piece of paper is 4.53 cm, but the ruler only allows you to estimate it to the nearest 0.1 cm, so you measure it as 4.5 cm).

In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates the extent to which errors in the input of the algorithm will lead to large errors in the output; numerically stable algorithms do not yield a significantly larger error in the output when the input is slightly perturbed, and vice versa.[1]

Formal definition

Given some value v, we say that v_approx approximates v with absolute error ε > 0 if[2][3]

$$|v - v_\mathrm{approx}| \le \varepsilon,$$

where the vertical bars denote the absolute value.

We say that v_approx approximates v with relative error η > 0 if

$$|v - v_\mathrm{approx}| \le \eta \cdot |v|.$$

If v ≠ 0, then

$$\eta = \frac{|v - v_\mathrm{approx}|}{|v|} = \left|1 - \frac{v_\mathrm{approx}}{v}\right|.$$

The percent error (an expression of the relative error) is[3]

$$\delta = 100\% \times \eta = 100\% \times \frac{|v - v_\mathrm{approx}|}{|v|}.$$

An error bound is an upper limit on the relative or absolute size of an approximation error.[4]
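These definitions translate directly into code. A minimal Python sketch (the function names are illustrative, not standard library functions):

```python
def absolute_error(v, v_approx):
    """Absolute error: |v - v_approx|."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Relative error: |v - v_approx| / |v|; undefined when v == 0."""
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    """Percent error: the relative error expressed as a percentage."""
    return 100 * relative_error(v, v_approx)

print(absolute_error(50, 49.9))  # ~0.1 (up to floating-point rounding)
print(relative_error(50, 49.9))  # ~0.002
print(percent_error(50, 49.9))   # ~0.2
```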

Examples

[Figure: best rational approximants for π (green circle), e (blue diamond), φ (pink oblong), √3/2 (grey hexagon), 1/√2 (red octagon) and 1/√3 (orange triangle), calculated from their continued fraction expansions, plotted as slopes y/x with errors from their true values (black dashes).]

As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002 = 0.2%. As a practical example, when measuring the contents of a 6 mL beaker, the value read was 5 mL; since the correct reading is 6 mL, the percent error in that situation is, rounded, 16.7%.

The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3; in the first case the relative error is 0.003 while in the second it is only 0.000003.

There are two features of relative error that should be kept in mind. First, relative error is undefined when the true value is zero, as it appears in the denominator (see below). Second, relative error only makes sense when measured on a ratio scale (i.e. a scale that has a true, meaningful zero); otherwise it is sensitive to the measurement units. For example, when an absolute error in a temperature measurement given on the Celsius scale is 1 °C and the true value is 2 °C, the relative error is 0.5. But if the exact same approximation is made on the Kelvin scale, a 1 K absolute error with the same true value of 275.15 K = 2 °C gives a relative error of 3.63×10⁻³.
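A short Python sketch (illustrative, not from the cited sources) of this unit sensitivity:

```python
def relative_error(v, v_approx):
    return abs(v - v_approx) / abs(v)

# The same temperatures, with the same 1-degree absolute error,
# expressed on two different scales:
print(relative_error(2.0, 3.0))        # Celsius: true 2 °C, measured 3 °C -> 0.5
print(relative_error(275.15, 276.15))  # Kelvin: same measurement -> ~3.63e-3
```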

Comparison

Statements about relative errors are sensitive to addition of constants, but not to multiplication by constants. For absolute errors, the opposite is true: they are sensitive to multiplication by constants, but not to addition of constants.[5]: 34 
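Both behaviors can be checked numerically; a brief Python sketch with illustrative values:

```python
v, v_approx = 50.0, 49.9

# Adding a constant c: the absolute error is unchanged,
# but the relative error changes.
c = 100.0
print(abs(v - v_approx), abs((v + c) - (v_approx + c)))  # ~0.1 and ~0.1
print(abs(v - v_approx) / abs(v),
      abs((v + c) - (v_approx + c)) / abs(v + c))        # 0.002 and ~0.00067

# Multiplying by a constant k: the relative error is unchanged,
# but the absolute error is scaled by k.
k = 10.0
print(abs(k * v - k * v_approx) / abs(k * v))  # still ~0.002
print(abs(k * v - k * v_approx))               # ~1.0
```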

Polynomial-time approximation of real numbers

We say that a real value v is polynomially computable with absolute error from an input if, for every rational number ε > 0, it is possible to compute a rational number v_approx that approximates v with absolute error ε, in time polynomial in the size of the input and the encoding size of ε (which is O(log(1/ε))). Analogously, v is polynomially computable with relative error if, for every rational number η > 0, it is possible to compute a rational number v_approx that approximates v with relative error η, in time polynomial in the size of the input and the encoding size of η.

If v is polynomially computable with relative error (by some algorithm called REL), then it is also polynomially computable with absolute error. Proof. Let ε > 0 be the desired absolute error. First, use REL with relative error η = 1/2 to find a rational number r1 such that |v − r1| ≤ |v|/2, and hence |v| ≤ 2|r1|. If r1 = 0, then v = 0 and we are done. Since REL is polynomial, the encoding length of r1 is polynomial in the input. Now, run REL again with relative error η = ε/(2|r1|). This yields a rational number r2 that satisfies |v − r2| ≤ ε|v|/(2|r1|) ≤ ε, so it has absolute error ε as wished.[5]: 34 
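A minimal Python sketch of this reduction, assuming a caller-supplied oracle rel(eta) that returns a rational approximation of v with relative error eta (the oracle name and interface are hypothetical):

```python
from fractions import Fraction

def abs_from_rel(rel, eps):
    """Approximate v with absolute error eps, given an oracle rel(eta)
    returning a rational r with |v - r| <= eta * |v|."""
    r1 = rel(Fraction(1, 2))         # step 1: |v - r1| <= |v|/2, so |v| <= 2|r1|
    if r1 == 0:
        return Fraction(0)           # then v = 0 exactly
    return rel(eps / (2 * abs(r1)))  # step 2: |v - r2| <= eps*|v|/(2|r1|) <= eps

# Toy usage: an "oracle" for v = 2/3 that happens to answer exactly.
v = Fraction(2, 3)
print(abs_from_rel(lambda eta: v, Fraction(1, 1000)))  # 2/3
```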

The reverse implication is usually not true. But, if we assume that some positive lower bound on |v| can be computed in polynomial time, e.g. |v| > b > 0, and v is polynomially computable with absolute error (by some algorithm called ABS), then it is also polynomially computable with relative error, since we can simply call ABS with absolute error ε = ηb.
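The corresponding reduction is a one-liner; again a sketch with a hypothetical oracle abs_oracle(eps) and a known lower bound b on |v|:

```python
def rel_from_abs(abs_oracle, eta, b):
    """Approximate v with relative error eta, given abs_oracle(eps)
    returning r with |v - r| <= eps, and a lower bound 0 < b <= |v|."""
    # |v - r| <= eta*b <= eta*|v|, so r has relative error at most eta.
    return abs_oracle(eta * b)
```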

An algorithm that, for every rational number η > 0, computes a rational number v_approx that approximates v with relative error η, in time polynomial in the size of the input and 1/η (rather than log(1/η)), is called an FPTAS.

Instruments

In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. The limits of these deviations from the specified values are known as limiting errors or guarantee errors.[6]

Generalizations

The definitions can be extended to the case when v and v_approx are n-dimensional vectors, by replacing the absolute value with an n-norm.[7]
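For instance, a sketch using NumPy with the Euclidean norm (any vector norm could be substituted):

```python
import numpy as np

def relative_error_vec(v, v_approx, ord=2):
    """Relative error of a vector approximation under a chosen norm."""
    return np.linalg.norm(v - v_approx, ord) / np.linalg.norm(v, ord)

v = np.array([1.0, 2.0, 3.0])
v_approx = np.array([1.0, 2.1, 2.9])
print(relative_error_vec(v, v_approx))  # ~0.0378
```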

References

  1. ^ Weisstein, Eric W. "Numerical Stability". mathworld.wolfram.com. Retrieved 2023-06-11.
  2. ^ Weisstein, Eric W. "Absolute Error". mathworld.wolfram.com. Retrieved 2023-06-11.
  3. ^ a b "Absolute and Relative Error | Calculus II". courses.lumenlearning.com. Retrieved 2023-06-11.
  4. ^ "Approximation and Error Bounds". math.wpi.edu. Retrieved 2023-06-11.
  5. ^ a b Grötschel, Martin; Lovász, László; Schrijver, Alexander (1993). Geometric Algorithms and Combinatorial Optimization. Algorithms and Combinatorics, vol. 2 (2nd ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-642-78240-4. ISBN 978-3-642-78242-8. MR 1261419.
  6. ^ Helfrick, Albert D. (2005). Modern Electronic Instrumentation and Measurement Techniques. p. 16. ISBN 81-297-0731-4.
  7. ^ Golub, Gene; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.