2Sum

From Wikipedia, the free encyclopedia

2Sum[1] is a floating-point algorithm for computing the exact round-off error in a floating-point addition operation.

2Sum and its variant Fast2Sum were first published by Ole Møller in 1965.[2] Fast2Sum is often used implicitly in other algorithms such as compensated summation algorithms;[1] Kahan's summation algorithm was published first in 1965,[3] and Fast2Sum was later factored out of it by Dekker in 1971 for double-double arithmetic algorithms.[4] The names 2Sum and Fast2Sum appear to have been applied retroactively by Shewchuk in 1997.[5]

Algorithm

Given two floating-point numbers a and b, 2Sum computes the floating-point sum s = a ⊕ b rounded to nearest and the floating-point error t = (a + b) − s so that s + t = a + b, where ⊕ and ⊖ respectively denote the addition and subtraction rounded to nearest. The error t is itself a floating-point number.

Inputs: floating-point numbers a, b
Outputs: rounded sum s = a ⊕ b and exact error t = (a + b) − s
  1. s ← a ⊕ b
  2. a′ ← s ⊖ b
  3. b′ ← s ⊖ a′
  4. δa ← a ⊖ a′
  5. δb ← b ⊖ b′
  6. t ← δa ⊕ δb
  7. return (s, t)
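The six operations above translate directly into any language with correctly rounded IEEE 754 arithmetic. The following sketch uses Python, whose floats are binary64 on essentially all platforms; it is a straightforward transcription of the algorithm, not code from the sources cited here.

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Return (s, t) with s = a (+) b rounded to nearest and s + t = a + b exactly."""
    s = a + b           # s = a (+) b
    a_virtual = s - b   # the part of s contributed by a
    b_virtual = s - a_virtual
    delta_a = a - a_virtual   # rounding error attributable to a
    delta_b = b - b_virtual   # rounding error attributable to b
    t = delta_a + delta_b
    return s, t

# 0.1 + 0.2 is inexact in binary64; 2Sum recovers the rounding error t.
s, t = two_sum(0.1, 0.2)
```

Converting s and t to exact rationals (e.g. with Python's `fractions.Fraction`) confirms that s + t equals the exact sum of the two input doubles, even though t is far below the precision of s.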

Provided the floating-point arithmetic is correctly rounded to nearest (with ties resolved any way), as is the default in IEEE 754, and provided the sum does not overflow and, if it underflows, underflows gradually, it can be proven that s + t = a + b.[1][6][2]

A variant of 2Sum called Fast2Sum uses only three floating-point operations, for floating-point arithmetic in radix 2 or radix 3, under the assumption that the exponent of a is at least as large as the exponent of b, such as when |a| ≥ |b|:[1][6][7][4]

Inputs: radix-2 or radix-3 floating-point numbers a and b, of which at least one is zero, or which respectively have normalized exponents e_a ≥ e_b
Outputs: rounded sum s = a ⊕ b and exact error t = (a + b) − s
  1. s ← a ⊕ b
  2. z ← s ⊖ a
  3. t ← b ⊖ z
  4. return (s, t)
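Fast2Sum's three operations can be sketched the same way; checking the exponent precondition is the caller's responsibility. As above, this is an illustrative transcription assuming binary64 arithmetic, not code from the cited sources.

```python
def fast_two_sum(a: float, b: float) -> tuple[float, float]:
    """Return (s, t) with s + t = a + b exactly, PROVIDED the exponent
    of a is at least that of b (e.g. abs(a) >= abs(b))."""
    s = a + b   # rounded sum
    z = s - a   # the part of b that made it into s (exact under the precondition)
    t = b - z   # the part of b that was lost: the rounding error
    return s, t

# 1.0 absorbs 1e-16 entirely: s is 1.0, and t recovers the lost 1e-16.
s, t = fast_two_sum(1.0, 1e-16)
```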

Even if the conditions are not satisfied, 2Sum and Fast2Sum often provide reasonable approximations to the error, i.e., t ≈ (a + b) − s, which enables algorithms for compensated summation, dot products, etc., to have low error even if the inputs are not sorted or the rounding mode is unusual.[1][2] More complicated variants of 2Sum and Fast2Sum also exist for rounding modes other than round-to-nearest.[1]
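As one illustration of the compensated-summation use mentioned above, a running sum can carry the 2Sum error terms in a separate accumulator and fold them back in at the end. This is a sketch of one such cascaded variant; the function names are ours, not taken from the cited sources.

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """2Sum: s = a (+) b rounded to nearest, with s + t = a + b exactly."""
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    return s, (a - a_virtual) + (b - b_virtual)

def compensated_sum(xs) -> float:
    """Sum xs while accumulating the exact per-step rounding errors in c."""
    s = 0.0
    c = 0.0   # accumulated error terms
    for x in xs:
        s, t = two_sum(s, x)
        c += t
    return s + c   # fold the accumulated error back in once at the end

# Ten copies of 0.1: naive summation gives 0.9999999999999999,
# while the compensated sum gives the correctly rounded 1.0.
```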

References

  1. ^ Muller, Jean-Michel; Brunie, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Torres, Serge (2018). Handbook of Floating-Point Arithmetic (2nd ed.). Cham, Switzerland: Birkhäuser. pp. 104–111. doi:10.1007/978-3-319-76526-6. ISBN 978-3-319-76525-9. Archived from the original on 2023-04-28. Retrieved 2020-09-20.
  2. ^ Møller, Ole (March 1965). "Quasi double-precision in floating point addition". BIT Numerical Mathematics. 5: 37–50. doi:10.1007/BF01975722. S2CID 119991676.
  3. ^ Kahan, W. (January 1965). "Further remarks on reducing truncation errors". Communications of the ACM. 8 (1). Association for Computing Machinery: 40. doi:10.1145/363707.363723. ISSN 0001-0782. S2CID 22584810.
  4. ^ Dekker, T. J. (June 1971). "A floating-point technique for extending the available precision". Numerische Mathematik. 18 (3): 224–242. doi:10.1007/BF01397083. S2CID 63218464. Archived from the original on 2020-07-19. Retrieved 2020-09-24.
  5. ^ Shewchuk, Jonathan Richard (October 1997). "Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates". Discrete & Computational Geometry. 18 (3): 305–363. doi:10.1007/PL00009321.
  6. ^ Knuth, Donald E. (1998). The Art of Computer Programming, Volume II: Seminumerical Algorithms (3rd ed.). Addison–Wesley. p. 236. ISBN 978-0-201-89684-8. Archived from the original on 2017-07-16. Retrieved 2020-09-20.
  7. ^ Sterbenz, Pat H. (1974). Floating-Point Computation. Englewood Cliffs, NJ, United States: Prentice-Hall. pp. 138–143. ISBN 0-13-322495-3.