Floating point error mitigation

Floating-point error arises because real numbers cannot, in general, be represented exactly in a fixed amount of space. By definition, floating-point error cannot be eliminated and, at best, can only be managed.

H. M. Sierra noted in his 1956 patent "Floating Decimal Point Arithmetic Control Means for Calculator":

"Thus under some conditions, the major portion of the significant data digits may lie beyond the capacity of the registers. Therefore, the result obtained may have little meaning if not totally erroneous."

The first computer with floating-point arithmetic, developed by Zuse in 1936 using relays, was thus susceptible to floating-point error. Early computers, however, with operation times measured in milliseconds, were incapable of solving large, complex problems[1] and thus were seldom plagued by floating-point error. Today, with supercomputer performance measured in petaflops (10^15 floating-point operations per second), floating-point error is a major concern for computational problem solvers. There are two types of floating-point error, cancellation and rounding: cancellation occurs when two nearly equal numbers are subtracted, and rounding occurs when significant bits cannot be saved and are rounded or truncated. Cancellation error can be far larger than rounding error, because cancellation exposes the rounding error already present in the operands.
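
As an illustration (not drawn from the sources above), the following Python snippet shows both effects: the rounding error in representing 0.1, and the loss of accuracy when two nearly equal numbers are subtracted.

    # Rounding error: 0.1 has no exact binary representation, so ten
    # additions of 0.1 drift away from the exact value 1.0.
    print(sum([0.1] * 10))      # 0.9999999999999999

    # Cancellation: subtracting two nearly equal numbers leaves only the
    # low-order digits, which already carry the rounding error.
    a = 1.0 + 1e-15
    b = 1.0
    print(a - b)                # 1.1102230246251565e-15, about 11% away from 1e-15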

The following sections describe the strengths and weaknesses of various means of mitigating floating point error.

Numerical error analysis

Though not the primary focus of numerical analysis,[2][3]:5 numerical error analysis is concerned with the analysis and minimization of floating-point rounding error. Numerical error analysis generally does not account for cancellation error.[3]:5

Monte Carlo arithmetic

Error analysis by Monte Carlo arithmetic is accomplished by repeatedly injecting small errors into an algorithm's data values and determining the relative effect on the results.
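
A minimal Python sketch of the idea follows; the perturbation size and the perturb and noisy_sum helpers are illustrative assumptions rather than part of any standard Monte Carlo arithmetic implementation.

    import random
    import statistics

    def perturb(x, rel_err=1e-15):
        # Inject a small random relative error into x (hypothetical helper).
        return x * (1.0 + random.uniform(-rel_err, rel_err))

    def noisy_sum(values, trials=1000):
        # Repeat the computation with independently perturbed inputs
        # and report the mean and spread of the results.
        results = [sum(perturb(v) for v in values) for _ in range(trials)]
        return statistics.mean(results), statistics.stdev(results)

    mean, spread = noisy_sum([0.1] * 1000)
    print(mean, spread)   # the spread estimates how sensitive the result is to rounding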

Extension of precision

Extension of precision is the use of larger representations of real values. Precision in this context is the number of digits available to represent a real number; commonly used formats, defined in the IEEE 754 standard (also published as ISO/IEC 60559), include single precision (32 bits), double precision (64 bits), and quadruple precision (128 bits). While extension of precision makes the effects of error less likely or less important, the true accuracy of the results is still unknown.
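
As a rough illustration, assuming NumPy is available, the following snippet accumulates the same series in single and in double precision; the wider format visibly reduces, but does not eliminate, the accumulated rounding error.

    import numpy as np

    # Sum 1/i for i = 1..10**6, accumulating in single and in double precision.
    n = 10**6
    single = np.float32(0.0)
    double = 0.0                       # a Python float is an IEEE 754 double
    for i in range(1, n + 1):
        single += np.float32(1.0 / i)
        double += 1.0 / i

    print(single)   # single-precision result; noticeably off in the later digits
    print(double)   # double-precision result; far closer to the true sum 14.392726...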

Variable length arithmetic

Variable length arithmetic represents numbers as a string of digits of variable length limited only by the memory available. Variable length arithmetic operations are considerably slower than fixed-length floating-point instructions. When high performance is not a requirement, but high precision is, variable length arithmetic can prove useful, though the actual accuracy of the result may not be known.
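
Python's standard decimal module, which lets the user select the working precision (limited in practice only by memory), can serve as a rough illustration; the choice of 50 digits below is arbitrary.

    from decimal import Decimal, getcontext

    getcontext().prec = 50            # work with 50 significant decimal digits
    one_third = Decimal(1) / Decimal(3)
    print(one_third)                  # 0.33333...3 (50 digits)
    print(one_third * 3)              # 0.99999...9, still not exactly 1:
                                      # higher precision narrows, but does not
                                      # eliminate, the representation error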

Interval arithmetic

Interval arithmetic is a technique for bounding rounding and measurement errors: each quantity is represented by two floating-point numbers giving the minimum and maximum limits within which the true real value must lie.

"Instead of using a single floating-point number as approximation for the value of a real variable in the mathematical model under investigation, interval arithmetic acknowledges limited precision by associating with the variable a set of reals as possible values. For ease of storage and computation, these sets are restricted to intervals."[4]

The evaluation of an interval arithmetic expression may provide a large range of values,[4] and may seriously overestimate the true error boundaries.[5]:8
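
A minimal Python sketch of the idea follows; the Interval class and its outward rounding via math.nextafter (Python 3.9+) are illustrative assumptions, not a production interval-arithmetic library.

    import math

    class Interval:
        # A closed interval [lo, hi] intended to contain the true value.
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            # Compute each endpoint, then widen it by one ulp outward so the
            # rounding of the endpoint computation cannot lose the true sum.
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    x = Interval(0.09, 0.11)   # a quantity known only to lie between 0.09 and 0.11
    y = Interval(0.19, 0.21)
    print(x + y)               # an interval guaranteed to contain the exact sum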

Gustafson's unums

Unums ("Universal Numbers") are an extension of variable length arithmetic proposed by John Gustafson.[6] Unums have variable-length fields for the exponent and significand, and error information is carried in a single bit, the ubit, representing possible error in the unit of least precision (ULP) of the significand.[6]:4

The efficacy of unums is questioned by William Kahan.[5]

References

  1. ^ "History of Computer Development & Generation of Computer". WikiEducator. September 2014. Retrieved 2018-02-17.
  2. ^ Trefethen, Lloyd N. (1992). "The Definition of Numerical Analysis" (PDF). SIAM. Retrieved 2018-02-16.
  3. ^ a b Higham, Nicholas John (2002). Accuracy and Stability of Numerical Algorithms (2 ed.). Society for Industrial and Applied Mathematics (SIAM). ISBN 0-89871-521-0.
  4. ^ a b Hickey, T.; Ju, Q.; van Emden, M.H. (September 2001). "Interval Arithmetic: from Principles to Implementation" (PDF). Journal of the ACM. 48 (5): 1038–1068. doi:10.1145/502102.502106. Retrieved 2018-02-16.
  5. ^ a b Kahan, William (July 2016). "A Critique of John L. Gustafson's THE END of ERROR — Unum Computation and his A Radical Approach to Computation with Real Numbers" (PDF). Retrieved 2018-02-17.
  6. ^ a b Gustafson, John Leroy (2016-02-04) [2015-02-05]. The End of Error: Unum Computing. Chapman & Hall / CRC Computational Science. 24 (2nd corrected printing, 1st ed.). CRC Press. ISBN 978-1-4822-3986-7. Retrieved 2016-05-30.