Machine epsilon
Machine epsilon gives an upper bound on the relative error due to rounding in floating point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps or unit roundoff, and it is denoted by the Greek letter ε (epsilon) or the bold Roman letter u, respectively.
Values for standard hardware floating point arithmetics
The following values of machine epsilon apply to standard floating point formats:
IEEE 754 - 2008 | Common name | C++ data type | Base | Precision | Machine epsilon[a] (formal definition) | Machine epsilon[b] (variant definition) |
---|---|---|---|---|---|---|
binary16 | half precision | N/A | 2 | 11 (one bit is implicit) | 2^−11 ≈ 4.88e-04 | 2^−10 ≈ 9.77e-04 |
binary32 | single precision | float | 2 | 24 (one bit is implicit) | 2^−24 ≈ 5.96e-08 | 2^−23 ≈ 1.19e-07 |
binary64 | double precision | double | 2 | 53 (one bit is implicit) | 2^−53 ≈ 1.11e-16 | 2^−52 ≈ 2.22e-16 |
N/A | extended precision, long double | __float80[1] | 2 | 64 | 2^−64 ≈ 5.42e-20 | 2^−63 ≈ 1.08e-19 |
binary128 | quad(ruple) precision | __float128[1] | 2 | 113 (one bit is implicit) | 2^−113 ≈ 9.63e-35 | 2^−112 ≈ 1.93e-34 |
decimal32 | single precision decimal | _Decimal32[2] | 10 | 7 | 5 × 10^−7 | 10^−6 |
decimal64 | double precision decimal | _Decimal64[2] | 10 | 16 | 5 × 10^−16 | 10^−15 |
decimal128 | quad(ruple) precision decimal | _Decimal128[2] | 10 | 34 | 5 × 10^−34 | 10^−33 |
Formal definition
Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.
Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix, also called the base, b, and by the precision p, i.e. the number of radix digits of the significand (including any leading implicit bit). All the numbers with the same exponent, e, have the spacing b^(e−(p−1)). The spacing changes at the numbers that are perfect powers of b; the spacing on the side of larger magnitude is b times larger than the spacing on the side of smaller magnitude.
Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent e = 0. It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or b^−(p−1)/2. This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form 1 + a, where a is between 0 and b^−(p−1)/2. All these numbers round to 1 with relative error a/(1 + a). The maximum occurs when a is at the upper end of its range. The a in the denominator is negligible compared to the 1, so it is left off for expediency, and just b^−(p−1)/2 is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to 1, so machine epsilon also is called unit roundoff, meaning roughly "the maximum error that can occur when rounding to the unit value".
Thus, the maximum spacing between a normalised floating point number, x, and an adjacent normalised number is 2ε × |x|.[3]
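For instance (an illustrative sketch, not part of the original text), for IEEE 754 binary64, where b = 2 and p = 53, both quantities can be evaluated directly in C; ldexp computes 1.0 × 2^n, so this applies to radix 2 only:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int p = 53;                               /* precision of IEEE 754 binary64 (radix b = 2) */
    double u   = ldexp(1.0, -(p - 1)) / 2.0;  /* b^−(p−1)/2: unit roundoff, the formal machine epsilon */
    double eps = ldexp(1.0, -(p - 1));        /* b^−(p−1): spacing of the doubles just above 1 */
    printf("u   = %e\n", u);                  /* 1.110223e-16 */
    printf("eps = %e\n", eps);                /* 2.220446e-16 */
    return 0;
}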
Arithmetic model
Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead, the following simple model is used. The IEEE arithmetic standard says all floating point operations are done as if it were possible to perform the infinite-precision operation, and then, the result is rounded to a floating point number. Suppose (1) x, y are floating point numbers, (2) ⊕ is an arithmetic operation on floating point numbers such as addition or multiplication, and (3) ∘ is the infinite precision operation. According to the standard, the computer calculates:

x ⊕ y = round(x ∘ y)

By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so:

x ⊕ y = (x ∘ y)(1 + z)

where z in absolute magnitude is at most ε or u. The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
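As an illustration (a sketch added here, not taken from the references), the following C program measures the relative error z for a single addition, using long double as a higher-precision stand-in for the infinite precision result; on platforms where long double is no wider than double, the measured error will simply print as zero:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    double x = 1.0 / 3.0, y = 2.0 / 7.0;
    double computed = x + y;                                  /* x ⊕ y: the rounded machine result */
    long double reference = (long double)x + (long double)y;  /* stand-in for the exact x ∘ y */
    long double z = fabsl(((long double)computed - reference) / reference);
    /* Round-to-nearest predicts |z| <= u = DBL_EPSILON / 2. */
    printf("z = %Le, u = %e\n", z, DBL_EPSILON / 2);
    return 0;
}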
Variant definitions
The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion.
The definition given here for machine epsilon is the one used by Prof. James Demmel in lecture scripts,[4] the LAPACK linear algebra package,[5] numerics research papers,[6] and some scientific computing software.[7] Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning.
The following different definition is much more widespread outside academia: machine epsilon is defined as the difference between 1 and the next larger floating point number. By this definition, ε equals the value of the unit in the last place relative to 1, i.e. b^−(p−1),[8] and the unit roundoff is u = ε/2, assuming round-to-nearest mode. The prevalence of this definition is rooted in its use in the ISO C Standard for constants relating to floating-point types[9][10] and corresponding constants in other programming languages.[11][12] It is also widely used in scientific computing software,[13][14][15] and in the numerics and computing literature.[16][17][18][19]
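For example (an illustrative sketch, not from the original text), this variant definition can be checked in C against the <float.h> constant using the standard nextafter function:

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    /* The gap between 1.0 and the next larger representable double. */
    double eps = nextafter(1.0, INFINITY) - 1.0;
    printf("eps = %e, equals DBL_EPSILON: %d\n", eps, eps == DBL_EPSILON);
    return 0;
}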
How to determine machine epsilon
Where standard libraries do not provide precomputed values (as <float.h> does with FLT_EPSILON, DBL_EPSILON and LDBL_EPSILON for C, and as <limits> does with std::numeric_limits<T>::epsilon() in C++), the best way to determine machine epsilon is to refer to the table above and use the appropriate power formula. Computing machine epsilon is often given as a textbook exercise. The following examples compute machine epsilon in the sense of the spacing of the floating point numbers at 1 rather than in the sense of the unit roundoff.
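When the precomputed constants are available, no computation is needed at all; a minimal C sketch:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Machine epsilon constants from <float.h>: the gap between 1.0
       and the next larger representable value of each type. */
    printf("FLT_EPSILON  = %e\n", (double)FLT_EPSILON);   /* 1.192093e-07 for IEEE binary32 */
    printf("DBL_EPSILON  = %e\n", DBL_EPSILON);           /* 2.220446e-16 for IEEE binary64 */
    printf("LDBL_EPSILON = %Le\n", LDBL_EPSILON);         /* format dependent */
    return 0;
}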
Note that results depend on the particular floating-point format used, such as float, double, long double, or similar as supported by the programming language, the compiler, and the runtime library for the actual platform.
Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.
In a strict sense the term machine epsilon means the 1 + ε accuracy directly supported by the processor (or coprocessor), not some 1 + ε accuracy supported by a specific compiler for a specific operating system, unless that combination is known to use the best available format.
IEEE 754 floating-point formats have the property that, when reinterpreted as a two's complement integer of the same width, they monotonically increase over positive values and monotonically decrease over negative values (see the binary representation of 32 bit floats). They also have the property that 0 < |f(x)| < ∞, and |f(x+1) − f(x)| ≥ |f(x) − f(x−1)| (where f(x) is the aforementioned integer reinterpretation of x). In languages that allow type punning and always use IEEE 754-1985, we can exploit this to compute a machine epsilon in constant time. For example, in C:
typedef union {
    long long i64;   /* bit pattern viewed as a two's complement integer */
    double d64;      /* the same 64 bits viewed as an IEEE 754 binary64 value */
} dbl_64;

/* Returns the distance from value to the adjacent representable double
   in the direction away from zero, i.e. one ULP of value. */
double machine_eps(double value)
{
    dbl_64 s;
    s.d64 = value;   /* load the double's bit pattern */
    s.i64++;         /* step the bit pattern to the adjacent representable double */
    return s.d64 - value;
}
This will give a result of the same sign as value. If a positive result is always desired, the return statement of machine_eps can be replaced with:
return (s.i64 < 0 ? value - s.d64 : s.d64 - value);
64-bit doubles give 2.220446e-16, which is 2^−52 as expected.
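Put together as a complete program (repeating the definitions above so the sketch is self-contained), it also shows how the result scales with the magnitude of the argument:

#include <stdio.h>

typedef union {
    long long i64;
    double d64;
} dbl_64;

double machine_eps(double value)
{
    dbl_64 s;
    s.d64 = value;
    s.i64++;
    return s.d64 - value;
}

int main(void)
{
    printf("%e\n", machine_eps(1.0));  /* 2.220446e-16: the spacing of doubles at 1 */
    printf("%e\n", machine_eps(4.0));  /* 8.881784e-16: four times larger, since spacing scales with magnitude */
    return 0;
}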
Approximation
The following simple algorithm can be used to approximate the machine epsilon, to within a factor of two (one binary order of magnitude) of its true value, using a linear search:
epsilon = 1.0;

while (1.0 + 0.5 * epsilon) ≠ 1.0:
    epsilon = 0.5 * epsilon
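In C, the same search might look as follows (a sketch; the volatile qualifier is a precaution against the test being carried out in extended precision on some platforms):

#include <stdio.h>

int main(void)
{
    double epsilon = 1.0;
    volatile double sum = 1.0 + 0.5 * epsilon;  /* volatile forces rounding to double width */
    while (sum != 1.0) {
        epsilon = 0.5 * epsilon;                /* halve until adding eps/2 no longer changes 1.0 */
        sum = 1.0 + 0.5 * epsilon;
    }
    printf("%e\n", epsilon);                    /* 2.220446e-16 for IEEE binary64 */
    return 0;
}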
See also
- Floating point, general discussion of accuracy issues in floating point arithmetic
- Unit in the last place (ULP)
Notes and references
- ^ a b Floating Types - Using the GNU Compiler Collection (GCC)
- ^ a b c Decimal Float - Using the GNU Compiler Collection (GCC)
- ^ Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 37.
- ^ "Basic Issues in Floating Point Arithmetic and Error Analysis". 21 Oct 1999. Retrieved 11 Apr 2013.
- ^ "LAPACK Users' Guide Third Edition". 22 August 1999. Retrieved 9 March 2012.
- ^ "David Goldberg: What Every Computer Scientist Should Know About Floating-Point Arithmetic, ACM Computing Surveys, Vol 23, No 1, March 1991" (PDF). Retrieved 11 Apr 2013.
- ^ "Scilab documentation - number_properties - determine floating-point parameters". Retrieved 11 Apr 2013.
- ^ Note that here p is defined as the precision, i.e. the total number of bits in the significand including the implicit leading bit, as used in the table above.
- ^ Jones, Derek M. (2009). The New C Standard - An Economic and Cultural Commentary (PDF). p. 377.
- ^ "float.h reference at cplusplus.com". Retrieved 11 Apr 2013.
- ^ "std::numeric_limits reference at cplusplus.com". Retrieved 11 Apr 2013.
- ^ "Python documentation - System-specific parameters and functions". Retrieved 11 Apr 2013.
- ^ "Mathematica documentation: $MachineEpsilon". Retrieved 11 Apr 2013.
- ^ "Matlab documentation - eps - Floating-point relative accuracy". Archived from the original on 2013-08-07. Retrieved 11 Apr 2013.
- ^ "Octave documentation - eps function". Retrieved 11 Apr 2013.
- ^ Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. pp. 27–28.
- ^ Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000). Numerical Mathematics (PDF). Springer. p. 49. ISBN 0-387-98959-5.
- ^ Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. Numerical Recipes. p. 890.
- ^ Engeln-Müllges, Gisela; Reutter, Fritz (1996). Numerik-Algorithmen. p. 6. ISBN 3-18-401539-4.
- Anderson, E.; LAPACK Users' Guide, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, third edition, 1999.
- Cody, William J.; MACHAR: A Subroutine to Dynamically Determine Machine Parameters, ACM Transactions on Mathematical Software, Vol. 14(4), 1988, 303-311.
- Besset, Didier H.; Object-Oriented Implementation of Numerical Methods, Morgan & Kaufmann, San Francisco, CA, 2000.
- Demmel, James W., Applied Numerical Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
- Higham, Nicholas J.; Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
- Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; and Flannery, Brian P.; Numerical Recipes in Fortran 77, 2nd ed., Chap. 20.2, pp. 881–886
- Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B.; "Computer Methods for Mathematical Computations", Prentice-Hall, ISBN 0-13-165332-6, 1977
External links
- MACHAR, a routine (in C and Fortran) to "dynamically compute machine constants" (ACM algorithm 722)
- Diagnosing floating point calculations precision, Implementation of MACHAR in Component Pascal and Oberon based on the Fortran 77 version of MACHAR published in Numerical Recipes (Press et al., 1992).