GMP floating point numbers are stored in objects of type mpf_t and functions operating on them have an mpf_ prefix.
The mantissa of each float has a user-selectable precision, limited only by available memory. Each variable has its own precision, and that can be increased or decreased at any time.
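For instance, a short sketch of setting and changing precision might look like the following (the precision values chosen here are arbitrary and purely illustrative; link with -lgmp):

/* Per-variable, user-selectable precision: a minimal sketch. */
#include <stdio.h>
#include <gmp.h>

int main (void)
{
  mpf_t x;

  mpf_set_default_prec (128);   /* default precision for newly initialized variables, in bits */
  mpf_init (x);                 /* x gets at least 128 bits of precision */
  mpf_set_ui (x, 1);
  mpf_div_ui (x, x, 3);         /* 1/3 computed to x's precision */
  gmp_printf ("1/3 ~= %.40Ff\n", x);

  mpf_set_prec (x, 512);        /* raise x's precision; the current value is retained */
  mpf_clear (x);
  return 0;
}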
The exponent of each float is a fixed precision, one machine word on most systems. In the current implementation the exponent is a count of limbs, so for example on a 32-bit system this means a range of roughly 2^-68719476768 to 2^68719476736, or on a 64-bit system this will be greater. Note however that mpf_get_str can only return an exponent which fits an mp_exp_t and currently mpf_set_str doesn't accept exponents bigger than a long.
Each variable keeps a size for the mantissa data actually in use. This means that if a float is exactly represented in only a few bits then only those bits will be used in a calculation, even if the selected precision is high.
All calculations are performed to the precision of the destination variable. Each function is defined to calculate with “infinite precision” followed by a truncation to the destination precision, but of course the work done is only what’s needed to determine a result under that definition.
The precision selected by the user for a variable is a minimum value; GMP may increase it to facilitate efficient calculation. Currently this means rounding up to a whole limb, and then sometimes having a further partial limb, depending on the high limb of the mantissa.
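A small sketch of these two points, assuming nothing beyond standard GMP (the exact precision reported and the digits printed depend on the limb size of the machine):

/* Calculations are done to the destination's precision, and a requested
   precision may be rounded up.  Figures shown are platform dependent. */
#include <stdio.h>
#include <gmp.h>

int main (void)
{
  mpf_t lo, hi;

  mpf_init2 (lo, 64);            /* destination with about 64 bits of precision */
  mpf_init2 (hi, 1000);          /* source with much higher precision */

  printf ("requested 64, got %lu bits\n",
          (unsigned long) mpf_get_prec (lo));   /* may be >= 64 */

  mpf_sqrt_ui (hi, 2);           /* sqrt(2) to hi's precision */
  mpf_set (lo, hi);              /* assignment truncates to lo's precision */
  gmp_printf ("%.60Ff\n", lo);   /* only about lo's precision worth of digits is meaningful */

  mpf_clear (lo);
  mpf_clear (hi);
  return 0;
}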
The mantissa is stored in binary. One consequence of this is that decimal fractions like 0.1 cannot be represented exactly. The same is true of plain IEEE double floats. This makes both highly unsuitable for calculations involving money or other values that should be exact decimal fractions. (Suitably scaled integers, or perhaps rationals, are better choices.)
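To illustrate, the following sketch compares an mpf holding the nearest binary value to 0.1 with an mpq holding exactly 1/10 (the digits printed will vary with the chosen precision):

/* 0.1 is not exact in binary mpf (or double); a rational mpq is exact. */
#include <stdio.h>
#include <gmp.h>

int main (void)
{
  mpf_t f;
  mpq_t q;

  mpf_init2 (f, 64);
  mpf_set_str (f, "0.1", 10);     /* nearest binary value to 0.1 */
  gmp_printf ("mpf 0.1 ~= %.30Ff\n", f);   /* shows the approximation */

  mpq_init (q);
  mpq_set_str (q, "1/10", 10);    /* exactly one tenth */
  mpq_canonicalize (q);
  gmp_printf ("mpq value = %Qd\n", q);

  mpf_clear (f);
  mpq_clear (q);
  return 0;
}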
The mpf functions and variables have no special notion of infinity or not-a-number, and applications must take care not to overflow the exponent or results will be unpredictable. This might change in a future release.
Note that the mpf functions are not intended as a smooth extension to IEEE P754 arithmetic. In particular results obtained on one computer often differ from the results on a computer with a different word size.
The GMP extension library MPFR (http://mpfr.org) is an alternative to GMP's mpf functions. MPFR provides well-defined precision and accurate rounding, and thereby naturally extends IEEE P754.
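As a brief sketch of the difference in style, assuming MPFR is installed (link with -lmpfr -lgmp), every MPFR operation takes an explicit rounding mode:

/* MPFR: exact precision and explicit rounding, unlike mpf. */
#include <stdio.h>
#include <mpfr.h>

int main (void)
{
  mpfr_t x;

  mpfr_init2 (x, 100);                       /* exactly 100 bits of precision */
  mpfr_set_str (x, "0.1", 10, MPFR_RNDN);    /* round to nearest */
  mpfr_printf ("0.1 rounded to 100 bits: %.30Rf\n", x);

  mpfr_clear (x);
  return 0;
}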
• Initializing Floats
• Assigning Floats
• Simultaneous Float Init & Assign
• Converting Floats
• Float Arithmetic
• Float Comparison
• I/O of Floats
• Miscellaneous Float Functions