Authored by: Anonymous on Thursday, March 28 2013 @ 10:05 PM EDT |
I was doing rounding on floating point numbers in the early sixties.
Using a slide rule seems (on a quick skim) to be pretty similar.
Chris B
Authored by: jrl on Monday, April 01 2013 @ 05:43 PM EDT |
The IEEE floating point formats can rarely represent a value
exactly. The same goes for decimal: 1.0/3.0 = 0.333333333...,
but I have to stop somewhere before running out of paper.
A PowerPC, for instance, uses a 64-bit floating point format.
A "mantissa" (the significant digits of the number, also called
the significand) and an "exponent" together represent the number.
Unless the desired number has an exact representation (rare),
the only way to get it into a register is to round it. The
"Floating Point Status and Control Register" includes control
bits that select the rounding method: round to nearest, round
towards zero, round up, and round down are the alternatives.
When coding the above example of 1.0/3.0, I might write
double non_exact_number = 1.0/3.0 - the compiler
rounds this to the available number of bits, constructs
the desired bit pattern, and emits code to load that
bit pattern into a register. I can then multiply this
by another floating point register, and the result is
that a pre-rounded number had another floating point
operation performed on it. This style of floating point
operation dates back before the standard (IEEE 754,
published in 1985) - the general form of the idea has been
available in hardware since 1967.