You had better make that "convert to integer" instead be "round to integer."
And "converted to binary" may itself be problematic: some
systems
(Intel x87, IBM POWER) have different binary floating point formats
"on
the processor" vs "in memory" -- for the x87, on-processor arithemtic
is
80-bit and in-memory is either 32-bit or 64-bit The instruction for
"store to
memory" is really "round 80-bit floating point to (less
precise) memory format
and store the result". For IBM POWER version
4 or earlier, on-processor is
64-bit while in-memory is either 32-bit
or 64-bit. For DEC VAX, there were two
distinct 64-bit formats,
one with a large exponent-range and about 17
significant digits
of fraction, the other with a smaller exponent-range but
about
20 significant digits of fraction.)
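You can see that 80-bit/64-bit gap directly from C, assuming gcc or clang on x86, where long double happens to map to the x87 extended format (an assumption about the toolchain, not a guarantee of the language):

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* DBL_DIG / LDBL_DIG say how many decimal digits each type can
           hold reliably; on x86 this typically prints 15 and 18,
           reflecting the 53-bit vs 64-bit significands */
        printf("double      (64-bit in-memory format): %d digits\n", DBL_DIG);
        printf("long double (80-bit x87 format):       %d digits\n", LDBL_DIG);
        return 0;
    }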
Consequently, especially with optimized code, you can get surprises like the following:
- X = 5.0/3.0, which is computed and then stored to memory as 32-bit
- ...
- Y = 5.0/3.0, which never gets stored to memory
- IF ( X equals Y ) THEN..., which fetches the 32-bit value of X from memory, compares it with the 80-bit on-processor value of Y, and finds the two unequal
- ...code that never uses Y again...
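To make that concrete, here is a minimal C sketch of the same trap (my illustration, not anything from real weather-model code; I store the first copy as 64-bit rather than 32-bit, which rounds less but shows the same effect). Whether the "unequal" branch actually fires depends entirely on compiler, flags, and FPU: classic x87 code generation (e.g. gcc -m32 -mfpmath=387 -O2) typically reproduces it, while SSE2 on x86-64 typically does not, because there everything is computed at the declared 64-bit width.

    #include <stdio.h>

    int main(void) {
        /* volatile keeps the compiler from folding 5.0/3.0 at compile
           time, and forces x to be stored to (and reloaded from) memory */
        volatile double a = 5.0, b = 3.0;
        volatile double x = a / b;  /* rounded to 64 bits on the store */
        double y = a / b;           /* may stay in an 80-bit x87 register */

        if (x == y)
            printf("equal -- what naive users expect\n");
        else
            printf("unequal -- x was rounded on the store, y was not\n");
        return 0;
    }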
Furthermore, if you compile with full debug flags, results are always stored to memory, and you will get the result that naive users expect... but much slower, which makes a difference if you write stuff like I do (e.g., weather models) that runs for hours at a time on supercomputers ;-(
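(For what it's worth, with gcc you don't even need a debug build to get the stores: -ffloat-store keeps floating-point variables out of registers, forcing them through memory even in optimized builds, with much the same slowdown and the same naive-user-friendly results.)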