Authored by: mvs_tomm on Tuesday, May 14 2013 @ 03:33 PM EDT
> No, it isn't wrong. It all depends on who/what is interpreting the value.

No, it depends upon the way that the data are intended to be used.
> If you are comparing two floating point numbers in a sequence like "a = 2.5; b = 4.5; if (a == b)"... guess what the final code uses... Integer data movement. Integer comparison.
Now you are talking about the way that the compiler might optimize the code. So what? Yes, the compiler can do an integer move or a character move of a literal containing a floating-point value to initialize a floating-point variable. Likewise, a comparison for equality could be performed using an integer compare or a character compare. That doesn't mean that the data are integers or characters. It simply reflects the fact that the computer doesn't care (in many cases) what the data look like.
> If you are examining a data dump (archaic, I know), guess what you see - either hex or octal representations of a binary number.
I look at dumps all the time. You are correct that what I see is hexadecimal. It is not necessarily "representations of a binary number." Certainly, it is a string of bits, and it could represent a binary number, but it can also represent something quite different. The meaning of the data depends upon the way that the programs that process those data are programmed.
> If you generate a disassembly of the executable, guess what you see... Only hex or octal values of the number.
No, you see some of the instructions in the executable. Not (usually) the data that it operates upon.
> The numbers in a program have no meaning at all to the computer.
I didn't say that they do. I said that they have meaning to the program. And to the programmer.
> Only the reader.
If you mean to a human who is looking at a dump, I disagree.
Tom Marchant