Authored by: Anonymous on Wednesday, July 10 2013 @ 05:03 AM EDT
It's normal to distinguish between +0.0 and -0.0 in floating-point arithmetic.
I've never knowingly come across computer hardware that distinguishes between +0
and -0 when representing integers, though the C standard is written to allow such
representations, so presumably such hardware existed at some time.
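For the floating-point side of this, a quick Python sketch (variable names are mine, purely for illustration): IEEE 754 keeps +0.0 and -0.0 as distinct bit patterns even though they compare equal.

```python
import math
import struct

pos = 0.0
neg = -0.0

# The two zeros compare equal under IEEE 754 rules...
print(pos == neg)                      # True

# ...but the sign is still there, observable via copysign
# or by looking at the raw bit pattern of the double.
print(math.copysign(1.0, pos))         # 1.0
print(math.copysign(1.0, neg))         # -1.0
print(struct.pack('>d', neg).hex())    # 8000000000000000 (sign bit set)
```

Note that `neg == pos` being true is exactly why code that tests `x == 0.0` never notices the sign.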
Authored by: bprice on Wednesday, July 10 2013 @ 05:08 AM EDT
on some computers you can actually test for and tell the difference
between +0 and -0 (ones' complement arithmetic.)
Not just ones' complement: there's also sign-magnitude, used by many early
computers (and at least one still on the market, last time I looked), as well as
by most people. When you write -1, you're using sign-magnitude notation.
---
--Bill. NAL: question the answers, especially mine.
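The two representations mentioned above can be sketched in a few lines of Python (a 16-bit width and the helper names are my own assumptions, just to make the bit patterns concrete): in ones' complement, negation is a bitwise NOT, while in sign-magnitude it only flips the sign bit. Both therefore produce a distinct bit pattern for -0.

```python
WIDTH = 16
MASK = (1 << WIDTH) - 1

def ones_complement_neg(x):
    # Ones' complement: -x is the bitwise NOT of x.
    return ~x & MASK

def sign_magnitude_neg(x):
    # Sign-magnitude: -x differs from x only in the sign bit.
    return x ^ (1 << (WIDTH - 1))

# Negating zero gives a nonzero bit pattern in both systems:
print(format(ones_complement_neg(0), '016b'))   # 1111111111111111
print(format(sign_magnitude_neg(0), '016b'))    # 1000000000000000
```

So on such hardware, a naive "is every bit clear?" test for zero misses -0, which is why these machines had to treat the two patterns as equal in comparisons.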
Authored by: Anonymous on Wednesday, July 10 2013 @ 05:24 AM EDT
In maths, 0 is NEVER signed.
Authored by: Anonymous on Thursday, July 11 2013 @ 12:15 AM EDT
On one system I've used, -0 is used to represent "infinity" [when
printed]; I can't remember now how many bytes it used for fixed-point
arithmetic, but as an example, if it had been 2 bytes (16 bits), -0 would
represent 32767 + 1, which is the same [bit pattern] as -32767 - 1.
cm
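The arithmetic in that example can be checked directly (a small Python sketch; the 16-bit width is the commenter's hypothetical, and the variable names are mine): 32767 + 1 wraps to the bit pattern 0x8000, which reads as -32767 - 1 in two's complement and as -0 in sign-magnitude.

```python
WIDTH = 16
MASK = (1 << WIDTH) - 1

# 32767 + 1 overflows a 16-bit word to the pattern 0x8000:
pattern = (32767 + 1) & MASK
print(hex(pattern))          # 0x8000

# Read as two's complement, that pattern is -32768, i.e. -32767 - 1:
value = pattern - (1 << WIDTH) if pattern & 0x8000 else pattern
print(value)                 # -32768

# Read as sign-magnitude, the same pattern is -0 (sign bit set,
# magnitude zero), which that system repurposed to mean "infinity".
```

So the two descriptions in the comment really are the same 16 bits, just decoded under different conventions.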
Authored by: Anonymous on Thursday, July 11 2013 @ 12:19 PM EDT
If the hardware (as in most modern floating-point hardware) supports the IEEE
floating-point standards, then signed zeros must be available. However, as the
Wikipedia article points out, you probably will not see one without special
effort.
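Here is what that "special effort" can look like in practice (a Python sketch; the specific expressions are my own choices): a negative zero shows up quietly from operations like dividing by negative infinity, and while it compares equal to zero, functions such as atan2 do react to its sign.

```python
import math

# Dividing by -infinity quietly produces a negative zero:
z = 1.0 / -math.inf
print(z == 0.0)                  # True: it still compares equal to zero
print(math.copysign(1.0, z))     # -1.0: but the sign bit is set

# atan2 is one of the few places the sign of zero changes an answer:
print(math.atan2(0.0, 0.0))      # 0.0
print(math.atan2(0.0, -0.0))     # 3.141592653589793 (pi)
```

Unless you go looking with tools like copysign or atan2, a -0.0 passes through ordinary comparisons and arithmetic unnoticed, which is the point the comment is making.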