I wonder whether the bitwise complement (NOT) of a floating point number's binary representation is the same number, but with its sign changed.
Thanks in advance for any references to articles or documentation.
- Do you mean bitwise NOT? "Bitwise" encompasses many operations. Which one in particular are you referring to? – Connor Wolf (Feb 10, 2011 at 11:01)
- I'm referring to changing the ones to zeros and the zeros to ones. – Peterstone (Feb 10, 2011 at 18:36)
- The binary inversion of a number (what you describe) is called the "logical NOT" operation. – Connor Wolf (Feb 11, 2011 at 3:29)
- I believe the operation in question is also referred to as the "ones' complement" :) – vicatcu (Feb 11, 2011 at 4:34)
5 Answers
IEEE 754 floating point numbers are represented as a sign, a mantissa and an exponent.
It is possible to work with floats at a bit level, but you need to know what you're doing.
Here are some documents which explain further.
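As a rough illustration of what working at the bit level involves, here is a minimal C sketch. It assumes an IEEE 754 binary32 float (1 sign bit, 8 exponent bits, 23 mantissa bits) and uses memcpy to reinterpret the bits; the value 0.15625f is just a test input:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float f = 0.15625f;                 /* any test value */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);     /* reinterpret the float's raw bits */

        unsigned sign     = (bits >> 31) & 0x1u;    /*  1 bit                        */
        unsigned exponent = (bits >> 23) & 0xFFu;   /*  8 bits, bias 127             */
        unsigned mantissa = bits & 0x7FFFFFu;       /* 23 bits, hidden 1 not stored  */

        printf("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
        return 0;
    }

For 0.15625 this should print sign=0, exponent=124 (2^-3 after removing the bias of 127) and mantissa=0x200000 (a significand of 1.25).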
- Unlike two's complement integers, IEEE 754 floating point has a positive and a negative(!) zero. – joeforker (Feb 11, 2011 at 2:38)
- It also has NaN, Infinity, and 'subnormals' (the little deviants)... oh my! This number format is designed to make life simpler for numerical methods accelerated by hardware. IEEE 754 even supports decimals (base 10): en.wikipedia.org/wiki/Decimal32_floating-point_format – Eryk Sun (Feb 12, 2011 at 2:23)
I'm not sure what you mean by "the bitwise." There are many bitwise operations. Logical choices are the One's complement/Bitwise NOT (Fake Name's guess) or Two's complement (changes sign for signed integers). Less logical choices are OR, AND, XOR, and their complements NOR, NAND, XNOR. None of these will produce the desired result for a floating point number.
As you can see in Joby's nice diagram, the first bit is the sign bit, then the (biased) exponent occupies 8 bits, then the mantissa. What you want to do is XOR the sign bit:
FloatingPointNumber ^ 0x80000000 == -(FloatingPointNumber)
That constant assumes you're using 32-bit floats. You'll need to add another 8 zeros at the end of the constant for a number of type double.
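Since C does not allow XOR directly on a float, the sign-bit flip has to go through an integer view of the same bits. Here is a minimal sketch of that idea, assuming an IEEE 754 binary32 float and using memcpy for the type punning (the helper name flip_sign is just illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Negate a binary32 float by toggling only bit 31, the sign bit. */
    static float flip_sign(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* view the float as raw bits */
        bits ^= 0x80000000u;              /* XOR the sign bit           */
        memcpy(&f, &bits, sizeof f);      /* back to a float            */
        return f;
    }

    int main(void)
    {
        printf("%f %f\n", flip_sign(0.15625f), flip_sign(-27.999998f));
        return 0;
    }

This should print -0.156250 and 27.999998; for a double the same idea applies with uint64_t and the constant 0x8000000000000000.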
No, it's not.
You can try it here: http://babbage.cs.qc.edu/IEEE-754/32bit.html
You can for example enter the number from the binary32 wikipedia page:
00111110 00100000 00000000 00000000 binary = 3E200000 hex
and you will see, that it's really 0.15625 decimal.
When you now enter the bitwise NOT (ones' complement) of that binary representation:
11000001 11011111 11111111 11111111 binary = C1DFFFFF hex
you will see that it is -27.999998092651367.
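The same check can be reproduced offline with a few lines of C; this sketch assumes an IEEE 754 binary32 float and simply inverts the bit pattern from the example above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint32_t bits = 0x3E200000u;        /* 0.15625, from the binary32 example */
        uint32_t inverted = ~bits;          /* bitwise NOT -> 0xC1DFFFFF          */

        float f;
        memcpy(&f, &inverted, sizeof f);    /* reinterpret the inverted bits      */
        printf("~0x%08X = 0x%08X = %.15f\n",
               (unsigned)bits, (unsigned)inverted, f);
        return 0;
    }

which prints -27.999998092651367 rather than the hoped-for -0.15625.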
- See Joby Taffey's answer for an explanation why. – AndreKR (Feb 11, 2011 at 1:57)
- You only invert the MSB, hence in your example 0x3E20 0000 becomes 0xBE20 0000, which is in fact the correct answer. – BullBoyShoes (Feb 11, 2011 at 10:40)
- You'll need an XOR operation for that. – AndreKR (Feb 11, 2011 at 12:35)
No, the result is garbage. Floating-point numbers have a complex internal structure, so you cannot do that.
You can do some manipulations on the mantissa, but not on the whole number.
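For what it's worth, here is one example of a mantissa-level manipulation that does make sense, sketched under the assumption of an IEEE 754 binary32 float: masking the 23 mantissa bits to zero rounds a positive, normal value down to the nearest power of two.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float f = 6.7f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        bits &= 0xFF800000u;            /* keep sign + exponent, clear the mantissa */
        memcpy(&f, &bits, sizeof f);
        printf("%f\n", f);              /* 6.7 rounds down to 4.000000 */
        return 0;
    }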
- I disagree with this but was not able to downvote. – BullBoyShoes (Feb 10, 2011 at 21:04)
- @Eddie Hitler To downvote you need a reputation of 125+: electronics.stackexchange.com/faq – Toby Jaffey (Feb 10, 2011 at 23:41)
- I explained how the internal structure is affected and why this is roughly -4.0/f, and trust me, I did run some numerical simulations. Anything else I can do to convince you, dear @BarsMonster? – Stenzel (Aug 16, 2018 at 21:34)
The result of inverting all bits of a float f can be seen as a crude approximation of -4.0/f (a quick numerical check is sketched after the examples below). This works because all three elements of the float behave in a predictable way:
- the sign bit is flipped
- the exponent is negated, but off by one: the new unbiased exponent is one minus the old one, which contributes an extra factor of two
- the new mantissa becomes roughly 3 - mantissa (mind the hidden bit, which is always one!)
Example:
hex: 0x3F800000
sign: +
exponent: 127
mantissa: 1.00000000000000000000000 (binary) = 1.0 decimal
value: + 2**(127-127) * 1.0 = 1.0
With each bit flipped:
hex: 0xC07FFFFF
sign: -
exponent: 128
mantissa: 1.11111111111111111111111 (binary) = 1.99999 decimal
value: - 2**(128-127) * 1.99999 = -3.999999
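To double-check the approximation numerically, here is a small C sketch (assuming an IEEE 754 binary32 float; the helper name bitwise_not is just illustrative) that flips every bit of a few sample values and compares the result with -4.0/f:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Return the float whose bit pattern is the bitwise NOT of f's. */
    static float bitwise_not(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        bits = ~bits;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void)
    {
        float samples[] = { 1.0f, 0.15625f, 3.14159f, 100.0f, -2.5f };
        size_t n = sizeof samples / sizeof samples[0];

        for (size_t i = 0; i < n; i++) {
            float f = samples[i];
            printf("f = %12.5f   ~f = %14.5f   -4/f = %14.5f\n",
                   f, bitwise_not(f), -4.0f / f);
        }
        return 0;
    }

The two right-hand columns should agree to within roughly 12 percent (the worst case is a significand near 1.5), which is what "crude approximation" means here.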