I'm seeing a strange (to me) difference in behavior between Clang and GCC when comparing an integer with its negation. Also, pre-v12 GCC behaves like Clang.
Code is below, but also here's a live link on godbolt.org.
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
    int32_t x = 0x80000000U;
    cout << " x: " << x << endl;
    cout << "-x: " << -x << endl;
    if (x == -x) {
        cout << "equal" << endl;
    } else {
        cout << "NOT equal" << endl;
    }
}
For GCC versions before 12, and for modern Clang, this prints "equal".
For GCC 12 and later, it prints "NOT equal".
My question is: why?
It does not seem to be a bug, since all GCC versions from 12 on have this behavior. But it is clearly a change from historical behavior.
Is there a reason behind this change? What is actually going on in this code, in the two different cases?
My only guess is that somehow GCC is promoting x to a larger type, since 0x80000000 does not fit in int32_t, and using that for the comparison.
But by what logic?
Note: even adding an int32_t(-x) cast inside the comparison does not change the behavior.
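For reference, this is roughly the variant I tried (a minimal sketch; the helper name is just for illustration):

#include <cstdint>
using std::int32_t;

// Hypothetical variation of the comparison, with the cast mentioned above;
// GCC 12+ still prints "NOT equal" with this form.
bool equalWithCast(int32_t x) {
    return x == int32_t(-x);
}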
2 Answers
This self-answer is just to spell things out, for future reference.
First, there is an assignment of a too-large unsigned value to a signed type:
int32_t x = 0x80000000U;
Before C++20 (see @shananton's comment), that is implementation-defined behavior. Both Clang and GCC (all versions) perform the expected two's-complement wraparound here, but before C++20 that is not guaranteed by the standard.
So, x gets a value of -2147483648 (which is INT_MIN). You see this in the printout.
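As a side note, that wraparound can be checked at compile time. This is a minimal sketch of my own; it relies on C++20 semantics (or on the wrapping that GCC and Clang actually perform for earlier standards):

#include <cstdint>

// Converting the out-of-range unsigned value to int32_t wraps modulo 2^32,
// landing exactly on INT32_MIN (-2147483648).
static_assert(static_cast<std::int32_t>(0x80000000U) == INT32_MIN,
              "conversion wraps to INT_MIN");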
Then, the comparison of x == -x is made.
But -x is undefined behavior when x is INT_MIN, since the result (+2147483648) does not fit in the signed type.
With undefined behavior, the compiler is free to assume it can never happen. So, in this case, GCC optimizes away the path where they might be equal, since that undefined behavior is "not allowed" to happen.
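To illustrate what that license allows (a hypothetical sketch of the reasoning, not GCC's literal internal transformation):

#include <cstdint>

// If -x is assumed never to overflow, then x cannot be INT32_MIN, and for
// every remaining value x == -x holds only when x == 0. GCC 12+ is allowed
// to fold the comparison accordingly, so the known INT_MIN value compares
// unequal and "NOT equal" is printed.
bool same_as_negation(std::int32_t x) {
    return x == -x;   // may be optimized as if it were: return x == 0;
}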
This behavior can be altered with -fno-strict-overflow (which disables such optimizations), or with the even stronger -fwrapv, which makes signed integer overflow fully defined (it wraps).
In your code, -x is undefined behavior. Some compilers will give an error or warning if you change x to be constexpr:
#include <iostream>
#include <cstdint>

int main() {
    constexpr std::int32_t x = 0x80000000u;
    std::cout << -x << std::endl;
    return 0;
}
My compiler gives a warning; it looks like your godbolt example produces an error when constexpr is added.
1 Comment
With constexpr std::int32_t y = -x;, afaik there must then be an error: godbolt.org/z/rEP1rxM98
…int32_t with a value of 0x80000000u is undefined behavior.
…0x80000000u initially undefined behavior? (It seems to reliably result in -2147483648.) Or is it just the negation of that INT_MIN value, later on?