Timeline for "Replacing a 32-bit loop counter with 64-bit introduces crazy performance deviations with _mm_popcnt_u64 on Intel CPUs"
Current License: CC BY-SA 3.0
11 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Nov 13, 2018 at 18:03 | comment added | wizzwizz4 | | @Keno That's because `sizeof(uint_fast32_t)` has to be defined. If you allow it not to be, you can do that trickery, but that can only be accomplished with a compiler extension. |
| Oct 18, 2018 at 13:15 | comment added | Keno | | "What's more, gcc believes the 64-bit integer [...] to be better, as using uint_fast32_t causes gcc to use a 64-bit uint." Unfortunately, and to my regret, there is no magic and no deep code introspection behind these types. I have yet to see them provided in any way other than as a single typedef for every possible place and every program on the whole platform. Quite some thought has likely gone into the exact choice of types, but the single definition of each cannot possibly fit every application that will ever exist. Further reading: stackoverflow.com/q/4116297 |
| Aug 28, 2014 at 10:18 – 10:20 | audit: First posts | | | |
| Aug 26, 2014 at 6:07 | audit: First posts | | | |
| Aug 25, 2014 at 11:20 | audit: First posts | | | |
| Aug 18, 2014 at 8:49 – 8:50 | audit: First posts | | | |
| Aug 17, 2014 at 15:50 | audit: First posts | | | |
| Aug 13, 2014 at 18:02 – 18:17 | audit: First posts | | | |
| Aug 11, 2014 at 8:29 – 8:49 | audit: First posts | | | |
| Aug 8, 2014 at 2:09 – 2:23 | audit: First posts | | | |
| Aug 1, 2014 at 22:55 | answered | user3185968 | CC BY-SA 3.0 | |
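The Keno/wizzwizz4 exchange above turns on the fact that `uint_fast32_t` is nothing more than a per-platform typedef, so `sizeof(uint_fast32_t)` is fixed by the toolchain rather than chosen per use site. A minimal sketch to check this locally, assuming a typical x86-64 Linux/glibc toolchain (where `uint_fast32_t` is commonly an 8-byte type, which is why gcc ends up emitting a 64-bit counter for it):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // uint_fast32_t is a typedef picked once per platform; on x86-64 glibc it
    // is typically 8 bytes, matching the 64-bit counter gcc generates for it.
    std::printf("sizeof(uint32_t)      = %zu\n", sizeof(std::uint32_t));
    std::printf("sizeof(uint_fast32_t) = %zu\n", sizeof(std::uint_fast32_t));
    std::printf("sizeof(uint64_t)      = %zu\n", sizeof(std::uint64_t));
}
```

On other ABIs (for example 32-bit x86) the printed size of `uint_fast32_t` may differ, which is exactly the point of the comments: the "fast" types carry no deep introspection, only a single platform-wide choice.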