Message265887
| Field | Value |
| --- | --- |
| Author | vstinner |
| Recipients | jstasiak, larry, rhettinger, serhiy.storchaka, vstinner, yselivanov |
| Date | 2016-05-19 19:51:46 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1463687506.5.0.808553170409.issue26814@psf.upfronthosting.co.za> |
| In-reply-to | |

Content:
> Result of the benchmark suite:
>
> slower (3):
>
> * raytrace: 1.06x slower
> * etree_parse: 1.03x slower
> * normal_startup: 1.02x slower
Hum, I recompiled the patched Python, again with PGO+LTO, and reran the same benchmark with the same command. In short, I replayed exactly the same scenario. And... only raytrace remains slower; etree_parse and normal_startup moved to the "not significant" list.
The difference between the two results doesn't come from the benchmark itself. For example, I ran the normal_startup benchmark again 3 times and got the same result each time:
### normal_startup ###
Avg: 0.295168 +/- 0.000991 -> 0.294926 +/- 0.00048: 1.00x faster
Not significant
### normal_startup ###
Avg: 0.294871 +/- 0.000606 -> 0.294883 +/- 0.00072: 1.00x slower
Not significant
### normal_startup ###
Avg: 0.295096 +/- 0.000706 -> 0.294967 +/- 0.00068: 1.00x faster
Not significant
IMHO the difference comes from the profile data collected by PGO.
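
For what it's worth, here is a minimal sketch (not the actual code of the benchmark suite) of the kind of check behind the "Avg: ... +/- ..." and "Not significant" lines above: time a workload on two interpreters several times, then treat the difference as noise when the gap between the means is smaller than the combined standard deviations. The interpreter paths and the exact significance rule are placeholders, not what the benchmark runner really uses.

```python
# Sketch only: repeat a timing measurement on two interpreters and report
# "Avg +/- stddev" plus a crude significance verdict. The paths below are
# hypothetical; the real benchmark suite uses its own statistics.
import statistics
import subprocess
import time

PYTHON_BASE = "./python-base"        # hypothetical reference build
PYTHON_PATCHED = "./python-patched"  # hypothetical patched build


def time_startup(python, runs=20):
    """Measure startup time: how long `python -c pass` takes, in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([python, "-c", "pass"], check=True)
        samples.append(time.perf_counter() - start)
    return samples


def compare(base, patched):
    """Print an 'Avg ... -> ...' line and a rough significance verdict."""
    avg_b, dev_b = statistics.mean(base), statistics.stdev(base)
    avg_p, dev_p = statistics.mean(patched), statistics.stdev(patched)
    print(f"Avg: {avg_b:.6f} +/- {dev_b:.6f} -> {avg_p:.6f} +/- {dev_p:.6f}")
    # Rule of thumb (an assumption, not the suite's real test): if the means
    # differ by less than the combined noise, call the result not significant.
    if abs(avg_b - avg_p) < dev_b + dev_p:
        print("Not significant")
    else:
        ratio = avg_b / avg_p
        print(f"{ratio:.2f}x {'faster' if ratio > 1 else 'slower'}")


if __name__ == "__main__":
    compare(time_startup(PYTHON_BASE), time_startup(PYTHON_PATCHED))
```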