Message379399
| Author | methane |
| Recipients | larry, lemburg, mark.dickinson, methane, pablogsal, pitrou, rhettinger, scoder, serhiy.storchaka, vstinner, yselivanov |
| Date | 2020-10-23 03:21:19 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1603423279.43.0.132255733574.issue24165@roundup.psfhosted.org> |
| In-reply-to | |

Content
I had suspected that pyperformance just doesn't have enough non-small-int workload.
For example, spectral_norm is integer-heavy plus some float workload, but bm_spectral_norm uses `DEFAULT_N = 130`, so most integers fit into the small int cache.
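For reference, CPython preallocates singleton objects for every int in [-5, 256] and reuses them instead of allocating. A minimal sketch that probes this (it deliberately relies on this CPython implementation detail, so it is not something to use in real code):

```python
def is_cached(n: int) -> bool:
    # Recompute the same value; only cached small ints come back as the
    # identical object, while larger ints get a fresh allocation each time.
    return (n + 1 - 1) is n

print(is_cached(130))   # True on CPython: inside the [-5, 256] cache
print(is_cached(5500))  # False: a new int object is created
```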
On the other hand, spectral_norm in the benchmarks game uses N=5500.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/program/spectralnorm-python3-8.html
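The integer-heavy part of that program is the eval_A index computation; a simplified sketch of it (paraphrasing the benchmarks game source, not quoting it exactly) shows why the larger N matters:

```python
def eval_A(i: int, j: int) -> float:
    # With N = 130 the indices i and j all sit inside the [-5, 256] cache;
    # with N = 5500 both the indices and the intermediate products below are
    # far outside it, so every value is a freshly allocated int object.
    ij = i + j
    return 1.0 / (ij * (ij + 1) // 2 + i + 1)
```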
So I ran the benchmark on my machine:
master:
real 1m24.647s
user 5m37.515s
patched:
real 1m19.033s
user 5m14.682s
master + small int cache increased from [-5, 256] to [-9, 1024]:
real 1m23.742s
user 5m33.569s
314.682 / 337.515 = 0.932, so there is only about a 7% speedup even with N=5500.
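Spelled out from the user times above (just restating the numbers already reported, nothing new measured):

```python
# user times from the runs above, converted to seconds
master       = 5 * 60 + 37.515   # 337.515 s
patched      = 5 * 60 + 14.682   # 314.682 s
bigger_cache = 5 * 60 + 33.569   # 333.569 s, small int range [-9, 1024]

print(f"patched:      {1 - patched / master:.1%}")        # ~6.8% faster
print(f"bigger cache: {1 - bigger_cache / master:.1%}")   # ~1.2% faster
```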
In the end, I think this is of doubtful value. Let's drop the idea until the situation changes.
History

| Date | User | Action | Args |
|---|---|---|---|
| 2020-10-23 03:21:19 | methane | set | recipients: + methane, lemburg, rhettinger, mark.dickinson, pitrou, scoder, vstinner, larry, serhiy.storchaka, yselivanov, pablogsal |
| 2020-10-23 03:21:19 | methane | set | messageid: <1603423279.43.0.132255733574.issue24165@roundup.psfhosted.org> |
| 2020-10-23 03:21:19 | methane | link | issue24165 messages |
| 2020-10-23 03:21:19 | methane | create | |