Message 261503
Author: A. Skrobov
Recipients: A. Skrobov, christian.heimes, eryksun, paul.moore, rhettinger, serhiy.storchaka, steve.dower, tim.golden, vstinner, zach.ware
Date: 2016-03-10 14:39:56
Message-id: <1457620797.17.0.186061788704.issue26415@psf.upfronthosting.co.za>
Content:
I've now tried it with "perf.py -r -m", and the memory savings are as follows:
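For reference, the invocation was along these lines (per the usage of the old hg.python.org/benchmarks suite; the interpreter paths here are placeholders, not the ones I actually used):

    python perf.py -r -m /path/to/baseline/python /path/to/patched/python

where -r runs the rigorous (longer) variant of each benchmark and -m adds peak-memory tracking.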
### 2to3 ###
Mem max: 45976.000 -> 47440.000: 1.0318x larger
### chameleon_v2 ###
Mem max: 436968.000 -> 401088.000: 1.0895x smaller
### django_v3 ###
Mem max: 23808.000 -> 22584.000: 1.0542x smaller
### fastpickle ###
Mem max: 10768.000 -> 9248.000: 1.1644x smaller
### fastunpickle ###
Mem max: 10988.000 -> 9328.000: 1.1780x smaller
### json_dump_v2 ###
Mem max: 10892.000 -> 10612.000: 1.0264x smaller
### json_load ###
Mem max: 11012.000 -> 9908.000: 1.1114x smaller
### nbody ###
Mem max: 8696.000 -> 7944.000: 1.0947x smaller
### regex_v8 ###
Mem max: 12504.000 -> 9432.000: 1.3257x smaller
### tornado_http ###
Mem max: 27636.000 -> 27608.000: 1.0010x smaller
So, on these benchmarks, the saving is not threefold, of course; but it is still substantial: up to about 25% less peak memory (the 1.3257x reduction on regex_v8).
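To make the conversion explicit: perf.py reports the ratio old_peak/new_peak, so "1.3257x smaller" works out to 1 - 1/1.3257, i.e. roughly 25% less peak memory. A quick sketch of that arithmetic, using the regex_v8 figures from the list above:

    # perf.py's "Nx smaller" is the ratio old_peak / new_peak;
    # the saving relative to the old peak is therefore 1 - 1/ratio.
    old_peak, new_peak = 12504.0, 9432.0   # regex_v8 figures above
    ratio = old_peak / new_peak            # 1.3257 ("1.3257x smaller")
    saving = 1 - new_peak / old_peak       # 0.2457 -> ~25% saved
    print(f"{ratio:.4f}x smaller = {saving:.1%} saved")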
The run-time difference on these benchmarks ranges from "1.04x slower" to "1.06x faster", for reasons beyond my understanding (variability in background load, possibly?)