Message156564
| Author | kristjan.jonsson |
| Recipients | kristjan.jonsson, mark.dickinson, michael.foord, serhiy.storchaka |
| Date | 2012-03-22.14:43:43 |
| SpamBayes Score | 3.3972707e-09 |
| Marked as misclassified | No |
| Message-id | <1332427424.57.0.971526276394.issue14381@psf.upfronthosting.co.za> |
| In-reply-to | |

Content:
Yes, there is a measurable performance decrease in pybench arithmetic tests.
Integral values don't fall out of float arithmetic that often, true. But integral floats are incredibly common in tabular data. In a game such as EVE Online, the configuration data contains a lot of 0.0, 1.0, -1.0 and so on. This patch saved us many megabytes on the server. I can check again...
>>> import sys
>>> [sys.getrefcount(float(i)) for i in range(-10, 11)]
[777, 4, 38, 9, 215, 691, 627, 185, 98, 603, 73180, 62111, 8326, 6225, 6357, 11737, 2906, 1393, 3142, 1145, 5601]
>>> sum([sys.getrefcount(float(i)) for i in range(-10, 11)])
185340
>>>
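For context, sys.getrefcount is only a meaningful measurement here because, with the interning patch applied, float(i) for these small integral values returns one shared cached object, so the count reflects how many references the whole process holds to it. A minimal sanity check (on stock CPython every call builds a fresh float, so the identity test prints False):

>>> a = float(0)
>>> b = float(0)
>>> a is b        # True only on an interpreter that interns small floats
True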
Those first numbers are from an idle server. A server with lots of stuff going on shows this instead:
>>> [sys.getrefcount(float(i)) for i in range(-10, 11)]
[16715, 184, 1477, 34, 1505, 27102, 3878, 1344, 6248, 974, 595889, 313062, 124072, 120054, 65585, 138667, 13265, 2499, 15677, 3175, 24821]
>>> sum([sys.getrefcount(float(i)) for i in range(-10, 11)])
1465155
Roughly 40% of the interned-float references are to 0.0.
On a 64-bit machine, with each float object taking 24 bytes, this comes to roughly 35 MB net.
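Spelling out that arithmetic:

>>> 1465155 * 24    # total refcounts above times 24 bytes per float object
35163720
>>> _ / 1e6         # i.e. roughly 35 MB
35.16372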
An alternative could be to add a function for manual interning of floating-point data, which one could call e.g. when reading tabular data.
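A rough sketch of what such a helper could look like, as pure Python (the names intern_float and intern_row and the cached range are made up here for illustration; a real version would live in C inside the interpreter):

_FLOAT_CACHE = {float(i): float(i) for i in range(-10, 11)}

def intern_float(x):
    """Return the shared cached object for a common integral float,
    or x unchanged if it is not one of the cached values."""
    # Caveat: a plain dict lookup treats -0.0 and 0.0 as equal, so this
    # naive version would lose the sign of negative zero.
    return _FLOAT_CACHE.get(x, x)

def intern_row(row):
    """Intern every float in one row of tabular data as it is read."""
    return [intern_float(v) if isinstance(v, float) else v for v in row]

Run over the configuration tables as they load, all the duplicated 0.0, 1.0, -1.0 and friends collapse onto 21 shared objects, which is where the megabytes come from.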