This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.
| Author | pitrou |
|---|---|
| Recipients | christian.heimes, gregory.p.smith, pitrou |
| Date | 2008-03-23 19:05:32 |
| SpamBayes Score | 0.062273283 |
| Marked as misclassified | No |
| Message-id | <1206299134.66.0.0612444192968.issue2013@psf.upfronthosting.co.za> |
| In-reply-to | |
| Content | |

The problem with choosing a sensible freelist size is that we don't have any reference workloads. However, I just tested with 10000 and it doesn't seem to slow anything down anyway. It doesn't make our microbenchmarks any faster either.

I thought the patch to compact freelists at each full GC collection had been committed, but it doesn't seem to be there. Perhaps it will change matters quite a bit. On the one hand, it will allow for bigger freelists with fewer worries about degrading the memory footprint (but still, potential cache pollution). On the other hand, the bigger the freelists, the more expensive they are to deallocate.
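The tradeoff described above can be sketched with a toy freelist: capping its size bounds the worst-case footprint, while a compaction pass (as the mentioned patch would run at each full GC collection) empties the cache at a cost proportional to its length. This is an illustrative Python model, not CPython's actual C implementation; the class name, method names, and the cap of 10000 are invented for the example.

```python
# Toy model of a bounded freelist, NOT CPython's C implementation.
# Names and the default cap of 10000 are illustrative only.

class FreeList:
    def __init__(self, maxsize=10000):
        self.maxsize = maxsize      # cap bounds worst-case memory retention
        self._free = []             # objects kept around for reuse

    def acquire(self):
        # Reuse a cached object when possible, else allocate a fresh one.
        if self._free:
            return self._free.pop()
        return object()

    def release(self, obj):
        # Keep the object for reuse unless the freelist is already full;
        # otherwise drop it so it is deallocated immediately.
        if len(self._free) < self.maxsize:
            self._free.append(obj)

    def compact(self):
        # What a "compact freelists at each full GC" pass would do:
        # release everything cached. Cheap for small lists, increasingly
        # expensive as the freelist grows.
        n = len(self._free)
        self._free.clear()
        return n                    # number of entries released


fl = FreeList(maxsize=3)
objs = [fl.acquire() for _ in range(5)]
for o in objs:
    fl.release(o)                   # only 3 of the 5 are retained
assert len(fl._free) == 3
assert fl.compact() == 3            # compaction empties the cache
```

The bigger `maxsize` is, the more allocations are avoided, but the more work `compact()` does per full collection, which is the cost/benefit balance the message is weighing.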
| History ||||
|---|---|---|---|
| Date | User | Action | Args |
| 2008-03-23 19:05:35 | pitrou | set | spambayes_score: 0.0622733 -> 0.062273283; recipients: + pitrou, gregory.p.smith, christian.heimes |
| 2008-03-23 19:05:34 | pitrou | set | spambayes_score: 0.0622733 -> 0.0622733; messageid: <1206299134.66.0.0612444192968.issue2013@psf.upfronthosting.co.za> |
| 2008-03-23 19:05:33 | pitrou | link | issue2013 messages |
| 2008-03-23 19:05:32 | pitrou | create | |