Message78048

Author: loewis
Recipients: gregory.p.smith, loewis, pitrou, rhettinger, stutzbach
Date: 2008-12-18 23:17:18
SpamBayes Score: 6.278502e-08
Marked as misclassified: No
Message-id: <494AD9FD.6000205@v.loewis.de>
In-reply-to: <1229592517.9355.7.camel@localhost>

Content:
> But what counts is where tuples can be created in massive numbers or
> sizes: the eval loop, the tuple type's tp_new, and a couple of other
> places. We don't need to optimize every single tuple created by the
> interpreter or extension modules (and even then, one can simply call
> _PyTuple_Optimize()).
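The untracking being discussed here did eventually land in CPython: tuples that hold only untracked (e.g. immutable, atomic) objects are removed from the cyclic collector's lists during a collection pass. Assuming a CPython 3.x interpreter that includes this optimization, the effect is observable from Python via `gc.is_tracked`:

```python
import gc

# Build the tuple dynamically so it is a fresh object, not a compile-time
# constant. Freshly created tuples always start out gc-tracked.
t = tuple([1, "a", 3.0])     # contains only immutable, untracked objects
assert gc.is_tracked(t)      # tracked at creation

# A full collection pass untracks tuples whose members are all untracked,
# so later passes no longer need to traverse them.
gc.collect()
assert not gc.is_tracked(t)  # untracked by the collector
```

Note that the check is deferred to collection time rather than done at creation, which is exactly the trade-off debated below.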
Still, I think this patch does too much code duplication. There should
be only a single function that does the optional untracking; this then
gets called from multiple places.
> Also, this approach is more expensive
I'm skeptical. It could well be *less* expensive, namely if many tuples
get deleted before gc even happens. Then you currently check whether you
can untrack them, which is pointless if the tuple gets deallocated
quickly, anyway.
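The argument here is that most tuples die young: reference counting reclaims them before the cyclic collector ever runs, so an untracking check performed only at gc time is never paid for them, whereas a check at creation time is paid for every tuple. A small CPython-specific sketch of the first half of that claim:

```python
import gc

gc.disable()                 # ensure no collection runs during the demo
t = tuple([object()])        # freshly created tuples start out gc-tracked
assert gc.is_tracked(t)

# Dropping the last reference deallocates the tuple immediately via
# reference counting. The cyclic collector never sees it, so a lazy
# untracking check (done only during a gc pass) would never run for it.
del t
gc.enable()
```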