Message145407

| Field | Value |
|---|---|
| Author | loewis |
| Recipients | Christophe Simonis, Garen, Nam.Nguyen, amaury.forgeotdarc, arekm, asvetlov, barry, doko, eric.araujo, georg.brandl, jcea, jeremybanks, lars.gustaebel, leonov, loewis, nadeem.vawda, nicdumz, nikratio, ockham-razor, pitrou, proyvind, rcoyner, shirish, strombrg, thedjatclubrock, tshepang, vstinner, ysj.ray |
| Date | 2011-10-12 16:16:50 |
| SpamBayes Score | 1.1719457e-08 |
| Marked as misclassified | No |
| Message-id | <4E95BD71.6040609@v.loewis.de> |
| In-reply-to | <1318332919.3277.3.camel@localhost.localdomain> |

Content:

>> Correct. I copied the algorithm from _io.FileIO, under the assumption
>> that there was a reason for not using a simpler O(n log n) doubling
>> strategy. Do you know of any reason for this? Or is it safe to ignore it?
>
> I don't know, but I'd say it's safe to ignore it.

To elaborate: ISTM that it's actually a bug in FileIO. I can imagine
where it's coming from (i.e., somebody feeling that the overhead shouldn't
grow unbounded), but I think that's ill-advised: *if* somebody really
produces multi-gigabyte data (and some people eventually will), they
still deserve good performance, and they will be able to afford the
memory overhead (otherwise they couldn't store the actual output, either).

> Generally we use a less-than-doubling strategy, to conserve memory (see
> e.g. bytearray.c).

Most definitely. In case it isn't clear (but it probably is here):
any constant factor > 1.0 will provide amortized linear complexity.
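
For illustration, here is a minimal sketch in Python of that point. The function names, the 1.25 growth factor, and the chunk sizes are all made up for the example; none of this is the actual code in _io/fileio.c or bytearray.c. It just counts how many bytes a realloc-and-copy allocator would move under a geometric (factor > 1.0) strategy versus a fixed-increment one.

```python
# Hypothetical sketch, not the real _io.FileIO or bytearray.c code:
# it only illustrates geometric vs. additive over-allocation.

def grow_geometric(capacity, needed, factor=1.25):
    """Less-than-doubling growth: O(log n) reallocations,
    and the bytes copied across all reallocations stay O(n)."""
    while capacity < needed:
        capacity = int(capacity * factor) + 1
    return capacity


def grow_additive(capacity, needed, increment=8192):
    """Fixed-increment growth: O(n) reallocations and
    O(n**2) bytes copied in total for n appended bytes."""
    while capacity < needed:
        capacity += increment
    return capacity


def total_bytes_copied(grow, n_bytes, chunk=4096):
    """Simulate appending n_bytes in chunks and count how many
    bytes a realloc-and-copy allocator would have to move."""
    capacity, size, copied = 1, 0, 0
    while size < n_bytes:
        size += chunk
        if size > capacity:
            copied += size - chunk   # existing contents move on realloc
            capacity = grow(capacity, size)
    return copied


if __name__ == "__main__":
    n = 64 * 1024 * 1024  # 64 MiB of appended data
    print("geometric growth, bytes copied:", total_bytes_copied(grow_geometric, n))
    print("additive growth, bytes copied: ", total_bytes_copied(grow_additive, n))
```

With any constant factor > 1.0 the total copied stays within a small constant multiple of the final size, which is the amortized-linear behaviour described above; with a fixed increment the total grows quadratically, which is what makes capping the growth ill-advised for multi-gigabyte data.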

History

| Date | User | Action | Args |
|---|---|---|---|
| 2011-10-12 16:16:51 | loewis | set | recipients: + loewis, barry, georg.brandl, doko, jcea, amaury.forgeotdarc, arekm, lars.gustaebel, pitrou, vstinner, nadeem.vawda, nicdumz, eric.araujo, Christophe Simonis, rcoyner, proyvind, asvetlov, nikratio, leonov, Garen, ysj.ray, thedjatclubrock, ockham-razor, strombrg, shirish, tshepang, jeremybanks, Nam.Nguyen |
| 2011-10-12 16:16:51 | loewis | link | issue6715 messages |
| 2011-10-12 16:16:50 | loewis | create | |