Message 127115
Author: pitrou
Recipients: MizardX, antlong, eric.araujo, nadeem.vawda, niemeyer, pitrou, rhettinger, wrobell, xuanji
Date: 2011-01-26 13:33:06
Message-id: <1296048784.3684.32.camel@localhost.localdomain>
In-reply-to: <1296000439.45.0.320087924509.issue5863@psf.upfronthosting.co.za>
Content:

> * The read*() methods are implemented very inefficiently. Since they
> have to deal with the bytes objects returned by
> BZ2Decompressor.decompress(), a large read results in lots of
> allocations that weren't necessary in the C implementation.
It probably depends on the buffer size. Trying to fix this /might/ be
premature optimization.
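
To make the trade-off concrete, here is a minimal sketch of the kind of read loop being discussed: decompressed chunks are collected in a list and joined once, so the per-chunk bytes objects returned by BZ2Decompressor.decompress() are the main allocation cost and the compressed chunk size controls how many of them there are. The helper name, the chunk size and the surrounding API are assumptions for illustration only, not part of the actual patch.

    import bz2

    def read_decompressed(fp, size, chunk_size=8192):
        # Hypothetical helper, for illustration only: read compressed data
        # from fp and return up to `size` decompressed bytes.
        decomp = bz2.BZ2Decompressor()
        parts = []
        total = 0
        while total < size:
            chunk = fp.read(chunk_size)
            if not chunk:
                break
            data = decomp.decompress(chunk)
            parts.append(data)
            total += len(data)
        # Join once instead of concatenating bytes repeatedly; a real
        # implementation would keep the excess bytes around for the next
        # read instead of discarding them with this slice.
        return b"".join(parts)[:size]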
Also, as with GzipFile, one goal should be for BZ2File to be wrappable in
an io.BufferedReader, which has its own very fast buffering layer (and
also a fast readline() if you implement peek() in BZ2File).
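
For context, a rough sketch of what such wrapping can look like, assuming a hypothetical _BZ2RawReader helper that exposes the raw-stream interface BufferedReader expects (readable()/readinto()) and an example.bz2 input file; the real BZ2File API may differ. With this split, small reads and readline() are served from BufferedReader's internal buffer rather than hitting the decompressor each time.

    import bz2
    import io

    class _BZ2RawReader(io.RawIOBase):
        # Hypothetical raw reader for illustration: decompresses data
        # pulled from an underlying binary file object.
        def __init__(self, fileobj, chunk_size=8192):
            self._fp = fileobj
            self._decomp = bz2.BZ2Decompressor()
            self._leftover = b""
            self._chunk_size = chunk_size

        def readable(self):
            return True

        def readinto(self, b):
            # Refill the leftover buffer until we have data or hit EOF.
            while not self._leftover:
                chunk = self._fp.read(self._chunk_size)
                if not chunk:
                    return 0
                self._leftover = self._decomp.decompress(chunk)
            n = min(len(b), len(self._leftover))
            b[:n] = self._leftover[:n]
            self._leftover = self._leftover[n:]
            return n

    # io.BufferedReader supplies the buffering layer, peek() and a fast
    # readline() on top of the raw reader.
    with open("example.bz2", "rb") as f:
        buffered = io.BufferedReader(_BZ2RawReader(f))
        for line in buffered:
            pass  # process decompressed lines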
> * Fixed a typo in test_bz2's testReadChunk10() that caused the test to
> pass regardless of whether the data read was correct
> (self.assertEqual(text, text) -> self.assertEqual(text, self.TEXT)).
> This one might be worth committing now, since it isn't dependent on
> the rewrite.
Ah, thank you. Will take a look.