Message 233747
Author: martin.panter
Recipients: alanmcintyre, benjamin.peterson, martin.panter, nadeem.vawda, pitrou, serhiy.storchaka, stutzbach
Date: 2015-01-09 12:22:55
Message-id: <1420806176.11.0.957684109278.issue19051@psf.upfronthosting.co.za>
Content:
For what it’s worth, it would be better if compressed streams did limit the amount of data they decompressed, so that they are not susceptible to decompression bombs; see Issue 15955. But having a flexible-sized buffer could be useful in other cases.
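As a rough sketch of the Issue 15955 idea (the function name and the limit below are made up for illustration), zlib’s decompressobj() already lets a caller cap how much output a decompression step may produce:

import zlib

def safe_decompress(data, max_output=10 * 1024 * 1024):
    # Cap the decompressed output so a small compressed payload cannot
    # expand without bound (a "decompression bomb").
    d = zlib.decompressobj()
    # Ask for at most max_output + 1 bytes; getting that extra byte back
    # means the real output would have exceeded the limit.
    out = d.decompress(data, max_output + 1)
    if len(out) > max_output:
        raise ValueError("decompressed data exceeds %d bytes" % max_output)
    return out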
I haven’t looked closely at the code, but I wonder if there is much difference from the existing BufferedReader. Perhaps the only difference is that the underlying raw stream in this case can deliver data in arbitrary-sized chunks, whereas BufferedReader expects its raw stream to deliver data in limited-sized chunks?
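To make that chunk-size point concrete, here is a hypothetical adapter (not code from this issue; the class name and chunk size are invented) that holds back the surplus a decompressor produces, so that BufferedReader’s assumption that readinto() fills at most len(b) bytes still holds:

import io
import zlib

class DecompressingRawIO(io.RawIOBase):
    # A decompressor can return far more data than was asked for, but
    # BufferedReader expects readinto() to fill at most len(b) bytes,
    # so the surplus has to be held back in self._pending.
    def __init__(self, fileobj):
        self._fileobj = fileobj
        self._decomp = zlib.decompressobj()
        self._pending = b""

    def readable(self):
        return True

    def readinto(self, b):
        while not self._pending:
            chunk = self._fileobj.read(8192)
            if not chunk:
                return 0  # EOF (flushing the decompressor is omitted here)
            # This call may produce arbitrarily more than 8192 bytes.
            self._pending = self._decomp.decompress(chunk)
        n = min(len(b), len(self._pending))
        b[:n] = self._pending[:n]
        self._pending = self._pending[n:]
        return n

# Usage (hypothetical file name):
# reader = io.BufferedReader(DecompressingRawIO(open("data.z", "rb")))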
If you exposed the buffer it could be useful to do many things more efficiently (a couple of these are sketched after the list):
* readline() with custom newline or end-of-record codes, solving Issue 1152248, Issue 17083
* scan the buffer using string operations or regular expressions etc, e.g. to skip whitespace, read a run of unescaped symbols
* tentatively read data to see if a keyword is present, but roll back if the data doesn’t match the keyword
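For illustration only (the helper names are invented, and peek() is merely a stand-in for a properly exposed buffer, since the amount it returns is not guaranteed), the regex scan and the tentative keyword read could look roughly like this:

import io
import re

def skip_whitespace(reader):
    # Scan the buffered bytes with a regular expression and consume
    # only the matched whitespace prefix.
    while True:
        buffered = reader.peek(1)
        if not buffered:
            return
        n = re.match(rb"[ \t\r\n]*", buffered).end()
        if n:
            reader.read(n)           # consume just the whitespace
        if n < len(buffered):        # reached a non-whitespace byte
            return

def starts_with_keyword(reader, keyword):
    # Tentatively look for *keyword*; nothing is consumed on a mismatch.
    buffered = reader.peek(len(keyword))   # may return fewer bytes than asked
    if buffered.startswith(keyword):
        reader.read(len(keyword))          # commit: consume the keyword
        return True
    return False                           # roll back: buffer untouched

reader = io.BufferedReader(io.BytesIO(b"   BEGIN data"))
skip_whitespace(reader)
print(starts_with_keyword(reader, b"BEGIN"))   # True
print(reader.read())                           # b' data'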