This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.
| Author | brett.cannon |
|---|---|
| Recipients | brett.cannon, christian.heimes, gvanrossum |
| Date | 2007-10-20 02:17:02 |
| SpamBayes Score | 0.018028105 |
| Marked as misclassified | No |
| Message-id | <1192846624.43.0.787308902814.issue1267@psf.upfronthosting.co.za> |
| In-reply-to | |
| Content | |

OK, for some reason, when PyTokenizer_FindEncoding() is called and the resulting file is big enough, it starts off with a seek position (according to TextIOWrapper.tell()) of 4096. That happens to be exactly half the size of the buffer used by io. But I am not sure what the magical size is, as creating files of 4095, 4096, and 4097 bytes does not trigger this bug.
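The read-ahead the report hints at can be observed from pure Python. Below is a minimal sketch (not the original reproduction; the exact offsets depend on the interpreter's buffer and chunk sizes): after reading only the first line through a TextIOWrapper, the raw file descriptor's offset has already advanced by a whole buffered read, which is why a position probe can report thousands of bytes more than was actually consumed.

```python
import io
import os
import tempfile

# Write a file with a short first line plus enough filler to
# exceed one io buffer (io.DEFAULT_BUFFER_SIZE is 8192 bytes).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"# coding: utf-8\n" + b"x = 1\n" * 4096)

raw = io.FileIO(path, "rb")
text = io.TextIOWrapper(io.BufferedReader(raw), encoding="utf-8")

first = text.readline()   # consumes only 16 characters at the text level...
raw_pos = raw.tell()      # ...but the raw offset has jumped ahead by a
                          # whole buffered read, far past the line itself

print(len(first), raw_pos)

text.close()
os.remove(path)
```

The text layer tracks its own logical position, so `TextIOWrapper.tell()` is supposed to compensate for this read-ahead; the report above is about a case where the two apparently fell out of sync.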
**History**

| Date | User | Action | Args |
|---|---|---|---|
| 2007-10-20 02:17:04 | brett.cannon | set | spambayes_score: 0.0180281 -> 0.018028105; recipients: + brett.cannon, gvanrossum, christian.heimes |
| 2007-10-20 02:17:04 | brett.cannon | set | spambayes_score: 0.0180281 -> 0.0180281; messageid: <1192846624.43.0.787308902814.issue1267@psf.upfronthosting.co.za> |
| 2007-10-20 02:17:04 | brett.cannon | link | issue1267 messages |
| 2007-10-20 02:17:03 | brett.cannon | create | |