Message80791
| Author | vstinner |
| Recipients | amaury.forgeotdarc, benjamin.peterson, brett.cannon, sjmachin, vstinner |
| Date | 2009-01-29.23:13:34 |
| SpamBayes Score | 9.401152e-11 |
| Marked as misclassified | No |
| Message-id | <1233270818.59.0.883381370647.issue4626@psf.upfronthosting.co.za> |
| In-reply-to | |

Content:

> I don't like the change of API to PyTokenizer_FromString.
> I would prefer another function like PyTokenizer_IgnoreCodingCookie()

Ok, I created a new function, PyTokenizer_FromUnicode(). I chose "FromUnicode" because the string contains unicode text encoded as UTF-8, even though its C type is not wchar_t*.
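
To make the distinction concrete, here is a minimal, hypothetical sketch of the two entry points; the exact names and signatures come from the attached patch, not from this message:

    /* Hypothetical sketch only -- the real declarations live in
     * Parser/tokenizer.h and in the attached patch.
     * FromString:  raw bytes; the tokenizer looks for a "# coding:" cookie.
     * FromUnicode: already-decoded text re-encoded as UTF-8 (still char *,
     *              not wchar_t *), so no coding cookie is consulted. */
    struct tok_state;  /* opaque tokenizer state */

    struct tok_state *PyTokenizer_FromString(const char *str);
    struct tok_state *PyTokenizer_FromUnicode(const char *str);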

> The (char *) cast in PyTokenizer_FromString is unneeded.

The cast on the decode_str() result? It was already present in the original code. I removed it in my new patch.

> You need to indent the "else" clause after you test for ignore_cookie.

Oops, I always have trouble generating a diff because my editor removes trailing whitespace, so I have to ignore whitespace changes when creating the diff.

> I'd like to see a test that shows that byte strings still have their cookies examined.

test_pep263 already has two tests using a "#coding:" header.
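
For reference, a hedged illustration (not copied from test_pep263, just an assumed shape of such input) of a byte-string source whose "# coding:" cookie must still be examined:

    /* Illustration only, not taken from test_pep263: the "# coding:" cookie
     * declares latin-1, which is what makes the 0xE9 byte in the literal
     * decodable when the tokenizer reads this source as a byte string. */
    static const char source[] =
        "# coding: latin-1\n"
        "s = 'caf\xe9'\n";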