| Author | ocean-city |
|---|---|
| Recipients | ocean-city |
| Date | 2008-03-18 07:15:56 |
| SpamBayes Score | 0.0036483358 |
| Marked as misclassified | No |
| Message-id | <1205824558.9.0.758785186516.issue2382@psf.upfronthosting.co.za> |
| In-reply-to | |

Content:

> I tried to fix this problem, but I'm not sure how to fix it.

Quick observation...

///////////////////////////////////
// Possible Solution

1. Convert err->text to a console-compatible encoding (not to the source encoding, as python2.x does) at the point where PyTokenizer_RestoreEncoding is.
2. err->text is UTF-8, and the actual output happens in Python/pythonrun.c (print_error_text), so adjust the offset there instead.

///////////////////////////////////
// Solution requires...

1. PyUnicode_DecodeUTF8 in Python/pythonrun.c (err_input) would have to be changed to some kind of "bytes" API, and a way to write "bytes" directly to a File object is needed.
2. A way to know the actual byte length of a given unicode string in a given encoding (see the sketch after this message).

////////////////////////////////////////////////////
// Experimental patch

Attached is an experimental patch implementing solution 2. It looks ugly, but it seems to work in my environment. (I assumed get_length_in_bytes(f, " ", 1) == 1, but I'm not sure this always holds on other platforms. A nicer, more general solution probably exists.)
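The key piece solution 2 needs is a way to measure how many bytes a prefix of the (UTF-8) err->text occupies in the encoding the error is actually written with. Below is a minimal sketch of that idea using current CPython C API calls; the helper name and signature are hypothetical and are not taken from the attached patch (which uses a get_length_in_bytes helper instead).

```c
#include <Python.h>
#include <string.h>

/* Hypothetical helper: how many bytes do the first `nchars` characters of
 * a UTF-8 string occupy once re-encoded with `encoding`?  print_error_text
 * could use something like this to turn a character offset into a byte
 * offset for the stream it is writing to.  Returns -1 on error. */
static Py_ssize_t
byte_length_in_encoding(const char *utf8_text, Py_ssize_t nchars,
                        const char *encoding)
{
    PyObject *text, *prefix, *encoded;
    Py_ssize_t nbytes;

    /* err->text is UTF-8, so decode it first. */
    text = PyUnicode_DecodeUTF8(utf8_text, strlen(utf8_text), "replace");
    if (text == NULL)
        return -1;

    /* Keep only the characters before the reported offset. */
    prefix = PySequence_GetSlice(text, 0, nchars);
    Py_DECREF(text);
    if (prefix == NULL)
        return -1;

    /* Re-encode with the target (console/file) encoding and count bytes. */
    encoded = PyUnicode_AsEncodedString(prefix, encoding, "replace");
    Py_DECREF(prefix);
    if (encoded == NULL)
        return -1;

    nbytes = PyBytes_GET_SIZE(encoded);
    Py_DECREF(encoded);
    return nbytes;
}
```

Whether the caret should be padded by character count or by byte count depends on what the underlying stream does with the text, which is presumably why the patch's get_length_in_bytes takes the file object as its first argument.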
| History | | | |
|---|---|---|---|
| Date | User | Action | Args |
| 2008-03-18 07:15:59 | ocean-city | set | spambayes_score: 0.00364834 -> 0.0036483358; recipients: + ocean-city |
| 2008-03-18 07:15:58 | ocean-city | set | spambayes_score: 0.00364834 -> 0.00364834; messageid: <1205824558.9.0.758785186516.issue2382@psf.upfronthosting.co.za> |
| 2008-03-18 07:15:57 | ocean-city | link | issue2382 messages |
| 2008-03-18 07:15:56 | ocean-city | create | |