Message99894

| Author | akuchling |
|---|---|
| Recipients | akuchling, gpolo |
| Date | 2010-02-23 02:46:00 |
| SpamBayes Score | 0.00035065933 |
| Marked as misclassified | No |
| Message-id | <1266893162.05.0.0182278704766.issue2134@psf.upfronthosting.co.za> |
| In-reply-to | |

Content:
Unfortunately, I think this will break many users of tokenize.py. For example, http://browsershots.googlecode.com/svn/trunk/devtools/pep8/pep8.py has code like:

```python
if (token_type == tokenize.OP and text in '([' and ...):
```

If tokenize now returns LPAR instead of the generic OP, this code will no longer work correctly. Tools/i18n/pygettext.py, pylint, WebWare, and pyfuscate all have similar code, so I don't think we can change the API this radically. Adding a parameter that enables more precise handling of tokens, defaulting it to off, is probably the only way to make this change.
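To make the breakage concrete, here is a minimal sketch (not part of the original message) of the pep8.py-style check against the generic-OP behaviour tokenize has always had; switching the reported type to LPAR would make the `token_type == tokenize.OP` comparison silently stop matching:

```python
import io
import tokenize

source = "f(x)"
tokens = tokenize.generate_tokens(io.StringIO(source).readline)

# pep8.py-style check: this matches only because tokenize reports
# punctuation such as '(' with the generic type tokenize.OP.
opens = [tok.string for tok in tokens
         if tok.type == tokenize.OP and tok.string in '([']
print(opens)  # ['(']
```

(For the record, CPython later took essentially the opt-in route suggested here: the token's `type` stayed OP, and a separate `exact_type` attribute reporting LPAR, RPAR, etc. was added to `tokenize.TokenInfo` in Python 3.3.)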
History

| Date | User | Action | Args |
|---|---|---|---|
| 2010-02-23 02:46:02 | akuchling | set | recipients: + akuchling, gpolo |
| 2010-02-23 02:46:02 | akuchling | set | messageid: <1266893162.05.0.0182278704766.issue2134@psf.upfronthosting.co.za> |
| 2010-02-23 02:46:00 | akuchling | link | issue2134 messages |
| 2010-02-23 02:46:00 | akuchling | create | |